Journal articles on the topic 'Theoretical sampling framework'

Consult the top 50 journal articles for your research on the topic 'Theoretical sampling framework.'

1

Qian, Hui, and Lun Yu. "Exponential Sampling: A Gibbs Phenomena Removal Model for Finite Rate of Innovation Sampling Framework." Applied Mechanics and Materials 130-134 (October 2011): 470–74. http://dx.doi.org/10.4028/www.scientific.net/amm.130-134.470.

Abstract:
We propose an exponential approximating function as a sampling kernel with a finite rate of innovation. Reconstructing non-bandlimited signals from their low-frequency components inevitably induces the Gibbs phenomenon. This paper establishes a theoretical model of the relationship between the sampling kernel filter and the parametric reconstruction method for non-bandlimited signals, and designs a new windowed exponential sampling kernel filter to remove the Gibbs effect. Simulation results show that, compared to the sinc sampling kernel filter, the reconstruction ability of the exponential-filter-based finite-rate-of-innovation sampling system is improved in a white Gaussian noise environment.
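
To make the Gibbs effect concrete, here is a minimal numpy sketch (my own illustration, not the authors' kernel design): hard truncation of a square wave's spectrum produces the familiar overshoot near the jumps, while an exponentially tapered window of the kind the paper advocates suppresses it. The window constant is an arbitrary assumption.

    import numpy as np

    # Truncated Fourier reconstruction of a square wave (a non-bandlimited
    # signal) rings near the jumps: the Gibbs phenomenon. A smooth exponential
    # taper on the retained harmonics damps the overshoot. Hypothetical values.
    N, K = 1024, 20                              # samples, retained harmonics
    t = np.linspace(0, 1, N, endpoint=False)
    x = np.sign(np.sin(2 * np.pi * t))           # square-wave test signal

    X = np.fft.rfft(x)
    lowpass = np.where(np.arange(X.size) < K, X, 0)      # hard truncation
    x_trunc = np.fft.irfft(lowpass, n=N)                 # exhibits Gibbs ringing

    taper = np.exp(-3.0 * (np.arange(X.size) / K) ** 2)  # exponential window
    x_taper = np.fft.irfft(lowpass * taper, n=N)         # damped reconstruction

    print("overshoot, truncated:", x_trunc.max() - 1.0)
    print("overshoot, tapered  :", x_taper.max() - 1.0)
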
2

Jaki, Thomas, and Martin J. Wolfsegger. "A Theoretical Framework for Estimation of AUCs in Complete and Incomplete Sampling Designs." Statistics in Biopharmaceutical Research 1, no. 2 (May 2009): 176–84. http://dx.doi.org/10.1198/sbr.2009.0025.

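
For readers unfamiliar with the quantity being estimated, the sketch below computes a linear trapezoidal AUC from concentration-time data. This is the generic textbook estimator, not the variance framework of Jaki and Wolfsegger, whose contribution concerns complete and incomplete (batch) sampling designs; the time points and concentrations are hypothetical.

    import numpy as np

    def auc_trapezoid(times, conc):
        """Linear trapezoidal AUC from concentrations at the sampling times."""
        times = np.asarray(times, dtype=float)
        conc = np.asarray(conc, dtype=float)
        return float(np.sum(np.diff(times) * (conc[:-1] + conc[1:]) / 2.0))

    # Hypothetical mean concentrations from a sparse-sampling design.
    t = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]   # hours
    c = [0.0, 4.2, 6.1, 5.0, 2.7, 0.9]   # concentration units
    print("AUC(0-8h):", auc_trapezoid(t, c))
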
3

Støa, Bente, Rune Halvorsen, Sabrina Mazzoni, and Vladimir I. Gusarov. "Sampling bias in presence-only data used for species distribution modelling: theory and methods for detecting sample bias and its effects on models." Sommerfeltia 38, no. 1 (October 1, 2018): 1–53. http://dx.doi.org/10.2478/som-2018-0001.

Abstract:
This paper provides a theoretical understanding of sampling bias in presence-only data in the context of species distribution modelling. This understanding forms the basis for two integrated frameworks, one for detecting sampling bias of different kinds in presence-only data (the bias assessment framework) and one for assessing potential effects of sampling bias on species distribution models (the bias effects framework). We exemplify the use of these frameworks with museum data for nine insect species in Norway, for which the distributions along the two main bioclimatic gradients (related to oceanicity and temperature) are modelled using the MaxEnt method. Models of different complexity (achieved by use of two different model selection procedures that represent spatial prediction or ecological response modelling purposes, respectively) were generated with different types of background data (uninformed and background-target-group [BTG]). The bias assessment framework made use of comparisons between observed and theoretical frequency-of-presence (FoP) curves, obtained separately for each combination of species and bioclimatic predictor, to identify potential sampling bias. The bias effects framework made use of comparisons between modelled response curves (predicted relative FoP curves) and the corresponding observed FoP curves for each combination of species and predictor. The extent to which the observed FoP curves deviated from the expected smooth and unimodal theoretical FoP curve varied considerably among the nine insect species. Among-curve differences were, in most cases, interpreted as indications of sampling bias. Using BTG-type background data in many cases introduced strong sampling bias. The predicted relative FoP curves from MaxEnt were, in general, similar to the corresponding observed FoP curves. This indicates that the main structure of the data sets was adequately summarised by the MaxEnt models (with the options and settings used), in turn suggesting that shortcomings of input data such as sampling bias or omission of important predictors may overshadow the effect of modelling method on the predictive performance of distribution models. The examples indicate that the two proposed frameworks are useful for identifying sampling bias in presence-only data and for choosing settings for distribution modelling options such as the method for extraction of background data points and the appropriate level of model complexity.
4

Abreu, Charlles R. A., and Fernando A. Escobedo. "A general framework for non-Boltzmann Monte Carlo sampling." Journal of Chemical Physics 124, no. 5 (February 7, 2006): 054116. http://dx.doi.org/10.1063/1.2165188.

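
The core idea of non-Boltzmann sampling, drawing from a biased distribution and reweighting back to the Boltzmann ensemble, can be sketched in a few lines. The potential, bias, and parameters below are hypothetical illustrations; the paper develops a far more general framework.

    import numpy as np

    rng = np.random.default_rng(0)
    beta = 1.0
    U = lambda x: (x**2 - 1.0)**2            # double-well target potential
    W = lambda x: -0.5 * (x**2 - 1.0)**2     # hypothetical bias (flattens wells)

    # Metropolis sampling of the *biased* density exp(-beta * (U + W)).
    x, xs = 0.0, []
    for _ in range(100_000):
        y = x + rng.normal(scale=0.5)
        if rng.random() < np.exp(-beta * (U(y) + W(y) - U(x) - W(x))):
            x = y
        xs.append(x)
    xs = np.array(xs[1000:])                 # discard burn-in

    # Reweight to the unbiased Boltzmann ensemble: weights proportional to
    # exp(-beta*U) / exp(-beta*(U+W)) = exp(+beta*W).
    w = np.exp(beta * W(xs))
    print("biased     <x^2>:", np.mean(xs**2))
    print("reweighted <x^2>:", np.sum(w * xs**2) / np.sum(w))
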
5

Swenson, David W. H., Jan-Hendrik Prinz, Frank Noe, John D. Chodera, and Peter G. Bolhuis. "OpenPathSampling: A Python Framework for Path Sampling Simulations. 1. Basics." Journal of Chemical Theory and Computation 15, no. 2 (October 18, 2018): 813–36. http://dx.doi.org/10.1021/acs.jctc.8b00626.

6

Tu, Hung-Ming. "The Attractiveness of Adaptive Heritage Reuse: A Theoretical Framework." Sustainability 12, no. 6 (March 18, 2020): 2372. http://dx.doi.org/10.3390/su12062372.

Abstract:
Adaptive heritage reuse is a useful method to bring new meaning into a culture, manage heritage sites, and promote tourism development. However, it is not always successful, and there has been no theoretical framework for understanding its attractiveness and value. This study aimed to develop such a theoretical framework based on the analysis of nine cases of adaptive heritage reuse in Taiwan. The probe-question technique of qualitative interviewing was used to assess the attraction framework. A total of 90 respondents were interviewed, with constant comparative analysis and a sampling strategy of theoretical saturation. The results illustrate the heritage and activities of the reuse environments, including the natural and regional environments. These environments produce recreational values, including self-growth, health benefits, and social benefits. As promoting activities is an important attraction for tourists in the heritage reuse environment, the natural environment can be used to plan and design heritage outdoor activities. Finally, the regional environment can be an important basis for assessing the feasibility of adaptive heritage reuse, including historical streets, surrounding tourist attractions, and high transportation accessibility. This theoretical framework can be used to achieve sustainable management of heritage sites.
7

Xu, Jie, Si Zhang, Edward Huang, Chun-Hung Chen, Loo Hay Lee, and Nurcin Celik. "MO2TOS: Multi-Fidelity Optimization with Ordinal Transformation and Optimal Sampling." Asia-Pacific Journal of Operational Research 33, no. 03 (June 2016): 1650017. http://dx.doi.org/10.1142/s0217595916500172.

Abstract:
Simulation optimization can be used to solve many complex optimization problems in automation applications such as job scheduling and inventory control. We propose a new framework to perform efficient simulation optimization when simulation models with different fidelity levels are available. The framework consists of two novel methodologies: ordinal transformation (OT) and optimal sampling (OS). The OT methodology uses the low-fidelity simulations to transform the original solution space into an ordinal space that encapsulates useful information from the low-fidelity model. The OS methodology efficiently uses high-fidelity simulations to sample the transformed space in search of the optimal solution. Through theoretical analysis and numerical experiments, we demonstrate the promising performance of the multi-fidelity optimization with ordinal transformation and optimal sampling (MO2TOS) framework.
8

Qin, Guojun, and Jingfang Wang. "Random Sampling and Signal Bregman Reconstruction Based on Compressed Sensing." Indonesian Journal of Electrical Engineering and Computer Science 4, no. 2 (November 1, 2016): 365. http://dx.doi.org/10.11591/ijeecs.v4.i2.pp365-372.

Abstract:
Compressed sensing (CS) is a sampling method based on signal sparsity. By applying CS, much information can be extracted from as little data as possible, which gives the approach great theoretical and applied promise. In the framework of compressed sensing theory, the sampling rate is no longer determined by the bandwidth of the signal but by the structure and content of the information in the signal. In this paper, the signal is sparse in the Fourier domain, and random sparse sampling is implemented by programming a random observation matrix over a random peak basis. The signal is successfully restored by the use of a Bregman algorithm. The signal is described in the transform space, and a theoretical framework is established with new signal descriptions and processing. Provided information loss is controlled, the signal can be sampled at a rate far below that required by the Nyquist sampling theorem, yet completely recovered with high probability. Random sampling has the following advantages: alias-free sampling at frequencies that need not obey the Nyquist limit, and higher frequency resolution.
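
A hedged illustration of the recovery pipeline the abstract describes: a signal sparse in a cosine dictionary is randomly sub-sampled and recovered by l1 minimization. ISTA is used here as a simple stand-in for the paper's Bregman iteration; the dimensions and regularization weight are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    N, M, K = 256, 64, 5                    # length, measurements, sparsity

    # Dictionary of cosines: the signal is K-sparse in this transform domain.
    n, k = np.arange(N), np.arange(N // 2)
    D = np.cos(2 * np.pi * np.outer(n, k) / N)

    c_true = np.zeros(N // 2)
    c_true[rng.choice(N // 2, size=K, replace=False)] = rng.normal(size=K)
    x = D @ c_true

    idx = np.sort(rng.choice(N, size=M, replace=False))  # random sub-Nyquist samples
    A, y = D[idx], x[idx]

    # ISTA (iterative soft thresholding) for the l1-regularised problem
    # min_c 0.5*||A c - y||^2 + lam*||c||_1, a stand-in for Bregman iteration.
    lam = 0.1
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    c = np.zeros(N // 2)
    for _ in range(500):
        g = c - step * (A.T @ (A @ c - y))
        c = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

    print("relative recovery error:", np.linalg.norm(D @ c - x) / np.linalg.norm(x))
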
9

Zhao, Guilian, Dongmei Huang, Changxin Cai, and Peng Wu. "A Novel Sparse Framework for Angle and Frequency Estimation." Sensors 22, no. 22 (November 9, 2022): 8633. http://dx.doi.org/10.3390/s22228633.

Abstract:
The topic of joint angle and frequency estimation (JAFE) has attracted extensive interest in the past decades. Current estimation algorithms mainly rely on the Nyquist sampling criterion. To avoid ambiguity in parameter estimation, the space–time intervals must be smaller than given thresholds, which results in complicated hardware and a huge computational burden. This paper aims to reduce the complexity of JAFE, and a novel sparsity-aware framework is proposed. Unlike current uniform sampling architectures, the incoming narrow-band signals are sampled by a series of space–time coprime samplers. An improved rotational invariance estimator is introduced, which offers closed-form solutions for both angle and frequency estimation. The mathematical treatment indicates that our methodology inherently provides a larger spatial/temporal aperture than uniform sampling architectures; hence, it yields more accurate JAFE than alternative approaches relying on uniform sampling. Additionally, it attains nearly the same complexity as the current rotational invariance approach. Numerical results agree with the theoretical advantages of our methodology.
10

Migliore, Marco. "Near Field Antenna Measurement Sampling Strategies: From Linear to Nonlinear Interpolation." Electronics 7, no. 10 (October 17, 2018): 257. http://dx.doi.org/10.3390/electronics7100257.

Abstract:
The aim of this review paper is to discuss some of the advanced sampling techniques proposed in the last decade in the framework of planar near-field measurements, clarifying the theoretical basis of each technique and showing the advantages in terms of the number of measurements. Rather than discussing implementation details, attention is focused on the theoretical foundations, giving a gentle introduction to each approach. For each sampling method, examples on a linear array are discussed to clarify the advantages and disadvantages of the method.
11

Alshqaq, Shokrya Saleh A., Abdullah Ali H. Ahmadini, and Irfan Ali. "Nonlinear Stochastic Multiobjective Optimization Problem in Multivariate Stratified Sampling Design." Mathematical Problems in Engineering 2022 (August 29, 2022): 1–16. http://dx.doi.org/10.1155/2022/2502346.

Abstract:
Decision-making in survey sampling planning is tricky; it can involve multiple objectives, with various decision variables emanating from heterogeneous and homogeneous populations. Dealing with the entire population under study and its uncertain nature is a challenging issue for researchers and policymakers. Hence, an appropriate sampling design and optimization methodology are imperative. The study presents a theoretical discussion of stochastic multiobjective multivariate stratified sampling (MSS) models, and the concepts are illustrated with numerical examples. It has also been found that linearization of the sampling variance in survey sampling does not help determine the optimal sampling allocation with minimum variability. Optimal allocation problems under the weighted goal programming, stochastic goal programming, and Chebyshev goal programming methods are also discussed with numerical examples. Finally, the study discusses a linear approximation of the MSS problem with examples. The study provides a conceptual and theoretical framework for MSS under a stochastic environment. The numerical data are simulated using the stratifyR package.
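
As background for the allocation problem the paper studies, the sketch below implements classical univariate, deterministic Neyman allocation; the paper itself treats the much harder multivariate, multiobjective, stochastic case. The stratum sizes and standard deviations are hypothetical.

    import numpy as np

    def neyman_allocation(N_h, S_h, n):
        """Classical Neyman allocation: n_h proportional to N_h * S_h
        minimises the variance of the stratified mean for one study variable.
        Rounded shares, so the total may differ from n by a unit or two."""
        N_h = np.asarray(N_h, dtype=float)
        S_h = np.asarray(S_h, dtype=float)
        share = N_h * S_h / np.sum(N_h * S_h)
        return np.maximum(1, np.round(n * share)).astype(int)

    # Hypothetical stratum sizes and within-stratum standard deviations.
    print(neyman_allocation(N_h=[4000, 2500, 1500], S_h=[12.0, 6.0, 20.0], n=300))
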
12

Zhang, Jiaru, Yang Hua, Tao Song, Hao Wang, Zhengui Xue, Ruhui Ma, and Haibing Guan. "Improving Bayesian Neural Networks by Adversarial Sampling." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (June 28, 2022): 10110–17. http://dx.doi.org/10.1609/aaai.v36i9.21250.

Abstract:
Bayesian neural networks (BNNs) have drawn extensive interest due to their unique probabilistic representation framework. However, Bayesian neural networks have seen limited deployment because of relatively poor model performance in real-world applications. In this paper, we argue that the randomness of sampling in Bayesian neural networks causes errors in the updating of model parameters during training and yields some sampled models with poor performance in testing. To solve this, we propose to train Bayesian neural networks with an Adversarial Distribution as a theoretical solution. To avoid the difficulty of calculating the Adversarial Distribution analytically, we further present the Adversarial Sampling method as a practical approximation. We conduct extensive experiments with multiple network structures on different datasets, e.g., CIFAR-10 and CIFAR-100. Experimental results validate the correctness of the theoretical analysis and the effectiveness of Adversarial Sampling in improving model performance. Additionally, models trained with Adversarial Sampling retain their ability to model uncertainties and perform better when predictions are retained according to those uncertainties, which further verifies the generality of the Adversarial Sampling approach.
13

Pérez, Adrià, Pablo Herrera-Nieto, Stefan Doerr, and Gianni De Fabritiis. "AdaptiveBandit: A Multi-armed Bandit Framework for Adaptive Sampling in Molecular Simulations." Journal of Chemical Theory and Computation 16, no. 7 (June 15, 2020): 4685–93. http://dx.doi.org/10.1021/acs.jctc.0c00205.

14

Kingston, Zachary, Mark Moll, and Lydia E. Kavraki. "Exploring implicit spaces for constrained sampling-based planning." International Journal of Robotics Research 38, no. 10-11 (August 26, 2019): 1151–78. http://dx.doi.org/10.1177/0278364919868530.

Abstract:
We present a review and reformulation of manifold constrained sampling-based motion planning within a unifying framework, IMACS (implicit manifold configuration space). IMACS enables a broad class of motion planners to plan in the presence of manifold constraints, decoupling the choice of motion planning algorithm and method for constraint adherence into orthogonal choices. We show that implicit configuration spaces defined by constraints can be presented to sampling-based planners by addressing two key fundamental primitives, sampling and local planning, and that IMACS preserves theoretical properties of probabilistic completeness and asymptotic optimality through these primitives. Within IMACS, we implement projection- and continuation-based methods for constraint adherence, and demonstrate the framework on a range of planners with both methods in simulated and realistic scenarios. Our results show that the choice of method for constraint adherence depends on many factors and that novel combinations of planners and methods of constraint adherence can be more effective than previous approaches. Our implementation of IMACS is open source within the Open Motion Planning Library and is easily extended for novel planners and constraint spaces.
15

Lim, George Michael P., and Nenita I. Prado. "Sustainable framework for hospital response during health emergencies." South Florida Journal of Health 3, no. 2 (June 24, 2022): 183–92. http://dx.doi.org/10.46981/sfjhv3n2-013.

Abstract:
The COVID-19 pandemic posed challenges to healthcare systems worldwide, exponentially straining hospitals' systems and resources and confronting them with difficult dilemmas. The Philippines made remarkable emergency response efforts against the pandemic but was exhausted by the multiple healthcare concerns that needed to be addressed. Listening to hospital heads and nurse managers describe their strategies drove this study toward its aim of developing a framework for hospital responses during health emergencies. Triangulation was used in data gathering, anchored on Glaser and Strauss's and Charmaz's constructivist grounded theory. Careful analysis, constant data comparison, theoretical sampling to ensure the saturation of categories, and generating theory intimately linked to and grounded in the data are imperative for qualitative research. Concurrent collection and analysis of data assure mutual interaction between what is known and what one needs to know, along with theoretical thinking, which provides emerging ideas that are reconfirmed as new data. The research questions were answered through documentary analysis, virtual interviews via Zoom, and focus group discussions. A purposive sample of thirty participants, ten from hospitals in each of the three main islands of the Philippines, was eventually reduced to fifteen. Hospitals have a high level of preparedness for health emergencies, and a sustainable framework for the pandemic and other health emergencies was developed across a five-point domain: operations, morale, infrastructure, finances, and innovations.
16

Ning, Nianwen, Yilin Yang, Chenguang Song, and Bin Wu. "An adaptive node embedding framework for multiplex networks." Intelligent Data Analysis 25, no. 2 (March 4, 2021): 483–503. http://dx.doi.org/10.3233/ida-195065.

Abstract:
Network Embedding (NE) has emerged as a powerful tool in many applications. Many real-world networks have multiple types of relations between the same entities, which are appropriately modeled as multiplex networks. However, in random walk-based embedding studies for multiplex networks, very little attention has been paid to the problems of sampling bias and imbalanced relation types. In this paper, we propose an Adaptive Node Embedding Framework (ANEF) based on cross-layer node sampling strategies for multiplex networks. ANEF is the first framework to focus on the bias issue of sampling strategies. Through the Metropolis-Hastings random walk (MHRW) and forest fire sampling (FFS), ANEF is less likely to be trapped in local structure around high-degree nodes. We utilize a fixed-length queue to record previously visited layers, which balances the edge distribution over different layers in the sampled node sequences. In addition, to adaptively sample the cross-layer context of nodes, we propose a node metric called the Neighbors Partition Coefficient (NPC). Experiments on real-world networks in diverse fields show that our framework outperforms state-of-the-art methods in application tasks such as cross-domain link prediction and mutual community detection.
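
Since ANEF builds on the Metropolis-Hastings random walk, a minimal sketch of MHRW on a toy graph may help: the acceptance ratio deg(u)/deg(v) removes the degree bias of a simple random walk, so node visits converge to the uniform distribution. The graph below is hypothetical.

    import random

    def mhrw(adj, start, steps, seed=0):
        """Metropolis-Hastings random walk: accept a move u -> v with
        probability min(1, deg(u)/deg(v)), correcting the degree bias of a
        simple random walk so visits converge to the uniform distribution."""
        rng = random.Random(seed)
        u, visits = start, []
        for _ in range(steps):
            v = rng.choice(adj[u])
            if rng.random() < min(1.0, len(adj[u]) / len(adj[v])):
                u = v
            visits.append(u)
        return visits

    # Hypothetical toy graph: node 0 is a hub, yet MHRW does not over-sample it.
    adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0], 4: [0]}
    walk = mhrw(adj, start=0, steps=100_000)
    print({u: round(walk.count(u) / len(walk), 3) for u in adj})
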
17

Sbert, Mateu, and Víctor Elvira. "Generalizing the Balance Heuristic Estimator in Multiple Importance Sampling." Entropy 24, no. 2 (January 27, 2022): 191. http://dx.doi.org/10.3390/e24020191.

Abstract:
In this paper, we propose a novel and generic family of multiple importance sampling estimators. We first revisit the celebrated balance heuristic estimator, a widely used Monte Carlo technique for the approximation of intractable integrals. Then, we establish a generalized framework for the combination of samples simulated from multiple proposals. Our approach is based on treating as free parameters both the sampling rates and the combination coefficients, which coincide in the balance heuristic estimator. Thus our novel framework contains the balance heuristic as a particular case. We study the optimal choice of the free parameters that minimizes the variance of the resulting estimator. A theoretical variance study shows the optimal solution is always better than the balance heuristic estimator (except in degenerate cases where both are the same). We also give sufficient conditions on the parameter values for the new generalized estimator to be better than the balance heuristic estimator, and one necessary and sufficient condition related to the χ2 divergence. Using five numerical examples, we first show the gap in efficiency between the new and classical balance heuristic estimators, for equal sampling and for several state-of-the-art sampling rates. Then, for these five examples, we report the variances for some notable selections of parameters, showing that, for the important case of equal counts of samples, our new estimator with an optimal selection of parameters outperforms the classical balance heuristic. Finally, new heuristics are introduced that exploit the theoretical findings.
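
For reference, the classical balance heuristic that the paper generalizes fits in a few lines: each sample is weighted by its proposal's share of the mixture density. The integrand and proposals below are arbitrary choices; the true value of this particular integral is √π + 1 ≈ 2.772.

    import numpy as np

    rng = np.random.default_rng(2)

    f = lambda x: np.exp(-x**2) * (1.0 + np.abs(x))   # integrand
    normal_pdf = lambda x, m, s: (
        np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi)))

    proposals = [(-1.0, 1.0), (1.5, 0.5)]   # (mean, std) of each proposal
    counts = [5000, 5000]                   # samples drawn per proposal

    # Balance heuristic: each sample x from proposal i contributes
    # f(x) / sum_j n_j p_j(x), i.e. it is weighted by the full mixture.
    estimate = 0.0
    for (m, s), n_i in zip(proposals, counts):
        x = rng.normal(m, s, size=n_i)
        mixture = sum(n_j * normal_pdf(x, m_j, s_j)
                      for (m_j, s_j), n_j in zip(proposals, counts))
        estimate += np.sum(f(x) / mixture)

    print("MIS estimate:", estimate)   # true value ~ 2.772
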
18

Garrett, Caelan Reed, Tomás Lozano-Pérez, and Leslie Pack Kaelbling. "Sampling-based methods for factored task and motion planning." International Journal of Robotics Research 37, no. 13-14 (October 10, 2018): 1796–825. http://dx.doi.org/10.1177/0278364918802962.

Abstract:
This paper presents a general-purpose formulation of a large class of discrete-time planning problems, with hybrid state and control spaces, as factored transition systems. Factoring allows state transitions to be described as the intersection of several constraints each affecting a subset of the state and control variables. Robotic manipulation problems with many movable objects involve constraints that only affect several variables at a time and therefore exhibit large amounts of factoring. We develop a theoretical framework for solving factored transition systems with sampling-based algorithms. The framework characterizes conditions on the submanifold in which solutions lie, leading to a characterization of robust feasibility that incorporates dimensionality-reducing constraints. It then connects those conditions to corresponding conditional samplers that can be composed to produce values on this submanifold. We present two domain-independent, probabilistically complete planning algorithms that take, as input, a set of conditional samplers. We demonstrate the empirical efficiency of these algorithms on a set of challenging task and motion planning problems involving picking, placing, and pushing.
19

Luo, Yuanfu, Haoyu Bai, David Hsu, and Wee Sun Lee. "Importance sampling for online planning under uncertainty." International Journal of Robotics Research 38, no. 2-3 (June 19, 2018): 162–81. http://dx.doi.org/10.1177/0278364918780322.

Abstract:
The partially observable Markov decision process (POMDP) provides a principled general framework for robot planning under uncertainty. Leveraging the idea of Monte Carlo sampling, recent POMDP planning algorithms have scaled up to various challenging robotic tasks, including real-time online planning for autonomous vehicles. To further improve online planning performance, this paper presents IS-DESPOT, which introduces importance sampling to DESPOT, a state-of-the-art sampling-based POMDP algorithm for planning under uncertainty. Importance sampling improves DESPOT's performance when there are critical but rare events that are difficult to sample. We prove that IS-DESPOT retains the theoretical guarantee of DESPOT. We demonstrate empirically that importance sampling significantly improves the performance of online POMDP planning for suitable tasks. We also present a general method for learning the importance sampling distribution.
20

Shimadzu, Hideyasu, and Ross Darnell. "Attenuation of species abundance distributions by sampling." Royal Society Open Science 2, no. 4 (April 2015): 140219. http://dx.doi.org/10.1098/rsos.140219.

Abstract:
Quantifying biodiversity aspects such as species presence/absence, richness, and abundance is an important challenge in answering scientific and resource management questions. In practice, biodiversity can only be assessed from biological material taken by surveys, a difficult task given limited time and resources. A type of random sampling, often called sub-sampling, is a commonly used technique to reduce the amount of time and effort required to investigate large quantities of biological samples. However, it is not immediately clear how (sub-)sampling affects the estimate of biodiversity aspects from a quantitative perspective. This paper characterizes the effect of (sub-)sampling as attenuation of the species abundance distribution (SAD), and articulates how sampling bias is induced in the SAD by random sampling. The framework presented also reveals some confusion in previous theoretical studies.
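
The attenuation mechanism the paper formalizes can be simulated directly: random sub-sampling of individuals acts as binomial thinning of species abundances, shrinking the SAD and hiding rare species. The community parameters below are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical community with a log-normal species abundance distribution.
    true_abundance = np.exp(rng.normal(3.0, 1.5, size=500)).astype(int) + 1

    # Random sub-sampling with detection probability p is binomial thinning:
    # it attenuates the SAD and drops rare species from the sample entirely.
    p = 0.05
    observed = rng.binomial(true_abundance, p)

    print("true richness    :", int((true_abundance > 0).sum()))
    print("sampled richness :", int((observed > 0).sum()))
    print("abundance ratio  :", observed.sum() / true_abundance.sum())  # ~ p
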
21

Xu, Gang, Qinghua Wang, and Jianpeng Ma. "OPUS-Fold: An Open-Source Protein Folding Framework Based on Torsion-Angle Sampling." Journal of Chemical Theory and Computation 16, no. 6 (April 23, 2020): 3970–76. http://dx.doi.org/10.1021/acs.jctc.0c00186.

22

Yan, Bo, Na Xu, Wenbo Zhao, Muqing Li, and Luping Xu. "An Efficient Extended Targets Detection Framework Based on Sampling and Spatio-Temporal Detection." Sensors 19, no. 13 (July 1, 2019): 2912. http://dx.doi.org/10.3390/s19132912.

Abstract:
Excellent detection performance, real-time operation, and low memory requirements are three vital demands on target detection in high-resolution marine radar systems. Unfortunately, many current state-of-the-art methods achieve excellent performance only when coping with highly complex scenes. In fact, a common problem is that real-time processing, low memory requirements, and remarkable detection ability are difficult to reconcile. To address this issue, we propose a novel detection framework based on sampling and spatio-temporal detection. The framework consists of two stages, coarse detection and fine detection. Sampling-based coarse detection guarantees real-time processing and low memory requirements by locating in advance the areas where targets may exist. Unlike former detection methods, multi-scan video data are utilized. In the fine detection stage, the candidate areas are grouped into three categories: single target, dense targets, and sea clutter. Different approaches for processing the different categories are implemented to achieve excellent performance. The superiority of the proposed framework over state-of-the-art baselines is well substantiated in this work. The low memory requirement of the proposed framework was verified by theoretical analysis. Real-time processing capability was verified using video data from two real scenarios. Synthetic data were tested to show the improvement in tracking performance obtained by using the proposed detection framework.
23

Noll, Jennifer. "Graduate teaching assistants' statistical content knowledge of sampling." Statistics Education Research Journal 10, no. 2 (November 30, 2011): 48–74. http://dx.doi.org/10.52041/serj.v10i2.347.

Abstract:
Research investigating graduate teaching assistants' (TAs') knowledge of fundamental statistics concepts is sparse at best; yet at many universities, TAs play a substantial role in the teaching of undergraduate statistics courses. This paper provides a framework for characterizing TAs' content knowledge in a sampling context and endeavors to raise new questions about TAs' content knowledge and its potential impact on the teaching of undergraduate statistics. The participants in this study were sixty-eight TAs from 18 universities across the United States. These TAs demonstrated considerable knowledge of theoretical probability distributions. However, they experienced tensions when attempting to quantify expected statistical variability in an empirical sampling situation and had difficulty explaining conceptual ideas of variability.
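
The specific tension the paper documents, between theoretical and empirically observed sampling variability, is easy to demonstrate: simulate repeated samples and compare the spread of the sample proportion with the textbook formula √(p(1−p)/n). A hypothetical sketch:

    import numpy as np

    rng = np.random.default_rng(4)

    # Sample proportion from n Bernoulli(p) trials, repeated many times.
    p, n, reps = 0.5, 50, 10_000
    p_hat = rng.binomial(n, p, size=reps) / n

    print("theoretical SD:", np.sqrt(p * (1 - p) / n))
    print("empirical SD  :", p_hat.std())
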
24

Gamon, J. A. "Optical sampling of the flux tower footprint." Biogeosciences Discussions 12, no. 6 (March 30, 2015): 4973–5014. http://dx.doi.org/10.5194/bgd-12-4973-2015.

Abstract:
The purpose of this review is to address the reasons and methods for conducting optical remote sensing within the flux tower footprint. Fundamental principles and conclusions gleaned from over two decades of proximal remote sensing at flux tower sites are reviewed. An organizing framework is the light-use efficiency (LUE) model, both because it is widely used, and because it provides a useful theoretical construct for integrating optical remote sensing with flux measurements. Multiple ways of driving this model, ranging from meteorological measurements to remote sensing, have emerged in recent years, making it a convenient conceptual framework for comparative experimental studies. New interpretations of established optical sampling methods, including the Photochemical Reflectance Index (PRI) and Solar-Induced Fluorescence (SIF), are discussed within the context of the LUE model. Multi-scale analysis across temporal and spatial axes is a central theme, because such scaling can provide links between ecophysiological mechanisms detectable at the level of individual organisms and broad patterns emerging at larger scales, enabling evaluation of emergent properties and extrapolation to the flux footprint and beyond. Proper analysis of sampling scale requires an awareness of sampling context that is often essential to the proper interpretation of optical signals. Additionally, the concept of optical types, vegetation exhibiting contrasting optical behavior in time and space, is explored as a way to frame our understanding of the controls on surface–atmosphere fluxes. Complementary NDVI and PRI patterns across ecosystems are offered as an example of this hypothesis, with the LUE model and light-response curve providing an integrating framework. We conclude that experimental approaches allowing systematic exploration of plant optical behavior in the context of the flux tower network provide a unique way to improve our understanding of environmental constraints and ecophysiological function. In addition to an enhanced mechanistic understanding of ecosystem processes, this integration of remote sensing with flux measurements offers many rich opportunities for upscaling, satellite validation, and informing practical management objectives ranging from assessing ecosystem health and productivity to quantifying biospheric carbon sequestration.
25

Wang, Xiangdong, Ying Yang, Hong Liu, and Yueliang Qian. "The PIC-TDD Framework of Test Data Design for Pattern Recognition Systems." International Journal of Advanced Pervasive and Ubiquitous Computing 6, no. 4 (October 2014): 43–62. http://dx.doi.org/10.4018/ijapuc.2014100104.

Abstract:
In this paper, a new approach is proposed for the design of test data for pattern recognition systems. In the theoretical framework put forward, performance on the population of data is viewed as the expectation of a random variable, and the purpose of testing is to estimate this parameter. While the most popular method of test data design is random sampling, a novel approach based on performance-influencing classes is proposed, which achieves unbiased estimation with much lower estimation variance than a random sample. The method is applied to the evaluation of systems for broadcast news segmentation, and experimental results show its advantages over the random sampling approach.
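
A hedged sketch of the general idea (the paper's actual estimator and class definitions differ): partitioning test items into hypothetical performance-influencing classes and estimating per-class accuracy yields an unbiased estimate whose variance is below that of simple random sampling.

    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical performance-influencing classes: population shares and the
    # (unknown) per-class accuracy of the system under test.
    shares = np.array([0.6, 0.3, 0.1])
    accuracy = np.array([0.95, 0.80, 0.50])
    true_perf = shares @ accuracy

    def simple_random(n):
        cls = rng.choice(3, size=n, p=shares)
        return np.mean(rng.random(n) < accuracy[cls])

    def class_stratified(n):
        n_h = np.maximum(1, (n * shares).astype(int))   # proportional allocation
        per_class = [np.mean(rng.random(m) < accuracy[h])
                     for h, m in enumerate(n_h)]
        return shares @ np.array(per_class)

    reps = 5000
    v_simple = np.var([simple_random(200) for _ in range(reps)])
    v_strat = np.var([class_stratified(200) for _ in range(reps)])
    print("true performance          :", true_perf)
    print("variance, simple random   :", v_simple)
    print("variance, class-stratified:", v_strat)
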
26

Morrissey, Mark L., and J. Scott Greene. "A theoretical framework for the sampling error variance for three-dimensional climate averages of ICOADS monthly ship data." Theoretical and Applied Climatology 96, no. 3-4 (May 29, 2008): 235–48. http://dx.doi.org/10.1007/s00704-008-0027-3.

27

Chang, Vanessa. "Records that play: the present past in sampling practice." Popular Music 28, no. 2 (May 2009): 143–59. http://dx.doi.org/10.1017/s0261143009001755.

Abstract:
Much of the discourse surrounding sampling practice has been couched in an archaic rhetoric of originality and creativity, predicated on the reproductive mode of creation rather than the practice's own creative logic. These notions have emerged from a metaphysics which privileges the origin as the centre of semantic production. However, this discursive preoccupation with the past is not entirely irrelevant in sampling practice. Rather, the historically inscribed aura of the original holds a redefined, but necessary, place in the practice. This essay examines the theoretical underpinnings of the discussion so far, and then reconciles these with the specific culture of sample-based hip-hop production. Through close readings of some musical examples, it posits a theoretical framework for sampling practice which takes its unique properties into account. By mapping the trajectory of several samples from source to new incarnation, the sample is revealed as the space of simultaneous play and rupture, where the past both defines the present and is effaced by it. As such, sampling creates a tradition that involves the past without deferring to its structures and limitations, restoring a revised mode of agency to the practice.
28

Liu, Xiaorong, and Jianhan Chen. "HyRes: a coarse-grained model for multi-scale enhanced sampling of disordered protein conformations." Physical Chemistry Chemical Physics 19, no. 48 (2017): 32421–32. http://dx.doi.org/10.1039/c7cp06736d.

29

Low, Kian Hsiang, John Dolan, and Pradeep Khosla. "Information-Theoretic Approach to Efficient Adaptive Path Planning for Mobile Robotic Environmental Sensing." Proceedings of the International Conference on Automated Planning and Scheduling 19 (October 16, 2009): 233–40. http://dx.doi.org/10.1609/icaps.v19i1.13344.

Abstract:
Recent research in robot exploration and mapping has focused on sampling environmental hotspot fields. This exploration task is formalized by Low, Dolan, and Khosla (2008) in a sequential decision-theoretic planning under uncertainty framework called MASP. The time complexity of solving MASP approximately depends on the map resolution, which limits its use in large-scale, high-resolution exploration and mapping. To alleviate this computational difficulty, this paper presents an information-theoretic approach to MASP (iMASP) for efficient adaptive path planning; by reformulating the cost-minimizing iMASP as a reward-maximizing problem, its time complexity becomes independent of map resolution and is less sensitive to increasing robot team size as demonstrated both theoretically and empirically. Using the reward-maximizing dual, we derive a novel adaptive variant of maximum entropy sampling, thus improving the induced exploration policy performance. It also allows us to establish theoretical bounds quantifying the performance advantage of optimal adaptive over non-adaptive policies and the performance quality of approximately optimal vs. optimal adaptive policies. We show analytically and empirically the superior performance of iMASP-based policies for sampling the log-Gaussian process to that of policies for the widely-used Gaussian process in mapping the hotspot field. Lastly, we provide sufficient conditions that, when met, guarantee adaptivity has no benefit under an assumed environment model.
30

Chen, Zaiwei. "A Unified Lyapunov Framework for Finite-Sample Analysis of Reinforcement Learning Algorithms." ACM SIGMETRICS Performance Evaluation Review 50, no. 3 (December 30, 2022): 12–15. http://dx.doi.org/10.1145/3579342.3579346.

Abstract:
Reinforcement learning (RL) is a paradigm where an agent learns to accomplish tasks by interacting with the environment, similar to how humans learn. RL is therefore viewed as a promising approach to achieving artificial intelligence, as evidenced by remarkable empirical successes. However, many RL algorithms are theoretically not well understood, especially in the setting where function approximation and off-policy sampling are employed. My thesis [1] aims at developing a thorough theoretical understanding of the performance of various RL algorithms through finite-sample analysis. Since most RL algorithms are essentially stochastic approximation (SA) algorithms for solving variants of the Bellman equation, the first part of the thesis is dedicated to the analysis of general SA involving a contraction operator under Markovian noise. We develop a Lyapunov approach in which we construct a novel Lyapunov function called the generalized Moreau envelope. The results on SA enable us to establish finite-sample bounds for various RL algorithms in the tabular setting (cf. Part II of the thesis) and when using function approximation (cf. Part III of the thesis), which in turn provide theoretical insights into several important problems in the RL community, such as the efficiency of bootstrapping, the bias-variance trade-off in off-policy learning, and the stability of off-policy control. The main body of this document provides an overview of the contributions of my thesis.
31

van Stekelenburg, Jacquelien, Stefaan Walgrave, Bert Klandermans, and Joris Verhulst. "Contextualizing Contestation: Framework, Design, and Data." Mobilization: An International Quarterly 17, no. 3 (September 1, 2012): 249–62. http://dx.doi.org/10.17813/maiq.17.3.a4418x2q772153x2.

Abstract:
This article presents the theoretical underpinnings, design, methods, and measures of the project Caught in the Act of Protest: Contextualizing Contestation. The project examines street demonstrations that vary in atmosphere, organization, and target, focusing particularly on participants: who participates, and why and how people get involved. Data are collected before, during, and after a number of demonstrations, capturing the entire "demonstration moment." We develop standardized measures and techniques for sampling and data collection at the individual demonstrator level and at the contextual level. Evidence was gathered not only from the demonstrators but also from police, organizers, and the mass media. Data-gathering efforts were standardized through identical methods, questionnaires, fact sheets, and content analysis protocols. The CCC project examines demonstrations in Belgium, the Netherlands, the United Kingdom, Spain, Switzerland, and Sweden between 2009 and 2012. Teams from Italy, Mexico, and the Czech Republic joined the project at a later stage. The project has covered 61 demonstrations, and 12,993 questionnaires have been completed to date.
32

Gamon, J. A. "Reviews and Syntheses: optical sampling of the flux tower footprint." Biogeosciences 12, no. 14 (July 30, 2015): 4509–23. http://dx.doi.org/10.5194/bg-12-4509-2015.

Abstract:
The purpose of this review is to address the reasons and methods for conducting optical remote sensing within the flux tower footprint. Fundamental principles and conclusions gleaned from over 2 decades of proximal remote sensing at flux tower sites are reviewed. The organizing framework used here is the light-use efficiency (LUE) model, both because it is widely used, and because it provides a useful theoretical construct for integrating optical remote sensing with flux measurements. Multiple ways of driving this model, ranging from meteorological measurements to remote sensing, have emerged in recent years, making it a convenient conceptual framework for comparative experimental studies. New interpretations of established optical sampling methods, including the photochemical reflectance index (PRI) and solar-induced chlorophyll fluorescence (SIF), are discussed within the context of the LUE model. Multi-scale analysis across temporal and spatial axes is a central theme because such scaling can provide links between ecophysiological mechanisms detectable at the level of individual organisms and broad patterns emerging at larger scales, enabling evaluation of emergent properties and extrapolation to the flux footprint and beyond. Proper analysis of the sampling scale requires an awareness of sampling context that is often essential to the proper interpretation of optical signals. Additionally, the concept of optical types, vegetation exhibiting contrasting optical behavior in time and space, is explored as a way to frame our understanding of the controls on surface–atmosphere fluxes. Complementary normalized difference vegetation index (NDVI) and PRI patterns across ecosystems are offered as an example of this hypothesis, with the LUE model and light-response curve providing an integrating framework. I conclude that experimental approaches allowing systematic exploration of plant optical behavior in the context of the flux tower network provide a unique way to improve our understanding of environmental constraints and ecophysiological function. In addition to an enhanced mechanistic understanding of ecosystem processes, this integration of remote sensing with flux measurements offers many rich opportunities for upscaling, satellite validation, and informing practical management objectives ranging from assessing ecosystem health and productivity to quantifying biospheric carbon sequestration.
33

Zhuravlev, Pavel I., and Garegin A. Papoian. "Protein functional landscapes, dynamics, allostery: a tortuous path towards a universal theoretical framework." Quarterly Reviews of Biophysics 43, no. 3 (August 2010): 295–332. http://dx.doi.org/10.1017/s0033583510000119.

Abstract:
Energy landscape theories have provided a common ground for understanding the protein folding problem, which once seemed to be overwhelmingly complicated. At the same time, the native state was found to be an ensemble of interconverting states with frustration playing a more important role compared to the folding problem. The landscape of the folded protein – the native landscape – is glassier than the folding landscape; hence, a general description analogous to the folding theories is difficult to achieve. On the other hand, the native basin phase volume is much smaller, allowing a protein to fully sample its native energy landscape on the biological timescales. Current computational resources may also be used to perform this sampling for smaller proteins, to build a 'topographical map' of the native landscape that can be used for subsequent analysis. Several major approaches to representing this topographical map are highlighted in this review, including the construction of kinetic networks, hierarchical trees and free energy surfaces with subsequent structural and kinetic analyses. In this review, we extensively discuss the important question of choosing proper collective coordinates characterizing functional motions. In many cases, the substates on the native energy landscape, which represent different functional states, can be used to obtain variables that are well suited for building free energy surfaces and analyzing the protein's functional dynamics. Normal mode analysis can provide such variables in cases where functional motions are dictated by the molecule's architecture. Principal component analysis is a more expensive way of inferring the essential variables from the protein's motions, one that requires a long molecular dynamics simulation. Finally, the two popular models for the allosteric switching mechanism, 'preexisting equilibrium' and 'induced fit', are interpreted within the energy landscape paradigm as extreme points of a continuum of transition mechanisms. Some experimental evidence illustrating each of these two models, as well as intermediate mechanisms, is presented and discussed.
34

Huang, Yulong, Yonggang Zhang, Ning Li, and Lin Zhao. "Improved square-root cubature information filter." Transactions of the Institute of Measurement and Control 39, no. 4 (July 22, 2016): 579–88. http://dx.doi.org/10.1177/0142331215608428.

Abstract:
In this paper, a theoretical comparison between the existing sigma-point information filter (SPIF) framework and the unscented information filter (UIF) framework is presented. It is shown that the SPIF framework is identical to the sigma-point Kalman filter (SPKF). However, the UIF framework is not identical to the classical SPKF, owing to the neglect of one-step prediction errors of measurements in the calculation of the state estimation error covariance matrix. Thus the SPIF framework is more reasonable than the UIF framework. Based on this theoretical comparison, an improved cubature information filter (CIF) is derived from the superior SPIF framework. A square-root CIF (SRCIF) is also developed to improve the numerical accuracy and stability of the proposed CIF. The proposed SRCIF is applied to a target tracking problem with a large sampling interval and high turn rate, and its performance is compared with the existing SRCIF. The results show that the proposed SRCIF is more reliable and stable than the existing SRCIF. Note that information filters remain impractical in large-scale applications due to the enormous computational complexity of large-scale matrix inversion, and advanced techniques need to be considered further.
35

Xu, Gang, Qinghua Wang, and Jianpeng Ma. "OPUS-Refine: A Fast Sampling-Based Framework for Refining Protein Backbone Torsion Angles and Global Conformation." Journal of Chemical Theory and Computation 16, no. 2 (January 14, 2020): 1359–66. http://dx.doi.org/10.1021/acs.jctc.9b01054.

36

Zhou, Bin, Kun Li, and Guang Xu. "Sparse Representation Based on Principal Component for Compressed Sensing Hyperspectral Images." Applied Mechanics and Materials 321-324 (June 2013): 1154–57. http://dx.doi.org/10.4028/www.scientific.net/amm.321-324.1154.

Abstract:
Compressed sensing (CS) is a newly developed theoretical framework for information acquisition and processing that breaks through the conventional Nyquist sampling limit. This paper proposes a sparse representation scheme based on principal component analysis (PCA) for CS, to be used for compressed sampling of hyperspectral images. The scheme employs a prediction transform matrix to remove the correlations among successive hyperspectral measurement vectors. Experiments using a hyperspectral image from Earth Observing-1 (EO-1) show the desired results in both reconstruction and denoising.
37

Lahoche, Vincent, Dine Ousmane Samary, and Mohamed Tamaazousti. "Field Theoretical Approach for Signal Detection in Nearly Continuous Positive Spectra I: Matricial Data." Entropy 23, no. 9 (August 31, 2021): 1132. http://dx.doi.org/10.3390/e23091132.

Abstract:
Renormalization group techniques are widely used in modern physics to describe the relevant low-energy aspects of systems involving a large number of degrees of freedom. Those techniques are thus expected to be a powerful tool for addressing open issues in data analysis when datasets are highly correlated. Signal detection and recognition for a covariance matrix having a nearly continuous spectrum is currently one of these open issues. First investigations in this direction have been proposed recently, based on an analogy between coarse-graining and principal component analysis (PCA), regarding the separation of sampling noise modes as a UV cut-off for small eigenvalues of the covariance matrix. The field theoretical framework proposed in this paper is a synthesis of these complementary points of view, aiming to be a general and operational framework for both theoretical investigations and experimental detection. Our investigations focus on signal detection. They exhibit numerical evidence in favor of a connection between symmetry breaking and the existence of an intrinsic detection threshold.
38

Chi, Ming, Xu-Long Wang, Ding-Xin He, and Zhi-Wei Liu. "Multiconsensus of Second-Order Multiagent Networks via Pulse-Modulated Intermittent Control." Complexity 2020 (April 7, 2020): 1–8. http://dx.doi.org/10.1155/2020/1059026.

Abstract:
This paper studies the multiconsensus problem of multiagent networks based on sampled-data information via pulse-modulated intermittent control (PMIC), a general control framework unifying impulsive control, intermittent control, and sampling control. Two kinds of multiconsensus, stationary multiconsensus and dynamic multiconsensus of multiagent networks, are considered within this control framework. Based on eigenvalue analysis and algebraic graph theory, some necessary and sufficient conditions on the feedback gains and the control period are established to ensure multiconsensus. Finally, several simulation results are included to illustrate the theoretical results.
39

Aaronson, S. "Quantum lower bound for recursive Fourier sampling." Quantum Information and Computation 3, no. 2 (March 2003): 165–74. http://dx.doi.org/10.26421/qic3.2-7.

Abstract:
We revisit the oft-neglected 'recursive Fourier sampling' (RFS) problem, introduced by Bernstein and Vazirani to prove an oracle separation between BPP and BQP. We show that the known quantum algorithm for RFS is essentially optimal, despite its seemingly wasteful need to uncompute information. This implies that, to place BQP outside of PH[log] relative to an oracle, one would need to go outside the RFS framework. Our proof argues that, given any variant of RFS, either the adversary method of Ambainis yields a good quantum lower bound, or else there is an efficient classical algorithm. This technique may be of independent interest.
40

Swenson, David W. H., Jan-Hendrik Prinz, Frank Noe, John D. Chodera, and Peter G. Bolhuis. "OpenPathSampling: A Python Framework for Path Sampling Simulations. 2. Building and Customizing Path Ensembles and Sample Schemes." Journal of Chemical Theory and Computation 15, no. 2 (October 25, 2018): 837–56. http://dx.doi.org/10.1021/acs.jctc.8b00627.

41

Nelson, James. "Using conceptual depth criteria: addressing the challenge of reaching saturation in qualitative research." Qualitative Research 17, no. 5 (December 14, 2016): 554–70. http://dx.doi.org/10.1177/1468794116679873.

Abstract:
Saturation remains a problematic concept within the field of qualitative research, particularly with regard to issues of definition and process. This article sets out some of the common problems with saturation and, with reference to one research study, assesses the value of adopting a range of ‘conceptual depth criteria’ to address problems of definition and process when seeking to establish saturation within a grounded theory approach. It is suggested that the criteria can act as a test to measure the progress of the theoretical sampling and thus ascertain the readiness of the research for the final analytical stages and theory building. Moreover, the application of ‘conceptual depth criteria’ provides the researcher with an evaluative framework and a tool for producing a structured evidence base to substantiate choices made during the theoretical sampling process.
42

You, Yang, Yujing Lou, Qi Liu, Yu-Wing Tai, Lizhuang Ma, Cewu Lu, and Weiming Wang. "Pointwise Rotation-Invariant Network with Adaptive Sampling and 3D Spherical Voxel Convolution." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12717–24. http://dx.doi.org/10.1609/aaai.v34i07.6965.

Abstract:
Point cloud analysis without pose priors is very challenging in real applications, as the orientations of point clouds are often unknown. In this paper, we propose a brand new point-set learning framework, PRIN (Pointwise Rotation-Invariant Network), focusing on rotation-invariant feature extraction in point cloud analysis. We construct spherical signals by Density-Aware Adaptive Sampling to deal with distorted point distributions in spherical space. In addition, we propose Spherical Voxel Convolution and Point Re-sampling to extract rotation-invariant features for each point. Our network can be applied to tasks ranging from object classification and part segmentation to 3D feature matching and label alignment. We show that, on datasets with randomly rotated point clouds, PRIN demonstrates better performance than state-of-the-art methods without any data augmentation. We also provide theoretical analysis of the rotation-invariance achieved by our methods.
43

Ou, Mingdong, Nan Li, Cheng Yang, Shenghuo Zhu, and Rong Jin. "Semi-Parametric Sampling for Stochastic Bandits with Many Arms." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7933–40. http://dx.doi.org/10.1609/aaai.v33i01.33017933.

Abstract:
We consider the stochastic bandit problem with a large candidate arm set. In this setting, classic multi-armed bandit algorithms, which assume independence among arms and adopt a non-parametric reward model, are inefficient due to the large number of arms. By exploiting arm correlations based on a parametric reward model with arm features, contextual bandit algorithms are more efficient, but they can also suffer large regret in practical applications, due to reward estimation bias from a mis-specified model assumption or incomplete features. In this paper, we propose a novel Bayesian framework, called Semi-Parametric Sampling (SPS), for this problem, which employs a semi-parametric function as the reward model. Specifically, the parametric part of SPS, which models the expected reward as a parametric function of arm features, can efficiently eliminate poor arms from the candidate set. The non-parametric part of SPS, which adopts a non-parametric reward model, revises the parametric estimation to avoid estimation bias, especially on the remaining candidate arms. We give an implementation of SPS, Linear SPS (LSPS), which utilizes a linear function as the parametric part. In a semi-parametric environment, theoretical analysis shows that LSPS achieves a better regret bound (i.e., Õ(√(N^(1−α)) · d^α · √T) with α ∈ [0, 1]) than existing approaches. Experiments also demonstrate the superiority of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
44

Lee, Seung Eun, Linda D. Scott, V. Susan Dahinten, Catherine Vincent, Karen Dunn Lopez, and Chang Gi Park. "Safety Culture, Patient Safety, and Quality of Care Outcomes: A Literature Review." Western Journal of Nursing Research 41, no. 2 (December 15, 2017): 279–304. http://dx.doi.org/10.1177/0193945917747416.

Full text
Abstract:
This integrative literature review was conducted to examine the relationships between safety culture and patient safety and quality of care outcomes in hospital settings and to identify directions for future research. Using a search of six electronic databases, 17 studies that met the study criteria were selected for review. This review revealed semantic inconsistencies, infrequent use of a theory or theoretical framework, limited discussion of the validity of the instruments used, and significant methodological variation. Most notably, the review identified a large array of nonsignificant and inconsistent relationships between safety culture and patient safety and quality of care outcomes. To improve understanding of these relationships, investigators should consider using a theoretical framework and valid measures of the key concepts. Researchers should also give more attention to selecting appropriate sampling and data collection methods, units of analysis, levels of data measurement and aggregation, and statistical analyses.
APA, Harvard, Vancouver, ISO, and other styles
45

Zaborski, Mateusz, Michał Okulewicz, and Jacek Mańdziuk. "Analysis of statistical model-based optimization enhancements in Generalized Self-Adapting Particle Swarm Optimization framework." Foundations of Computing and Decision Sciences 45, no. 3 (September 1, 2020): 233–54. http://dx.doi.org/10.2478/fcds-2020-0013.

Full text
Abstract:
This paper presents characteristics of model-based optimization methods utilized within the Generalized Self-Adapting Particle Swarm Optimization (GAPSO) – a hybrid global optimization framework proposed by the authors. GAPSO has been designed as a generalization of a Particle Swarm Optimization (PSO) algorithm on the foundation of a large degree of independence of individual particles. GAPSO serves as a platform for studying optimization algorithms in the context of the following research hypotheses: (1) it is possible to improve the performance of an optimization algorithm through the utilization of more function samples than the standard PSO sample-based memory; (2) combining specialized sampling methods (i.e. PSO, Differential Evolution, model-based optimization) will result in better algorithm performance than using each of them separately. The inclusion of model-based enhancements resulted in the necessity of extending the GAPSO framework by means of an external samples memory; this enhanced model is referred to as M-GAPSO in the paper. We investigate the features of two model-based optimizers: one utilizing a quadratic function and the other utilizing a polynomial function. We analyze the conditions under which those model-based approaches provide an effective sampling strategy. The proposed model-based optimizers are evaluated on functions from the COCO BBOB benchmark set.
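As a concrete picture of the quadratic model-based enhancement, the sketch below fits a quadratic surrogate to archived samples by least squares and proposes the stationary point of the fit as the next sample. This is a minimal sketch of the general technique, not the M-GAPSO implementation; the function name is hypothetical.

```python
import numpy as np

def quadratic_model_step(X, y):
    """Fit f(x) ~ x'Qx + c'x + b to evaluated samples and return the
    stationary point of the fitted model as the next sample proposal.

    X : (n, d) previously evaluated points (the external samples memory)
    y : (n,)   their objective values
    """
    n, d = X.shape
    # Design matrix: squared terms, pairwise products, linear terms, intercept.
    cols = [X ** 2]
    for i in range(d):
        for j in range(i + 1, d):
            cols.append((X[:, i] * X[:, j])[:, None])
    cols += [X, np.ones((n, 1))]
    Phi = np.hstack(cols)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

    # Rebuild the symmetric matrix Q and vector c from the flat coefficients.
    Q = np.diag(w[:d])
    idx = d
    for i in range(d):
        for j in range(i + 1, d):
            Q[i, j] = Q[j, i] = w[idx] / 2.0
            idx += 1
    c = w[idx:idx + d]
    # Stationary point of x'Qx + c'x solves 2Qx + c = 0; the small ridge
    # keeps the solve stable.  The point is a minimiser only when the
    # fitted model is locally convex.
    return np.linalg.solve(2.0 * Q + 1e-9 * np.eye(d), -c)
```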
APA, Harvard, Vancouver, ISO, and other styles
46

McDonagh, Lorraine K., Hannah Harwood, John M. Saunders, Jackie A. Cassell, and Greta Rait. "How to increase chlamydia testing in primary care: a qualitative exploration with young people and application of a meta-theoretical model." Sexually Transmitted Infections 96, no. 8 (May 29, 2020): 571–81. http://dx.doi.org/10.1136/sextrans-2019-054309.

Full text
Abstract:
Objective: The objective of this study was to explore young people’s perspectives on barriers to chlamydia testing in general practice, and potential intervention functions and implementation strategies to overcome identified barriers, using a meta-theoretical framework (the Behaviour Change Wheel (BCW)). Methods: Twenty-eight semistructured individual interviews were conducted with 16–24 year olds from across the UK. Purposive and convenience sampling methods were used (eg, youth organisations, charities, online platforms and chain-referrals). An inductive thematic analysis was first conducted, followed by thematic categorisation using the BCW. Results: Participants identified several barriers to testing: conducting self-sampling inaccurately (physical capability); lack of information and awareness (psychological capability); testing not seen as a priority and perceived low risk (reflective motivation); embarrassment, fear and guilt (automatic motivation); the UK primary care context and location of toilets (physical opportunity); and stigma (social opportunity). Potential intervention functions raised by participants included education (eg, increasing awareness of chlamydia); persuasion (eg, use of imagery/data to alter beliefs); environmental restructuring (eg, alternative sampling methods); and modelling (eg, credible sources such as celebrities). Potential implementation strategies and policy categories discussed were communication and marketing (eg, social media); service provision (eg, introduction of a young person’s health check); and guidelines (eg, standard questions for healthcare providers). Conclusions: The BCW provided a useful framework for conceptually exploring the wide range of barriers to testing identified, and possible intervention functions and policy categories to overcome said barriers. While greater education and awareness and expanded opportunities for testing were considered important, this alone will not bring about dramatic increases in testing. A societal and structural shift towards the normalisation of chlamydia testing is needed, alongside approaches which recognise the heterogeneity of this population. To ensure optimal and inclusive healthcare, researchers, clinicians and policy makers alike must consider patient diversity and the wider health issues affecting all young people.
APA, Harvard, Vancouver, ISO, and other styles
47

Bernardara, P., F. Mazas, X. Kergadallan, and L. Hamm. "A two-step framework for over-threshold modelling of environmental extremes." Natural Hazards and Earth System Sciences 14, no. 3 (March 20, 2014): 635–47. http://dx.doi.org/10.5194/nhess-14-635-2014.

Full text
Abstract:
The evaluation of the probability of occurrence of extreme natural events is important for the protection of urban areas, industrial facilities and others. Traditionally, extreme value theory (EVT) offers a valid theoretical framework on this topic. In an over-threshold modelling (OTM) approach, Pickands' theorem (Pickands, 1975) states that, for a sample composed of independent and identically distributed (i.i.d.) values, the distribution of the data exceeding a given threshold converges to a generalized Pareto distribution (GPD). Following this theoretical result, the analysis of realizations of environmental variables exceeding a threshold has spread widely in the literature. However, applying this theorem to an auto-correlated time series logically involves two successive and complementary steps: the first is required to build a sample of i.i.d. values from the available information, as required by the EVT; the second sets the threshold for optimal convergence toward the GPD. In the past, the same threshold was often employed both for sampling observations and for meeting the hypothesis of extreme value convergence. This confusion can lead to an erroneous understanding of the methodologies and tools available in the literature. This paper aims to clarify the conceptual framework involved in threshold selection, reviewing the available methods for the application of both steps and illustrating them with a double-threshold approach.
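A minimal sketch of the two-step framework, assuming a simple runs-declustering scheme, is given below: step one builds an approximately i.i.d. sample of cluster peaks above a first, physical threshold; step two fits a GPD to the peaks exceeding a second, statistical threshold. The function name is hypothetical; scipy's genpareto is used for the fit.

```python
import numpy as np
from scipy.stats import genpareto

def two_step_otm(series, physical_threshold, statistical_threshold):
    """Two-step over-threshold modelling on an auto-correlated series.

    Step 1 (sampling): keep only the maximum of each run of consecutive
    exceedances of the physical threshold, declustering the series into
    approximately i.i.d. peaks.
    Step 2 (inference): fit a generalized Pareto distribution to the
    peaks that exceed the (higher) statistical threshold.
    """
    peaks, cluster = [], []
    for x in series:
        if x > physical_threshold:
            cluster.append(x)
        elif cluster:
            peaks.append(max(cluster))
            cluster = []
    if cluster:                      # close a cluster running to the end
        peaks.append(max(cluster))

    excesses = [p for p in peaks if p > statistical_threshold]
    shape, _, scale = genpareto.fit(excesses, floc=statistical_threshold)
    return np.array(peaks), (shape, scale)
```

Keeping the two thresholds distinct is exactly the clarification the paper argues for: the physical threshold governs the independence of the sample, while the statistical threshold governs convergence toward the GPD.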
APA, Harvard, Vancouver, ISO, and other styles
48

Devaurs, Didier, Dinler Antunes, and Lydia Kavraki. "Revealing Unknown Protein Structures Using Computational Conformational Sampling Guided by Experimental Hydrogen-Exchange Data." International Journal of Molecular Sciences 19, no. 11 (October 31, 2018): 3406. http://dx.doi.org/10.3390/ijms19113406.

Full text
Abstract:
Both experimental and computational methods are available to gather information about a protein’s conformational space and interpret changes in protein structure. However, experimentally observing and computationally modeling large proteins remain critical challenges for structural biology. Our work aims at addressing these challenges by combining computational and experimental techniques relying on each other to overcome their respective limitations. Indeed, despite its advantages, an experimental technique such as hydrogen-exchange monitoring cannot produce structural models because of its low resolution. Additionally, the computational methods that can generate such models suffer from the curse of dimensionality when applied to large proteins. Adopting a common solution to this issue, we have recently proposed a framework in which our computational method for protein conformational sampling is biased by experimental hydrogen-exchange data. In this paper, we present our latest application of this computational framework: generating an atomic-resolution structural model for an unknown protein state. For that, starting from an available protein structure, we explore the conformational space of this protein, using hydrogen-exchange data on this unknown state as a guide. We have successfully used our computational framework to generate models for three proteins of increasing size, the biggest one undergoing large-scale conformational changes.
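The guidance idea, in which a physics-based search is biased toward agreement with experimental data, can be pictured with a toy Metropolis sampler whose score adds a hydrogen-exchange mismatch penalty to the conformational energy. This is only an illustration of the biasing concept, not the authors' framework; energy, hx_predict and all parameters are hypothetical placeholders.

```python
import numpy as np

def biased_conformational_search(energy, hx_predict, hx_data, x0,
                                 steps=1000, lam=1.0, beta=1.0):
    """Toy Metropolis search biased by hydrogen-exchange data.

    energy     : callable giving the physical energy of a conformation
    hx_predict : callable giving the predicted exchange pattern
    hx_data    : observed exchange pattern (array-like)
    x0         : starting conformation (flat coordinate vector)
    """
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)

    def score(c):
        # Physical energy plus a penalty for disagreeing with experiment.
        mismatch = np.sum((hx_predict(c) - np.asarray(hx_data)) ** 2)
        return energy(c) + lam * mismatch

    s = score(x)
    for _ in range(steps):
        cand = x + rng.normal(scale=0.1, size=x.shape)  # random perturbation
        s_cand = score(cand)
        # Metropolis rule on the biased score: always accept improvements,
        # sometimes accept worse candidates.
        if s_cand <= s or rng.random() < np.exp(-beta * (s_cand - s)):
            x, s = cand, s_cand
    return x
```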
APA, Harvard, Vancouver, ISO, and other styles
49

Pagani, Alessio, Zhuangkun Wei, Ricardo Silva, and Weisi Guo. "Neural Network Approximation of Graph Fourier Transform for Sparse Sampling of Networked Dynamics." ACM Transactions on Internet Technology 22, no. 1 (February 28, 2022): 1–18. http://dx.doi.org/10.1145/3461838.

Full text
Abstract:
Infrastructure monitoring is critical for safe operations and sustainability. Like many networked systems, water distribution networks (WDNs) exhibit both graph topological structure and complex embedded flow dynamics. The resulting networked cascade dynamics are difficult to predict without extensive sensor data. However, ubiquitous sensor monitoring in underground settings is expensive, and a key challenge is to infer the contaminant dynamics from partial, sparse monitoring data. Existing approaches use multi-objective optimization to find the minimum set of essential monitoring points but lack performance guarantees and a theoretical framework. Here, we first develop a novel Graph Fourier Transform (GFT) operator to compress networked contamination dynamics and identify the essential principal data collection points with inference performance guarantees; as such, the GFT approach provides a theoretical sampling bound. We then achieve under-sampling performance by building auto-encoder (AE) neural networks (NN) to generalize the GFT sampling process and under-sample further from the initial sampling set, allowing a very small set of data points to largely reconstruct the contamination dynamics over real and artificial WDNs. Various contamination sources are tested, and we obtain high-accuracy reconstruction using around 5%–10% of the network nodes for known contaminant sources, and 50%–75% for unknown-source cases; although this is larger than what schemes for contaminant detection and source identification require, it is smaller than what current sampling schemes for contaminant data recovery need. This general approach of compression and under-sampled recovery via NN can be applied to a wide range of networked infrastructures to enable efficient data sampling for digital twins.
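To make the sampling-and-recovery idea concrete, the sketch below selects sampling nodes through the graph Fourier basis and reconstructs a bandlimited signal from those nodes by least squares. It illustrates the GFT half of the pipeline only (the auto-encoder under-sampling stage is omitted), uses a simple greedy stand-in for sampling-set design, and its function name and bandwidth assumption are hypothetical.

```python
import numpy as np

def gft_sample_and_reconstruct(L, signal, k):
    """Pick k sampling nodes via the graph Fourier basis, then
    reconstruct the full signal from those nodes.

    L      : (n, n) graph Laplacian of the network
    signal : (n,)   networked state (e.g. contaminant levels)
    k      : assumed bandwidth -- the signal is taken to lie in the span
             of the k lowest-frequency Laplacian eigenvectors
    """
    # Graph Fourier basis: Laplacian eigenvectors ordered by frequency.
    _, U = np.linalg.eigh(L)
    Uk = U[:, :k]

    # Greedily add the node that keeps the sampled sub-basis best
    # conditioned (largest smallest-singular-value).
    chosen = []
    for _ in range(k):
        best, best_sv = None, -1.0
        for v in range(L.shape[0]):
            if v in chosen:
                continue
            sv = np.linalg.svd(Uk[chosen + [v], :], compute_uv=False)[-1]
            if sv > best_sv:
                best, best_sv = v, sv
        chosen.append(best)

    # Least-squares reconstruction in the bandlimited subspace from the
    # k sampled values.
    coeffs, *_ = np.linalg.lstsq(Uk[chosen, :], signal[chosen], rcond=None)
    return chosen, Uk @ coeffs
```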
APA, Harvard, Vancouver, ISO, and other styles
50

Cavallera, Vanessa, Mark Tomlinson, James Radner, Bronwynè Coetzee, Bernadette Daelmans, Rob Hughes, Rafael Pérez-Escamilla, Karlee L. Silver, and Tarun Dua. "Scaling early child development: what are the barriers and enablers?" Archives of Disease in Childhood 104, Suppl 1 (March 18, 2019): S43—S50. http://dx.doi.org/10.1136/archdischild-2018-315425.

Full text
Abstract:
The Sustainable Development Goals, the Global Strategy for Women’s, Children’s and Adolescents’ Health (2016–2030) and the Nurturing Care Framework all include targets to ensure children thrive. However, many projects to support early childhood development (ECD) do not ‘scale well’ and leave large numbers of children unreached. This paper is the fifth in a series examining effective scaling of ECD programmes. This qualitative study explored experiences of scaling-up among purposively recruited implementers of ECD projects in low- and middle-income countries. Participants were recruited by snowball sampling from existing networks, notably through Saving Brains®, Grand Challenges Canada®. The findings of a recent WHO literature review on scaling-up frameworks informed the development of a semistructured interview schedule. All interviews were conducted in English via Skype, audio recorded and transcribed verbatim. Interviews were analysed using framework analysis, which identified six major themes based on a standard programme cycle: planning and strategic choices, project design, human resources, financing and resource mobilisation, monitoring and evaluation, and leadership and partnerships. Key informants also identified an overarching theme regarding what scaling-up means. Stakeholders have not found the existing literature and available frameworks helpful in guiding them to successful scale-up. Our research suggests that rather than proposing yet more theoretical guidelines or frameworks, it would be better to support stakeholders in developing organisational leadership capacity and partnership strategies that enable them to effectively apply a practical programme cycle or systematic process in their own contexts.
APA, Harvard, Vancouver, ISO, and other styles