Academic literature on the topic 'Low sample size'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Low sample size.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Low sample size"

1

Litwin, Samuel, Stanley Basickes, and Eric A. Ross. "Two-sample binary phase 2 trials with low type I error and low sample size." Statistics in Medicine 36, no. 9 (2017): 1383–94. http://dx.doi.org/10.1002/sim.7226.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Litwin, Samuel, Eric Ross, and Stanley Basickes. "Two-sample binary phase 2 trials with low type I error and low sample size." Statistics in Medicine 36, no. 21 (2017): 3439. http://dx.doi.org/10.1002/sim.7358.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Alban, Eduardo X., Mario E. Magaña, and Harry Skinner. "A Low Sample Size Estimator for K Distributed Noise." Journal of Signal and Information Processing 3, no. 3 (2012): 293–307. http://dx.doi.org/10.4236/jsip.2012.33039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jung, Sungkyu, and J. S. Marron. "PCA consistency in high dimension, low sample size context." Annals of Statistics 37, no. 6B (2009): 4104–30. http://dx.doi.org/10.1214/09-aos709.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhou, Yi-Hui, and J. S. Marron. "High dimension low sample size asymptotics of robust PCA." Electronic Journal of Statistics 9, no. 1 (2015): 204–18. http://dx.doi.org/10.1214/15-ejs992.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Aoshima, Makoto, and Kazuyoshi Yata. "Statistical inference for high-dimension, low-sample-size data." Sugaku Expositions 30, no. 2 (2017): 137–58. http://dx.doi.org/10.1090/suga/421.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hall, Peter, J. S. Marron, and Amnon Neeman. "Geometric representation of high dimension, low sample size data." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67, no. 3 (2005): 427–44. http://dx.doi.org/10.1111/j.1467-9868.2005.00510.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Aoshima, Makoto, Dan Shen, Haipeng Shen, Kazuyoshi Yata, Yi-Hui Zhou, and J. S. Marron. "A survey of high dimension low sample size asymptotics." Australian & New Zealand Journal of Statistics 60, no. 1 (2018): 4–19. http://dx.doi.org/10.1111/anzs.12212.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Shan, Guogen. "Comments on ‘Two-sample binary phase 2 trials with low type I error and low sample size’." Statistics in Medicine 36, no. 21 (2017): 3437–38. http://dx.doi.org/10.1002/sim.7359.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sarkar, Soham, Rahul Biswas, and Anil K. Ghosh. "On some graph-based two-sample tests for high dimension, low sample size data." Machine Learning 109, no. 2 (2019): 279–306. http://dx.doi.org/10.1007/s10994-019-05857-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Low sample size"

1

Ahn, Jeongyoun. "High dimension, low sample size data analysis." Chapel Hill, N.C.: University of North Carolina at Chapel Hill, 2006. http://dc.lib.unc.edu/u?/etd,375.

Full text
Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2006. Title from electronic title page (viewed Oct. 10, 2007). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Statistics and Operations Research." Discipline: Statistics and Operations Research; Department/School: Statistics and Operations Research.
APA, Harvard, Vancouver, ISO, and other styles
2

Lee, Myung Hee. "Continuum direction vectors in high dimensional low sample size data." Chapel Hill, N.C.: University of North Carolina at Chapel Hill, 2007. http://dc.lib.unc.edu/u?/etd,1132.

Full text
Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2007. Title from electronic title page (viewed Mar. 27, 2008). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Statistics and Operations Research." Discipline: Statistics and Operations Research; Department/School: Statistics and Operations Research.
APA, Harvard, Vancouver, ISO, and other styles
3

Song, Juhee. "Bootstrapping in a high dimensional but very low sample size problem." Texas A&M University, 2003. http://hdl.handle.net/1969.1/3853.

Full text
Abstract:
High Dimension, Low Sample Size (HDLSS) problems have received much attention recently in many areas of science. Analysis of microarray experiments is one such area. Numerous studies are on-going to investigate the behavior of genes by measuring the abundance of mRNA (messenger RiboNucleic Acid), gene expression. HDLSS data investigated in this dissertation consist of a large number of data sets each of which has only a few observations. We assume a statistical model in which measurements from the same subject have the same expected value and variance. All subjects have the same distribution up to location and scale. Information from all subjects is shared in estimating this common distribution. Our interest is in testing the hypothesis that the mean of measurements from a given subject is 0. Commonly used tests of this hypothesis, the t-test, sign test and traditional bootstrapping, do not necessarily provide reliable results since there are only a few observations for each data set. We motivate a mixture model having C clusters and 3C parameters to overcome the small sample size problem. Standardized data are pooled after assigning each data set to one of the mixture components. To get reasonable initial parameter estimates when density estimation methods are applied, we apply clustering methods including agglomerative and K-means. Bayes Information Criterion (BIC) and a new criterion, WMCV (Weighted Mean of within Cluster Variance estimates), are used to choose an optimal number of clusters. Density estimation methods including a maximum likelihood unimodal density estimator and kernel density estimation are used to estimate the unknown density. Once the density is estimated, a bootstrapping algorithm that selects samples from the estimated density is used to approximate the distribution of test statistics. The t-statistic and an empirical likelihood ratio statistic are used, since their distributions are completely determined by the distribution common to all subjects. A method to control the false discovery rate is used to perform simultaneous tests on all small data sets. Simulated data sets and a set of cDNA (complementary DeoxyriboNucleic Acid) microarray experiment data are analyzed by the proposed methods. (A toy sketch of the density-bootstrap step follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
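A toy sketch of the density-bootstrap step described in the abstract above is given below. It is only a minimal illustration, assuming a Gaussian kernel density estimate in place of the dissertation's unimodal and mixture-based estimators; the function name bootstrap_t_pvalue and the toy data are invented for the example.

import numpy as np
from scipy.stats import gaussian_kde

def bootstrap_t_pvalue(sample, pooled, n_boot=2000):
    # Approximate the null distribution of the t-statistic by resampling
    # from a density estimated on pooled, standardized data (toy version).
    n = len(sample)
    t_obs = np.mean(sample) / (np.std(sample, ddof=1) / np.sqrt(n))
    # Center the pooled data so the estimated null density has mean ~0,
    # matching the hypothesis that a given subject's mean is 0.
    kde = gaussian_kde(pooled - np.mean(pooled))
    t_null = np.empty(n_boot)
    for b in range(n_boot):
        draw = kde.resample(n)[0]
        t_null[b] = np.mean(draw) / (np.std(draw, ddof=1) / np.sqrt(n))
    return float(np.mean(np.abs(t_null) >= abs(t_obs)))

# Toy usage: one three-observation data set plus pooled data from many subjects.
sample = np.array([0.4, 1.1, 0.7])
pooled = np.random.default_rng(1).normal(size=500)
print(bootstrap_t_pvalue(sample, pooled))

In the dissertation, the estimated density is shared across many small data sets and the resulting p-values feed into a false-discovery-rate procedure; the sketch covers only the resampling step for a single data set.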
4

Von Borries, George Freitas. "Partition clustering of High Dimensional Low Sample Size data based on P-Values." Diss., Manhattan, Kan.: Kansas State University, 2008. http://hdl.handle.net/2097/590.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ahlers, Zachary. "Estimating the necessary sample size for a binomial proportion confidence interval with low success probabilities." Kansas State University, 2017. http://hdl.handle.net/2097/35762.

Full text
Abstract:
Master of Science. Department of Statistics. Christopher Vahl. Among the most used statistical concepts and techniques, seen even in the most cursory of introductory courses, are the confidence interval, binomial distribution, and sample size estimation. This paper investigates a particular case of generating a confidence interval from a binomial experiment in the case where zero successes are expected. Several current methods of generating a binomial proportion confidence interval are examined by means of large-scale simulations and compared in order to determine an ad-hoc method for generating a confidence interval with coverage as close as possible to nominal while minimizing width. This is then used to construct a formula which allows for the estimation of a sample size necessary to obtain a sufficiently narrow confidence interval (with some predetermined probability of success) using the ad-hoc method given a prior estimate of the probability of success for a single trial. With this formula, binomial experiments could potentially be planned more efficiently, allowing researchers to plan only for the amount of precision they deem necessary, rather than trying to work with methods of producing confidence intervals that result in inefficient or, at worst, meaningless bounds. (A rough illustration of this sample-size search follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
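As a rough illustration of the sample-size logic described in the abstract above, the sketch below searches for the smallest n whose Wilson score interval, evaluated at a prior guess of the success probability, is narrower than a target width. The choice of the Wilson interval and all names here are assumptions for the example; this is not the ad-hoc interval construction developed in the thesis.

import math

def wilson_interval(successes, n, z=1.96):
    # Wilson score interval for a binomial proportion (95% by default).
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

def min_sample_size(p_prior, target_width, z=1.96, n_max=10**6):
    # Smallest n whose Wilson interval is narrower than target_width,
    # assuming the observed proportion matches the prior guess p_prior.
    for n in range(1, n_max):
        lo, hi = wilson_interval(round(p_prior * n), n, z)
        if hi - lo < target_width:
            return n
    raise ValueError("no n below n_max achieves the target width")

# Example: rare successes (p ~ 0.01), 95% interval narrower than 0.02.
print(min_sample_size(0.01, 0.02))

The Wilson interval is used here because, unlike the Wald interval, it stays well behaved when zero successes are observed, which is the low-probability regime the thesis targets.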
6

Gui, Jiang. "Regularized estimation in the high-dimension and low-sample size settings, with applications to genomic data." 2005. http://uclibs.org/PID/11984. For the electronic version, search the Digital Dissertations database; access is free to UC campuses and otherwise restricted.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lartigue, Thomas. "Mixtures of Gaussian Graphical Models with Constraints: Gaussian Graphical Model exploration and selection in high dimension low sample size setting." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX034.

Full text
Abstract:
Describing the co-variations between several observed random variables is a delicate problem. Dependency networks are popular tools that depict the relations between variables through the presence or absence of edges between the nodes of a graph. In particular, conditional correlation graphs are used to represent the “direct” correlations between nodes of the graph. They are often studied under the Gaussian assumption and consequently referred to as “Gaussian Graphical Models” (GGM). A single network can be used to represent the overall tendencies identified within a data sample. However, when the observed data is sampled from a heterogeneous population, then there exist different sub-populations that all need to be described through their own graphs. What is more, if the sub-population (or “class”) labels are not available, unsupervised approaches must be implemented in order to correctly identify the classes and describe each of them with its own graph. In this work, we tackle the fairly new problem of hierarchical GGM estimation for unlabelled heterogeneous populations. We explore several key axes to improve the estimation of the model parameters as well as the unsupervised identification of the sub-populations. Our goal is to ensure that the inferred conditional correlation graphs are as relevant and interpretable as possible. First, in the simple, homogeneous population case, we develop a composite method that combines the strengths of the two main state-of-the-art paradigms to correct their weaknesses. For the unlabelled heterogeneous case, we propose to estimate a mixture of GGMs with an expectation-maximisation (EM) algorithm. In order to improve the solutions of this EM algorithm, and to avoid falling into sub-optimal local extrema in high dimension, we introduce a tempered version of the EM algorithm, which we study theoretically and empirically. Finally, we improve the clustering of the EM by taking into consideration the effect that external co-features can have on the position of the observed data in their space.
APA, Harvard, Vancouver, ISO, and other styles
8

Cao, Hongliu. "Forêt aléatoire pour l'apprentissage multi-vues basé sur la dissimilarité : Application à la Radiomique." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMR073/document.

Full text
Abstract:
The work of this thesis was initiated by a Radiomics learning problem. Radiomics is a medical discipline that aims at the large-scale analysis of data from traditional medical imaging to assist in the diagnosis and treatment of cancer. The main hypothesis of this discipline is that by extracting a large amount of information from the images, we can characterize the specificities of this pathology in a much better way than the human eye. To achieve this, Radiomics data are generally based on several types of images and/or several types of features (from images, clinical, genomic). This thesis approaches the problem from the perspective of machine learning (ML) and aims to propose a generic solution, adapted to any similar learning problem. To do this, we identify two types of ML problems behind Radiomics: (i) learning from high dimension, low sample size (HDLSS) data and (ii) multi-view learning. The solutions proposed in this manuscript exploit dissimilarity representations obtained using the Random Forest method. The use of dissimilarity representations makes it possible to overcome the well-known difficulties of learning from high-dimensional data, and to facilitate the joint analysis of the multiple descriptions, i.e., the views. The contributions of this thesis focus on the use of the dissimilarity measure embedded in the Random Forest method for HDLSS multi-view learning. In particular, we present three main results: (i) the demonstration and analysis of the effectiveness of this measure for HDLSS multi-view learning; (ii) a new method for measuring dissimilarities from Random Forests, better adapted to this type of learning problem; and (iii) a new way to exploit the heterogeneity of views, using a dynamic combination mechanism. These results have been obtained on radiomic data but also on classical multi-view learning problems.
APA, Harvard, Vancouver, ISO, and other styles
9

Wood, Scott William. "Differential item functioning procedures for polytomous items when examinee sample sizes are small." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/1110.

Full text
Abstract:
As part of test score validity, differential item functioning (DIF) is a quantitative characteristic used to evaluate potential item bias. In applications where a small number of examinees take a test, statistical power of DIF detection methods may be affected. Researchers have proposed modifications to DIF detection methods to account for small focal group examinee sizes for the case when items are dichotomously scored. These methods, however, have not been applied to polytomously scored items. Simulated polytomous item response strings were used to study the Type I error rates and statistical power of three popular DIF detection methods (Mantel test/Cox's β, Liu-Agresti statistic, HW3) and three modifications proposed for contingency tables (empirical Bayesian, randomization, log-linear smoothing). The simulation considered two small sample size conditions, the case with 40 reference group and 40 focal group examinees and the case with 400 reference group and 40 focal group examinees. In order to compare statistical power rates, it was necessary to calculate the Type I error rates for the DIF detection methods and their modifications. Under most simulation conditions, the unmodified, randomization-based, and log-linear smoothing-based Mantel and Liu-Agresti tests yielded Type I error rates around 5%. The HW3 statistic was found to yield higher Type I error rates than expected for the 40 reference group examinees case, rendering power calculations for these cases meaningless. Results from the simulation suggested that the unmodified Mantel and Liu-Agresti tests yielded the highest statistical power rates for the pervasive-constant and pervasive-convergent patterns of DIF, as compared to other DIF method alternatives. Power rates improved by several percentage points if log-linear smoothing methods were applied to the contingency tables prior to using the Mantel or Liu-Agresti tests. Power rates did not improve if Bayesian methods or randomization tests were applied to the contingency tables prior to using the Mantel or Liu-Agresti tests. ANOVA tests showed that statistical power was higher when 400 reference examinees were used versus 40 reference examinees, when impact was present among examinees versus when impact was not present, and when the studied item was excluded from the anchor test versus when the studied item was included in the anchor test. Statistical power rates were generally too low to merit practical use of these methods in isolation, at least under the conditions of this study.
APA, Harvard, Vancouver, ISO, and other styles
10

Cook, James Allen. "A decompositional investigation of 3D face recognition." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16653/.

Full text
Abstract:
Automated Face Recognition is the process of determining a subject's identity from digital imagery of their face without user intervention. The term in fact encompasses two distinct tasks; Face Verification is the process of verifying a subject's claimed identity while Face Identification involves selecting the most likely identity from a database of subjects. This dissertation focuses on the task of Face Verification, which has a myriad of applications in security ranging from border control to personal banking. Recently the use of 3D facial imagery has found favour in the research community due to its inherent robustness to the pose and illumination variations which plague the 2D modality. The field of 3D face recognition is, however, yet to fully mature and there remain many unanswered research questions particular to the modality. The relative expense and specialty of 3D acquisition devices also means that the availability of databases of 3D face imagery lags significantly behind that of standard 2D face images. Human recognition of faces is rooted in an inherently 2D visual system and much is known regarding the use of 2D image information in the recognition of individuals. The corresponding knowledge of how discriminative information is distributed in the 3D modality is much less well defined. This dissertation addresses these issues through the use of decompositional techniques. Decomposition alleviates the problems associated with dimensionality explosion and the Small Sample Size (SSS) problem, and spatial decomposition is a technique which has been widely used in face recognition. The application of decomposition in the frequency domain, however, has not received the same attention in the literature. The use of decomposition techniques allows a mapping of the regions (both spatial and frequency) which contain the discriminative information that enables recognition. In this dissertation these techniques are covered in significant detail, both in terms of practical issues in the respective domains and in terms of the underlying distributions which they expose. Significant discussion is given to the manner in which the inherent information of the human face is manifested in the 2D and 3D domains and how these two modalities inter-relate. This investigation is extended to cover also the manner in which the decomposition techniques presented can be recombined into a single decision. Two new methods for learning the weighting functions for both the sum and product rules are presented and extensive testing against established methods is presented. Knowledge acquired from these examinations is then used to create a combined technique termed Log-Gabor Templates. The proposed technique utilises both the spatial and frequency domains to extract superior performance to either in isolation. Experimentation demonstrates that the spatial and frequency domain decompositions are complementary and can be combined to give improved performance and robustness.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Low sample size"

1

De Champlain, Andre F. Assessing the dimensionality of simulated LSAT item response matrices with small sample sizes and short test lengths. Law School Admission Council, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Weiss, Helen. Design issues in global mental health trials in low-resource settings. Oxford University Press, 2015. http://dx.doi.org/10.1093/med/9780199680467.003.0004.

Full text
Abstract:
In this chapter we outline the key principles in design and analysis of trials for mental health. The chapter focuses on randomized controlled trials as these are the gold-standard trial design, which minimizes confounding due to other factors and enables us to draw conclusions about the effectiveness of the intervention. Other key principles of trial design discussed in the chapter include methods to develop a clearly stated, testable research hypothesis, definition of well-defined outcomes, appropriate choice of the control condition, masking of providers and participants where possible, realistic sample size estimates, and appropriate data monitoring and statistical analysis plans. The chapter also outlines alternatives to the parallel arm superiority trial design, such as equivalence and non-inferiority trials, cross-over, stepped wedge, fixed adaptive, and patient preference trial designs.
APA, Harvard, Vancouver, ISO, and other styles
3

Yousefshahi, Fardin, Giuliano Michelagnoli, and Juan Francisco Asenjo. Ketamine Use and Opioid-Tolerant Cancer Patients. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780190271787.003.0031.

Full text
Abstract:
Pain occurs in up to 70% of cancer patients and it can be challenging to manage. The standard for analgesic therapy is the World Health Organization ladder; however, up to 25% of patients don’t reach a level of comfort using this approach. Ketamine has been recognized as an excellent adjuvant for cancer pain treatment, especially when other analgesics have failed. Some randomized clinical trials have confirmed ketamine’s efficacy in refractory cancer pain, but most had small sample sizes and low power. Some publications have confirmed the beneficial effect of oral, intranasal, subcutaneous, or intravenous ketamine in treatment of refractory chronic cancer pain, while others are less conclusive. While ketamine is rapidly gaining ground as an adjuvant in treating pain in patients with cancers refractory to conventional therapy and/or patients with opioid tolerance, care should be taken to identify patients with ketamine contraindications in order to offer the greatest benefit with the lowest risk of side effects.
APA, Harvard, Vancouver, ISO, and other styles
4

Rensmann, Thilo, ed. Small and Medium-Sized Enterprises in International Economic Law. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780198795650.001.0001.

Full text
Abstract:
While international trade and investment is still dominated by larger multinational enterprises (MNEs), small and medium-sized enterprises (SMEs) are increasingly reaching out beyond their traditional domestic habitat. A significant number of SMEs today are engaged in transboundary trade and investment and in the wake of the digital revolution the phenomenon of ‘born global’ SMEs can be increasingly observed. In addition, many SMEs enter the global economy indirectly via global value chains. International economic law, with its traditional focus on MNEs and their interests, is only slowly waking up to this new reality. At the same time, it is increasingly recognized that the internationalization of SMEs provides the key to creating more sustainable and inclusive global economic growth. The 2015 UN Sustainable Development Goals, for example, expressly call for the facilitation of increased access for SMEs to international trade and investment. This book undertakes a first attempt at systematically analysing the interaction between SMEs and international economic law. The analysis covers a broad spectrum of international trade and investment law focusing on issues of particular interest to SMEs, such as trade in services, government procurement, and trade facilitation. Salient regional and transregional developments are taken into account, including the implications of the TPP and the TTIP negotiations for SMEs. Close attention is also devoted to the concern of many states that further liberalization of international trade and investment would unduly restrict the regulatory space necessary to protect and promote the legitimate interests of domestic SMEs.
APA, Harvard, Vancouver, ISO, and other styles
5

Anderson, James A. Computing Hardware. Oxford University Press, 2018. http://dx.doi.org/10.1093/acprof:oso/9780199357789.003.0003.

Full text
Abstract:
Digital computers are built from hardware of great simplicity. First, they are built from devices with two states: on or off, one or zero, high voltage or low voltage, or logical TRUE or FALSE. Second, the devices are connected with extremely fine connections, currently on the order of size of a large virus. Their utility, value, and perceived extreme complexity lie in the software controlling them. Different devices have been used to build computers: relays, vacuum tubes, transistors, and integrated circuits. Theoretically, all can run the same software, only slower or faster. More exotic technologies have not proved commercially viable. Digital computer hardware has increased in power by roughly a factor of 2 every 2 years for five decades, an observation called Moore’s Law. Engineering problems with very small devices, such as quantum effects, heat, and difficulty of fabrication, are increasing and may soon end Moore’s Law.
APA, Harvard, Vancouver, ISO, and other styles
6

Stahle, David W., Dorian J. Burnette, Daniel Griffin, and Edward R. Cook. Thirteenth Century AD. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780199329199.003.0009.

Full text
Abstract:
The hypothesis that a prolonged drought across southwestern North America in the late thirteenth century contributed to the abandonment of the region by Ancestral Pueblo populations, ultimately including the depopulation of the Mesa Verde region, continues to be a focus of archaeological research in the Pueblo region. We address the hypothesis through the re-measurement of tree-ring specimens from living trees and archaeological wood at Mesa Verde, Colorado, to derive chronologies of earlywood, latewood, and total ring width. The three chronology types all date from AD 480 to 2008 and were used to separately reconstruct cool and early warm season effective moisture and total water-year precipitation for Chapin Mesa near many of the major prehistoric archaeological sites. The new reconstructions indicate three simultaneous cool and early growing season droughts during the twelfth and thirteenth centuries that may have contributed to the environmental and social factors behind Ancestral Pueblo migrations over this sector of the Colorado Plateau. These sustained inter-seasonal droughts included the “Great Drought” of the late thirteenth century, which is estimated to have been one of the most severe regimes of cool and early summer drought in the last 1,500 years and coincided with the end of Puebloan occupations at Mesa Verde. The elevation of the 30 cm isohyet of water-year precipitation reconstructed for southwestern Colorado from the new ring-width data is mapped from AD 1276–1280 and identifies areas where dry-land cultivation of maize may not have been practical during the driest years of the Great Drought. There is no doubt about the exact dating of the tree-ring chronologies, but the low sample size of dated specimens from Mesa Verde during the late thirteenth and fourteenth centuries contributes uncertainty to these environmental reconstructions at the time of abandonment.
APA, Harvard, Vancouver, ISO, and other styles
7

Colognesi, Luigi Capogrossi. Ownership and Power in Roman Law. Edited by Paul J. du Plessis, Clifford Ando, and Kaius Tuori. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780198728689.013.40.

Full text
Abstract:
Private ownership played a central role in all periods of Roman society. In its early development, the Roman law of property knew two different ways in which private ownership of res mancipi and res nec mancipi could be transferred. In the late third century BC, the Roman jurists and the praetor were able to distinguish clearly between simple possession and full ownership: dominium ex iure Quiritium. Later on, they separated from this same dominium certain entitlements to use and enjoyment, which they classified as iura in re aliena. On one side, the original bundle of powers of the owner was hived off to constitute the usufruct, which coincided with what we might refer to as the “ordinary enjoyment” of the object. On the other side many praedial servitudes were created which allowed a landowner to make a limited use of another’s land. This chapter surveys that process.
APA, Harvard, Vancouver, ISO, and other styles
8

Sperling, George, and Zhong-Lin Lu. Objectless Motion. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780199794607.003.0079.

Full text
Abstract:
The sum of two sine waves of the same frequency is yet another sine wave. When a moving sinewave grating (e.g., continuously translating from left to right) is added to (superimposed on) a stationary sinewave grating (the pedestal) with twice the amplitude, the sum is a sine-wave grating that wobbles back and forth. Remarkably, the left–right direction of the moving grating can be perceived just as accurately in pedestalled motion as in normal motion. At temporal frequencies of 10 Hz and greater, the wobble is too quick to be perceived. The moving pedestalled sine-wave grating is perceived as an invisible left-to-right horizontal wind above the summed sine-wave grating that wobbles back and forth at low temporal frequencies of motion but appears to be absolutely stationary at high temporal frequencies.
APA, Harvard, Vancouver, ISO, and other styles
9

Oliva, Aude, and Philippe G. Schyns. Hybrid Image Illusion. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780199794607.003.0111.

Full text
Abstract:
Artists, designers, photographers, and visual scientists are routinely looking for ways to create, out of a single image, the feeling that there is more to see than what meets the eye. Many well-known visual illusions are dual in nature, causing the viewer to experience two different interpretations of the same image. Hybrid images illustrate a double-image illusion, where different images are perceived depending on viewing distance, viewing duration, or image size: one that appears when the image is viewed up-close (displaying high spatial frequencies) and another that appears from afar (showing low spatial frequencies). This method can be used to create compelling dual images in which the observer experiences different percepts when interacting with the image.
APA, Harvard, Vancouver, ISO, and other styles
10

Harris, Edward M., and Mirko Canevaro, eds. The Oxford Handbook of Ancient Greek Law. Oxford University Press, 2015. http://dx.doi.org/10.1093/oxfordhb/9780199599257.001.0001.

Full text
Abstract:
The Oxford Handbook of Ancient Greek Law is a general introduction to the law and legal procedure of Greece from the Archaic period to the Roman conquest. The handbook provides a reliable survey of the evidence and a critical evaluation of recent trends in scholarship. Among the contributors are some of the foremost experts in the field. It covers all aspects of ancient Greek law and the major topics of scholarly debate and reviews the status of the available evidence, especially the epigraphical material. As a whole, the handbook offers new perspectives, while at the same time discussing important avenues for future research. The volume attempts to do justice to the local features of the legal system of the numerous Greek city-states, while at the same time outlining the general legal principles that bound the Greek cities together. Some chapters examine individual poleis (Athens, Sparta, Gortyn, Ptolemaic Egypt), while others are devoted to comparative studies of specific topics in the field: constitutional law, citizenship, marriage law, control of magistrates, law and economy, slavery and manumission, interstate relations, and amnesties aimed at ending stasis. Several chapters also examine the connection between law and political philosophy in the ancient Greek world. Each chapter starts by placing the topic within the larger historical context, then provides an overview of the evidence and methodological issues, detailed discussion of major topics, and a critical evaluation of recent trends in scholarship.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Low sample size"

1

Christoph, Gerd, and Vladimir V. Ulyanov. "Random Dimension Low Sample Size Asymptotics." In Recent Developments in Stochastic Methods and Applications. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-83266-7_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chow, Shein-Chung, Jun Shao, Hansheng Wang, and Yuliya Lokhnygina. "Sample Size for Clinical Trials with Extremely Low Incidence Rate." In Sample Size Calculations in Clinical Research: Third Edition. Chapman and Hall/CRC, 2017. http://dx.doi.org/10.1201/9781315183084-17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Thaher, Thaer, Ali Asghar Heidari, Majdi Mafarja, Jin Song Dong, and Seyedali Mirjalili. "Binary Harris Hawks Optimizer for High-Dimensional, Low Sample Size Feature Selection." In Algorithms for Intelligent Systems. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-32-9990-0_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chow, Shein-Chung, Yuanyuan Kong, and Shih-Ting Chiu. "Sample Size for HIV-1 Vaccine Clinical Trials with Extremely Low Incidence Rate." In Quantitative Methods for HIV/AIDS Research. Chapman and Hall/CRC, 2017. http://dx.doi.org/10.1201/9781315120805-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Marriott, Paul, Radka Sabolova, Germain Van Bever, and Frank Critchley. "Geometry of Goodness-of-Fit Testing in High Dimensional Low Sample Size Modelling." In Lecture Notes in Computer Science. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25040-3_61.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ishii, Aki, Kazuyoshi Yata, and Makoto Aoshima. "A Quadratic Classifier for High-Dimension, Low-Sample-Size Data Under the Strongly Spiked Eigenvalue Model." In Springer Proceedings in Mathematics & Statistics. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28665-1_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Krickeberg, Klaus, Van Trong Pham, and Thi My Hanh Pham. "The Normal Law and Applications: Sample Size, Confidence Intervals, Tests." In Statistics for Biology and Health. Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4614-1205-2_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Krickeberg, Klaus, Pham Van Trong, and Pham Thi My Hanh. "The Normal Law and Applications: Sample Size, Confidence Intervals, Tests." In Statistics for Biology and Health. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-16368-6_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Al-Dousari, Ali, and Muntha Bahbahani. "Mineralogy (XRD)." In Atlas of Fallen Dust in Kuwait. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-66977-5_4.

Full text
Abstract:
The two main particle size components of the dust samples were subjected to mineralogical analysis to identify the mineral constituents and determine their frequency percentage in each textural class: fine sand (particle size between 0.125 and 0.063 mm) and mud (less than 0.063 mm). The average percentage of minerals was mapped out for each season (March, June, September, and December 2010), showing areas of high and low mineral concentration in Kuwait for calcite, carbonate, clay minerals, dolomite, feldspars, and quartz.
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Ziyuan, Tonghui Wang, David Trafimow, and Zhaohui Xu. "A Different Kind of Effect Size Based on Samples from Two Populations with Delta Log-Skew-Normal Distributions." In Prediction and Causality in Econometrics and Related Topics. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77094-5_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Low sample size"

1

Ye, Jieping, and Tie Wang. "Regularized discriminant analysis for high dimensional, low sample size data." In the 12th ACM SIGKDD international conference. ACM Press, 2006. http://dx.doi.org/10.1145/1150402.1150453.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Bo, Ying Wei, Yu Zhang, and Qiang Yang. "Deep Neural Networks for High Dimension, Low Sample Size Data." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/318.

Full text
Abstract:
Deep neural networks (DNN) have achieved breakthroughs in applications with large sample size. However, when facing high dimension, low sample size (HDLSS) data, such as the phenotype prediction problem using genetic data in bioinformatics, DNN suffers from overfitting and high-variance gradients. In this paper, we propose a DNN model tailored for the HDLSS data, named Deep Neural Pursuit (DNP). DNP selects a subset of high dimensional features for the alleviation of overfitting and takes the average over multiple dropouts to calculate gradients with low variance. As the first DNN method applied to HDLSS data, DNP enjoys the advantages of high nonlinearity, robustness to high dimensionality, the capability of learning from a small number of samples, stability in feature selection, and end-to-end training. We demonstrate these advantages of DNP via empirical results on both synthetic and real-world biological datasets. (A minimal sketch of the gradient-averaging idea follows this entry.)
APA, Harvard, Vancouver, ISO, and other styles
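The gradient trick highlighted in the abstract above, averaging over multiple dropout passes to reduce gradient variance, can be sketched briefly. The sketch below is a minimal PyTorch illustration under assumed names and shapes; it is not the authors' Deep Neural Pursuit implementation, and it omits DNP's incremental feature-selection stage.

import torch

def averaged_dropout_gradients(model, loss_fn, x, y, n_passes=10):
    # Estimate gradients as the average over several stochastic dropout
    # passes, reducing gradient variance on small-sample (HDLSS) batches.
    model.train()  # keep dropout active across passes
    accum = [torch.zeros_like(p) for p in model.parameters()]
    for _ in range(n_passes):
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for a, p in zip(accum, model.parameters()):
            a.add_(p.grad, alpha=1.0 / n_passes)
    for a, p in zip(accum, model.parameters()):
        p.grad = a  # expose the averaged estimate to the optimizer
    return accum

# Toy usage with HDLSS-like shapes (20 samples, 500 features; all assumed).
model = torch.nn.Sequential(
    torch.nn.Linear(500, 32), torch.nn.ReLU(),
    torch.nn.Dropout(0.5), torch.nn.Linear(32, 2))
x, y = torch.randn(20, 500), torch.randint(0, 2, (20,))
averaged_dropout_gradients(model, torch.nn.functional.cross_entropy, x, y)
torch.optim.SGD(model.parameters(), lr=0.1).step()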
3

He, Qi, Yao Lee, Dongmei Huang, Shengqi He, Wei Song, and Yanling Du. "Multi-modal Remote Sensing Image Classification for Low Sample Size Data." In 2018 International Joint Conference on Neural Networks (IJCNN). IEEE, 2018. http://dx.doi.org/10.1109/ijcnn.2018.8489351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Langenborg, Erik, Kevin Sun, Lingfeng Cao, Christopher Overall, and Abigail Flower. "Gene set creation algorithm for microarray studies with low sample size." In 2017 Systems and Information Engineering Design Symposium (SIEDS). IEEE, 2017. http://dx.doi.org/10.1109/sieds.2017.7937725.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mestre, Xavier, Ben A. Johnson, and Yuri I. Abramovich. "Source Power Estimation for Array Processing Applications under Low Sample Size Constraints." In 2007 IEEE International Conference on Acoustics, Speech and Signal Processing - ICASSP '07. IEEE, 2007. http://dx.doi.org/10.1109/icassp.2007.366381.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Tiomoko, Malik, and Romain Couillet. "Estimation of Covariance Matrix Distances in the High Dimension Low Sample Size Regime." In 2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP). IEEE, 2019. http://dx.doi.org/10.1109/camsap45676.2019.9022663.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chowdhury, Shanta, Xishuang Dong, and Xiangfang Li. "Recurrent Neural Network Based Feature Selection for High Dimensional and Low Sample Size Micro-array Data." In 2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019. http://dx.doi.org/10.1109/bigdata47090.2019.9006432.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dernoncourt, David, Blaise Hanczar, and Jean-Daniel Zucker. "Experimental analysis of feature selection stability for high-dimension and low-sample size gene expression classification task." In 2012 IEEE 12th International Conference on Bioinformatics & Bioengineering (BIBE). IEEE, 2012. http://dx.doi.org/10.1109/bibe.2012.6399649.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Filippi, R. G., C. Christiansen, B. Li, et al. "Electromigration Results with Large Sample Size for Dual Damascene Structures in a Copper/CVD Low-k Dielectric Technology." In Proceedings of the IEEE 2006 International Interconnect Technology Conference. IEEE, 2006. http://dx.doi.org/10.1109/iitc.2006.1648657.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sato-Ilic, Mika. "Structural classification based correlation and its application to principal component analysis for high-dimension low-sample size data." In 2012 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). IEEE, 2012. http://dx.doi.org/10.1109/fuzz-ieee.2012.6251200.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Low sample size"

1

Malej, Matt, and Fengyan Shi. Suppressing the pressure-source instability in modeling deep-draft vessels with low under-keel clearance in FUNWAVE-TVD. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/40639.

Full text
Abstract:
This Coastal and Hydraulics Engineering Technical Note (CHETN) documents the development through verification and validation of three instability-suppressing mechanisms in FUNWAVE-TVD, a Boussinesq-type numerical wave model, when modeling deep-draft vessels with a low under-keel clearance (UKC). Many large commercial ports and channels (e.g., Houston Ship Channel, Galveston, US Army Corps of Engineers [USACE]) are traveled and affected by tens of thousands of commercial vessel passages per year. In a series of recent projects undertaken for the Galveston District (USACE), it was discovered that when deep-draft vessels are modeled using pressure-source mechanisms, they can suffer from model instabilities when low UKC is employed (e.g., vessel draft of 12 m in a channel of 15 m or less of depth), rendering a simulation unstable and obsolete. As an increasingly large number of deep-draft vessels are put into service, this problem is becoming more severe. This presents an operational challenge when modeling large container-type vessels in busy shipping channels, as these often will come as close as 1 m to the bottom of the channel, or even touch the bottom. This behavior would subsequently exhibit a numerical discontinuity in a given model and could severely limit the sample size of modeled vessels. This CHETN outlines a robust approach to suppressing such instability without compromising the integrity of the far-field vessel wave/wake solution. The three methods developed in this study aim to suppress high-frequency spikes generated near the vessel. They are a shock-capturing method, a friction method, and a viscosity method, respectively. The tests show that the combined shock-capturing and friction method is the most effective method to suppress the local high-frequency noises, while not affecting the far-field solution. A strong test, in which the target draft is larger than the channel depth, shows that there are no high-frequency noises generated in the case of ship squat as long as the shock-capturing method is used.
APA, Harvard, Vancouver, ISO, and other styles
2

McDonagh, Marian, Andrea C. Skelly, Amy Hermesch, et al. Cervical Ripening in the Outpatient Setting. Agency for Healthcare Research and Quality (AHRQ), 2021. http://dx.doi.org/10.23970/ahrqepccer238.

Full text
Abstract:
Objectives. To assess the comparative effectiveness and potential harms of cervical ripening in the outpatient setting (vs. inpatient, vs. other outpatient intervention) and of fetal surveillance when a prostaglandin is used for cervical ripening. Data sources. Electronic databases (Ovid® MEDLINE®, Embase®, CINAHL®, Cochrane Central Register of Controlled Trials, and Cochrane Database of Systematic Reviews) to July 2020; reference lists; and a Federal Register notice. Review methods. Using predefined criteria and dual review, we selected randomized controlled trials (RCTs) and cohort studies of cervical ripening comparing prostaglandins and mechanical methods in outpatient versus inpatient settings; one outpatient method versus another (including placebo or expectant management); and different methods/protocols for fetal surveillance in cervical ripening using prostaglandins. When data from similar study designs, populations, and outcomes were available, random effects using profile likelihood meta-analyses were conducted. Inconsistency (using I²) and small sample size bias (publication bias, if ≥10 studies) were assessed. Strength of evidence (SOE) was assessed. All review methods followed Agency for Healthcare Research and Quality Evidence-based Practice Center methods guidance. Results. We included 30 RCTs and 10 cohort studies (73% fair quality) involving 9,618 women. The evidence is most applicable to women aged 25 to 30 years with singleton, vertex presentation and low-risk pregnancies. No studies on fetal surveillance were found. The frequency of cesarean delivery (2 RCTs, 4 cohort studies) or suspected neonatal sepsis (2 RCTs) was not significantly different using outpatient versus inpatient dinoprostone for cervical ripening (SOE: low). In comparisons of outpatient versus inpatient single-balloon catheters (3 RCTs, 2 cohort studies), differences between groups on cesarean delivery, birth trauma (e.g., cephalohematoma), and uterine infection were small and not statistically significant (SOE: low), and while shoulder dystocia occurred less frequently in the outpatient group (1 RCT; 3% vs. 11%), the difference was not statistically significant (SOE: low). In comparing outpatient catheters and inpatient dinoprostone (1 double-balloon and 1 single-balloon RCT), the difference between groups for both cesarean delivery and postpartum hemorrhage was small and not statistically significant (SOE: low). Evidence on other outcomes in these comparisons and for misoprostol, double-balloon catheters, and hygroscopic dilators was insufficient to draw conclusions. In head to head comparisons in the outpatient setting, the frequency of cesarean delivery was not significantly different between 2.5 mg and 5 mg dinoprostone gel, or latex and silicone single-balloon catheters (1 RCT each, SOE: low). Differences between prostaglandins and placebo for cervical ripening were small and not significantly different for cesarean delivery (12 RCTs), shoulder dystocia (3 RCTs), or uterine infection (7 RCTs) (SOE: low). These findings did not change according to the specific prostaglandin, route of administration, study quality, or gestational age. Small, nonsignificant differences in the frequency of cesarean delivery (6 RCTs) and uterine infection (3 RCTs) were also found between dinoprostone and either membrane sweeping or expectant management (SOE: low). These findings did not change according to the specific prostaglandin or study quality. Evidence on other comparisons (e.g., single-balloon catheter vs. dinoprostone) or other outcomes was insufficient. For all comparisons, there was insufficient evidence on other important outcomes such as perinatal mortality and time from admission to vaginal birth. Limitations of the evidence include the quantity, quality, and sample sizes of trials for specific interventions, particularly rare harm outcomes. Conclusions. In women with low-risk pregnancies, the risk of cesarean delivery and fetal, neonatal, or maternal harms using either dinoprostone or single-balloon catheters was not significantly different for cervical ripening in the outpatient versus inpatient setting, and similar when compared with placebo, expectant management, or membrane sweeping in the outpatient setting. This evidence is low strength, and future studies are needed to confirm these findings.
APA, Harvard, Vancouver, ISO, and other styles
3

Bhattarai, Rabin, Yufan Zhang, and Jacob Wood. Evaluation of Various Perimeter Barrier Products. Illinois Center for Transportation, 2021. http://dx.doi.org/10.36501/0197-9191/21-009.

Full text
Abstract:
Construction activities entail substantial disturbance of topsoil and vegetative cover. As a result, stormwater runoff and erosion rates are increased significantly. If the soil erosion and subsequently generated sediment are not contained within the site, they would have a negative off-site impact as well as a detrimental influence on the receiving water body. In this study, replicable large-scale tests were used to analyze the ability of products to prevent sediment from exiting the perimeter of a site via sheet flow. The goal of these tests was to compare products to examine how well they retain sediment and how much ponding occurs upstream, as well as other criteria of interest to the Illinois Department of Transportation. The products analyzed were silt fence, woven monofilament geotextile, Filtrexx Siltsoxx, ERTEC ProWattle, triangular silt dike, sediment log, coconut coir log, Siltworm, GeoRidge, straw wattles, and Terra-Tube. Joint tests and vegetated buffer strip tests were also conducted. The duration of each test was 30 minutes, and 116 pounds of clay-loam soil were mixed with water in a 300 gallon tank. The solution was continuously mixed throughout the test. The sediment-water slurry was uniformly discharged over an 8 ft by 20 ft impervious 3:1 slope. The bottom of the slope had a permeable zone (8 ft by 8 ft) constructed from the same soil used in the mixing. The product was installed near the center of this zone. Water samples were collected at 5 minute intervals upstream and downstream of the product. These samples were analyzed for total sediment concentration to determine the effectiveness of each product. The performance of each product was evaluated in terms of sediment removal, ponding, ease of installation, and sustainability.
APA, Harvard, Vancouver, ISO, and other styles
4

Akinleye, Taiwo, Idil Deniz Akin, Amanda Hohner, et al. Evaluation of Electrochemical Treatment for Removal of Arsenic and Manganese from Field Soil. Illinois Center for Transportation, 2021. http://dx.doi.org/10.36501/0197-9191/21-019.

Full text
Abstract:
Soils containing inorganic compounds are frequently encountered by transportation agencies during construction within the right-of-way, and they pose a threat to human health and the environment. As a result, construction activities may experience project delays and increased costs associated with management of inorganic compounds containing soils required to meet environmental regulations. Recalcitrance of metal-contaminated soils toward conventional treatment technologies is exacerbated in clay or organic content-rich fine-grained soils with low permeability and high sorption capacity because of increased treatment complexity, cost, and duration. The objective of this study was to develop an accelerated in situ electrochemical treatment approach to extract inorganic compounds from fine-grained soils, with the treatment time comparable to excavation and off-site disposal. Three reactor experiments were conducted on samples collected from two borehole locations from a field site in Illinois that contained arsenic (As)(~7.4 mg/kg) and manganese (Mn)(~700 mg/kg). A combination of hydrogen peroxide (H2O2) and/or citrate buffer solution was used to treat the soils. A low-intensity electrical field was applied to soil samples using a bench-scale reactor that resembles field-scale in situ electrochemical systems. For the treatment using 10% H2O2 and citrate buffer solution, average removal of 23% and 8% were achieved for Mn and As, respectively. With 4% H2O2 and citrate buffer, 39% and 24% removal were achieved for Mn and As; while using only citrate buffer as the electrolyte, 49% and 9% removal were achieved for Mn and As, respectively. All chemical regimes adopted in this study reduced the inorganic compound concentrations to below the maximum allowable concentration for Illinois as specified by the Illinois Environmental Protection Agency. The results from this work indicate that electrochemical systems that leverage low concentrations of hydrogen peroxide and citrate buffer can be effective for remediating soils containing manganese and arsenic.
APA, Harvard, Vancouver, ISO, and other styles
5

Fehey, Kristina, and Dustin Perkins. Invasive exotic plant monitoring in Capitol Reef National Park: 2019 field season, Scenic Drive and Cathedral Valley Road. Edited by Alice Wondrak Biel. National Park Service, 2021. http://dx.doi.org/10.36967/nrr-2286627.

Full text
Abstract:
Invasive exotic plant (IEP) species are a significant threat to natural ecosystem integrity and biodiversity, and controlling them is a high priority for the National Park Service. The Northern Colorado Plateau Network (NCPN) selected the early detection of IEPs as one of 11 monitoring protocols to be implemented as part of its long-term monitoring program. From May 30 to June 1, 2019, network staff conducted surveys for priority IEP species along the Scenic Drive and Cathedral Valley Road monitoring routes at Capitol Reef National Park. We detected 119 patches of six priority IEP species along 34 kilometers of the two monitoring routes. There were more patches of IEPs, and a higher percentage of large patches, than in previous years. This indicates that previously identified infestations have expanded and grown. The most common (47.1%) patch size among priority species was 1,000–2,000 m2 (0.25–0.5 acre). The vast majority (93.2%) of priority patches ranked either low (58.8%) or very low (34.4%) on the patch management index scale. Tamarisk (Tamarix sp., 72 patches) was the most prevalent priority IEP species. African mustard (Malcolmia africana, 32 patches), field bindweed (Convolvulus arvensis, 9 patches), and Russian olive (Elaeagnus angustifolia, 3 patches) occurred less commonly. Together, these four species represented 97.5% of all patches recorded in 2019. Four IEP species were found on the monitored routes for the first time: Russian olive (Elaeagnus angustifolia), quackgrass (Elymus repens), Siberian elm (Ulmus pumila), and African mustard (Malcolmia africana, not on the priority species list before 2019). Cathedral Valley Road had more IEP priority patches per kilometer (5.68) than the Scenic Drive (2.05). IEP species were found on 37.9% (25 of 66) of monitored transects. Almost all these detections were Russian thistle (Salsola sp.). Russian thistle was widespread, present in 33.3% of transects, with an estimated cover of 0.2% across all transects sampled. Across routes monitored in all three rotations (2012, 2015, and 2019), Russian thistle has increased in frequency. However, its frequency remained about the same from 2015 to 2019, and percent cover remains low. Tamarisk and field bindweed have both increased in prevalence since monitoring began, with tamarisk showing a dramatic increase in the number and size of patches. Immediate control of tamarisk and these other species is recommended to reduce their numbers on these routes. The NCPN plans to return to Capitol Reef in 2020 to monitor Oak and Pleasant creeks, completing the third rotation of invasive plant monitoring.
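Two of the summary statistics above, patches per kilometer and transect detection frequency, are plain ratios. A sketch using only the aggregate totals quoted in the abstract (119 patches over 34 km; detections on 25 of 66 transects):

```python
# Sketch: route- and transect-level summaries of the kind reported above.
# Only the aggregate totals are taken from the abstract.

def patches_per_km(n_patches: int, route_km: float) -> float:
    return n_patches / route_km

def detection_frequency_pct(hits: int, transects: int) -> float:
    return 100.0 * hits / transects

print(f"Both routes combined: {patches_per_km(119, 34.0):.2f} patches/km")
print(f"Transects with IEPs: {detection_frequency_pct(25, 66):.1f}%")  # ~37.9%
```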
APA, Harvard, Vancouver, ISO, and other styles
6

Perkins, Dustin. Invasive exotic plant monitoring at Colorado National Monument: 2019 field season. Edited by Alice Wondrak Biel. National Park Service, 2021. http://dx.doi.org/10.36967/nrr-2286650.

Full text
Abstract:
Invasive exotic plant (IEP) species are a significant threat to natural ecosystem integrity and biodiversity, and controlling them is a high priority for the National Park Service. The Northern Colorado Plateau Network (NCPN) selected the early detection of IEPs as one of 11 monitoring protocols to be implemented as part of its long-term monitoring program. This report represents work completed at Colorado National Monument during 2019. During monitoring conducted June 12–19, a total of 20 IEP species were detected on monitoring routes and transects. Of these, 12 were priority species that accounted for 791 separate IEP patches. IEPs were most prevalent along riparian areas. Yellow sweetclover (Melilotus officinalis) and yellow salsify (Tragopogon dubius) were the most commonly detected priority IEPs along monitoring routes, representing 73% of all priority patches. Patches of less than 40 m2 were typical of nearly all priority IEP species except yellow sweetclover. A patch management index (PMI) was created by combining patch size class and percent cover for each patch. In 2019, a large majority of priority IEP patches were assigned a PMI score of low (46%) or very low (50%), indicating small and/or sparse patches where control is generally still feasible. This is similar to the numbers for 2017, when 99% of patches scored low or very low in PMI. Seventy-eight percent of tree patches were classified as seedlings or saplings, which require less effort to control than mature trees. Cheatgrass (Anisantha tectorum) was the most common IEP recorded in transects, found in 30–77% of transects across the different routes. It was the only species found in transects on all monitoring routes. When treated and untreated extra areas near the West Entrance were compared, the treated area had comparable or higher levels of IEPs than the untreated area. When segments of monitoring routes conducted between 2003 and 2019 were compared, results were mixed, due to the different species monitored in different time periods. But in general, the number of IEPs per 100 meters is increasing or remaining constant over time. There were notable increases in IEP patches per 100 meters on several routes in 2019: field bindweed (Convolvulus arvensis) along East Glade Park Road; Siberian elm (Ulmus pumila) in Red Canyon; yellow salsify along East Glade Park Road, No Thoroughfare Canyon, No Thoroughfare Trail, and Red Canyon; and yellow sweetclover in No Thoroughfare Canyon and Red Canyon. Network staff will return to re-sample monitoring routes in 2021.
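The patch management index (PMI) mentioned here combines a patch's size class with its percent cover. The NCPN protocol's exact rubric is not given in the abstract, so the scoring below is a hypothetical stand-in that only illustrates the idea of binning a combined size/cover score into priority levels:

```python
# Sketch of a PMI-style score: size class times cover class, binned into
# priority levels. The cutoffs and weights are illustrative, not the
# NCPN protocol's actual rubric.

def pmi(area_m2: float, pct_cover: float) -> str:
    size_score = 1 if area_m2 < 40 else 2 if area_m2 < 1000 else 3
    cover_score = 1 if pct_cover < 5 else 2 if pct_cover < 25 else 3
    total = size_score * cover_score
    if total <= 2:
        return "very low"
    if total <= 4:
        return "low"
    if total <= 6:
        return "moderate"
    return "high"

for area, cover in [(25, 2), (500, 10), (1500, 40)]:
    print(f"{area} m2 at {cover}% cover -> PMI {pmi(area, cover)}")
```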
APA, Harvard, Vancouver, ISO, and other styles
7

Roschelle, Jeremy, Britte Haugan Cheng, Nicola Hodkowski, Julie Neisler, and Lina Haldar. Evaluation of an Online Tutoring Program in Elementary Mathematics. Digital Promise, 2020. http://dx.doi.org/10.51388/20.500.12265/94.

Full text
Abstract:
Many students struggle with mathematics in late elementary school, particularly on the topic of fractions. In a best-evidence synthesis of research on increasing achievement in elementary school mathematics, Pelligrini et al. (2018) highlighted tutoring as a way to help students. Online tutoring is attractive because costs may be lower and logistics easier than with face-to-face tutoring. Cignition developed an approach that combines online 1:1 tutoring with a fractions game, called FogStone Isle. The game provides students with additional learning opportunities and provides tutors with information that they can use to plan tutoring sessions. A randomized controlled trial investigated the research question: Do students who participate in online tutoring and a related mathematical game learn more about fractions than students who only have access to the game? Participants were 144 students from four schools, all serving low-income students with low prior mathematics achievement. In the Treatment condition, students received 20–25-minute tutoring sessions twice per week for an average of 18 sessions and also played the FogStone Isle game. In the Control condition, students had access to the game, but did not play it often. Control students did not receive tutoring. Students were randomly assigned to condition after being matched on pre-test scores. The same diagnostic assessment was used as a pre-test and as a post-test. The planned analysis looked for differences in gain scores (post-test minus pre-test scores) between conditions. We conducted a t-test on the aggregate gain scores, comparing conditions; the results were statistically significant (t = 4.0545, df = 132.66, p-value < .001). To determine an effect size, we treated each site as a study in a meta-analysis. Using gain scores, the effect size was g = +.66. A more sophisticated treatment of the pooled standard deviation resulted in a corrected effect size of g = .46 with a 95% confidence interval of [+.23, +.70]. Students who received online tutoring and played the related FogStone Isle game learned more; our research found the approach to be efficacious. The Pelligrini et al. (2018) meta-analysis of elementary math tutoring programs found g = .26 and was based largely on face-to-face tutoring studies. Thus, this study compares favorably to prior research on face-to-face mathematics tutoring with elementary students. Limitations are discussed; in particular, this is an initial study of an intervention under development. Effects could increase or decrease as development continues and the program scales. Although this study was planned long before the current pandemic, results are particularly timely now that many students are at home under shelter-in-place orders due to COVID-19. The approach taken here is feasible for students at home, with tutors supporting them from a distance. It is also feasible in many other situations where equity could be addressed directly by supporting students via online tutors.
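The analysis chain described here (gain scores, a t-test between conditions, and a small-sample-corrected effect size such as Hedges' g) can be sketched in a few lines; the fractional df of 132.66 suggests a Welch test. The data below are simulated, since the study's raw scores are not public.

```python
# Sketch: gain scores, Welch's t-test, and Hedges' g on simulated data.
# Roughly 72 students per condition, mirroring the study's n = 144.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_t = rng.normal(50, 10, 72); post_t = pre_t + rng.normal(8, 6, 72)  # treatment
pre_c = rng.normal(50, 10, 72); post_c = pre_c + rng.normal(3, 6, 72)  # control

gain_t, gain_c = post_t - pre_t, post_c - pre_c

# Welch's t-test (unequal variances), which yields fractional df.
t, p = stats.ttest_ind(gain_t, gain_c, equal_var=False)

# Hedges' g: Cohen's d with a small-sample bias correction.
n1, n2 = len(gain_t), len(gain_c)
sp = np.sqrt(((n1 - 1) * gain_t.var(ddof=1) + (n2 - 1) * gain_c.var(ddof=1))
             / (n1 + n2 - 2))
d = (gain_t.mean() - gain_c.mean()) / sp
g = d * (1 - 3 / (4 * (n1 + n2) - 9))

print(f"t = {t:.3f}, p = {p:.4f}, Hedges' g = {g:.2f}")
```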
APA, Harvard, Vancouver, ISO, and other styles
8

McCarthy, Noel, Eileen Taylor, Martin Maiden, et al. Enhanced molecular-based (MLST/whole genome) surveillance and source attribution of Campylobacter infections in the UK. Food Standards Agency, 2021. http://dx.doi.org/10.46756/sci.fsa.ksj135.

Full text
Abstract:
This human campylobacteriosis sentinel surveillance project was based at two sites in Oxfordshire and North East England, chosen (i) to be representative of the English population on the Office for National Statistics urban-rural classification and (ii) to provide continuity with genetic surveillance started in Oxfordshire in October 2003. Between October 2015 and September 2018, epidemiological questionnaires and genome sequencing of isolates from human cases were accompanied by sampling and genome sequencing of isolates from possible food animal sources. The principal aim was to estimate the contributions of the main sources of human infection and to identify any changes over time. An extension to the project focussed on antimicrobial resistance in study isolates and older archived isolates. These older isolates were from earlier years at the Oxfordshire site and the earliest available coherent set of isolates from the national archive at Public Health England (1997/8). The aim of this additional work was to analyse the emergence of the antimicrobial resistance that is now present among human isolates and to describe and compare antimicrobial resistance in recent food animal isolates. Having identified the presence of bias in population genetic attribution, and that this was not addressed in the published literature, this study developed an approach to adjust for bias in population genetic attribution, and an alternative approach to attribution using sentinel types. Using these approaches, the study estimated that approximately 70% of Campylobacter jejuni and just under 50% of C. coli infection in our sample was linked to the chicken source, and that this was relatively stable over time. Ruminants were identified as the second most common source for C. jejuni and the most common for C. coli, where there was also some evidence for pig as a source, although less common than ruminant or chicken. These genomic attributions of themselves make no inference on routes of transmission. However, those infected with isolates genetically typical of chicken origin were substantially more likely to have eaten chicken than those infected with ruminant types. Consumption of lamb's liver was very strongly associated with infection by a strain genetically typical of a ruminant source. These findings support consumption of these foods as being important in the transmission of these infections and highlight a potentially important role for lamb's liver consumption as a source of Campylobacter infection. Antimicrobial resistance was predicted from genomic data using a pipeline validated by Public Health England and using BIGSdb software. In C. jejuni this showed a nine-fold increase in resistance to fluoroquinolones from 1997 to 2018. Tetracycline resistance was also common, with higher initial resistance (1997) and less substantial change over time. Resistance to aminoglycosides or macrolides remained low in human cases across all time periods. Among C. jejuni food animal isolates, fluoroquinolone resistance was common among isolates from chicken and substantially less common among ruminants, ducks or pigs. Tetracycline resistance was common across chicken, duck and pig but lower among ruminant origin isolates. In C. coli, resistance to all four antimicrobial classes rose from low levels in 1997.
The fluoroquinolone rise appears to have levelled off earlier and, among animals, levels are high in duck as well as chicken isolates, although this is based on small sample sizes. Macrolide and aminoglycoside resistance in C. coli was substantially higher than for C. jejuni among humans, and highest among pig origin isolates. Tetracycline resistance is high in isolates from pigs and the very small sample from ducks. Antibiotic use following diagnosis was relatively high (43.4%) among respondents in the human surveillance study. Moreover, it varied substantially across sites and was highest among non-elderly adults compared to older adults or children, suggesting opportunities for improved antimicrobial stewardship. The study also found evidence for stable lineages over time across human and source animal species, as well as some tighter genomic clusters that may represent outbreaks. The genomic dataset will allow extensive further work beyond the specific goals of the study. It has been made accessible on the web, with access supported by data visualisation tools.
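Where the report leans on small samples (duck and pig isolates, for instance), an interval estimate conveys the uncertainty better than a bare percentage. Below is a sketch using Wilson score intervals, which behave better than the normal approximation at small n; the counts are illustrative, chosen only to mirror the nine-fold rise in C. jejuni fluoroquinolone resistance noted above.

```python
# Sketch: resistance prevalence with 95% Wilson score intervals.
# Counts are illustrative, not the study's actual isolate numbers.

from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

for year, k, n in [(1997, 4, 80), (2018, 36, 80)]:
    lo, hi = wilson_ci(k, n)
    print(f"{year}: {k}/{n} resistant = {k/n:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```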
APA, Harvard, Vancouver, ISO, and other styles
9

Vargas-Herrera, Hernando, Juan Jose Ospina-Tejeiro, Carlos Alfonso Huertas-Campos, et al. Monetary Policy Report - April de 2021. Banco de la República de Colombia, 2021. http://dx.doi.org/10.32468/inf-pol-mont-eng.tr2-2021.

Full text
Abstract:
1.1 Macroeconomic summary. Economic recovery has consistently outperformed the technical staff's expectations following a steep decline in activity in the second quarter of 2020. At the same time, total and core inflation rates have fallen and remain at low levels, suggesting that a significant element of the reactivation of Colombia's economy has been related to recovery in potential GDP. This would support the technical staff's diagnosis of weak aggregate demand and ample excess capacity. The most recently available data on 2020 growth suggests a contraction in economic activity of 6.8%, lower than estimates from January's Monetary Policy Report (-7.2%). High-frequency indicators suggest that economic performance was significantly more dynamic than expected in January, despite mobility restrictions and quarantine measures. This has also come amid declines in total and core inflation, the latter of which was below January projections when controlling for certain relative price changes. This suggests that the unexpected strength of recent growth contains elements of demand, and that excess capacity, while significant, could be lower than previously estimated. Nevertheless, uncertainty over the measurement of excess capacity continues to be unusually high and marked both by variations in the way different economic sectors and spending components have been affected by the pandemic, and by uneven price behavior. The size of excess capacity, and in particular the evolution of the pandemic in forthcoming quarters, constitute substantial risks to the macroeconomic forecast presented in this report. Despite the unexpected strength of the recovery, the technical staff continues to project ample excess capacity that is expected to remain on the forecast horizon, alongside core inflation that will likely remain below the target. Domestic demand remains below 2019 levels amid unusually significant uncertainty over the size of excess capacity in the economy. High national unemployment (14.6% for February 2021) reflects a loose labor market, while observed total and core inflation continue to be below 2%. Inflationary pressures from the exchange rate are expected to continue to be low, with relatively little pass-through on inflation. This would be compatible with a negative output gap. Excess productive capacity and the expectation of core inflation below the 3% target on the forecast horizon provide a basis for an expansive monetary policy posture. The technical staff's assessment of certain shocks and their expected effects on the economy, as well as the presence of several sources of uncertainty and related assumptions about their potential macroeconomic impacts, remain a feature of this report. The coronavirus pandemic, in particular, continues to affect the public health environment, and the reopening of Colombia's economy remains incomplete. The technical staff's assessment is that the COVID-19 shock has affected both aggregate demand and supply, but that the impact on demand has been deeper and more persistent. Given this persistence, the central forecast accounts for a gradual tightening of the output gap in the absence of new waves of contagion, and as vaccination campaigns progress. The central forecast continues to include an expected increase of total and core inflation rates in the second quarter of 2021, alongside the lapse of the temporary price relief measures put in place in 2020.
Additional COVID-19 outbreaks (of uncertain duration and intensity) represent a significant risk factor that could affect these projections. Additionally, the forecast continues to include an upward trend in sovereign risk premiums, reflected by higher levels of public debt that in the wake of the pandemic are likely to persist on the forecast horizon, even in the context of a fiscal adjustment. At the same time, the projection accounts for the short-term effects on private domestic demand from a fiscal adjustment along the lines of the one currently being proposed by the national government. This would be compatible with a gradual recovery of private domestic demand in 2022. The size and characteristics of the fiscal adjustment that is ultimately implemented, as well as the corresponding market response, represent another source of forecast uncertainty. Newly available information offers evidence of the potential for significant changes to the macroeconomic scenario, though without altering the general diagnosis described above. The most recent data on inflation, growth, fiscal policy, and international financial conditions suggests a more dynamic economy than previously expected. However, a third wave of the pandemic has delayed the re-opening of Colombia's economy and brought with it a deceleration in economic activity. Detailed descriptions of these considerations and subsequent changes to the macroeconomic forecast are presented below. The expected annual decline in GDP (-0.3%) in the first quarter of 2021 appears to have been less pronounced than projected in January (-4.8%). Partial closures in January to address a second wave of COVID-19 appear to have had a less significant negative impact on the economy than previously estimated. This is reflected in figures related to mobility, energy demand, industry and retail sales, foreign trade, commercial transactions from selected banks, and the national statistics agency's (DANE) economic tracking indicator (ISE). Output is now expected to have declined annually in the first quarter by 0.3%. Private consumption likely continued to recover, registering levels somewhat above those from the previous year, while public consumption likely increased significantly. While a recovery in investment in both housing and in other buildings and structures is expected, overall investment levels in this case likely continued to be low, and gross fixed capital formation is expected to continue to show significant annual declines. Imports likely recovered to again outpace exports, though both are expected to register significant annual declines. Economic activity that outpaced projections, an increase in the prices of oil and other export products, and an expected increase in public spending this year account for the upward revision to the 2021 growth forecast (from 4.6% with a range between 2% and 6% in January, to 6.0% with a range between 3% and 7% in April). As a result, the output gap is expected to be smaller and to tighten more rapidly than projected in the previous report, though it is still expected to remain in negative territory on the forecast horizon. Wide forecast intervals reflect the fact that the future evolution of the COVID-19 pandemic remains a significant source of uncertainty on these projections. The delay in the recovery of economic activity as a result of the resurgence of COVID-19 in the first quarter appears to have been less significant than projected in the January report.
The central forecast scenario expects this improved performance to continue in 2021 alongside increased consumer and business confidence. Low real interest rates and an active credit supply would also support this dynamic, and the overall conditions would be expected to spur a recovery in consumption and investment. Increased growth in public spending and public works based on the national government’s spending plan (Plan Financiero del Gobierno) are other factors to consider. Additionally, an expected recovery in global demand and higher projected prices for oil and coffee would further contribute to improved external revenues and would favor investment, in particular in the oil sector. Given the above, the technical staff’s 2021 growth forecast has been revised upward from 4.6% in January (range from 2% to 6%) to 6.0% in April (range from 3% to 7%). These projections account for the potential for the third wave of COVID-19 to have a larger and more persistent effect on the economy than the previous wave, while also supposing that there will not be any additional significant waves of the pandemic and that mobility restrictions will be relaxed as a result. Economic growth in 2022 is expected to be 3%, with a range between 1% and 5%. This figure would be lower than projected in the January report (3.6% with a range between 2% and 6%), due to a higher base of comparison given the upward revision to expected GDP in 2021. This forecast also takes into account the likely effects on private demand of a fiscal adjustment of the size currently being proposed by the national government, and which would come into effect in 2022. Excess in productive capacity is now expected to be lower than estimated in January but continues to be significant and affected by high levels of uncertainty, as reflected in the wide forecast intervals. The possibility of new waves of the virus (of uncertain intensity and duration) represents a significant downward risk to projected GDP growth, and is signaled by the lower limits of the ranges provided in this report. Inflation (1.51%) and inflation excluding food and regulated items (0.94%) declined in March compared to December, continuing below the 3% target. The decline in inflation in this period was below projections, explained in large part by unanticipated increases in the costs of certain foods (3.92%) and regulated items (1.52%). An increase in international food and shipping prices, increased foreign demand for beef, and specific upward pressures on perishable food supplies appear to explain a lower-than-expected deceleration in the consumer price index (CPI) for foods. An unexpected increase in regulated items prices came amid unanticipated increases in international fuel prices, on some utilities rates, and for regulated education prices. The decline in annual inflation excluding food and regulated items between December and March was in line with projections from January, though this included downward pressure from a significant reduction in telecommunications rates due to the imminent entry of a new operator. When controlling for the effects of this relative price change, inflation excluding food and regulated items exceeds levels forecast in the previous report. 
Within this indicator of core inflation, the CPI for goods (1.05%) accelerated due to a reversion of the effects of the VAT-free day in November, which was largely accounted for in February, and possibly by the transmission of a recent depreciation of the peso on domestic prices for certain items (electric and household appliances). For their part, services prices decelerated and showed the lowest rate of annual growth (0.89%) among the large consumer baskets in the CPI. Within the services basket, the annual change in rental prices continued to decline, while those services that continue to experience the most significant restrictions on returning to normal operations (tourism, cinemas, nightlife, etc.) continued to register significant price declines. As previously mentioned, telephone rates also fell significantly due to increased competition in the market. Total inflation is expected to continue to be affected by ample excesses in productive capacity for the remainder of 2021 and 2022, though less so than projected in January. As a result, convergence to the inflation target is now expected to be somewhat faster than estimated in the previous report, assuming the absence of significant additional outbreaks of COVID-19. The technical staff's year-end inflation projections for 2021 and 2022 have increased, suggesting figures around 3% due largely to variation in food and regulated items prices. The projection for inflation excluding food and regulated items also increased, but remains below 3%. Price relief measures on indirect taxes implemented in 2020 are expected to lapse in the second quarter of 2021, generating a one-off effect on prices and temporarily affecting inflation excluding food and regulated items. However, indexation to low levels of past inflation, weak demand, and ample excess productive capacity are expected to keep core inflation below the target, near 2.3% at the end of 2021 (previously 2.1%). The reversion in 2021 of the effects of some price relief measures on utility rates from 2020 should lead to an increase in the CPI for regulated items in the second half of this year. Annual price changes are now expected to be higher than estimated in the January report due to an increased expected path for fuel prices and unanticipated increases in regulated education prices. The projection for the CPI for foods has increased compared to the previous report, taking into account certain factors that were not anticipated in January (a less favorable agricultural cycle, increased pressure from international prices, and transport costs). Given the above, year-end annual inflation for 2021 and 2022 is now expected to be 3% and 2.8%, respectively, which would be above projections from January (2.3% and 2.7%). For its part, expected inflation based on analyst surveys suggests year-end inflation in 2021 and 2022 of 2.8% and 3.1%, respectively. There remains significant uncertainty surrounding the inflation forecasts included in this report due to several factors: (1) the evolution of the pandemic; (2) the difficulty in evaluating the size and persistence of excess productive capacity; (3) the timing and manner in which price relief measures will lapse; and (4) the future behavior of food prices. Projected 2021 growth in foreign demand (4.4% to 5.2%) and the assumed average oil price (USD 53 to USD 61 per Brent benchmark barrel) were both revised upward.
An increase in long-term international interest rates has been reflected in a depreciation of the peso and could result in relatively tighter external financial conditions for emerging market economies, including Colombia. Average growth among Colombia's trade partners was greater than expected in the fourth quarter of 2020. This, together with a sizable fiscal stimulus approved in the United States and the onset of a massive global vaccination campaign, largely explains the projected increase in foreign demand growth in 2021. The resilience of the goods market in the face of global crisis and an expected normalization in international trade are additional factors. These considerations and the expected continuation of a gradual reduction of mobility restrictions abroad suggest that Colombia's trade partners could grow on average by 5.2% in 2021 and around 3.4% in 2022. The improved prospects for global economic growth have led to an increase in current and expected oil prices. Production interruptions due to a heavy winter, reduced inventories, and increased supply restrictions instituted by producing countries have also contributed to the increase. Meanwhile, market forecasts and recent Federal Reserve pronouncements suggest that the benchmark interest rate in the U.S. will remain stable for the next two years. Nevertheless, a significant increase in public spending in the country has fostered expectations for greater growth and inflation, as well as increased uncertainty over the moment in which a normalization of monetary policy might begin. This has been reflected in an increase in long-term interest rates. In this context, emerging market economies in the region, including Colombia, have registered increases in sovereign risk premiums and long-term domestic interest rates, and a depreciation of local currencies against the dollar. Recent outbreaks of COVID-19 in several of these economies; limits on vaccine supply and the slow pace of immunization campaigns in some countries; a significant increase in public debt; and tensions between the United States and China, among other factors, all add to a high level of uncertainty surrounding interest rate spreads, external financing conditions, and the future performance of risk premiums. The impact that this environment could have on the exchange rate and on domestic financing conditions represents a risk to the macroeconomic and monetary policy forecasts. Domestic financial conditions continue to favor recovery in economic activity. The transmission of reductions to the policy interest rate on credit rates has been significant. The banking portfolio continues to recover amid circumstances that have affected both the supply and demand for loans, and in which some credit risks have materialized. Preferential and ordinary commercial interest rates have fallen to a similar degree as the benchmark interest rate. As is generally the case, this transmission has come at a slower pace for consumer credit rates, and has been further delayed in the case of mortgage rates. Commercial credit levels stabilized above pre-pandemic levels in March, following an increase resulting from significant liquidity requirements for businesses in the second quarter of 2020. The consumer credit portfolio continued to recover and has now surpassed February 2020 levels, though overall growth in the portfolio remains low. At the same time, portfolio projections and default indicators have increased, and credit establishment earnings have come down.
Despite this, credit disbursements continue to recover and solvency indicators remain well above regulatory minimums. 1.2 Monetary policy decision. In its meetings in March and April, the BDBR left the benchmark interest rate unchanged at 1.75%.
APA, Harvard, Vancouver, ISO, and other styles