To see the other types of publications on this topic, follow the link: Count.

Dissertations / Theses on the topic 'Count'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Count.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Dingle, Mia. "Soul Count." Digital Commons at Loyola Marymount University and Loyola Law School, 2018. https://digitalcommons.lmu.edu/etd/498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Scott, Kerry M., and University of Lethbridge Faculty of Arts and Science. "A contemporary winter count." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Native American Studies, 2006, 2006. http://hdl.handle.net/10133/1302.

Full text
Abstract:
The past is the prologue. We must understand where we have been before we can understand where we are going. To understand the Blackfoot Nation and how we have come to where we are today, this thesis examines our history through Indian eyes from time immemorial to the present, using traditional narratives, writings of early European explorers and personal experience. The oral tradition of the First Nations people was a multi-media means of communication. Similarly, this thesis uses the media of the written word and a series of paintings to convey the story of the Blackfoot people. This thesis provides background and support, from the artist's perspective, for the paintings that tell the story of the Blackfoot people and the events that contributed to the downfall of the once-powerful Nation. With the knowledge of where we have been, we can learn how to move forward.
x, 153 leaves : col. ill. ; 29 cm
APA, Harvard, Vancouver, ISO, and other styles
3

Dix, Annika. "Count on the brain." Doctoral thesis, Humboldt-Universität zu Berlin, Lebenswissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17411.

Full text
Abstract:
Fluid intelligence (FI) is a strong predictor of mathematical performance. However, the impact of FI on the cognitive processes and neural mechanisms underlying differences in mathematical abilities across different subdivisions is not well understood. The present work specifies FI-related differences in these processes and mechanisms for students solving geometric, arithmetic, and algebraic problems. We chose a multi-methodological approach to shed light on the interplay between FI, performance, and factors such as task complexity, learning, and strategy selection that influence cognitive processes and task demands in problem-solving. We measured response times and error rates to evaluate performance, eye movements to identify solution strategies, and the event-related (de-)synchronization (ERD/ERS) in the broad alpha band as an indicator of general cortical activity. Further, we considered the ERD/ERS in the theta band and the alpha sub-bands to distinguish between associated cognitive processes. For unfamiliar geometric analogy tasks, students with high FI built relational representations based on more intense processing of spatial information. Strategy analyses revealed a more adaptive strategy choice in response to increasing task demands compared to students with average FI. Further, we conducted the first study identifying strategies and related cortical activity trial-wise, and thereby identified FI-related differences in the neural efficiency of strategy execution. For solving familiar arithmetic and algebraic problems, high compared to average FI was associated with lower demands on the updating of numbers, leading to better performance in complex tasks. Further analyses suggest that students with high FI had an advantage in identifying the relational structure of the problems and in retrieving routines that match this structure. Thus, the ability to build relational representations might be one key aspect explaining FI-related differences in mathematical abilities.
APA, Harvard, Vancouver, ISO, and other styles
4

Ackles, Nancy M. "Historical syntax of the English articles in relation to the count/non-count distinction /." Thesis, Connect to this title online; UW restricted, 1996. http://hdl.handle.net/1773/8405.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Osuna, Echavarría Leyre Estíbaliz. "Semiparametric Bayesian Count Data Models." Diss., lmu, 2004. http://nbn-resolving.de/urn:nbn:de:bvb:19-25573.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Patuzzi, Ilaria. "16S rRNA gene sequencing sparse count matrices: a count data simulator and optimal pre-processing pipelines." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3426369.

Full text
Abstract:
The study of microbial communities has changed deeply since it was first introduced in the 17th century. When the fundamental role of microbes in regulating and causing human disease became evident, researchers began to develop a variety of techniques to isolate and culture bacteria in the laboratory, with the aim of characterizing and classifying them. In the late 1970s, a breakthrough in the way bacterial communities were studied was brought by the discovery that ribosomal RNA (rRNA) genes could be used as molecular markers to classify organisms. Some decades later, the advent of DNA sequencing technology revolutionized the study of microbial communities, permitting a culture-independent view of the overall community contained within a sample. Today, one of the most widely used approaches for microbial community profiling is based on the sequencing of the gene that codes for the 16S subunit of the prokaryotic ribosome (16S rRNA gene), which, being ubiquitous in all bacteria but having an exact DNA sequence unique to each species, is used as a sort of molecular fingerprint to assign a taxonomic characterization to each community member. The advent of Next-Generation Sequencing (NGS) platforms, able to produce enormous amounts of data while reducing time and costs, has made 16S rRNA gene sequencing (16S rDNA-Seq) an increasingly preferred methodology for microbiome studies. Despite this, the continuous development of both experimental and computational procedures for 16S rDNA-Seq has caused an unavoidable lack of standardization in the treatment and analysis of sequencing output data. This is further complicated by the very peculiar characteristics of the matrix in which sample information is summarized after sequencing. In fact, the instrumental limit on the maximum number of obtainable sequences makes 16S rDNA-Seq data compositional, i.e. data in which the detected abundance of each bacterial species depends on the level of presence of the other populations in the sample. Additionally, 16S rDNA-Seq-derived matrices are typically highly sparse (70-95% of null values), owing both to the biological diversity among samples and to the loss of information on rare species during sequencing, an effect that strongly depends on the typically skewed distribution of species abundances in microbiomes and on the number of samples sequenced in the same run (the so-called multiplexing level). These peculiarities make the common practice of borrowing tools and approaches from bulk RNA sequencing inappropriate for the analysis of 16S rDNA-Seq count matrices. In particular, unspecific pre-processing steps, such as normalization, risk introducing biases in the case of highly sparse matrices. The main objective of this thesis was to identify optimal pipelines that fill the above gaps, in order to assure solid and reliable conclusions from 16S rDNA-Seq data analyses. Among all the analysis steps included in a typical pipeline, this project focused on the pre-processing of count data matrices obtained from 16S rDNA-Seq experiments. This task was carried out in several steps. First, state-of-the-art methods for 16S rDNA-Seq count data pre-processing were identified through a thorough literature search, which revealed a minimal availability of specific tools and the complete absence, in the usual 16S rDNA-Seq analysis pipeline, of a pre-processing step in which the information loss due to sequencing is recovered (zero-imputation). At the same time, the literature search highlighted that no specific simulators were available to directly obtain synthetic 16S rDNA-Seq count data on which to perform the analyses needed to identify optimal pre-processing pipelines. Thus, a simulator of sparse 16S rDNA-Seq count matrices that accounts for the compositional nature of these data was developed. Then, a comprehensive benchmark analysis of forty-nine pre-processing pipelines was designed and performed to assess the performance of currently used and more recent pre-processing approaches, and to test the appropriateness of including a zero-imputation step in the 16S rDNA-Seq analysis framework.
Overall, this thesis considers the 16S rDNA-Seq data pre-processing problem and provides a useful guide for robust data pre-processing when performing a 16S rDNA-Seq analysis. Additionally, the simulator proposed in this work could be a valuable tool, and a spur, for researchers involved in developing and testing bioinformatics methods, thus helping to fill the lack of specific tools for 16S rDNA-Seq data.
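To make the notions of compositionality and sparsity concrete, here is a minimal numpy sketch (not the simulator developed in the thesis): a fixed sequencing depth per sample is split among species by a multinomial draw, so each count depends on the relative abundances of all the other populations, and a skewed abundance profile yields a highly sparse matrix. The Dirichlet abundance model and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_species, depth = 10, 200, 5000   # illustrative sizes, not from the thesis

# Skewed "true" relative abundances: a few dominant species, many rare ones
abundances = rng.dirichlet(np.full(n_species, 0.05), size=n_samples)

# A fixed number of reads per sample makes the resulting counts compositional
counts = np.vstack([rng.multinomial(depth, p) for p in abundances])

print("fraction of zero cells:", round((counts == 0).mean(), 2))  # typically well above 0.5
```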
APA, Harvard, Vancouver, ISO, and other styles
7

He, Xin. "Semiparametric analysis of panel count data." Diss., Columbia, Mo. : University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/4774.

Full text
Abstract:
Thesis (Ph. D.)--University of Missouri-Columbia, 2007. The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on November 27, 2007). Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
8

Quoreshi, Shahiduzzaman. "Modelling high frequency financial count data /." Umeå : Umeå University, 2005. http://swopec.hhs.se/umnees/abs/umnees0656.htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hellström, Jörgen. "Count data modelling and tourism demand." Doctoral thesis, Umeå universitet, Institutionen för nationalekonomi, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-82168.

Full text
Abstract:
This thesis consists of four papers concerning modelling of count data and tourism demand. For three of the papers the focus is on the integer-valued autoregressive moving average model class (INARMA), and especially on the INAR(1) model. The fourth paper studies the interaction between households' choice of number of leisure trips and number of overnight stays within a bivariate count data modelling framework. Paper [I] extends the basic INAR(1) model to enable more flexible and realistic empirical economic applications. The model is generalized by relaxing some of the model's basic independence assumptions. Results are given in terms of first- and second-order conditional and unconditional moments. Extensions to general INAR(p), time-varying, multivariate and threshold models are also considered. Estimation by conditional least squares and generalized method of moments techniques is feasible. Monte Carlo simulations for two of the extended models indicate reasonable estimation and testing properties. An illustration based on the number of Swedish mechanical paper and pulp mills is considered. Paper [II] considers the robustness of a conventional Dickey-Fuller (DF) test for the testing of a unit root in the INAR(1) model. Finite sample distributions for a model with Poisson distributed disturbance terms are obtained by Monte Carlo simulation. These distributions are wider than those of AR(1) models with normally distributed error terms. As the drift and sample size, respectively, increase the distributions appear to tend to T-2) and standard normal distributions. The main results are summarized by an approximating equation that also enables calculation of critical values for any sample and drift size. Paper [III] utilizes the INAR(1) model to model the day-to-day movements in the number of guest nights in hotels. By cross-sectional and temporal aggregation, an INARMA(1,1) model for monthly data is obtained. The approach enables easy interpretation and econometric modelling of the parameters in terms of daily mean check-in and check-out probabilities. Empirically, approaches accounting for seasonality by dummies and by using differenced series, as well as forecasting, are studied for a series of Norwegian guest nights in Swedish hotels. In a forecast evaluation, the improvement from introducing economic variables is minute. Paper [IV] empirically studies households' joint choice of the number of leisure trips and the total number of nights to stay on these trips. The paper introduces a bivariate count hurdle model to account for the relatively high frequencies of zeros. A truncated bivariate mixed Poisson lognormal distribution, allowing for both positive and negative correlation between the count variables, is utilized. Inflation techniques are used to account for clustering of leisure time to weekends. Simulated maximum likelihood is used as the estimation method. A small policy study indicates that households substitute trips for nights as travel costs increase.
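As a concrete illustration of the INAR(1) recursion with binomial thinning that papers [I]-[III] build on, the following minimal Python sketch simulates X_t = α ∘ X_{t-1} + ε_t; the Poisson innovations and the parameter values are illustrative assumptions, not those used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_inar1(alpha, lam, n, x0=0):
    """Simulate an INAR(1) series X_t = alpha o X_{t-1} + eps_t, where 'o' is
    binomial thinning (each unit of X_{t-1} survives with probability alpha)
    and eps_t are Poisson(lam) innovations."""
    x, prev = np.empty(n, dtype=int), x0
    for t in range(n):
        survivors = rng.binomial(prev, alpha)   # binomial thinning of the previous count
        x[t] = survivors + rng.poisson(lam)     # add new arrivals
        prev = x[t]
    return x

series = simulate_inar1(alpha=0.6, lam=2.0, n=5000)
print(series.mean())   # close to the stationary mean lam / (1 - alpha) = 5
```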
APA, Harvard, Vancouver, ISO, and other styles
10

Wan, Chung-him, and 溫仲謙. "Analysis of zero-inflated count data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43703719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Chanialidis, Charalampos. "Bayesian mixture models for count data." Thesis, University of Glasgow, 2015. http://theses.gla.ac.uk/6371/.

Full text
Abstract:
Regression models for count data are usually based on the Poisson distribution. This thesis is concerned with Bayesian inference in more flexible models for count data. Two classes of models and algorithms are presented and studied in this thesis. The first employs a generalisation of the Poisson distribution called the COM-Poisson distribution, which can represent both overdispersed data and underdispersed data. We also propose a density regression technique for count data, which, albeit centered around the Poisson distribution, can represent arbitrary discrete distributions. The key contributions of this thesis are MCMC-based methods for posterior inference in these models. One key challenge in COM-Poisson-based models is the fact that the normalisation constant of the COM-Poisson distribution is not known in closed form. We propose two exact MCMC algorithms which address this problem. One is based on the idea of retrospective sampling; we sample the uniform random variable used to decide on the acceptance (or rejection) of the proposed new state of the unknown parameter first and then only evaluate bounds for the acceptance probability, in the hope that we will not need to know the acceptance probability exactly in order to come to a decision on whether to accept or reject the newly proposed value. This strategy is based on an efficient scheme for computing lower and upper bounds for the normalisation constant. This procedure can be applied to a number of discrete distributions, including the COM-Poisson distribution. The other MCMC algorithm proposed is based on an algorithm known as the exchange algorithm. The latter requires sampling from the COM-Poisson distribution and we will describe how this can be done efficiently using rejection sampling. We will also present simulation studies which show the advantages of using the COM-Poisson regression model compared to the alternative models commonly used in the literature (Poisson and negative binomial). Three real world applications are presented: the number of emergency hospital admissions in Scotland in 2010, the number of papers published by Ph.D. students and fertility data from the second German Socio-Economic Panel. COM-Poisson distributions are also the cornerstone of the proposed density regression technique based on Dirichlet process mixture models. Density regression can be thought of as a competitor to quantile regression. Quantile regression estimates the quantiles of the conditional distribution of the response variable given the covariates. This is especially useful when the dispersion changes across the covariates. Instead of estimating the conditional mean, quantile regression estimates the conditional quantile function across different quantiles. As a result, quantile regression models both location and shape shifts of the conditional distribution. This allows for a better understanding of how the covariates affect the conditional distribution of the response variable. Almost all quantile regression techniques deal with a continuous response. Quantile regression models for count data have so far received little attention. A technique that has been suggested is adding uniform random noise ('jittering'), thus overcoming the problem that, for a discrete distribution, the conditional quantile function is not a continuous function of the parameters of interest. Even though this enables us to estimate the conditional quantiles of the response variable, it has disadvantages.
For small values of the response variable Y, the added noise can have a large influence on the estimated quantiles. In addition, the problem of 'crossing quantiles' still exists for the jittering method. We eliminate all the aforementioned problems by estimating the density of the data, rather than the quantiles. Simulation studies show that the proposed approach performs better than the already established jittering method. To illustrate the new method we analyse fertility data from the second German Socio-Economic Panel.
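For reference, the COM-Poisson probability mass function that both model classes build on can be written, in its standard form, as

```latex
P(Y = y \mid \lambda, \nu) = \frac{\lambda^{y}}{(y!)^{\nu}\, Z(\lambda, \nu)},
\qquad
Z(\lambda, \nu) = \sum_{j=0}^{\infty} \frac{\lambda^{j}}{(j!)^{\nu}},
\qquad y = 0, 1, 2, \dots
```

Here ν = 1 recovers the Poisson distribution, ν > 1 gives underdispersion and ν < 1 overdispersion; the infinite sum Z(λ, ν) is the normalisation constant with no general closed form that the retrospective-sampling and exchange algorithms described above are designed to work around.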
APA, Harvard, Vancouver, ISO, and other styles
12

Zhuang, Lili. "Bayesian Dynamical Modeling of Count Data." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1315949027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Gao, Dexiang. "Analysis of clustered longitudinal count data /." Connect to full text via ProQuest. Limited to UCD Anschutz Medical Campus, 2007.

Find full text
Abstract:
Thesis (Ph.D. in Analytic Health Sciences, Department of Preventive Medicine and Biometrics) -- University of Colorado Denver, 2007. Typescript. Includes bibliographical references (leaves 75-77). Free to UCD affiliates. Online version available via ProQuest Digital Dissertations.
APA, Harvard, Vancouver, ISO, and other styles
14

Wan, Chung-him. "Analysis of zero-inflated count data." Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B43703719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Giuffrida, Mario Valerio. "Learning to Count Leaves of Plants." Thesis, IMT Alti Studi Lucca, 2018. http://e-theses.imtlucca.it/277/1/Giuffrida_phdthesis.pdf.

Full text
Abstract:
Plant phenotyping refers to the measurement of plant visual traits. In the past, the collection of such traits has been done manually by plant scientists, which is a tedious, error-prone, and time-consuming task. For this reason, image-based plant phenotyping is used to facilitate the measurement of plant traits with algorithms. However, the lack of robust software to extract reliable phenotyping traits from plant images has created a bottleneck. Here, we will study the problem of estimating the total number of leaves in plant images. The leaf count is a sought-after plant trait, as it is related to the plant development stage, health, yield potential, and flowering time. Previously, leaf counting was determined using a per-leaf segmentation. The typical approaches for per-leaf segmentation are: (i) image processing to segment leaves, using assumptions and heuristics; or (ii) training a neural network. However, both approaches have drawbacks. Heuristics-based approaches use a set of rules based upon observations that can easily fail. Per-leaf segmentation via neural networks requires fine-grained annotated datasets during training, which are hard to obtain. Alternatively, the estimation of the number of leaves in an image can be addressed as a direct regression problem. In this context, the learning of the algorithm is relaxed to the prediction of a single number (the leaf count) and the collection of labelled datasets is easy enough to be also performed by non-experts. This thesis discusses the first machine learning algorithm for leaf counting for top-view rosette plants. This approach extracted patches from the log-polar representation of the image, allowing us to cancel out leaf rotation. These patches were then used to learn a visual dictionary, which was used to encode the image into a holistic descriptor. As a next step, we developed a shallow neural network to extract rotation-invariant features. Using this architecture, we could learn features to explicitly account for the radial arrangement of leaves in rosette plants. Although the results were promising, leaf counting with rotation-invariant features could not outperform the previous approach. For this reason, we moved our attention to deep neural networks. However, it is widely known that deep architectures are hungry for data. Therefore, we addressed the problem of how to collect more labelled plant image datasets, using three approaches: (i) we developed an annotation tool to help experts annotate images; (ii) we uploaded images to an online crowdsourcing platform, allowing citizen scientists to annotate them; (iii) we used a generative deep neural network to synthesise images of plants together with their leaf counts. Lastly, we will show how a deep leaf counting network can be trained with data from different sources and modalities, showing promising results and reducing the performance gap between the algorithm and human annotators.
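As a sketch of the direct-regression formulation described in the abstract (one predicted number per image), the minimal PyTorch model below is purely illustrative; it is not the architecture developed in the thesis, and the image size, layer widths and loss are assumptions.

```python
import torch
import torch.nn as nn

class LeafCounter(nn.Module):
    """Tiny CNN that regresses a single leaf count from an RGB plant image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # global pooling -> (N, 32, 1, 1)
        )
        self.head = nn.Linear(32, 1)            # a single number: the predicted count

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = LeafCounter()
images = torch.randn(4, 3, 128, 128)            # dummy batch of top-view images
targets = torch.tensor([5.0, 7.0, 6.0, 9.0])    # hypothetical leaf counts
loss = nn.functional.mse_loss(model(images), targets)
loss.backward()                                 # train with any optimiser from here
```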
APA, Harvard, Vancouver, ISO, and other styles
16

Taylor, John Vincent. "The effects of lindane (γ- Hexachlorocyclohexane) on the reproductive potential and early development of brown trout (salmo trutta)." Thesis, Keele University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269120.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Fernández, Fontelo Amanda. "New models of count data with applications." Doctoral thesis, Universitat Autònoma de Barcelona, 2018. http://hdl.handle.net/10803/666009.

Full text
Abstract:
Since count data arise in many real processes, the need for high-quality methods and techniques to accurately model and analyse these data is irrefutable. In the past years, many comprehensive works have been presented in the literature in which both basic and more general methods for dealing with count data have been developed, based on different approaches. Despite the vast number of excellent works dealing with the major concerns in count data, some issues related to these data remain to be addressed. This Ph.D. thesis is aimed at introducing novel methods and techniques of count data analysis to deal with issues such as overdispersion, zero-inflation (and zero-deflation), and the phenomenon of under-reporting. To this end, the thesis comprises several publications in which innovative methods are presented and discussed in detail. In particular, two of these articles [1, 2] focus on the assessment of the under-reporting issue in count time series, proposing two realistic models based on integer-valued autoregressive processes. In addition, real-data applications within different frameworks are studied to demonstrate the practicality of these proposed models. The paper by [3] proposes a more general model of count time series, which accommodates slightly overdispersed data even when a series is non-stationary. This model has been used to analyse data on fallen cattle collected at a local scale, where the series have low counts, many zeros, and moderate overdispersion, as part of a project commissioned by the Ministry of Agriculture, Food and Environment of Spain. The last paper included in this thesis [4] proposes an exact goodness-of-fit test for detecting zero-inflation (and zero-deflation) in count distributions within the biological dosimetry framework. The test suggested in [4] was first introduced by [5] and is derived from occupancy problems. In the biological dosimetry context, this test is viewed as a complement to the commonly used u-test when data are not overdispersed (not underdispersed) but are zero-inflated (zero-deflated). The methods introduced in this Ph.D. thesis can be viewed as small but relevant signs of progress in count data analysis. They allow several issues of count data to be studied from different points of view, showing especially good results when dealing with real-world concerns in the public health and biological dosimetry frameworks. Although this work constitutes an advance in count data analysis, more effort is needed to improve the existing techniques and tools.
[1] Fernández-Fontelo, A., Cabaña, A., Puig, P. and Moriña, D. (2016). Under-reported data analysis with INAR-hidden Markov chains. Statistics in Medicine; 35(26): 4875-4890. [2] Fernández-Fontelo, A., Cabaña, A., Joe, H., Puig, P. and Moriña, D. Count time series models with under-reported data for gender-based violence in Galicia (Spain). Submitted. [3] Fernández-Fontelo, A., Fontdecaba, S., Alba, A. and Puig, P. (2017). Integer-valued AR processes with Hermite innovations and time-varying parameters: An application to bovine fallen stock surveillance at a local scale. Statistical Modelling; 17(3): 172-195. [4] Fernández-Fontelo, A., Puig, P., Ainsbury, E.A. and Higueras, M. (2018). An exact goodness-of-fit test based on the occupancy problems to study zero-inflation and zero-deflation in biological dosimetry data. Radiation Protection Dosimetry: 1-10. [5] Rao, C.R. and Chakravarti, I.M. (1956). Some small sample tests of significance for a Poisson distribution. Biometrics; 12: 264-282.
APA, Harvard, Vancouver, ISO, and other styles
18

Arvidsson, Klas. "Simulering av miljoner grindar med Count Algoritmen." Thesis, Linköping University, Department of Computer and Information Science, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2476.

Full text
Abstract:
A key part in the development and verification of digital systems is simulation. But hardware simulators are expensive, and software simulation is not fast enough for designs with a large number of gates. As today's digital designs constantly grow in size (number of gates), and that trend shows no signs of ending, faster simulators handling millions of gates are needed.
We investigate how to create a software gate-level simulator able to simulate a large number of gates quickly. This involves a trade-off between memory requirements and speed. A compact netlist representation can utilize cache memories more efficiently but requires more work to interpret, while high memory requirements can limit the performance to the speed of main memory.
We have selected the Counting Algorithm to implement the experimental simulator MICA. The main reason for this choice is the compact way in which gates can be stored, while still being evaluated in a simple and standard way.
The report describes the issues and solutions encountered and evaluates the resulting simulator. MICA simulates a SPARC architecture processor called Leon. Larger netlists are achieved by simulating several instances of this processor. Simulation of 128 instances is done at a speed of 9 million gates per second using only 3.5 MB of memory. In MICA this design corresponds to 2.5 million gates.
APA, Harvard, Vancouver, ISO, and other styles
19

Jameson, Andrew Mackenzie. "Count Almaviva in Le Nozze di Figaro." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/31374.

Full text
Abstract:
Preparing an operatic role is a very extensive process for a student singer. The problems that confront a young singer when challenged with a new role are numerous. Aside from having to prepare vocally for the role, there are many other aspects of the preparation process that are equally difficult. The role which I prepared was the Count Almaviva in Le Nozze di Figaro by W. A. Mozart. Because I was to perform the opera in Italian, I completed a word-by-word translation of the entire opera. I also had to learn how to correctly pronounce each word that I sang so that the audience would understand what I said. I spent countless hours working on vocal production and an equal amount in rehearsal while learning how to perform this highly specialized form of music. Following the performances, I found that I had a far greater ability to perform Mozart's music while possessing a solid understanding of the Italian language.
APA, Harvard, Vancouver, ISO, and other styles
20

Crowe, Brenda. "Seasonal and calendar estimation for count data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ27901.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Russell, Nathan. "Complexity of control of Borda count elections /." Online version of thesis, 2007. http://hdl.handle.net/1850/4923.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Rock, Melanie. "Sweet blood and power : making diabetics count." Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=38265.

Full text
Abstract:
As recently as 1995, sweet blood did not resonate broadly as an urgent transnational concern. This thesis chronicles how diabetes mellitus, sweet blood, became recognized as a social problem besetting Canada, among many other countries.
This ethnographic study brings anthropological theories---developed for the most part to analyze the lives of "non-Western" peoples---to bear on "Western" philosophy, science, medicine, mass media, governments, and commerce. Throughout, this thesis challenges received wisdom about disease, technologies, kinship, commodification, embodiment, and personhood.
This thesis argues that a statistical concept, the population, is the linchpin of both politics and economics in large-scale societies. Statistically-fashioned populations, combined with the conviction that the future can be partially controlled, undergird the very definition of diabetes as a disease. In turn, biomedical knowledge about diabetes grounds the understanding of sweet blood as a social problem in need of better management. The political economy of sweet blood shows that, under "Western" eyes, persons can remain intact while their bodies---down to their very cells---divide and multiply, both literally and figuratively. As members of statistically-fashioned populations, human beings have a patent existence and many "statistical doubles." These statistical doppelgangers help shape feelings, actions, identities, and even the length of human lives. They permit countless strangers and "lower" nonhuman beings---among them, mice, flies, and bacteria---to count as kin. Through the generation and use of statistics, people and their body parts undergo valuation and commodification, but are neither bought nor sold. The use of statistics to commodify human beings and body parts, this thesis finds, inevitably anchors biomedical practice, biomedical research, health policies, and the marketing of pharmaceuticals and all other things known to affect health.
APA, Harvard, Vancouver, ISO, and other styles
23

Kalktawi, Hadeel Saleh. "Discrete Weibull regression model for count data." Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/14476.

Full text
Abstract:
Data can be collected in the form of counts in many situations. For example, the number of deaths from an accident, the number of days until a machine stops working or the number of annual visitors to a city may all be considered interesting variables for study. This study is motivated by two facts: first, the vital role of the continuous Weibull distribution in survival analyses and failure time studies. Hence, the discrete Weibull (DW) is introduced analogously to the continuous Weibull distribution (see Nakagawa and Osaki (1975) and Kulasekera (1994)). Second, researchers usually focus on modeling count data, which take only non-negative integer values, as a function of other variables. Therefore, the DW, introduced by Nakagawa and Osaki (1975), is considered to investigate the relationship between count data and a set of covariates. Particularly, this DW is generalised by allowing one of its parameters to be a function of covariates. Although the Poisson regression can be considered the most common model for count data, it is constrained by its equi-dispersion (the assumption of equal mean and variance). Thus, the negative binomial (NB) regression has become the most widely used method for count data regression. However, even though the NB can be suitable for over-dispersion cases, it cannot be considered the best choice for modeling under-dispersed data. Hence, models are required that deal with the problem of under-dispersion, such as the generalized Poisson regression model (Efron (1986) and Famoye (1993)) and COM-Poisson regression (Sellers and Shmueli (2010) and Sáez-Castillo and Conde-Sánchez (2013)). Generally, all of these models can be considered modifications and developments of Poisson models. However, this thesis develops a model based on a simple distribution with no modification. Thus, if the data do not follow the dispersion pattern of the Poisson or NB, the true structure generating the data should be detected. Applying a model that has the ability to handle different dispersions would be of great interest. Thus, in this study, the DW regression model is introduced. Besides the flexibility of the DW to model under- and over-dispersion, it is a good model for inhomogeneous and highly skewed data, such as those with excessive zero counts, which are more dispersed than Poisson. Although these data can be fitted well using some developed models, namely, the zero-inflated and hurdle models, the DW demonstrates a good fit and has less complexity than these modified models. However, there could be some cases when a special model that separates the probability of zeros from that of the other positive counts must be applied. Then, to cope with the problem of too many observed zeros, two modifications of the DW regression are developed, namely, the zero-inflated discrete Weibull (ZIDW) and hurdle discrete Weibull (HDW) models. Furthermore, this thesis considers another type of data, where the response count variable is censored from the right, which is observed in many experiments. Applying the standard models for these types of data without considering the censoring may yield misleading results. Thus, the censored discrete Weibull (CDW) model is employed for this case. On the other hand, this thesis introduces the median discrete Weibull (MDW) regression model for investigating the effect of covariates on the count response through the median, which is more appropriate for the skewed nature of count data.
In other words, the likelihood of the DW model is re-parameterized to explain the effect of the predictors directly on the median. Thus, in comparison with the generalized linear models (GLMs), MDW and GLMs both investigate the relations to a set of covariates via certain location measurements; however, GLMs consider the means, which is not the best way to represent skewed data. These DW regression models are investigated through simulation studies to illustrate their performance. In addition, they are applied to some real data sets and compared with the related count models, mainly Poisson and NB models. Overall, the DW models provide a good fit to the count data as an alternative to the NB models in the over-dispersion case and are much better fitting than the Poisson models. Additionally, contrary to the NB model, the DW can be applied for the under-dispersion case.
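For reference, the type I discrete Weibull of Nakagawa and Osaki (1975) that the thesis builds on has the probability mass function (stated here in its standard parameterisation; the regression models above let one of the two parameters depend on covariates):

```latex
P(Y = y) = q^{\,y^{\beta}} - q^{\,(y+1)^{\beta}},
\qquad y = 0, 1, 2, \dots, \quad 0 < q < 1, \; \beta > 0,
```

so that the survival function is P(Y ≥ y) = q^{y^β}; the two parameters jointly allow both under- and over-dispersion relative to the Poisson.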
APA, Harvard, Vancouver, ISO, and other styles
24

Zeileis, Achim, Christian Kleiber, and Simon Jackman. "Regression Models for Count Data in R." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2007. http://epub.wu.ac.at/1168/1/document.pdf.

Full text
Abstract:
The classical Poisson, geometric and negative binomial regression models for count data belong to the family of generalized linear models and are available at the core of the statistics toolbox in the R system for statistical computing. After reviewing the conceptual and computational features of these methods, a new implementation of zero-inflated and hurdle regression models in the functions zeroinfl() and hurdle() from the package pscl is introduced. It re-uses design and functionality of the basic R functions just as the underlying conceptual tools extend the classical models. Both model classes are able to incorporate over-dispersion and excess zeros - two problems that typically occur in count data sets in economics and the social and political sciences - better than their classical counterparts. Using cross-section data on the demand for medical care, it is illustrated how the classical as well as the zero-augmented models can be fitted, inspected and tested in practice. (author's abstract)
Series: Research Report Series / Department of Statistics and Mathematics
APA, Harvard, Vancouver, ISO, and other styles
25

Zeileis, Achim, Christian Kleiber, and Simon Jackman. "Regression Models for Count Data in R." Foundation for Open Access Statistics, 2008. http://epub.wu.ac.at/4986/1/Zeileis_etal_2008_JSS_Regression%2DModels%2Dfor%2DCount%2DData%2Din%2DR.pdf.

Full text
Abstract:
The classical Poisson, geometric and negative binomial regression models for count data belong to the family of generalized linear models and are available at the core of the statistics toolbox in the R system for statistical computing. After reviewing the conceptual and computational features of these methods, a new implementation of hurdle and zero-inflated regression models in the functions hurdle() and zeroinfl() from the package pscl is introduced. It re-uses design and functionality of the basic R functions just as the underlying conceptual tools extend the classical models. Both hurdle and zero-inflated models are able to incorporate over-dispersion and excess zeros - two problems that typically occur in count data sets in economics and the social sciences - better than their classical counterparts. Using cross-section data on the demand for medical care, it is illustrated how the classical as well as the zero-augmented models can be fitted, inspected and tested in practice. (authors' abstract)
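To make "excess zeros" concrete, here is a minimal numpy sketch of a zero-inflated Poisson data-generating process; the parameter values are illustrative, and this sketch is not a substitute for the paper's own R functions zeroinfl() and hurdle() from pscl.

```python
import numpy as np

rng = np.random.default_rng(42)
n, mu, pi_zero = 100_000, 2.0, 0.3      # illustrative sample size and parameters

# With probability pi_zero the count is a structural zero,
# otherwise it is drawn from a Poisson(mu)
structural_zero = rng.random(n) < pi_zero
y = np.where(structural_zero, 0, rng.poisson(mu, size=n))

print("observed zero fraction:", round((y == 0).mean(), 3))
print("zero fraction a plain Poisson(mu) would predict:", round(np.exp(-mu), 3))
```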
APA, Harvard, Vancouver, ISO, and other styles
26

Rucinski, Marek. "Modelling learning to count in humanoid robots." Thesis, University of Plymouth, 2014. http://hdl.handle.net/10026.1/2995.

Full text
Abstract:
This thesis concerns the formulation of novel developmental robotics models of embodied phenomena in number learning. Learning to count is believed to be of paramount importance for the acquisition of the remarkable fluency with which humans are able to manipulate numbers and other abstract concepts derived from them later in life. The ever-increasing amount of evidence for the embodied nature of human mathematical thinking suggests that the investigation of numerical cognition with the use of robotic cognitive models has a high potential of contributing toward the better understanding of the involved mechanisms. This thesis focuses on two particular groups of embodied effects tightly linked with learning to count. The first considered phenomenon is the contribution of the counting gestures to the counting accuracy of young children during the period of their acquisition of the skill. The second phenomenon, which arises over a longer time scale, is the human tendency to internally associate numbers with space that results, among others, in the widely-studied SNARC effect. The PhD research contributes to the knowledge in the subject by formulating novel neuro-robotic cognitive models of these phenomena, and by employing these in two series of simulation experiments. In the context of the counting gestures the simulations provide evidence for the importance of learning the number words prior to learning to count, for the usefulness of the proprioceptive information connected with gestures to improving counting accuracy, and for the significance of the spatial correspondence between the indicative acts and the objects being enumerated. In the context of the model of spatial-numerical associations the simulations demonstrate for the first time that these may arise as a consequence of the consistent spatial biases present when children are learning to count. Finally, based on the experience gathered throughout both modelling experiments, specific guidelines concerning future efforts in the application of robotic modelling in mathematical cognition are formulated.
APA, Harvard, Vancouver, ISO, and other styles
27

Tarnoff, David. "Episode 2.1 – How Computers Count without Fingers." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/computer-organization-design-oer/7.

Full text
Abstract:
In this episode, we visit some ancient Sumerians so we can expand our view of finger counting and see how this applies to counting with transistors. From this, we will have the basis for unsigned binary integers and the humble binary digit or bit. We also show how to calculate the upper limit to which a fixed number of transistors can count.
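The upper limit mentioned at the end of the abstract follows from positional binary counting: n two-state devices (bits) can represent 2^n distinct values, so the largest unsigned integer they can count to is 2^n - 1. A tiny illustrative check in Python:

```python
# Largest unsigned integer representable with n bits is 2**n - 1
for n in (4, 8, 16, 32):
    print(f"{n} bits -> counts up to {2**n - 1}")
# 4 bits -> 15, 8 bits -> 255, 16 bits -> 65535, 32 bits -> 4294967295
```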
APA, Harvard, Vancouver, ISO, and other styles
28

Seeger, Judith Leland. ""Count Claros" : study of a ballad tradition /." New York : Garland, 1990. http://catalogue.bnf.fr/ark:/12148/cb355343788.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Maurer, Jaclyn. "Calories Count - Tips for Healthy Weight Management." College of Agriculture and Life Sciences, University of Arizona (Tucson, AZ), 2005. http://hdl.handle.net/10150/146469.

Full text
Abstract:
4 pp.
Weight management is more than just cutting back on carbohydrate or fat. Controlling calories is key to achieving and maintaining a healthy weight. This publication reviews how calories count, no matter what type of diet you choose to follow.
APA, Harvard, Vancouver, ISO, and other styles
30

Ding, Minsheng. "Energy efficient high port count optical switches." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/275326.

Full text
Abstract:
The advance of internet applications, such as video streaming, big data and cloud computing, is reshaping the telecommunication and internet industries. Bandwidth demands in datacentres have been boosted by these emerging data-hungry internet applications. Regarding inter- and intra-datacentre communications, fine-grained data need to be exchanged across a large shared memory space. Large-scale high-speed optical switches tend to use a rearrangeably non-blocking architecture as this limits the number of switching elements required. However, this comes at the expense of requiring more sophisticated route selection within the switch and also some form of time-slotted protocol. The looping algorithm is the classical routing algorithm to set up paths in rearrangeably non-blocking switches. It was born in the electronic switch era, where all links in the switches are equal. It is, therefore, not able to accommodate loss differences between optical paths due to the different lengths of waveguides and distinct numbers of crossings and bends, leading to sub-optimal performance. We therefore propose an advanced path-selection algorithm based on the looping algorithm that minimises the path-dependent loss. It explores all possible set-ups for a given connection assignment and selects the optimal one. It guarantees that no individual path suffers an excessively large loss, thereby improving the overall performance of the switch. The performance of the proposed algorithm has been assessed by modelling switches using the VPI simulator. An 8×8 Clos-tree switch demonstrates a 2.7 dB decrease in loss and a 1.9 dB improvement in IPDR with a 1.5 dB penalty for the worst case. An 8×8 dilated Beneš shows more than 4 dB loss reduction for the lossiest path and a 1.4 dB IPDR improvement for a 1 dB power penalty. The improved algorithm can be run once for each switch design and its output stored in a compact lookup table, enabling rapid switch reconfiguration. Microelectromechanical systems (MEMS) based optical switches have been fabricated with over 1,000 ports, which meets the port count requirements in data centre networks. However, the reconfiguration speed of MEMS switches is limited to the millisecond to microsecond timescale, which is not sufficient for packet switching in datacentres. Opto-electronic devices, such as Mach-Zehnder Interferometers (MZIs) and semiconductor optical amplifiers (SOAs), with nanosecond response times show the potential to fulfil the requirements of packet switching. However, the scalability of MZI switches is inherently limited by insertion loss and accumulated crosstalk, while the scalability of SOA switches is restricted by accumulated noise and distortion. We have therefore proposed a dilated Beneš hybrid MZI-SOA design, where MZIs are implemented as 1×2 or 2×1 low-loss switching elements, minimising crosstalk by using a single input, and where short SOAs are included as gain or absorption units, offering either loss compensation or crosstalk suppression while adding only minimal noise and distortion. A 4×4 device has been fabricated and exhibits a mere 1.3 dB loss, an extinction ratio of 47 dB, and more than 13 dB IPDR for a 0.5 dB power penalty. When operating with 10 Gb/s per port, 6 pJ/bit energy consumption is demonstrated, delivering 20% reduced energy consumption compared with SOA-based switches. The tolerance of the current control accuracy of this switch is very broad.
Within a 5 mA bias current range, the power penalty can be maintained below 0.2 dB for 8 dB IPDR and 12 mA for 10 dB IPDR with a penalty less 0.5 dB. The excellent crosstalk and power penalty performance demonstrated by this chip enable the scalability of this hybrid approach. The performance of 16×16 port dilated Beneš hybrid switch is experimentally assessed by cascading 4×4 switch chips, demonstrating an IPDR of 15 dB at a 1 dB penalty with a 0.6 dB power penalty floor. In terms of switches with port count larger than 16×16, the power penalty performance has been analysed with physical layer simulations fitted with state-of-the-art data. We assess the feasibility of three potential topologies, with different architectural optimisations: dilated Beneš, Beneš and Clos-Beneš. Quantitative analysis for switches with up to 2048 ports is presented, achieving a 1.15dB penalty for a BER of 10-3, compatible with soft-decision forward error correction.
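As an aside for readers unfamiliar with the kind of path selection described in this abstract, the core idea (evaluate every candidate set-up for a connection assignment and keep the one whose worst path loss is smallest) can be sketched in a few lines of Python. The loss model and figures below are invented placeholders, not values from the thesis:

```python
# Hypothetical per-element losses in dB; real values depend on waveguide
# length, crossings and bends, none of which are taken from the thesis.
CROSSING_LOSS_DB = 0.4
BEND_LOSS_DB = 0.1

def path_loss(path):
    """Loss of one input-to-output path given its (crossings, bends) counts."""
    crossings, bends = path
    return crossings * CROSSING_LOSS_DB + bends * BEND_LOSS_DB

def best_setup(candidate_setups):
    """Return the set-up whose worst path loss is smallest.

    candidate_setups: iterable of set-ups, each a list of paths, where a
    path is a (crossings, bends) tuple realising one requested connection.
    """
    best, best_worst = None, float("inf")
    for setup in candidate_setups:
        worst = max(path_loss(p) for p in setup)
        if worst < best_worst:
            best, best_worst = setup, worst
    return best, best_worst

if __name__ == "__main__":
    # Two toy set-ups realising the same 4x4 connection assignment.
    setups = [
        [(3, 2), (1, 4), (5, 1), (2, 2)],
        [(2, 3), (2, 2), (3, 3), (2, 4)],
    ]
    setup, worst = best_setup(setups)
    print(f"chosen set-up: {setup}, worst path loss = {worst:.1f} dB")
```

Because the search depends only on the switch design, its result can indeed be precomputed and stored in a lookup table, as the abstract notes.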
APA, Harvard, Vancouver, ISO, and other styles
31

Nguyen, Thi Kim Hue. "Structure learning of graphs for count data." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3421952.

Full text
Abstract:
Biological processes underlying the basic functions of a cell involve complex interactions between genes. From a technical point of view, these interactions can be represented through a graph where genes and their connections are, respectively, nodes and edges. The main research objective of this thesis is to develop a statistical framework for modelling the interactions between genes when the activity of genes is measured on a discrete scale. We propose several algorithms. First, we define an algorithm for learning the structure of an undirected graph, proving its theoretical consistency in the limit of infinite observations. Next, we tackle structure learning of directed acyclic graphs (DAGs), adopting a model specification proven to guarantee identifiability of the models. Then, we develop new algorithms for both guided and unguided structure learning of DAGs. All proposed algorithms show promising results when applied to simulated data as well as to real data.
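A generic flavour of such structure-learning methods, not the estimators proposed in the thesis, is neighbourhood selection: regress each gene's counts on all the others with a penalised Poisson model and keep an edge wherever a coefficient is non-negligible. A minimal sketch, assuming scikit-learn is available and using arbitrary penalty and threshold values:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

def neighbourhood_graph(counts, alpha=0.5, threshold=0.05):
    """Estimate an undirected graph from an (n_samples, n_genes) count matrix.

    Each gene is regressed on all the others with a penalised Poisson GLM;
    an edge i--j is kept if either of the two coefficients exceeds the
    threshold (OR rule). Penalty strength and threshold are arbitrary here.
    """
    n_samples, n_genes = counts.shape
    coef = np.zeros((n_genes, n_genes))
    for j in range(n_genes):
        X = np.delete(counts, j, axis=1)
        fit = PoissonRegressor(alpha=alpha, max_iter=500).fit(X, counts[:, j])
        coef[j, np.arange(n_genes) != j] = fit.coef_
    strength = np.maximum(np.abs(coef), np.abs(coef.T))
    return [(i, j) for i in range(n_genes) for j in range(i + 1, n_genes)
            if strength[i, j] > threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = rng.poisson(3.0, size=(300, 1))      # shared latent driver
    g0 = rng.poisson(2 * z + 1)              # gene 0 tracks the driver
    g1 = rng.poisson(2 * z + 1)              # gene 1 tracks the driver
    g2 = rng.poisson(2.0, size=(300, 1))     # gene 2 is independent
    print(neighbourhood_graph(np.hstack([g0, g1, g2])))
```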
APA, Harvard, Vancouver, ISO, and other styles
32

Rogers, Joy Michelle. "Changepoint Analysis of HIV Marker Responses." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/math_theses/16.

Full text
Abstract:
We propose a random changepoint model for the analysis of longitudinal CD4 and CD8 T-cell counts, as well as viral RNA loads, for HIV-infected subjects following highly active antiretroviral treatment. The data were taken from two studies: the AIDS Clinical Trials Group study 398 and a trial performed by the Terry Beirn Community Programs for Clinical Research on AIDS. Models were created with the changepoint following both exponential and truncated normal distributions. The changepoints were estimated in a Bayesian analysis, implemented in the WinBUGS software using Markov chain Monte Carlo methods. For model selection we used the deviance information criterion (DIC), a two-term measure of model adequacy and complexity. The DIC indicates that the data support a random changepoint model with the changepoint following an exponential distribution. Visual analyses of the posterior densities of the parameters also support this conclusion.
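For readers unfamiliar with the deviance information criterion mentioned above, it is the posterior mean deviance plus an effective number of parameters estimated from the same posterior. A minimal sketch of its computation from MCMC draws, using a stand-in Poisson likelihood rather than the thesis's WinBUGS changepoint models:

```python
import numpy as np
from scipy.stats import poisson

def dic(y, posterior_rates):
    """Deviance information criterion for a Poisson model.

    y: observed counts, shape (n,)
    posterior_rates: MCMC draws of the rate, shape (n_draws, n)
    DIC = mean deviance + pD, with pD = mean deviance - deviance at the
    posterior mean of the parameters.
    """
    dev = lambda rate: -2.0 * poisson.logpmf(y, rate).sum()
    d_bar = np.mean([dev(r) for r in posterior_rates])   # posterior mean deviance
    d_hat = dev(posterior_rates.mean(axis=0))            # deviance at posterior mean
    p_d = d_bar - d_hat
    return d_bar + p_d, p_d

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y = rng.poisson(500, size=20)                 # toy counts standing in for markers
    draws = rng.gamma(500, 1.0, size=(1000, 20))  # fake posterior draws of the rates
    value, p_d = dic(y, draws)
    print(f"DIC = {value:.1f}, pD = {p_d:.1f}")
```

Lower DIC indicates the better-supported model, which is how the exponential changepoint model is preferred in the study above.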
APA, Harvard, Vancouver, ISO, and other styles
33

Spangler, Ashley. "An Exploration of the First Pitch in Baseball." Bowling Green State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1490300154782369.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Ma, Norman K. "Modeling software artifact count attribute with s-curves." College Station, Tex.: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-2039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Nord, Sabine, and Hussein Kassim. "Viabilitet av Neisseria gonorrhoeae, Propionibacterium acnes och Bacteroides fragilis vid förlängd förvaring i transportröret COPAN E-swab™." Thesis, Hälsohögskolan, Högskolan i Jönköping, HHJ, Avd. för naturvetenskap och biomedicin, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-27060.

Full text
Abstract:
Safe bacterial diagnostics requires proper sampling technique, suitable specimen transport and correct inoculation technique. As a result of the centralisation of services and economic constraints, transporting samples to microbiology laboratories is very common. This creates a need for a transport medium that keeps sensitive bacteria alive over long transport and storage times. In addition, the transport tube should be suitable for automated inoculation, which requires a liquid-based medium; this transition also reduces workload and improves staff ergonomics. In this study the viability of Propionibacterium acnes (n = 3), Bacteroides fragilis (n = 2) and Neisseria gonorrhoeae (n = 3) in the transport medium of the COPAN E-swab™ was evaluated. All species were stored in the transport tubes for 24, 48 and 72 hours (h), and additionally for 120 h in the case of P. acnes and B. fragilis. The initial concentration and the percentage loss of viability over time were calculated using viable counts. After 120 h of storage in the transport medium, viability had decreased to 47-80 % for P. acnes, 18 % for B. fragilis 1 and 73 % for B. fragilis 2. N. gonorrhoeae showed a reduction in viability of 96-99.97 % after 24 h of storage at 4°C and at room temperature. P. acnes and B. fragilis could be stored in the transport medium for up to 5 days without compromising diagnosis. N. gonorrhoeae could be stored in the transport medium for no more than 24 h for diagnosis by culture to remain possible.
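The viability percentages reported here follow from standard viable-count arithmetic: colony counts and the dilution factor give CFU/ml, and the later count is expressed as a percentage of the initial one. A small sketch with made-up numbers, not the study's data:

```python
def cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.1):
    """Estimate CFU/ml from a plate count.

    colonies: colonies counted on the plate
    dilution_factor: e.g. 1e-4 for a 10^-4 dilution
    plated_volume_ml: volume spread on the plate
    """
    return colonies / (dilution_factor * plated_volume_ml)

def percent_viability(initial_cfu_ml, later_cfu_ml):
    """Viability after storage as a percentage of the initial count."""
    return 100.0 * later_cfu_ml / initial_cfu_ml

if __name__ == "__main__":
    # Hypothetical counts: 180 colonies at time zero and 86 after storage,
    # both from 0.1 ml of a 10^-4 dilution.
    start = cfu_per_ml(180, 1e-4)
    after = cfu_per_ml(86, 1e-4)
    print(f"initial: {start:.2e} CFU/ml, after storage: {after:.2e} CFU/ml")
    print(f"viability: {percent_viability(start, after):.0f} %")
```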
APA, Harvard, Vancouver, ISO, and other styles
36

Leonte, Daniela School of Mathematics UNSW. "Flexible Bayesian modelling of gamma ray count data." Awarded by:University of New South Wales. School of Mathematics, 2003. http://handle.unsw.edu.au/1959.4/19147.

Full text
Abstract:
Bayesian approaches to prediction and the assessment of predictive uncertainty in generalized linear models are often based on averaging predictions over different models, and this requires methods for accounting for model uncertainty. In this thesis we describe computational methods for Bayesian inference and model selection for generalized linear models, which improve on existing techniques. These methods are applied to the building of flexible models for gamma ray count data (data measuring the natural radioactivity of rocks) at the Castlereagh Waste Management Centre, which served as a hazardous waste disposal facility for the Sydney region between March 1978 and August 1998. Bayesian model selection methods for generalized linear models enable us to approach problems of smoothing, change point detection and spatial prediction for these data within a common methodological and computational framework, by considering appropriate basis expansions of a mean function. The data at Castlereagh were collected in the following way. A number of boreholes were drilled at the site, and for each borehole a gamma ray detector recorded gamma ray emissions at different depths as the detector was raised gradually from the bottom of the borehole to ground level. The profile of intensity of gamma counts can be informative about the geology at each location, and estimation of intensity profiles raises problems of smoothing and change point detection for count data. The gamma count profiles can also be modelled spatially, to inform the geological profile across the site. Understanding the geological structure of the site is important for modelling the transport of chemical contaminants beneath the waste disposal area. The structure of the thesis is as follows. Chapter 1 describes the Castlereagh hazardous waste site and the geophysical data, which motivated the methodology developed in this research. We summarise the principles of Gamma Ray (GR) logging, a method routinely employed by geophysicists and environmental engineers in the detailed evaluation of hazardous site geology, and detail the use of the Castlereagh data in this research. In Chapter 2 we review some fundamental ideas of Bayesian inference and computation and discuss them in the context of generalised linear models. Chapter 3 details the theoretical basis of our work. Here we give a new Markov chain Monte Carlo sampling scheme for Bayesian variable selection in generalized linear models, which is analogous to the well-known Swendsen-Wang algorithm for the Ising model. Special cases of this sampling scheme are used throughout the rest of the thesis. In Chapter 4 we discuss the use of methods for Bayesian model selection in generalized linear models in two specific applications, which we implement on the Castlereagh data. First, we consider smoothing problems where we flexibly estimate the dependence of a response variable on one or more predictors, and we apply these ideas to locally adaptive smoothing of gamma ray count data. Second, we discuss how the problem of multiple change point detection can be cast as one of model selection in a generalized linear model, and consider application to change point detection for gamma ray count data. 
In Chapter 5 we consider spatial models based on partitioning a spatial region of interest into cells via a Voronoi tessellation, where the number of cells and the positions of their centres is unknown, and show how these models can be formulated in the framework of established methods for Bayesian model selection in generalized linear models. We implement the spatial partition modelling approach to the spatial analysis of gamma ray data, showing how the posterior distribution of the number of cells, cell centres and cell means provides us with an estimate of the mean response function describing spatial variability across the site. Chapter 6 presents some conclusions and suggests directions for future research. A paper based on the work of Chapter 3 has been accepted for publication in the Journal of Computational and Graphical Statistics, and a paper based on the work in Chapter 4 has been accepted for publication in Mathematical Geology. A paper based on the spatial modelling of Chapter 5 is in preparation and will be submitted for publication shortly. The work in this thesis was collaborative, to a smaller or larger extent in its various components. I authored Chapters 1 and 2 entirely, including definition of the problem in the context of the CWMC site, data gathering and preparation for analysis, review of the literature on computational methods for Bayesian inference and model selection for generalized linear models. I also authored Chapters 4 and 5 and benefited from some of Dr Nott's assistance in developing the algorithms. In Chapter 3, Dr Nott led the development of sampling scheme B (corresponding to having non-zero interaction parameters in our Swendsen-Wang type algorithm). I developed the algorithm for sampling scheme A (corresponding to setting all algorithm interaction parameters to zero in our Swendsen-Wang type algorithm), and performed the comparison of the performance of the two sampling schemes. The final discussion in Chapter 6 and the direction for further research in the case study context is also my work.
APA, Harvard, Vancouver, ISO, and other styles
37

McPherson, Leslie M. (Leslie Margaret). "Learning the categories count noun and mass noun." Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=64089.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Tam, Donna. "Gallager's algorithm and the count-to-infinity problem." Thesis, McGill University, 1990. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=59431.

Full text
Abstract:
Multi-path routing algorithms in packet-switched computer networks have the potential for achieving high throughput and low average packet delay by making use of several paths to route data from a source to a destination. In this thesis, we investigate a multi-path algorithm proposed by Gallager (GALL77) which, given continuously differentiable link flows, has been proven to generate "optimal" routes in a quasi-static environment. However, the performance of the algorithm in a realistic environment has not been established. We show that the Count-to-Infinity problem is inherent in this algorithm and we describe a method of dealing with it. Employing two separate approximations for network flow gradients, we compare its performance to that of a single-path algorithm (using a network simulator based on the ARPANET). We establish the fact that Gallager's algorithm can achieve significantly better results in networks which are not failure prone and are not too large.
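The count-to-infinity behaviour referred to above is easy to reproduce with a toy distance-vector simulation: after a link failure, two neighbours keep offering each other routes to the unreachable destination and their estimates climb until an arbitrary "infinity" cap is reached. The sketch below is generic distance-vector routing, not Gallager's algorithm:

```python
# Toy network: A -- B -- C with unit link costs. The B--C link fails and we
# watch A's and B's distance estimates to C grow round by round.
INF = 16  # RIP-style "infinity"

def sync_update(dist, neighbours, dest):
    """One synchronous distance-vector round for a single destination."""
    new = dict(dist)
    for node in dist:
        if node == dest:
            continue
        candidates = [1 + dist[n] for n in neighbours[node]]
        new[node] = min(min(candidates) if candidates else INF, INF)
    return new

if __name__ == "__main__":
    neighbours = {"A": ["B"], "B": ["A"], "C": []}   # B--C has just failed
    dist = {"A": 2, "B": 1, "C": 0}                  # stale pre-failure estimates to C
    for step in range(16):
        dist = sync_update(dist, neighbours, "C")
        print(f"round {step}: A->C = {dist['A']}, B->C = {dist['B']}")
    # The estimates climb steadily until both nodes are capped at INF.
```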
APA, Harvard, Vancouver, ISO, and other styles
39

O'Hara, Louise. "Does coping count in adjustment to multiple sclerosis?" Thesis, Brunel University, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.392034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Linklater, Holly. "Making children count? : an autoethnographic exploration of pedagogy." Thesis, University of Aberdeen, 2010. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=167353.

Full text
Abstract:
This autoethnographic exploration of pedagogy, or the craft of teaching, was undertaken while I worked as a reception class teacher in a large English primary school. Naturally occurring data that developed out of the process of teaching and learning were used to construct multiple case studies (Stake, 2006). An iterative process of analysis using inductive and deductive methods enabled me to explore the nuances of pedagogical practice, including those that had been tacitly or intuitively known. The work of Hart, Dixon, Drummond and McIntyre (2004), Learning without Limits, and the metaphor of craft were used as a theoretical framework to support this exploration of how and why pedagogical choices and decisions were made and justified. Analysis revealed how pedagogical thinking was embedded within the complex process of life within the community. Commitment to the core idea of learners’ transformability and to the principles of co-agency, everybody and trust (Hart et al., op. cit.) was found to be necessary but not sufficient to explain pedagogical thinking. A principled belief in possibility was added to articulate how I could be determined for children’s learning without determining what would be achieved. Analysis of how these principles functioned was articulated as a practical cycle of choice, reflection and collaboration. This cycle ensured that the principles were shared within the community. The notion of attentiveness to imagination was developed to articulate how I worked to create and sustain an inclusive environment for learning. Attentiveness was used to reflect the necessary constancy of the process of teaching and learning. Imagination was used to articulate how the process of recognising children’s individuality was achieved by connecting their past, present and future lives, acknowledging how possibilities for learning were created by building on, but not being constrained by, what had come before.
APA, Harvard, Vancouver, ISO, and other styles
41

Genis, Amelia. "Numbers count: the importance of numeracy for journalists." Thesis, Stellenbosch : Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52371.

Full text
Abstract:
Thesis (MPhil), Stellenbosch University, 2001.
Few news subjects or events can be comprehensively covered in the media without numbers being used. Indeed, most reports are essentially 'number stories', or could be improved through the judicious use of numbers. Despite this, there are frequent complaints about poor levels of numeracy among journalists. Although numbers are fundamental to virtually everything they write, the most superficial review of South African newspapers indicates that most encounters between journalists and numbers of any sort are uncomfortable, to say the least. Reporters shy away from using numbers, and frequently resort to vague comments such as "many", "more", "worse" or "better". When reports do include numbers, they often don't make sense, largely because journalists are unable to do simple calculations and have little understanding of concepts such as the size of the world's population, a hectare, or a square kilometer. They frequently use numbers to lend weight to their facts without having the numerical skills to question whether the figures are correct. Numeracy is not the ability to solve complicated mathematical problems or remember and use a mass of complicated axioms and formulas; it's a practical life skill. For journalists it is the ability to understand the numbers they encounter in everyday life - percentages, exchange rates, very large and small amounts - and the ability to ask intelligent questions about these numbers before presenting them meaningfully in their reports. This thesis is not a compendium of all the mathematical formulas a journalist could ever need. It is a catalogue of the errors that are frequently made, particularly in newspapers, and suggestions to improve number usage. It will hopefully also serve to make journalists aware of the potential of numbers to improve reporting and increase accuracy. This thesis emphasises the importance of basic numeracy for all journalists, primarily by discussing the basic numerical skills without which they cannot do their job properly, but also by noting the concerns of experienced journalists, mathematicians, statisticians and educators about innumeracy in the media. Although the contents of this thesis also apply to magazine, radio and television journalists, it is primarily aimed at their counterparts at South Africa's daily and weekly newspapers. I hope the information contained herein is of use to journalists and journalism students; that it will open their eyes to the possibility of improving number usage and thereby reporting, serve as encouragement to brush up their numerical skills, and help to shed light on the numbers which surround them and which they use so readily.
APA, Harvard, Vancouver, ISO, and other styles
42

Meya, Wilhelm Krudener. "Calico winter count 1825-1877 : an ethnohistorical analysis." Thesis, The University of Arizona, 1999. http://hdl.handle.net/10150/622039.

Full text
Abstract:
The purpose of this study is to analyze the effectiveness of using the Calico winter count, a 19th century Teton Lakota winter count, as a basis for reconstructing the history of the winter count-producing group. As emic history-keeping devices, winter counts are a crucial type of indigenous data set whose importance is defined through Lakota social theory, ethnohistory theory, and comparative analysis with other historical and cultural data sets. The results of these studies will reveal that winter counts, despite their peripheral utilization in Lakota historiography, are highly credible historical sources that can play central roles in the construction of tribal histories. Winter counts are able to convey a new dimension of pre-reservation life on the plains for the Lakota people. They can be used to relate the internal reality of tribal life, while providing a more complete ethnographic context for describing the tribe historically and to aid in the creation of a convincing historical narrative. This study has important implications for future historical methodology as well as a significant social value for modern Lakota people.
APA, Harvard, Vancouver, ISO, and other styles
43

Vashisth, Abhishek. "LOW DEVICE COUNT ULTRA LOW POWER NEMS FPGA." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1383618426.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Sitter, Nicholas James. "Two-wire, low component count soil temperature sensor." Thesis, University of Iowa, 2011. https://ir.uiowa.edu/etd/1081.

Full text
Abstract:
A two-wire, low component count soil temperature sensor was developed. The sensor uses one wire for ground and the other wire is used for both power and communication. Pulse width modulation is used to send temperature measurements to the master, where the duty cycle is proportional to the temperature. The sensor parasitically powers itself from the bidirectional data line. In order to reduce the number of components necessary, a microcontroller with an internal temperature sensor is used. Finally, the sensor can receive data from the master on the bidirectional communication line, which is used for calibrating the sensor.
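Because the duty cycle is proportional to temperature, the master only has to time the high and low portions of the waveform and apply a linear mapping. A sketch of that decoding step with an invented calibration range; the sensor's actual scaling and interface are not specified here:

```python
def duty_cycle(high_time_us, low_time_us):
    """Fraction of the PWM period spent high."""
    return high_time_us / (high_time_us + low_time_us)

def duty_to_temperature(duty, t_min_c=-20.0, t_max_c=60.0):
    """Map a 0..1 duty cycle linearly onto a temperature range.

    The end points are hypothetical calibration constants, not the
    sensor's actual specification.
    """
    return t_min_c + duty * (t_max_c - t_min_c)

if __name__ == "__main__":
    # Example: 6.5 ms high and 3.5 ms low within a 10 ms period.
    d = duty_cycle(6500, 3500)
    print(f"duty = {d:.2f} -> {duty_to_temperature(d):.1f} degC")
```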
APA, Harvard, Vancouver, ISO, and other styles
45

Banker, Kristi Marie. "And count myself a king of infinite ((words))." Thesis, University of Iowa, 2014. https://ir.uiowa.edu/etd/4569.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Nicolson-Setz, Helen Ann. "Producing literacy practices that count for subject English." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16370/1/Helen_Setz_Thesis.pdf.

Full text
Abstract:
This thesis presents a study of the production of literacy practices in Year 10 English lessons in a culturally diverse secondary school in a low socio-economic area. The study explored the everyday interactional work of the teacher and students in accomplishing the literacy knowledge and practices that count for subject English. This study provides knowledge about the learning opportunities and literacy knowledge made available through the interactional work in English lessons. An understanding of the dynamics of the interactional work and what that produces opens up teaching practice to change and potentially to improve student learning outcomes. This study drew on audio-recorded data of classroom interactions between the teacher and students in four mainstream Year 10 English lessons with a culturally diverse class in a disadvantaged school, and three audio-recorded interviews with the teacher. This study employed two perspectives: ethnomethodological resources and Bernsteinian theory. The analyses of the interactional work using both perspectives showed how students might be positioned to access the literacy learning on offer. In addition, using both perspectives provided a way to associate the literacy knowledge and practices produced at the classroom level to the knowledge that counted for subject English. The analyses of the lesson data revealed the institutional and moral work necessary for the assembly of knowledge about literacy practices and for constructing student-teacher relations and identities. Documenting the ongoing interactional work of teacher and students showed what was accomplished through the talk-in-interaction and how the literacy knowledge and practices were constructed and constituted. The detailed descriptions of the ongoing interactional work showed how the literacy knowledge was modified appropriate for student learning needs, advantageously positioning the students for potential acquisition. The study produced three major findings. First, the literacy practices and knowledge produced in the classroom lessons were derived from the social and functional view of language and text in the English syllabus in use at that time. Students were not given the opportunity to use their learning beyond what was required for the forthcoming assessment task. The focus seemed to be on access to school literacies, providing students with opportunities to learn the literacy practices necessary for assessment or future schooling. Second, the teacher’s version of literacy knowledge was dominant. The teacher’s monologues and elaborations produced the literacy knowledge and practices that counted and the teacher monitored what counted as relevant knowledge and resources for the lessons. The teacher determined which texts were critiqued, thus taking a critical perspective could be seen as a topic rather than an everyday practice. Third, the teacher’s pedagogical competence was displayed through her knowledge about English, her responsibility and her inclusive teaching practice. The teacher’s interactional work encouraged positive student-teacher relations. The teacher spoke about students positively and constructed them as capable. Rather than marking student ethnic or cultural background, the teacher responded to students’ learning needs in an ongoing way, making the learning explicit and providing access to school literacies. This study’s significance lies in its detailed descriptions of teacher and student work in lessons and what that work produced. 
It documented which resources were considered relevant to produce literacy knowledge. Further, this study showed how two theoretical approaches can be used to provide richer descriptions of the teacher and student work, and literacy knowledge and practices that counted in English lessons and for subject English.
APA, Harvard, Vancouver, ISO, and other styles
47

Nicolson-Setz, Helen Ann. "Producing literacy practices that count for subject English." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16370/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Alsaadi, Yousef Saeed. "Practical use and development of biomérieux TEMPO® system in microbial food safety." Diss., Kansas State University, 2014. http://hdl.handle.net/2097/18708.

Full text
Abstract:
Doctor of Philosophy. Department of Food Science. Daniel Y.C. Fung.
In the food industry, coliform testing is traditionally done by the time-consuming and labor-intensive plate count method or by tube enumeration methods. The TEMPO® system (bioMérieux, Inc.) was developed to improve laboratory efficiency and to replace traditional methods. It uses a miniaturization of the Most Probable Number (MPN) method, with 16 tubes across 3 dilutions in a single disposable card, and utilizes two stations: the TEMPO® Preparation station and the TEMPO® Reading station. In this study, the Oxyrase® (Oxyrase, Inc.) enzyme was added to the TEMPO® CC (coliform count), TEMPO® AC (aerobic colony count) and TEMPO® EC (E. coli count) methods. Water samples of 1 ml with 0.1 ml of Oxyrase® enzyme were compared to samples without the enzyme using the TEMPO® system. Samples were spiked with different levels of coliforms (10, 10², 10³ and 10⁴ CFU/ml), stomached (20 sec), pipetted into the three different TEMPO® media reagents (4 ml) in duplicate, and then automatically transferred into the corresponding TEMPO® cards by the TEMPO® Preparation station. Counts were obtained using the TEMPO® Reading station after 8, 12, 16, 22 and 24 hours of incubation at 35°C. Results from 20 replicates were compared statistically. Using TEMPO® tests, high counts in food samples (>6 log₁₀ CFU/ml) can be read within 6±2 hours of incubation using the time-to-detection calibration curve. The TEMPO® system thus reduces reading time, and the reading protocol should be changed accordingly: there is no need to wait for 22 hours of incubation, as only 12 hours is required. The Oxyrase® enzyme is not needed for the TEMPO® system.
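The most probable number estimate behind such cards can be computed by maximum likelihood: with tubes at several dilutions, the probability that a tube turns positive is 1 - exp(-c·v), where c is the concentration and v the sample volume in the tube. A generic sketch using a crude grid search; the card layout and volumes below are placeholders, not bioMérieux's actual TEMPO® geometry:

```python
import math

def mpn_log_likelihood(conc, dilutions):
    """Log-likelihood of a concentration (organisms/ml) given MPN tube results.

    dilutions: list of (volume_ml, n_tubes, n_positive) per dilution level.
    P(tube positive) = 1 - exp(-conc * volume).
    """
    ll = 0.0
    for vol, n, pos in dilutions:
        p = 1.0 - math.exp(-conc * vol)
        p = min(max(p, 1e-12), 1 - 1e-12)   # guard against log(0)
        ll += pos * math.log(p) + (n - pos) * math.log(1.0 - p)
    return ll

def mpn_estimate(dilutions, lo=0.1, hi=1e5, steps=2000):
    """Crude grid search over a log-spaced range of candidate concentrations."""
    best_c, best_ll = lo, float("-inf")
    for i in range(steps):
        c = lo * (hi / lo) ** (i / (steps - 1))
        ll = mpn_log_likelihood(c, dilutions)
        if ll > best_ll:
            best_c, best_ll = c, ll
    return best_c

if __name__ == "__main__":
    # Hypothetical 3-dilution layout: (volume per tube in ml, tubes, positives)
    card = [(0.1, 6, 6), (0.01, 6, 3), (0.001, 4, 0)]
    print(f"MPN approx {mpn_estimate(card):.0f} organisms/ml")
```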
APA, Harvard, Vancouver, ISO, and other styles
49

Pihl, Svante, and Leonardo Olivetti. "An Empirical Comparison of Static Count Panel Data Models: the Case of Vehicle Fires in Stockholm County." Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412014.

Full text
Abstract:
In this paper we study the occurrences of outdoor vehicle fires recorded by the Swedish Civil Contingencies Agency (MSB) for the period 1998-2019, and build static panel data models to predict future occurrences of fire in Stockholm County. Through comparing the performance of different models, we look at the effect of different distributional assumptions for the dependent variable on predictive performance. Our study concludes that treating the dependent variable as continuous does not hamper performance, with the exception of models meant to predict more uncommon occurrences of fire. Furthermore, we find that assuming that the dependent variable follows a Negative Binomial Distribution, rather than a Poisson Distribution, does not lead to substantial gains in performance, even in cases of overdispersion. Finally, we notice a slight increase in the number of vehicle fires shown in the data, and reflect on whether this could be related to the increased population size.
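The distributional comparison described in this abstract can be reproduced in outline with standard GLM tooling: fit the same covariates under Poisson and negative binomial assumptions and compare information criteria and held-out predictions. A sketch on simulated overdispersed counts, assuming statsmodels; the covariates and spatial grid of the actual study are not reproduced:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Simulated monthly fire counts with one covariate and overdispersion.
n_obs = 600
x = rng.normal(size=n_obs)
mu = np.exp(0.3 + 0.5 * x)
y = rng.negative_binomial(2, 2.0 / (2.0 + mu))   # NB counts with mean mu

X = sm.add_constant(x)
train, test = slice(0, 480), slice(480, n_obs)

poisson_fit = sm.GLM(y[train], X[train], family=sm.families.Poisson()).fit()
negbin_fit = sm.GLM(y[train], X[train],
                    family=sm.families.NegativeBinomial(alpha=0.5)).fit()
# Note: in sm.GLM the NB dispersion alpha is fixed by assumption, not estimated.

for name, fit in [("Poisson", poisson_fit), ("NegBin", negbin_fit)]:
    pred = fit.predict(X[test])
    mae = np.mean(np.abs(pred - y[test]))
    print(f"{name}: AIC = {fit.aic:.1f}, test MAE = {mae:.3f}")
```

Because both families share the same mean structure, their point predictions are often close even under overdispersion, which is consistent with the modest differences the study reports.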
APA, Harvard, Vancouver, ISO, and other styles
50

Alves, Vanderli de Araújo. "Sobre o princípio fundamental da contagem." Universidade Federal do Ceará, 2015. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=15768.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
In this work we present the Fundamental Principle of Counting (PFC) as a consequence of the Principle of Finite Induction (PIF) and, as an application of the PFC, we present the solution of several problems involving counting, arrangements, permutations and combinations. The main objective of the work is to present the logical-mathematical reasoning that underlies the notion of counting, avoiding the central role that is customarily given to mathematical formulas in basic education. To this end, we solve several problems without using mathematical formulas, giving priority to the direct application of the PFC.
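A small illustration of the counting principle applied directly, without permutation formulas: enumerate the choices stage by stage and multiply the number of options. The example below is ours, not taken from the dissertation:

```python
from itertools import product

# Fundamental principle of counting: a task done in independent stages with
# a, b, c, ... options has a*b*c*... outcomes in total.
shirts = ["white", "blue", "black"]
trousers = ["jeans", "chinos"]
shoes = ["boots", "sneakers", "sandals", "loafers"]

outfits = list(product(shirts, trousers, shoes))
assert len(outfits) == len(shirts) * len(trousers) * len(shoes)  # 3*2*4 = 24
print(f"{len(outfits)} possible outfits, e.g. {outfits[0]}")

# The same reasoning gives permutations without a formula: filling k ordered
# slots from n distinct objects leaves one fewer option at each stage.
def arrangements(n, k):
    total = 1
    for slot in range(k):
        total *= (n - slot)
    return total

print(arrangements(5, 3))  # 5*4*3 = 60 ordered arrangements
```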
APA, Harvard, Vancouver, ISO, and other styles