Follow this link to see other types of publications on the topic: Counter data.

Theses / dissertations on the topic "Counter data"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


See the 50 best works (theses / dissertations) on the topic "Counter data".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, when it is present in the metadata.

Browse theses / dissertations from a wide variety of scientific fields and compile a correct bibliography.

1

Dutrisac, James George. "Counter-Surveillance in an Algorithmic World". Thesis, Kingston, Ont. : [s.n.], 2007. http://hdl.handle.net/1974/711.

Full text of the source
ABNT, Harvard, Vancouver, APA and other styles
2

Rankin, Jenny Grant. "Over-the-Counter Data's Impact on Educators' Data Analysis Accuracy". Thesis, Northcentral University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3575082.

Abstract:

There is extensive research on the benefits of making data-informed decisions, but the research also contains evidence that many educators incorrectly interpret student data. Meanwhile, detailed labeling of the kind used on over-the-counter medication has been shown to improve the use of non-medication products as well. However, the data systems most educators use to analyze student data usually display data without supporting guidance on the data's proper analysis. In this dissertation, the data equivalent of over-the-counter medicine is termed over-the-counter data: essentially, enlisting medical label conventions to pair data reports with straightforward verbiage on the proper interpretation of report contents. The researcher in this experimental, quantitative study explored the inclusion of such supports in data systems and their reports. A cross-sectional sample of 211 educators of varied backgrounds and roles at nine elementary and secondary schools throughout California answered survey questions regarding student data reports with varied forms of analysis guidance. Respondents' data analyses were found to be 307% more accurate when a report footer was present, 205% more accurate when an abstract was present, and 273% more accurate when an interpretation guide was present. These findings and others were significant and fill a void in the field literature by providing evidence that can be used to identify how data systems can increase data analysis accuracy by offering analysis support through labeling and supplemental documentation. Recommendations for future research include measuring the impact over-the-counter data has on data analysis accuracy when all supports are offered to educators in concert. Keywords: abstract, analysis, data, data-driven decision-making, DDDM, data-informed decision-making, data system, data warehouse, footer, ICT, interpretation guide, report.

3

Liu, Dapeng. "Towards developing a goal-driven data integration framework for counter-terrorism analytics". VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/5986.

Abstract:
Terrorist attacks can cause massive casualties and severe property damage, resulting in terrorism crises surging across the world; accordingly, counter-terrorism analytics that take advantage of big data have been attracting increasing attention. The knowledge and clues essential for analyzing terrorist activities are often spread across heterogeneous data sources, which calls for an effective data integration solution. In this study, employing the goal definition template in the Goal-Question-Metric approach, we design and implement an automated goal-driven data integration framework for counter-terrorism analytics. The proposed design elicits and ontologizes an input user goal of counter-terrorism analytics; recognizes goal-relevant datasets; and addresses semantic heterogeneity in the recognized datasets. Our proposed design, following the design science methodology, presents a theoretical framing for on-demand data integration designs that can accommodate diverse and dynamic user goals of counter-terrorism analytics and output integrated data tailored to these goals.
4

Domin, Annika. "Konzeption eines RDF-Vokabulars für die Darstellung von COUNTER-Nutzungsstatistiken". Master's thesis, Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-179416.

Abstract:
This master's thesis documents the creation of an RDF-based vocabulary for representing usage statistics of electronic resources compiled according to the COUNTER standard. The concrete application of this vocabulary is the Electronic Resource Management System (ERMS) currently being developed by Leipzig University Library within the cooperative project AMSL. That system is based on Linked Data, is intended to model the changed management processes for electronic resources, and is to be vendor-independent and flexible at the same time. The COUNTER vocabulary, however, is also meant to be usable beyond this application. The thesis is divided into two parts, fundamentals and modelling. The first part establishes the library-science need for ERM systems and narrows the focus to the subfield of usage statistics and COUNTER standardization. It then covers the technical foundations of the modelling, so that the thesis is accessible to readers unfamiliar with Linked Data. The modelling part follows, beginning with a requirements analysis and an analysis of the XML schema underlying the COUNTER files. Next comes the modelling of the vocabulary using RDFS and OWL. Building on considerations regarding the conversion of XML statistics to RDF and the assignment of URIs, real example files are then converted manually and successfully checked in a short test. The thesis closes with a conclusion and an outlook on further work with the results. The resulting RDF vocabulary is available for reuse on GitHub at the following URL: https://github.com/a-nnika/counter.vocab
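As a rough illustration of what such a vocabulary enables, the sketch below expresses one COUNTER-style usage figure as RDF triples in plain Python. The namespace and all term names (`Report`, `fullTextRequests`, etc.) are invented placeholders, not the vocabulary actually defined in the thesis; the real vocabulary is in the GitHub repository cited in the abstract.

```python
# Minimal sketch: a COUNTER-style usage figure as RDF triples.
# All vocabulary terms below are hypothetical placeholders.

COUNTER = "http://example.org/counter-vocab#"   # hypothetical namespace
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

report = "http://example.org/reports/jr1-2014"
triples = [
    (report, RDF_TYPE, COUNTER + "Report"),
    (report, COUNTER + "journal", "http://example.org/journals/acta-informatica"),
    (report, COUNTER + "period", "2014-01"),
    (report, COUNTER + "fullTextRequests", 42),
]

def to_ntriples(ts):
    """Serialize triples in a simplified N-Triples-like form:
    URI objects in angle brackets, everything else as a quoted literal."""
    lines = []
    for s, p, o in ts:
        obj = f"<{o}>" if isinstance(o, str) and o.startswith("http") else f'"{o}"'
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

serialized = to_ntriples(triples)
```

A Linked Data ERMS would load such triples into a triple store and query them with SPARQL; the point here is only that each COUNTER report field maps naturally onto one predicate.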
5

Jacobson, Jessica. "Using Single Propeller Performance Data to Predict Counter-Rotating Propeller Performance for a High Speed Autonomous Underwater Vehicle". Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/32753.

Abstract:
The use of counter-rotating propellers is often desirable for aerospace and ocean engineering applications. Counter-rotating propellers offer higher peak efficiencies, better off-design performance, and roll control capabilities. But counter-rotating propeller matching is a difficult and complex procedure. Although much research has been done on the design of optimal counter-rotating propeller sets, there has been less focus on predicting the performance of unmatched counter-rotating sets. In this study, it was desired to use off-the-shelf marine propellers to make a counter-rotating pair for a high speed autonomous underwater vehicle (AUV). Counter-rotating propellers were needed to provide roll control for the AUV. Pre-existing counter-rotating propeller design methods were not applicable because they all require inputs of complex propeller blade geometries. These geometries are rarely known for off-the-shelf propellers.

This study proposes a new method for predicting the counter-rotating performance of unmatched propeller sets. It is suggested here that propeller performance curves can be used to predict counter-rotating thrust and torque performance.

Propeller performance tests were run in the Virginia Tech Water Tunnel for a variety of small, off-the-shelf propellers. The collected data was used to generate the propeller performance curves. The propellers were then paired up and tested as counter-rotating sets. A momentum theory based model was formulated that predicted counter-rotating performance using the propeller performance data. The counter-rotating data was used to determine the effectiveness of the method.

A solution was found that successfully predicted the counter-rotating performance of all of the tested propeller sets using six interaction coefficients. The optimal values of these coefficients were used to write two counter-rotating performance prediction programs. The first program takes the forward and aft RPMs and the flow speed as inputs, and predicts the generated thrust and torque. The second program takes the flow speed and the desired thrust as inputs and calculates the forward and aft RPM values that will generate the desired thrust while producing zero torque. The second program was used to determine the optimal counter-rotating set for the HSAUV.
Master of Science
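The prediction idea the abstract describes can be sketched with the standard open-water propeller relations (advance ratio J = V/(nD), thrust T = K_T ρ n² D⁴), which the thesis's water-tunnel performance curves would supply. The curve points and interaction coefficients below are invented placeholders, not the fitted values from the study, which used six coefficients.

```python
import bisect

RHO = 1000.0  # water density, kg/m^3

# Hypothetical single-propeller performance curve: thrust coefficient K_T
# versus advance ratio J = V / (n * D). A real curve would come from
# water-tunnel tests like those described in the abstract.
J_PTS = [0.0, 0.2, 0.4, 0.6, 0.8]
KT_PTS = [0.35, 0.30, 0.23, 0.14, 0.03]

def kt(j):
    """Linearly interpolate K_T at advance ratio j (clamped to the table)."""
    if j <= J_PTS[0]:
        return KT_PTS[0]
    if j >= J_PTS[-1]:
        return KT_PTS[-1]
    i = bisect.bisect_right(J_PTS, j)
    f = (j - J_PTS[i - 1]) / (J_PTS[i] - J_PTS[i - 1])
    return KT_PTS[i - 1] + f * (KT_PTS[i] - KT_PTS[i - 1])

def single_prop_thrust(v, n, d):
    """Open-water thrust from the standard relation T = K_T * rho * n^2 * D^4
    (v: flow speed m/s, n: revolutions per second, d: diameter m)."""
    j = v / (n * d)
    return kt(j) * RHO * n**2 * d**4

def counter_rotating_thrust(v, n_fwd, n_aft, d, c_fwd=1.0, c_aft=0.9):
    """Total thrust of an unmatched pair, scaling each propeller's open-water
    thrust by an empirical interaction coefficient; c_fwd and c_aft are
    made-up placeholders for coefficients fitted from pair tests."""
    return (c_fwd * single_prop_thrust(v, n_fwd, d)
            + c_aft * single_prop_thrust(v, n_aft, d))

t = counter_rotating_thrust(v=2.0, n_fwd=30.0, n_aft=32.0, d=0.1)
```

The second program the abstract mentions would invert this relation: search for the (n_fwd, n_aft) pair that meets a thrust target while the forward and aft torques cancel.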

6

Avadhani, Umesh D. "Data processing in a small transit company using an automatic passenger counter". Thesis, Virginia Tech, 1986. http://hdl.handle.net/10919/45669.

Abstract:

This thesis describes the work done in the second stage of the implementation of the Automatic Passenger Counter (APC) system at the Roanoke Valley - Metro Transit Company. This second stage deals with the preparation of a few reports and plots that would help the transit managers efficiently manage the transit system. The reports and plots provide an evaluation of system and service operations with which decision makers can support their decisions.

For efficient management of the transit system, data on ridership activity, running times, schedule information, and fare revenue are required. From these data it is possible to produce management information reports and summary statistics.

The present data collection program at Roanoke Valley - Metro relies on checkers and supervisors collecting ridership and schedule-adherence information using manual methods. The information needed for efficient management of transit operations is both difficult and expensive to obtain. The new APC system offers management a new and powerful tool that will enhance its capability to make better decisions when allocating service. The data from the APC are essential for the transit property's ongoing planning and scheduling activities. Management could easily quantify the service demands on a route or for the whole system, as desired by the user.


Master of Science
7

Vavassori, Luca. "SSC: Single-Shot Multiscale Counter: Counting Generic Objects in Images". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264236.

Abstract:
Counting objects in pictures is a computer vision task that has been explored in the past years, achieving state-of-the-art results thanks to the rise of convolutional neural networks. Most of the work has focused on specific and limited domains, predicting the count of just one category, such as people, cars, cells, or animals. Little effort has been devoted to investigating methods that count the instances of different classes at the same time. This thesis work explored the different approaches present in the literature to understand their strengths and weaknesses and, eventually, to improve the accuracy and reduce the inference time of models that estimate the number of multiple elements. First, new techniques were applied on top of previously proposed algorithms to lower the prediction error. Second, the possibility of adapting an object detector to the counting task while avoiding the localization prediction was investigated. As a result, a new model called the Single-Shot Multiscale Counter is proposed, based on the architecture of the Single-Shot Multibox Detector. It achieved an 11% lower prediction error on the ground-truth count (from an mRMSE of 0.42 to 0.35) and an inference time 16x to 20x faster than the models found in the literature (from 1.25 s to 0.049 s).
8

Hua, Nan. "Space-efficient data sketching algorithms for network applications". Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44899.

Abstract:
Sketching techniques are widely adopted in network applications. Sketching algorithms "encode" data into succinct data structures that can later be accessed and "decoded" for various purposes, such as network measurement, accounting, and anomaly detection. Bloom filters and counter braids are two well-known representatives of this category. Sketching algorithms usually need to strike a tradeoff between performance (how much information can be revealed, and how fast) and cost (storage, transmission and computation). This dissertation is dedicated to the research and development of several sketching techniques, including improved forms of stateful Bloom filters, statistical counter arrays and error estimating codes. A Bloom filter is a space-efficient randomized data structure for approximately representing a set in order to support membership queries. The Bloom filter and its variants have found widespread use in many networking applications, where it is important to minimize the cost of storing and communicating network data. In this thesis, we propose a family of Bloom filter variants augmented by a rank-indexing method. We show that such augmentation can bring a significant reduction in space and in the number of memory accesses, especially when deletions of set elements from the Bloom filter need to be supported. The exact active counter array is another important building block in many sketching algorithms, where the storage cost of the array is of paramount concern. Previous approaches reduce the storage costs while either losing accuracy or supporting only passive measurements. In this thesis, we propose an exact statistics counter array architecture that supports active measurements (real-time read and write). It also leverages the aforementioned rank-indexing method and exploits statistical multiplexing to minimize the storage costs of the counter array.
Error estimating coding (EEC) has recently been established as an important tool to estimate bit error rates in the transmission of packets over wireless links. In essence, the EEC problem is also a sketching problem, since the EEC codes can be viewed as a sketch of the packet sent, which is decoded by the receiver to estimate the bit error rate. In this thesis, we first investigate the asymptotic bound of error estimating coding by viewing the problem from a two-party computation perspective, and then investigate its coding/decoding efficiency using Fisher information analysis. Further, we develop several sketching techniques, including the enhanced tug-of-war (EToW) sketch and the generalized EEC (gEEC) sketch family, which achieve around a 70% reduction in sketch size with similar estimation accuracy. For all the solutions proposed above, we use theoretical tools such as information theory and communication complexity to investigate how far our proposed solutions are from the theoretical optimum. We show that the proposed techniques are asymptotically or empirically very close to the theoretical bounds.
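For readers unfamiliar with the building block discussed above, here is a minimal textbook Bloom filter in Python. It shows only the basic membership-query idea (k hash positions over an m-bit array, false positives possible, false negatives impossible); the rank-indexed, deletion-supporting variants the dissertation proposes are more involved.

```python
import hashlib

class BloomFilter:
    """Minimal textbook Bloom filter: k hash positions in an m-bit array,
    stored here as one Python integer used as a bitset."""

    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # Derive k positions by salting SHA-256 with the probe index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        # May return a false positive, but never a false negative.
        return all(self.bits >> p & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add("10.0.0.1")
bf.add("192.168.0.7")
```

With m = 1024 bits and k = 4 hashes, a handful of inserted items keeps the false-positive probability negligible; sizing m and k against the expected set cardinality is the usual engineering tradeoff.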
9

Eriksson, Tilda. "Change Detection in Telecommunication Data using Time Series Analysis and Statistical Hypothesis Testing". Thesis, Linköpings universitet, Matematiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94530.

Abstract:
In the base station system of the GSM mobile network there are a large number of counters tracking the behaviour of the system. When the software of the system is updated, we wish to find out which of the counters have changed their behaviour. This thesis work has shown that the counter data can be modelled as a stochastic time series with a daily profile and a noise term. The change detection can be done by estimating the daily profile and the variance of the noise term and performing statistical hypothesis tests of whether the mean value and/or the daily profile of the counter data before and after the software update can be considered equal. When the chosen counter data has been analysed, it seems reasonable in most cases to assume that the noise terms are approximately independent and normally distributed, which justifies the hypothesis tests. When the change detection is tested on data where the software is unchanged and on data with known software updates, the results are as expected in most cases. Thus the method seems to be applicable under the conditions studied.
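The approach described in the abstract (daily profile plus noise, followed by a hypothesis test) can be sketched roughly as follows. The z-test on residual means is a simplification of the thesis's tests, and all data below are made up for illustration.

```python
import math
import statistics as st

def daily_profile(days):
    """Estimate the daily profile as the hour-wise mean over several days.
    `days` is a list of equal-length lists of hourly counter values."""
    return [st.mean(h) for h in zip(*days)]

def mean_shift_detected(before_days, after_days, z_crit=1.96):
    """Subtract the profile estimated before the software update, then z-test
    whether the residual mean after the update differs from zero (assumes
    approximately independent, normally distributed noise)."""
    profile = daily_profile(before_days)
    resid_before = [x - p for day in before_days for x, p in zip(day, profile)]
    resid_after = [x - p for day in after_days for x, p in zip(day, profile)]
    sigma = st.stdev(resid_before) or 1e-9      # noise std from pre-update data
    z = st.mean(resid_after) / (sigma / math.sqrt(len(resid_after)))
    return abs(z) > z_crit

base = [10, 12, 30, 28, 15, 11]                       # underlying hourly profile
before = [[v + d for v in base] for d in (-1, 0, 1, 0, -1)]   # pre-update days
after_same = [[v + d for v in base] for d in (0, 1, -1)]      # behaviour unchanged
after_shift = [[v + 5 for v in base] for _ in range(3)]       # mean shifted by 5
```

Testing equality of the daily profile itself (not just the mean) would replace the single z-statistic with a per-hour test or an F-type statistic, as in the thesis.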
10

Feng, Shiguang. "The Expressive Power, Satisfiability and Path Checking Problems of MTL and TPTL over Non-Monotonic Data Words". Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-208823.

Abstract:
Recently, verification and analysis of data words have gained a lot of interest. Metric temporal logic (MTL) and timed propositional temporal logic (TPTL) are two extensions of linear-time temporal logic (LTL). In MTL, the temporal operators are indexed by a constraint interval. TPTL is a more powerful logic equipped with a freeze formalism: it uses register variables, which can be set to the current data value and later compared with the current data value. For monotonic data words, Alur and Henzinger proved that MTL and TPTL are equally expressive and that the satisfiability problem is decidable. We study the expressive power, satisfiability problems and path checking problems for MTL and TPTL over all data words. We introduce Ehrenfeucht-Fraïssé games for MTL and TPTL. Using the EF-game for MTL, we show that TPTL is strictly more expressive than MTL. Furthermore, we show that the MTL definability problem, i.e. whether a given TPTL formula is definable in MTL, is not decidable. When restricting the number of register variables, we are able to show that TPTL with two register variables is strictly more expressive than TPTL with one register variable. For the satisfiability problem, we show that SAT is undecidable for MTL, the unary fragment of MTL and the pure fragment of MTL. We prove the undecidability by reductions from the recurrent state problem and the halting problem of two-counter machines. For the positive fragments of MTL and TPTL, we show that a positive formula is satisfiable if and only if it is satisfied by a finite data word; finitary SAT and infinitary SAT coincide for positive MTL and positive TPTL, and both are r.e.-complete. For existential TPTL and existential MTL, we show that SAT is NP-complete. We also investigate the complexity of path checking problems for TPTL and MTL over data words, which can be either finite or infinite periodic.

For periodic words without data values, the complexity of LTL model checking belongs to the class AC^1(LogDCFL). For finite monotonic data words, the same complexity bound has been shown for MTL by Bundala and Ouaknine. We show that path checking is PSPACE-complete for TPTL and P-complete for MTL. If the number of register variables allowed is restricted, path checking for TPTL with only one register variable is P-complete over both infinite and finite data words, and for TPTL with two register variables it is PSPACE-complete over infinite data words. If the constraint numbers of the input TPTL formula are encoded in unary notation, path checking for TPTL with a constant number of variables is P-complete over infinite unary-encoded data words. Since the infinite data word produced by a deterministic one-counter machine is periodic, all complexity results for the infinite periodic case transfer to model checking over deterministic one-counter machines.
11

Boskovitz, Agnes. "Data Editing and Logic: The covering set method from the perspective of logic". The Australian National University. Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20080314.163155.

Abstract:
Errors in collections of data can cause significant problems when those data are used. Therefore the owners of data find themselves spending much time on data cleaning. This thesis is a theoretical work about one part of the broad subject of data cleaning - to be called the covering set method. More specifically, the covering set method deals with data records that have been assessed by the use of edits, which are rules that the data records are supposed to obey. The problem solved by the covering set method is the error localisation problem, which is the problem of determining the erroneous fields within data records that fail the edits. In this thesis I analyse the covering set method from the perspective of propositional logic. I demonstrate that the covering set method has strong parallels with well-known parts of propositional logic. The first aspect of the covering set method that I analyse is the edit generation function, which is the main function used in the covering set method. I demonstrate that the edit generation function can be formalised as a logical deduction function in propositional logic. I also demonstrate that the best-known edit generation function, written here as FH (standing for Fellegi-Holt), is essentially the same as propositional resolution deduction. Since there are many automated implementations of propositional resolution, the equivalence of FH with propositional resolution gives some hope that the covering set method might be implementable with automated logic tools. However, before any implementation, the other main aspect of the covering set method must also be formalised in terms of logic. This other aspect, to be called covering set correctibility, is the property that must be obeyed by the edit generation function if the covering set method is to successfully solve the error localisation problem. 
In this thesis I demonstrate that covering set correctibility is a strengthening of the well-known logical properties of soundness and refutation completeness. What is more, the proofs of the covering set correctibility of FH and of the soundness / completeness of resolution deduction have strong parallels: while the proof of soundness / completeness depends on the reduction property for counter-examples, the proof of covering set correctibility depends on the related lifting property. In this thesis I also use the lifting property to prove the covering set correctibility of the function defined by the Field Code Forest Algorithm. In so doing, I prove that the Field Code Forest Algorithm, whose correctness has been questioned, is indeed correct. The results about edit generation functions and covering set correctibility apply to both categorical edits (edits about discrete data) and arithmetic edits (edits expressible as linear inequalities). Thus this thesis gives the beginnings of a theoretical logical framework for error localisation, which might give new insights to the problem. In addition, the new insights will help develop new tools using automated logic tools. What is more, the strong parallels between the covering set method and aspects of logic are of aesthetic appeal.
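Since the abstract identifies the FH edit generation function with propositional resolution deduction, a minimal resolution step may make the connection concrete. The clause encoding below is a generic textbook formulation, not the thesis's notation: a clause is a frozenset of literals, and a literal is a (variable, polarity) pair.

```python
from itertools import product

def resolvents(c1, c2):
    """All clauses obtained by resolving clauses c1 and c2 on a complementary
    pair of literals. A literal is ("x", True) for x or ("x", False) for not-x."""
    out = set()
    for (v1, s1), (v2, s2) in product(c1, c2):
        if v1 == v2 and s1 != s2:
            # Drop the complementary pair, union the remaining literals.
            out.add(frozenset((c1 - {(v1, s1)}) | (c2 - {(v2, s2)})))
    return out

# Resolving (p or q) with (not-p or r) on p yields (q or r).
c1 = frozenset({("p", True), ("q", True)})
c2 = frozenset({("p", False), ("r", True)})
derived = resolvents(c1, c2)
```

In the error-localisation analogy the abstract draws, generating new implied edits from existing ones plays the role that deriving resolvents plays here, which is why soundness and completeness arguments carry over.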
12

Leung, Joseph, Michio Aoyagi, Donald Billings, Herbert Hoy, Mei Lin and Fred Shigemoto. "A MOBILE RANGE SYSTEM TO TRACK TELEMETRY FROM A HIGH-SPEED INSTRUMENTATION PACKAGE". International Foundation for Telemetering, 1998. http://hdl.handle.net/10150/607380.

Abstract:
International Telemetering Conference Proceedings / October 26-29, 1998 / Town & Country Resort Hotel and Convention Center, San Diego, California
As renewed interest in building vehicles based on hypersonic technologies begins to emerge again, test ranges anticipating supporting flight research on these vehicles will face a set of engineering problems. The most fundamental of these will be to track and gather error-free telemetry from the vehicles in flight. The first series of vehicles will likely be reduced-scale models that restrict the locations and geometric shapes of the telemetry antennas. High kinetic heating will further limit antenna design and construction. Consequently, antenna radiation patterns will be sub-optimal, showing lower gains and detrimental nulls. A mobile system designed to address the technical issues above is described. The use of antenna arrays, spatial diversity and a hybrid tracking system using optical and electronic techniques to obtain error-free telemetry in the presence of multipath is presented, along with system test results.
13

Ji, Yuxiong. "Distribution-based Approach to Take Advantage of Automatic Passenger Counter Data in Estimating Period Route-level Transit Passenger Origin-Destination Flows: Methodology Development, Numerical Analyses and Empirical Investigations". The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1299688722.

14

Parker, Marc. "Cicero, money and the challenge of 'new terrorism' : is counter terrorist financing (CTF) a critical inhibitor? : should the emphasis on finance interventions prevail?" Thesis, University of St Andrews, 2014. http://hdl.handle.net/10023/4900.

Abstract:
Much of the first-generation literature on counter terrorist financing made sweeping generalisations and observations regarding these interventions based on relatively limited case study data. Given that the UK approach to counter terrorism clearly attests to the symbiosis between terrorism and money, this thesis evaluates the contemporary relevance of Cicero's aphorism that "the sinews of war are infinite money." Drawing on a series of discussions and formal interviews with CTF practitioners into several of the most recent high profile terror attacks in the United Kingdom, it confirms a notable shift in terrorist financing methodology in recent years and underscores the trend towards increasing operational independence and financial autonomy. It thus considers the continuing centrality of money in the terrorism equation and has been framed specifically to examine the financing challenges posed by domestic terror cells in the UK, given the trend towards low cost terrorism with its emphasis on self sufficiency and the emergence of more discreet and 'criminally sterile' funding methodologies. This thesis is primarily concerned with reviewing the efficacy of the UK counter terrorist financing (CTF) model as perceived by practitioners, both in policy terms and in the context of operational outcomes. The increasing emphasis on new funding methodologies, and the ensuing lack of visibility and opportunities for interdiction at the conspiracy phase of terrorist plots, further highlights the operational challenges posed for practitioners in confronting these 'new' threats. As such, this research encourages several new perspectives, including a review of UK corporate knowledge on previous CTF interventions and consideration of military 'threat finance' practice to deliver greater operational impact. In particular, it advocates a new focus on micro CTF interventions to address changes in the 'economy of terror'.
Finally, this thesis strongly attests to the continued relevance of finance or more specifically, the 'financial footprint' to inform and provide intelligence insight for counter terrorism responses generally. In doing so, it also considers the impact on privacy from increasingly intrusive financial and digital data collection and the trade-offs that inevitably emerge when liberty and security collide.
15

Beridzishvili, Jumber. "When the state cannot deal with online content : Reviewing user-driven solutions that counter political disinformation on Facebook". Thesis, Malmö universitet, Malmö högskola, Institutionen för globala politiska studier (GPS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-18502.

Abstract:
The damage online disinformation does to the world's democracies has been critical. Yet states fail to handle online content harms. Owing to its exemption from legal liability for hosted content, Facebook, used by a third of the world's population, operates 'duty-free' along with other social media companies.

Concerned with solutions, studies have advanced the idea that social resistance could be one of the most effective ways of combating disinformation. However, exactly how we resist is an unsettled subject. Are there any socially driven processes against disinformation happening out there?

This paper aimed to identify such processes in order to give a boost to theory-building around the topic. Two central evidence cases were developed: the #IAmHere digital movement fighting disinformation, and the innovative tool 'Who is Who' for distinguishing fake accounts. Based on the findings, I argue that efforts by even a very small part of society can have a significant impact on defeating online disinformation. This is because digital activism has phenomenal particularities for shaping online political discourse around disinformation. Tools such as 'Who is Who', on the other hand, build social resilience against the issue, also giving digital activists a boost for mass reporting of disinformation content. User-driven solutions have significant potential for further research.

Keywords: online disinformation; algorithms; digital activism; user-driven solutions.
16

Kotlar, Kim Leslie. "Predicting actions taken to counter economic sanctions : an examination of U.S. government financial data collection and its usefulness in determining if foreign governments anticipate economic sanctions : a case study of Iraq". Thesis, Monterey, California. Naval Postgraduate School, 1992. http://hdl.handle.net/10945/23749.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
17

Vítek, Pavel. "Business Intelligence analýza podnikání v lékárně Alfa ve městě Nymburk". Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-72466.

Texto completo da fonte
Resumo:
This thesis deals with a business analysis of a real company, a pharmacy operating in the market environment of the city of Nymburk. The main focus is on the pharmacy's actual position in the local market in the context of a new competitor entering it. The thesis is divided into two consistent parts. The first part is a short theoretical introduction to the methods used and the general background of the market in the city of Nymburk. The following practical part analyzes the company's business and the development of over-the-counter medicine sales in the context of a new competitor entering the market during the period examined for the purpose of this thesis. The methods applied to achieve the main goals of the thesis are the following: the SWOT analysis method, used to discover the strengths and weaknesses of the company itself and to define threats and opportunities based on the market environment; the Balanced Scorecard method, used to design Key Performance Indicators for measuring and observing the company's performance and development; and finally the data mining methods of market basket analysis, segmentation and forecasting, used to analyse trends in over-the-counter medicine sales. All these methods are keystones for the formulation of a basic concept for future strategic and tactical decisions.
Estilos ABNT, Harvard, Vancouver, APA, etc.
18

Chong, Wai Yu Ryan. "Statistical and analytic data processing based on SQL7 with web interface". Leeds, 2001. http://www.leeds.ac.uk/library/counter2/compstmsc/20002001/chong.pdf.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
19

COSTA, FABIO E. da. "Desenvolvimento de conjunto detector cintilador com sistema de contagens e aquisição de dados para medidas de vazão utilizando traçadores radioativos". Repositório Institucional do IPEN, 2001. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10850.

Texto completo da fonte
Resumo:
Dissertação (Mestrado)
IPEN/D
Instituto de Pesquisas Energéticas e Nucleares - IPEN/CNEN-SP
Estilos ABNT, Harvard, Vancouver, APA, etc.
20

Osuna, Echavarría Leyre Estíbaliz. "Semiparametric Bayesian Count Data Models". Diss., lmu, 2004. http://nbn-resolving.de/urn:nbn:de:bvb:19-25573.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
21

Zeileis, Achim, Christian Kleiber e Simon Jackman. "Regression Models for Count Data in R". Foundation for Open Access Statistics, 2008. http://epub.wu.ac.at/4986/1/Zeileis_etal_2008_JSS_Regression%2DModels%2Dfor%2DCount%2DData%2Din%2DR.pdf.

Texto completo da fonte
Resumo:
The classical Poisson, geometric and negative binomial regression models for count data belong to the family of generalized linear models and are available at the core of the statistics toolbox in the R system for statistical computing. After reviewing the conceptual and computational features of these methods, a new implementation of hurdle and zero-inflated regression models in the functions hurdle() and zeroinfl() from the package pscl is introduced. It re-uses the design and functionality of the basic R functions, just as the underlying conceptual tools extend the classical models. Both hurdle and zero-inflated models are able to incorporate over-dispersion and excess zeros, two problems that typically occur in count data sets in economics and the social sciences, better than their classical counterparts. Using cross-section data on the demand for medical care, it is illustrated how the classical as well as the zero-augmented models can be fitted, inspected and tested in practice. (authors' abstract)
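The hurdle() and zeroinfl() functions mentioned in the abstract fit these models in R's pscl package; as a language-neutral sketch of the underlying idea only (hypothetical parameter values, not the pscl implementation), the zero-inflated Poisson mass function mixes a point mass at zero with a Poisson count:

```python
import math

def zip_pmf(y, lam, pi):
    """Zero-inflated Poisson: a structural zero occurs with probability
    pi; otherwise the count is drawn from a Poisson(lam) distribution."""
    poisson = math.exp(-lam) * lam ** y / math.factorial(y)
    if y == 0:
        return pi + (1 - pi) * poisson  # excess zeros enter only here
    return (1 - pi) * poisson
```

With pi = 0 the pmf reduces to the classical Poisson; a hurdle model would instead truncate the Poisson part at zero and model the zero counts by a separate process.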
Estilos ABNT, Harvard, Vancouver, APA, etc.
22

Liu, Dong, e Jing Wang. "The determinants of internationalstudent mobility : An empirical study on U.S. Data". Thesis, Högskolan Dalarna, Nationalekonomi, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:du-3759.

Texto completo da fonte
Resumo:
The increase in foreign students in countries such as the US, the UK and France suggests that the international 'education industry' is growing in importance. The purpose of this paper is to investigate the empirical determinants of international student mobility. A secondary purpose is to give tentative policy suggestions to host and source countries and also to provide some recommendations to students who want to study abroad. Using pooled cross-sectional time series data for the US over the time period 1993-2006, we estimate an econometric model of enrolment rates of foreign students in the US. Our results suggest that tuition fees, US federal support of education, and the size of the 'young' generation of source countries have a significant influence on international student mobility. We also consider other factors that may be relevant in this context.
Estilos ABNT, Harvard, Vancouver, APA, etc.
23

Pihl, Svante, e Leonardo Olivetti. "An Empirical Comparison of Static Count Panel Data Models: the Case of Vehicle Fires in Stockholm County". Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412014.

Texto completo da fonte
Resumo:
In this paper we study the occurrences of outdoor vehicle fires recorded by the Swedish Civil Contingencies Agency (MSB) for the period 1998-2019, and build static panel data models to predict future occurrences of fire in Stockholm County. Through comparing the performance of different models, we look at the effect of different distributional assumptions for the dependent variable on predictive performance. Our study concludes that treating the dependent variable as continuous does not hamper performance, with the exception of models meant to predict more uncommon occurrences of fire. Furthermore, we find that assuming that the dependent variable follows a Negative Binomial Distribution, rather than a Poisson Distribution, does not lead to substantial gains in performance, even in cases of overdispersion. Finally, we notice a slight increase in the number of vehicle fires shown in the data, and reflect on whether this could be related to the increased population size.
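The choice the paper weighs between Poisson and negative binomial assumptions turns on overdispersion, i.e. the variance exceeding the mean. A minimal sketch of the variance-to-mean dispersion index (illustrative data, not the MSB records used in the thesis):

```python
def dispersion_index(counts):
    """Sample variance-to-mean ratio: values near 1 are consistent with
    a Poisson model, values well above 1 suggest overdispersion and a
    negative binomial (or similar) alternative."""
    n = len(counts)
    mean = sum(counts) / n
    variance = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return variance / mean

# Hypothetical monthly fire counts: a few high-count months inflate
# the variance far beyond the mean.
monthly_fires = [0, 0, 1, 0, 7, 0, 0, 9, 1, 0]
```

Computing the index before model selection is a quick diagnostic; the paper's finding is that even under overdispersion the gain from the negative binomial over the Poisson was not substantial for these data.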
Estilos ABNT, Harvard, Vancouver, APA, etc.
24

Duan, Yuanyuan. "Statistical Predictions Based on Accelerated Degradation Data and Spatial Count Data". Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/56616.

Texto completo da fonte
Resumo:
This dissertation aims to develop methods for statistical predictions based on various types of data from different areas. We focus on applications from reliability and spatial epidemiology. Chapter 1 gives a general introduction of statistical predictions. Chapters 2 and 3 investigate the photodegradation of an organic coating, which is mainly caused by ultraviolet (UV) radiation but also affected by environmental factors, including temperature and humidity. In Chapter 2, we identify a physically motivated nonlinear mixed-effects model, including the effects of environmental variables, to describe the degradation path. Unit-to-unit variabilities are modeled as random effects. The maximum likelihood approach is used to estimate parameters based on the accelerated test data from laboratory. The developed model is then extended to allow for time-varying covariates and is used to predict outdoor degradation where the explanatory variables are time-varying. Chapter 3 introduces a class of models for analyzing degradation data with dynamic covariate information. We use a general path model with random effects to describe the degradation paths and a vector time series model to describe the covariate process. Shape restricted splines are used to estimate the effects of dynamic covariates on the degradation process. The unknown parameters of these models are estimated by using the maximum likelihood method. Algorithms for computing the estimated lifetime distribution are also described. The proposed methods are applied to predict the photodegradation path of an organic coating in a complicated dynamic environment. Chapter 4 investigates the Lyme disease emergency in Virginia at census tract level. Based on areal (census tract level) count data of Lyme disease cases in Virginia from 1998 to 2011, we analyze the spatial patterns of the disease using statistical smoothing techniques. 
We also use the space and space-time scan statistics to reveal the presence of clusters in the spatial and spatial/temporal distribution of Lyme disease. Chapter 5 builds a predictive model for Lyme disease based on historical data and environmental/demographical information of each census tract. We propose a Divide-Recombine method to take advantage of parallel computing. We compare prediction results through simulation studies, which show our method can provide comparable fitting and predicting accuracy but can achieve much more computational efficiency. We also apply the proposed method to analyze Virginia Lyme disease spatio-temporal data. Our method makes large-scale spatio-temporal predictions possible. Chapter 6 gives a general review on the contributions of this dissertation, and discusses directions for future research.
Ph. D.
Estilos ABNT, Harvard, Vancouver, APA, etc.
25

Hellström, Jörgen. "Count data modelling and tourism demand". Doctoral thesis, Umeå universitet, Institutionen för nationalekonomi, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-82168.

Texto completo da fonte
Resumo:
This thesis consists of four papers concerning modelling of count data and tourism demand. For three of the papers the focus is on the integer-valued autoregressive moving average model class (INARMA), and especially on the INAR(1) model. The fourth paper studies the interaction between households' choice of the number of leisure trips and the number of overnight stays within a bivariate count data modelling framework. Paper [I] extends the basic INAR(1) model to enable more flexible and realistic empirical economic applications. The model is generalized by relaxing some of the model's basic independence assumptions. Results are given in terms of first and second conditional and unconditional order moments. Extensions to general INAR(p), time-varying, multivariate and threshold models are also considered. Estimation by conditional least squares and generalized method of moments techniques is feasible. Monte Carlo simulations for two of the extended models indicate reasonable estimation and testing properties. An illustration based on the number of Swedish mechanical paper and pulp mills is considered. Paper [II] considers the robustness of a conventional Dickey-Fuller (DF) test for the testing of a unit root in the INAR(1) model. Finite sample distributions for a model with Poisson distributed disturbance terms are obtained by Monte Carlo simulation. These distributions are wider than those of AR(1) models with normally distributed error terms. As the drift and sample size, respectively, increase, the distributions appear to tend to T-2) and standard normal distributions. The main results are summarized by an approximating equation that also enables calculation of critical values for any sample and drift size. Paper [III] utilizes the INAR(1) model to model the day-to-day movements in the number of guest nights in hotels. By cross-sectional and temporal aggregation an INARMA(1,1) model for monthly data is obtained.
The approach enables easy interpretation and econometric modelling of the parameters, in terms of daily mean check-in and check-out probability. Empirically, approaches accounting for seasonality by dummies and by using differenced series, as well as forecasting, are studied for a series of Norwegian guest nights in Swedish hotels. In a forecast evaluation the improvement from introducing economic variables is minute. Paper [IV] empirically studies households' joint choice of the number of leisure trips and the total number of nights stayed on these trips. The paper introduces a bivariate count hurdle model to account for the relatively high frequencies of zeros. A truncated bivariate mixed Poisson lognormal distribution, allowing for both positive as well as negative correlation between the count variables, is utilized. Inflation techniques are used to account for the clustering of leisure time to weekends. Simulated maximum likelihood is used as the estimation method. A small policy study indicates that households substitute trips for nights as travel costs increase.
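The INAR(1) model central to the first three papers replaces the scalar multiplication of an AR(1) recursion with binomial thinning: X_t = α ∘ X_{t-1} + ε_t, where α ∘ X counts the survivors of X independent Bernoulli(α) trials and ε_t is a count-valued (here Poisson) innovation. A simulation sketch with hypothetical parameter values, for illustration only:

```python
import math
import random

def simulate_inar1(n, alpha, lam, seed=1):
    """Simulate X_t = alpha ∘ X_{t-1} + eps_t: each of the X_{t-1}
    units survives with probability alpha (binomial thinning) and
    eps_t ~ Poisson(lam) new units arrive."""
    rng = random.Random(seed)

    def poisson_draw(rate):
        # Knuth's multiplication method; adequate for small rates.
        threshold, k, prod = math.exp(-rate), 0, 1.0
        while True:
            prod *= rng.random()
            if prod <= threshold:
                return k
            k += 1

    x, path = 0, []
    for _ in range(n):
        survivors = sum(rng.random() < alpha for _ in range(x))
        x = survivors + poisson_draw(lam)
        path.append(x)
    return path
```

For Poisson innovations the stationary mean is λ/(1 − α), so a long path with α = 0.5 and λ = 2 should average near 4; unlike a Gaussian AR(1), every simulated value stays a non-negative integer.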

In addition, 4 papers are appended.


Estilos ABNT, Harvard, Vancouver, APA, etc.
26

He, Xin. "Semiparametric analysis of panel count data". Diss., Columbia, Mo. : University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/4774.

Texto completo da fonte
Resumo:
Thesis (Ph. D.)--University of Missouri-Columbia, 2007.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on November 27, 2007). Vita. Includes bibliographical references.
Estilos ABNT, Harvard, Vancouver, APA, etc.
27

Quoreshi, Shahiduzzaman. "Modelling high frequency financial count data /". Umeå : Umeå University, 2005. http://swopec.hhs.se/umnees/abs/umnees0656.htm.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
28

Wan, Chung-him, e 溫仲謙. "Analysis of zero-inflated count data". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43703719.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
29

Wan, Chung-him. "Analysis of zero-inflated count data". Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B43703719.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
30

Gao, Dexiang. "Analysis of clustered longitudinal count data /". Connect to full text via ProQuest. Limited to UCD Anschutz Medical Campus, 2007.

Encontre o texto completo da fonte
Resumo:
Thesis (Ph.D. in Analytic Health Sciences, Department of Preventive Medicine and Biometrics) -- University of Colorado Denver, 2007.
Typescript. Includes bibliographical references (leaves 75-77). Free to UCD affiliates. Online version available via ProQuest Digital Dissertations.
Estilos ABNT, Harvard, Vancouver, APA, etc.
31

Zhuang, Lili. "Bayesian Dynamical Modeling of Count Data". The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1315949027.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
32

Chanialidis, Charalampos. "Bayesian mixture models for count data". Thesis, University of Glasgow, 2015. http://theses.gla.ac.uk/6371/.

Texto completo da fonte
Resumo:
Regression models for count data are usually based on the Poisson distribution. This thesis is concerned with Bayesian inference in more flexible models for count data. Two classes of models and algorithms are presented and studied in this thesis. The first employs a generalisation of the Poisson distribution called the COM-Poisson distribution, which can represent both overdispersed data and underdispersed data. We also propose a density regression technique for count data, which, albeit centered around the Poisson distribution, can represent arbitrary discrete distributions. The key contributions of this thesis are MCMC-based methods for posterior inference in these models. One key challenge in COM-Poisson-based models is the fact that the normalisation constant of the COM-Poisson distribution is not known in closed form. We propose two exact MCMC algorithms which address this problem. One is based on the idea of retrospective sampling: we first sample the uniform random variable used to decide on the acceptance (or rejection) of the proposed new state of the unknown parameter, and then only evaluate bounds for the acceptance probability, in the hope that we will not need to know the acceptance probability exactly in order to come to a decision on whether to accept or reject the newly proposed value. This strategy is based on an efficient scheme for computing lower and upper bounds for the normalisation constant. This procedure can be applied to a number of discrete distributions, including the COM-Poisson distribution. The other MCMC algorithm proposed is based on an algorithm known as the exchange algorithm. The latter requires sampling from the COM-Poisson distribution, and we describe how this can be done efficiently using rejection sampling. We also present simulation studies which show the advantages of using the COM-Poisson regression model compared to the alternative models commonly used in the literature (Poisson and negative binomial).
Three real-world applications are presented: the number of emergency hospital admissions in Scotland in 2010, the number of papers published by Ph.D. students, and fertility data from the second German Socio-Economic Panel. COM-Poisson distributions are also the cornerstone of the proposed density regression technique based on Dirichlet process mixture models. Density regression can be thought of as a competitor to quantile regression. Quantile regression estimates the quantiles of the conditional distribution of the response variable given the covariates. This is especially useful when the dispersion changes across the covariates. Instead of estimating the conditional mean, quantile regression estimates the conditional quantile function across different quantiles. As a result, quantile regression models both location and shape shifts of the conditional distribution. This allows for a better understanding of how the covariates affect the conditional distribution of the response variable. Almost all quantile regression techniques deal with a continuous response. Quantile regression models for count data have so far received little attention. A technique that has been suggested is adding uniform random noise ('jittering'), thus overcoming the problem that, for a discrete distribution, the conditional quantile function is not a continuous function of the parameters of interest. Even though this enables us to estimate the conditional quantiles of the response variable, it has disadvantages. For small values of the response variable Y, the added noise can have a large influence on the estimated quantiles. In addition, the problem of 'crossing quantiles' still exists for the jittering method. We eliminate all the aforementioned problems by estimating the density of the data, rather than the quantiles. Simulation studies show that the proposed approach performs better than the already established jittering method.
To illustrate the new method we analyse fertility data from the second German Socio-Economic Panel.
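The COM-Poisson distribution at the heart of this thesis has pmf proportional to λ^y/(y!)^ν, with ν < 1 giving overdispersion, ν > 1 underdispersion, and ν = 1 recovering the Poisson. Its normalising constant is an infinite series with no closed form, which is exactly the difficulty the MCMC algorithms above address; the following is only a naive truncation sketch, not the thesis's bounding scheme:

```python
import math

def com_poisson_pmf(y, lam, nu, terms=200):
    """COM-Poisson pmf P(Y = y) ∝ lam**y / (y!)**nu. The normalising
    constant Z(lam, nu) has no closed form; here it is approximated by
    truncating the series, computed in log space for stability."""
    log_w = lambda j: j * math.log(lam) - nu * math.lgamma(j + 1)
    log_weights = [log_w(j) for j in range(terms)]
    m = max(log_weights)  # log-sum-exp trick
    z = sum(math.exp(lw - m) for lw in log_weights)
    return math.exp(log_w(y) - m) / z
```

For ν = 1 the pmf reduces to the Poisson, which gives a quick sanity check; for large λ or small ν the truncation point would need to grow, one reason the exact bounding schemes in the thesis are preferable to a fixed cut-off.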
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Burger, George William. "Quantitative Analysis of Cross-Country Flight Performance Data :". Connect to this title online, 2005. http://hdl.handle.net/1811/297.

Texto completo da fonte
Resumo:
Thesis (Honors)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages: contains xi, 66 p.; also includes graphics. Includes bibliographical references (p. 65-66). Available online via Ohio State University's Knowledge Bank.
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Sages, Harry M. "Implementing a data analysis system for the calibration of an iodine neutrino detector". Virtual Press, 1997. http://liblink.bsu.edu/uhtbin/catkey/1048385.

Texto completo da fonte
Resumo:
This study presents a comprehensive overview of the significance and results of implementing a data analysis system for the calibration of an iodine neutrino detector. Previous neutrino detectors have failed to confirm the standard solar model or settle the question of a massive neutrino. An iodine detector, which was proposed in 1988, is being constructed in the hope of resolving these issues. Before the iodine detector can give conclusive results, it must first be calibrated. Because there is no standard neutrino source, these calibrations must be done indirectly. The method for calibrating the 127-Iodine detector uses a (p,n) reaction at 0° on an iodine target and a proton beam provided by the Indiana University Cyclotron Facility (IUCF). When a neutrino is captured by 127-Iodine, the nucleus becomes an excited state of 127-Xenon at an energy of 125 keV. By measuring the Gamow-Teller strength function of the transition from the ground state in 127-Iodine to the 125 keV excited state in 127-Xenon, the iodine detector can be suitably calibrated.
Department of Physics and Astronomy
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Heinen, Andreas. "Modelling time series counts data in financial microstructure /". Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2004. http://wwwlib.umi.com/cr/ucsd/fullcit?p3130202.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Slother, Alisha Rene'. "Trend Analysis of County Coroner's Data on Suicide". Cleveland State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=csu1447780520.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Shuma, Mercy Violet 1957. "Design of a microcomputer "time interval board" for time interval statistical analysis of nuclear systems". Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276685.

Texto completo da fonte
Resumo:
A microcomputer-based hardware unit, the Time Interval Board, was designed and its software interface control program was developed. The board measures time intervals between consecutive pulses from a discriminator output. The data are stored in on-board 16K x 16 memory. The microcomputer empties and processes the data when the on-board memory is filled. Data collection continues until the preset collection period finishes or a forced end is initiated. During this period, control is passed between the hardware and the microcomputer via the interface circuit. The designed hardware is IBM PC compatible.
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Fernández, Fontelo Amanda. "New models of count data with applications". Doctoral thesis, Universitat Autònoma de Barcelona, 2018. http://hdl.handle.net/10803/666009.

Texto completo da fonte
Resumo:
Donat que les dades de recompte es troben en molts fenòmens reals, la necessitat de mètodes i tècniques de qualitat per modelitzar i analitzar aquest tipus de dades és completament indiscutible. En aquest sentit, durant els últims anys, s'han trobat molts articles a la literatura dins dels que s'han desenvolupat tant mètodes bàsics com més generals per l'anàlisi d'aquestes dades. Tot i que a la literatura hi ha un ampli ventall de treballs que tracten alguns dels problemes més rellevants de les dades de recompte, molts altres problemes encara no s'han abordat. Aquesta tesi doctoral té la finalitat d'introduir nous mètodes i tècniques per analitzar alguns dels problemes de les dades de recompte com la sobredispersió, la inflació al zero (i la deflació al zero), i el fenomen que es dona quan hi ha falta de recomptes. Aquesta tesis està formada per un conjunt de publicacions que presenten i discuteixen en detall alguns dels mètodes proposats per tractar els problemes anteriorment mencionats. Particularment, dos d'aquests articles [1, 2] es centren en ajustar el fenomen de falta de recomptes, proposant dos models basats en els processos autoregressius de dades discretes i no negatives. A més a més, s'han estudiat una sèrie d'aplicacions, en diferents contextos, basades en dades reals, amb la finalitat de demostrar la usabilitat d'aquests nous models. D'altra banda, el treball [3] proposa un model més general de series temporals de recomptes. Aquest model considera series temporals amb una sobredispersió moderada, independentment de si la sèrie és o no estacionaria. Aquest nou model s'ha utilitzat per analitzar les dades de mortalitat recollides en granges bovines a petita escala. Aquestes dades de mortalitat tenen la particularitat de ser recomptes baixos, amb molts zeros i una sobredispersió força lleugera. Aquesta anàlisi forma part d'un projecte del Ministeri d'Agricultura, Pesca i Alimentació del Govern d'Espanya.
L'última publicació que s'ha inclòs en aquesta tesi [4] proposa una prova exacta de bondat d'ajustament per detectar la inflació al zero (i la deflació al zero) en distribucions discretes dins del marc de la dosimetria biològica. La prova proposada en aquest treball va ser introduïda per primer cop per [5], derivada dels problemes d'ocupació. En el context de la dosimetria biològica, aquest nou test es considera un complement del test clàssic u quan les dades no són sobredisperses (sotadisperses), però sí estan inflades al zero (no inflades al zero). Els mètodes introduïts en aquesta tesi doctoral es poden veure com a petits signes de progrés dins de l'anàlisi de dades de recompte. Aquests mètodes permeten estudiar problemes des de diferents punts de vista, mostrant resultats especialment bons quan s'analitzen problemes reals dins de l'àmbit de la salut pública i la dosimetria biològica. No obstant, encara que aquest treball és un avenç dins de l'anàlisi de dades de recompte, molts més esforços s'han de fer per anar millorant les tècniques i les eines d'anàlisis de dades de recompte. [1] Fernández-Fontelo, A., Cabaña, A., Puig, P. and Moriña, D. (2016). Under-reported data analysis with INAR-hidden Markov chains. Statistics in Medicine; 35(26): 4875-4890. [2] Fernández-Fontelo, A., Cabaña, A., Joe, H., Puig, P. and Moriña, D. Count time series models with under-reported data for gender-based violence in Galicia (Spain). Submitted. [3] Fernández-Fontelo, A., Fontdecaba, S., Alba, A. and Puig, P. (2017). Integer-valued AR processes with Hermite innovations and time-varying parameters: An application to bovine fallen stock surveillance at a local scale. Statistical Modelling; 17(3): 172-195. [4] Fernández-Fontelo, A., Puig, P., Ainsbury, E.A. and Higueras, M. (2018). An exact goodness-of-fit test based on the occupancy problems to study zero-inflation and zero-deflation in biological dosimetry data. Radiation Protection Dosimetry: 1-10. [5] Rao, C.R.
and Chakravarti, I.M. (1956). Some small sample tests of significance for a Poisson distribution. Biometrics; 12: 264-282.
Since count data are present in the nature of many real processes, the need for high-quality methods and techniques to accurately model and analyse these data is irrefutable. In this sense, in recent years, many comprehensive works have been presented in the literature in which both primary and more general methods for dealing with count data have been developed based on different approaches. Despite the vast amount of excellent works dealing with the major concerns in count data, some issues related to these data remain to be addressed. This Ph.D. thesis is aimed at introducing novel methods and techniques of count data analysis to deal with issues such as overdispersion, zero-inflation (and zero-deflation), and the phenomenon of under-reporting. In this sense, this thesis comprises different publications where innovative methods have been presented and discussed in detail. In particular, two of these articles [1, 2] are focused on the assessment of the under-reporting issue in count time series. These works propose two realistic models based on integer-valued autoregressive models. In addition, real-data applications within different frameworks are studied to demonstrate the practicality of these proposed models. On the other hand, the paper by [3] proposes a general model of count time series, which considers slightly overdispersed data, even if a series is non-stationary. This model has been used to analyse data of fallen cattle collected at a local scale, where series have low counts, many zeros, and moderate overdispersion, as part of a project commissioned by the Ministry of Agriculture, Food and Environment of Spain. The last paper included in this thesis [4] proposes an exact goodness-of-fit test for detecting zero-inflation (and zero-deflation) in count distributions within the biological dosimetry framework. The test suggested in [4] was first introduced by [5], derived from occupancy problems.
In the biological dosimetry context, this test is viewed as a complement to the commonly used u-test when data are not overdispersed (not underdispersed) but are zero-inflated (zero-deflated). The methods introduced in this Ph.D. thesis can be viewed as small but relevant signs of progress in count data analysis. They allow several issues of count data to be studied from different points of view, showing especially good results when dealing with real-world concerns in the public health and biological dosimetry frameworks. Although this work constitutes an advance in count data analysis, further efforts are needed to improve the existing techniques and tools. [1] Fernández-Fontelo, A., Cabaña, A., Puig, P. and Moriña, D. (2016). Under-reported data analysis with INAR-hidden Markov chains. Statistics in Medicine; 35(26): 4875-4890. [2] Fernández-Fontelo, A., Cabaña, A., Joe, H., Puig, P. and Moriña, D. Count time series models with under-reported data for gender-based violence in Galicia (Spain). Submitted. [3] Fernández-Fontelo, A., Fontdecaba, S., Alba, A. and Puig, P. (2017). Integer-valued AR processes with Hermite innovations and time-varying parameters: An application to bovine fallen stock surveillance at a local scale. Statistical Modelling; 17(3): 172-195. [4] Fernández-Fontelo, A., Puig, P., Ainsbury, E.A. and Higueras, M. (2018). An exact goodness-of-fit test based on the occupancy problems to study zero-inflation and zero-deflation in biological dosimetry data. Radiation Protection Dosimetry: 1-10. [5] Rao, C.R. and Chakravarti, I.M. (1956). Some small sample tests of significance for a Poisson distribution. Biometrics; 12: 264-282.
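The under-reporting models in [1] and [2] are INAR-based hidden Markov constructions; as a much simpler, purely illustrative mechanism (not the cited model), each event in a true count can be independently recorded with some reporting probability, so the observed series is a binomial thinning of the truth:

```python
import random

def underreport(true_counts, p_report, seed=0):
    """Generic under-reporting illustration (not the INAR-hidden-Markov
    model of the cited papers): each event in the true count is
    independently recorded with probability p_report, so each observed
    value is a Binomial(true_count, p_report) draw."""
    rng = random.Random(seed)
    return [sum(rng.random() < p_report for _ in range(x)) for x in true_counts]
```

Even this toy mechanism shows why naive analysis of reported counts underestimates incidence: the observed mean is only p_report times the true mean, which is the kind of bias the latent-process models in [1, 2] are built to correct.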
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Crowe, Brenda. "Seasonal and calendar estimation for count data". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ27901.pdf.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Kalktawi, Hadeel Saleh. "Discrete Weibull regression model for count data". Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/14476.

Texto completo da fonte
Resumo:
Data can be collected in the form of counts in many situations. In other words, the number of deaths from an accident, the number of days until a machine stops working or the number of annual visitors to a city may all be considered as interesting variables for study. This study is motivated by two facts; first, the vital role of the continuous Weibull distribution in survival analyses and failure time studies. Hence, the discrete Weibull (DW) is introduced analogously to the continuous Weibull distribution, (see, Nakagawa and Osaki (1975) and Kulasekera (1994)). Second, researchers usually focus on modeling count data, which take only non-negative integer values as a function of other variables. Therefore, the DW, introduced by Nakagawa and Osaki (1975), is considered to investigate the relationship between count data and a set of covariates. Particularly, this DW is generalised by allowing one of its parameters to be a function of covariates. Although the Poisson regression can be considered as the most common model for count data, it is constrained by its equi-dispersion (the assumption of equal mean and variance). Thus, the negative binomial (NB) regression has become the most widely used method for count data regression. However, even though the NB can be suitable for the over-dispersion cases, it cannot be considered as the best choice for modeling the under-dispersed data. Hence, it is required to have some models that deal with the problem of under-dispersion, such as the generalized Poisson regression model (Efron (1986) and Famoye (1993)) and COM-Poisson regression (Sellers and Shmueli (2010) and Sáez-Castillo and Conde-Sánchez (2013)). Generally, all of these models can be considered as modifications and developments of Poisson models. However, this thesis develops a model based on a simple distribution with no modification. 
Thus, if the data do not follow the dispersion system of Poisson or NB, the true structure generating the data should be detected. Applying a model that has the ability to handle different dispersions would be of great interest. Thus, in this study, the DW regression model is introduced. Besides the flexibility of the DW to model under- and over-dispersion, it is a good model for inhomogeneous and highly skewed data, such as those with excessive zero counts, which are more dispersed than Poisson. Although these data can be fitted well using some developed models, namely, the zero-inflated and hurdle models, the DW demonstrates a good fit and has less complexity than these modified models. However, there could be some cases when a special model that separates the probability of zeros from that of the other positive counts must be applied. Then, to cope with the problem of too many observed zeros, two modifications of the DW regression are developed, namely, the zero-inflated discrete Weibull (ZIDW) and hurdle discrete Weibull (HDW) models. Furthermore, this thesis considers another type of data, where the response count variable is censored from the right, which is observed in many experiments. Applying the standard models for these types of data without considering the censoring may yield misleading results. Thus, the censored discrete Weibull (CDW) model is employed for this case. On the other hand, this thesis introduces the median discrete Weibull (MDW) regression model for investigating the effect of covariates on the count response through the median, which is more appropriate for the skewed nature of count data. In other words, the likelihood of the DW model is re-parameterized to explain the effect of the predictors directly on the median.
Thus, in comparison with generalized linear models (GLMs), MDW and GLMs both investigate the relation to a set of covariates via certain location measures; however, GLMs consider the mean, which is not the best way to represent skewed data. These DW regression models are investigated through simulation studies to illustrate their performance. In addition, they are applied to some real data sets and compared with the related count models, mainly Poisson and NB models. Overall, the DW models provide a good fit to the count data as an alternative to the NB models in the over-dispersion case and fit much better than the Poisson models. Additionally, contrary to the NB model, the DW can be applied in the under-dispersion case.
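The Nakagawa-Osaki discrete Weibull discussed in this abstract has a simple closed-form pmf, P(X = x) = q^(x^β) − q^((x+1)^β) for x = 0, 1, 2, ...; a minimal sketch (parameter values chosen only for illustration) shows how the shape parameter β moves the variance-to-mean ratio to either side of 1, the flexibility the thesis exploits:

```python
def dw_pmf(x, q, beta):
    """P(X = x) for the Nakagawa-Osaki discrete Weibull: q^(x^beta) - q^((x+1)^beta)."""
    return q ** (x ** beta) - q ** ((x + 1) ** beta)

def dw_moments(q, beta, upper=500):
    # Truncated sums; the tail beyond `upper` is negligible for these parameters.
    probs = [dw_pmf(x, q, beta) for x in range(upper)]
    mean = sum(x * p for x, p in enumerate(probs))
    var = sum((x - mean) ** 2 * p for x, p in enumerate(probs))
    return mean, var

# beta = 1 recovers the geometric distribution (over-dispersed: var > mean),
# while a larger beta concentrates the mass (under-dispersed: var < mean).
for beta in (1.0, 3.0):
    mean, var = dw_moments(0.5, beta)
    print(f"beta={beta}: variance/mean = {var / mean:.3f}")
```

With q fixed, a single parameter thus spans both dispersion regimes, which is what lets the DW regression handle data that neither Poisson nor NB fits well.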
41

Zeileis, Achim, Christian Kleiber, and Simon Jackman. "Regression Models for Count Data in R". Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2007. http://epub.wu.ac.at/1168/1/document.pdf.

Full text of the source
Abstract:
The classical Poisson, geometric and negative binomial regression models for count data belong to the family of generalized linear models and are available at the core of the statistics toolbox in the R system for statistical computing. After reviewing the conceptual and computational features of these methods, a new implementation of zero-inflated and hurdle regression models in the functions zeroinfl() and hurdle() from the package pscl is introduced. It re-uses design and functionality of the basic R functions just as the underlying conceptual tools extend the classical models. Both model classes are able to incorporate over-dispersion and excess zeros - two problems that typically occur in count data sets in economics and the social and political sciences - better than their classical counterparts. Using cross-section data on the demand for medical care, it is illustrated how the classical as well as the zero-augmented models can be fitted, inspected and tested in practice. (author's abstract)
Series: Research Report Series / Department of Statistics and Mathematics
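The distinction between the two model classes implemented in zeroinfl() and hurdle() can be sketched without R: a zero-inflated Poisson mixes a point mass at zero with a full Poisson, while a hurdle model fits the zeros separately and draws the positives from a zero-truncated Poisson. A small Python illustration (parameter values are arbitrary):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    return exp(-lam) * lam ** k / factorial(k)

def zip_pmf(k, pi, lam):
    """Zero-inflated Poisson: a point mass at zero mixed with a full Poisson."""
    extra = pi if k == 0 else 0.0
    return extra + (1 - pi) * poisson_pmf(k, lam)

def hurdle_pmf(k, p0, lam):
    """Hurdle Poisson: zeros modelled separately; positives from a zero-truncated Poisson."""
    if k == 0:
        return p0
    return (1 - p0) * poisson_pmf(k, lam) / (1 - poisson_pmf(0, lam))

pi, lam = 0.3, 2.0
p0 = zip_pmf(0, pi, lam)  # match the hurdle's zero probability to the ZIP's

# Both pmfs sum to one; with P(0) matched they coincide on the positive counts.
total_zip = sum(zip_pmf(k, pi, lam) for k in range(60))
total_hurdle = sum(hurdle_pmf(k, p0, lam) for k in range(60))
```

When the hurdle's zero probability equals the ZIP's total zero mass, the two pmfs agree everywhere, which makes concrete why the classes differ only in how the zeros are parameterised.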
42

Zhao, Qi. "Towards Ideal Network Traffic Measurement: A Statistical Algorithmic Approach". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19821.

Full text of the source
Abstract:
Thesis (Ph.D)--Computing, Georgia Institute of Technology, 2008.
Committee Chair: Xu, Jun; Committee Member: Ammar, Mostafa; Committee Member: Feamster, Nick; Committee Member: Ma, Xiaoli; Committee Member: Zegura, Ellen.
43

Bailey, Daniel John. "Data mining of early day motions and multiscale variance stabilisation of count data". Thesis, University of Bristol, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492552.

Full text of the source
44

Peng, Jin. "Count Data Models for Injury Data from the National Health Interview Survey (NHIS)". The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1365780835.

Full text of the source
45

Yildiz, Dilek. "Methods for combining administrative data to estimate population counts". Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/397608/.

Full text of the source
Abstract:
Governments require information about population counts and characteristics in order to make plans, develop policies and provide public services. The main source of this information is the traditional population census. However, censuses are costly, and the information collected by decennial censuses goes out of date easily. For this reason, this thesis has two main aims: to develop methodologies to combine administrative data sources to estimate population counts in the absence of a traditional census, and to produce uncertainty estimates for the estimated population counts. Although the methodologies are illustrated using administrative data sources from England and Wales, they can easily be applied to other countries' administrative data sources. The most comprehensive administrative sources in England and Wales are the NHS Patient Register and the Customer Information System. However, it is known that both of these sources exceed the census estimates. Therefore, it is crucial to use another source to adjust the bias when estimating population counts from these administrative sources. Three substantial chapters assessing methodologies to combine administrative sources with auxiliary information are presented. The first of these chapters presents a basis methodology, log-linear models with offsets, which is extended in the following chapters. The second chapter extends these models by using individually linked administrative sources. The third chapter improves on the basis models to produce measures of uncertainty. This thesis evaluates different log-linear models in terms of their capacity for producing accurate population counts for age group, sex and local authority groups, both within the classical and the Bayesian framework.
On the other hand, it also presents a detailed perspective to understand which population groups tend to be missed by the administrative data in England and Wales, and how much they can be improved just by combining them with the specific association structures obtained from auxiliary data sources.
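The "log-linear models with offsets" mentioned above can be illustrated with a small simulation. In the sketch below (variable names and data-generating choices are hypothetical, not taken from the thesis), the log of an administrative register count enters a Poisson log-linear model as an offset with its coefficient fixed at 1, and the estimated coefficients adjust the register toward reference counts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: an administrative register gives a count per cell, and
# reference counts are generated so that log(mu) = log(register) + X @ beta.
n = 200
register = rng.integers(500, 5000, size=n).astype(float)
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
beta_true = np.array([-0.05, 0.10])                     # log-scale bias adjustment
y = rng.poisson(register * np.exp(X @ beta_true))       # simulated reference counts

# Newton-Raphson for the Poisson log-likelihood with offset log(register);
# the offset shifts the linear predictor but is not itself estimated.
beta_hat = np.zeros(2)
for _ in range(25):
    mu = register * np.exp(X @ beta_hat)
    grad = X.T @ (y - mu)
    hess = X.T @ (mu[:, None] * X)
    beta_hat = beta_hat + np.linalg.solve(hess, grad)
```

The fitted beta_hat recovers the adjustment that maps the (biased) administrative counts onto the reference counts, which is the role the offset plays in the thesis's population-estimation models.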
46

Kritzer, Matthew Carroll. "GIS and archaeology : investigating source data and site patterning". Virtual Press, 1995. http://liblink.bsu.edu/uhtbin/catkey/935936.

Full text of the source
Abstract:
Using a Geographic Information System (GIS), locational analysis was performed for prehistoric sites recorded during a 1985 surface survey conducted in Henry County, Indiana. Two sensitivity models were developed to identify areas more likely to contain substantial archaeological resources. Both models were based on environmental data derived largely from soil survey information. An intuitive model was created and "blindly" applied to the study area. This model did not interpret the distribution of sites very well. During development of an alternative model, the 1985 survey data was more thoroughly investigated. Site locations were found to be correlated with Soil Conservation Service drainage categories. In upland areas, sites with ten or more artifacts clustered around pockets of very poorly drained Millgrove loam soils. In lowland areas, sites with ten or more artifacts exhibited a preference for well drained soils. Before and during analysis, the integrity of source data was investigated. A United States Geological Survey 7.5-minute digital elevation model was found to be unsuitable for analysis within the study area. Mapping errors were discovered within the 1985 survey data. Global Positioning System (GPS) technology, which can increase the spatial integrity of survey data, was demonstrated and used to register and adjust source data. A mapping-grade GPS base station was established at Ball State University.
Department of Anthropology
47

Ibukun, Michael Abimbola. "Modely s Touchardovým rozdělením". Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-445468.

Full text of the source
Abstract:
In 2018, Raul Matsushita, Donald Pianto, Bernardo B. De Andrade, Andre Cançado & Sergio Da Silva published a paper titled "Touchard distribution", which presented a model that is a two-parameter extension of the Poisson distribution. This model has its normalizing constant related to the Touchard polynomials, hence the name of this model. This diploma thesis is concerned with the properties of the Touchard distribution for which delta is known. Two asymptotic tests based on two different statistics were carried out for comparison in a Touchard model with two independent samples, supported by simulations in R.
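For reference, the Touchard pmf is proportional to λ^k (k+1)^δ / k!, with the normalising constant τ(λ, δ) given by the corresponding series. A short sketch (the truncation level is chosen only for illustration) checks that δ = 0 collapses τ to e^λ and recovers the ordinary Poisson distribution:

```python
from math import exp, factorial

def touchard_pmf(k, lam, delta, upper=120):
    """Touchard pmf: lam^k (k+1)^delta / (k! * tau), with tau the normalising series."""
    tau = sum(lam ** j * (j + 1) ** delta / factorial(j) for j in range(upper))
    return lam ** k * (k + 1) ** delta / (factorial(k) * tau)

# delta = 0 collapses tau to e^lam, so the pmf reduces to the Poisson pmf.
p_touchard = touchard_pmf(3, 2.0, 0.0)
p_poisson = exp(-2.0) * 2.0 ** 3 / factorial(3)
```

The extra parameter δ tilts the Poisson weights by (k+1)^δ, which is what gives the model its additional dispersion flexibility.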
48

Afonso, Helena Maria Dias. "Corruption and Education: Empirical evidence from cross-country micro-data". Master's thesis, NSBE - UNL, 2013. http://hdl.handle.net/10362/11657.

Full text of the source
Abstract:
A Work Project, presented as part of the requirements for the Award of a Masters Degree in Economics from the NOVA – School of Business and Economics
This study uses two micro-level datasets to perform an empirical assessment of the role of education in the decision to be corrupt. Bribe payments are used as a proxy for corruption. The results show that increasing the level of schooling increases the propensity to bribe. A model of costs and benefits is assumed, and regressions are run to evaluate the effect of education on an intrinsic and an extrinsic cost of being corrupt, measured by justifiability of corruption and perception of corruption, respectively. The estimates show education increases one's intrinsic cost (decreases justifiability) and decreases one's extrinsic cost (increases perception).
49

Grunow, Nathan Daniel. "Analysis of Recurrent Polyp Data in the Presence of Misclassification". Thesis, The University of Arizona, 2016. http://hdl.handle.net/10150/622835.

Full text of the source
Abstract:
Several standard methods are available to analyze and estimate parameters of count data. None of these methods are designed to account for potential misclassification of the data, where counts are observed or recorded as higher or lower than their actual value. These false counts can result in erroneous conclusions and biased estimates. For this paper, a standard estimation model was modified in several ways in order to incorporate each misclassification mechanism. The probability distribution of the observed data was derived and combined with informative distributions for the misclassification parameters. Once this additional information was taken into account, a distribution of the observed data conditional on only the parameter of interest was obtained. By incorporating information about the misclassification mechanisms, the resulting estimation is more accurate than the standard methods. To demonstrate the flexibility of this approach, data from a count distribution affected by various misclassification mechanisms were simulated. Each dataset was analyzed by several standard estimation methods and an appropriate new method. The results from all simulated data were compared, and the impact of each mechanism with regard to each estimation method was discussed. Data from a colorectal polyp prevention study were also analyzed with all available methods to showcase the incorporation of additional covariates.
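One simple misclassification mechanism of the kind described, sketched here under assumed settings rather than the thesis's actual model, is undercounting: if each event in a Poisson(λ) count is recorded independently with known probability p, the observed counts are Poisson(λp), so a naive estimator is biased low and incorporating the mechanism (here, dividing by p) removes the bias:

```python
import math
import random

random.seed(1)

def poisson_draw(lam):
    # Knuth's multiplication method; adequate for small lam.
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        prod *= random.random()
        if prod <= limit:
            return k - 1

# Illustrative mechanism (assumed, not from the thesis): every event in a
# Poisson(lam) count is recorded with known probability p, so the observed
# counts follow Poisson(lam * p).
lam_true, p_detect, n = 6.0, 0.8, 50_000
observed = []
for _ in range(n):
    true_count = poisson_draw(lam_true)
    observed.append(sum(random.random() < p_detect for _ in range(true_count)))

naive = sum(observed) / n        # biased low: estimates lam * p, not lam
corrected = naive / p_detect     # accounting for the mechanism removes the bias
```

In the thesis's Bayesian framing, the known p would instead be an informative distribution on the misclassification parameter, but the bias of the naive estimator is the same phenomenon.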
50

Leonte, Daniela School of Mathematics UNSW. "Flexible Bayesian modelling of gamma ray count data". Awarded by: University of New South Wales. School of Mathematics, 2003. http://handle.unsw.edu.au/1959.4/19147.

Full text of the source
Abstract:
Bayesian approaches to prediction and the assessment of predictive uncertainty in generalized linear models are often based on averaging predictions over different models, and this requires methods for accounting for model uncertainty. In this thesis we describe computational methods for Bayesian inference and model selection for generalized linear models, which improve on existing techniques. These methods are applied to the building of flexible models for gamma ray count data (data measuring the natural radioactivity of rocks) at the Castlereagh Waste Management Centre, which served as a hazardous waste disposal facility for the Sydney region between March 1978 and August 1998. Bayesian model selection methods for generalized linear models enable us to approach problems of smoothing, change point detection and spatial prediction for these data within a common methodological and computational framework, by considering appropriate basis expansions of a mean function. The data at Castlereagh were collected in the following way. A number of boreholes were drilled at the site, and for each borehole a gamma ray detector recorded gamma ray emissions at different depths as the detector was raised gradually from the bottom of the borehole to ground level. The profile of intensity of gamma counts can be informative about the geology at each location, and estimation of intensity profiles raises problems of smoothing and change point detection for count data. The gamma count profiles can also be modelled spatially, to inform the geological profile across the site. Understanding the geological structure of the site is important for modelling the transport of chemical contaminants beneath the waste disposal area. The structure of the thesis is as follows. Chapter 1 describes the Castlereagh hazardous waste site and the geophysical data, which motivated the methodology developed in this research. 
We summarise the principles of Gamma Ray (GR) logging, a method routinely employed by geophysicists and environmental engineers in the detailed evaluation of hazardous site geology, and detail the use of the Castlereagh data in this research. In Chapter 2 we review some fundamental ideas of Bayesian inference and computation and discuss them in the context of generalised linear models. Chapter 3 details the theoretical basis of our work. Here we give a new Markov chain Monte Carlo sampling scheme for Bayesian variable selection in generalized linear models, which is analogous to the well-known Swendsen-Wang algorithm for the Ising model. Special cases of this sampling scheme are used throughout the rest of the thesis. In Chapter 4 we discuss the use of methods for Bayesian model selection in generalized linear models in two specific applications, which we implement on the Castlereagh data. First, we consider smoothing problems where we flexibly estimate the dependence of a response variable on one or more predictors, and we apply these ideas to locally adaptive smoothing of gamma ray count data. Second, we discuss how the problem of multiple change point detection can be cast as one of model selection in a generalized linear model, and consider application to change point detection for gamma ray count data. In Chapter 5 we consider spatial models based on partitioning a spatial region of interest into cells via a Voronoi tessellation, where the number of cells and the positions of their centres is unknown, and show how these models can be formulated in the framework of established methods for Bayesian model selection in generalized linear models. We implement the spatial partition modelling approach to the spatial analysis of gamma ray data, showing how the posterior distribution of the number of cells, cell centres and cell means provides us with an estimate of the mean response function describing spatial variability across the site. 
Chapter 6 presents some conclusions and suggests directions for future research. A paper based on the work of Chapter 3 has been accepted for publication in the Journal of Computational and Graphical Statistics, and a paper based on the work in Chapter 4 has been accepted for publication in Mathematical Geology. A paper based on the spatial modelling of Chapter 5 is in preparation and will be submitted for publication shortly. The work in this thesis was collaborative, to a greater or lesser extent in its various components. I authored Chapters 1 and 2 entirely, including definition of the problem in the context of the CWMC site, data gathering and preparation for analysis, and review of the literature on computational methods for Bayesian inference and model selection for generalized linear models. I also authored Chapters 4 and 5 and benefited from some of Dr Nott's assistance in developing the algorithms. In Chapter 3, Dr Nott led the development of sampling scheme B (corresponding to having non-zero interaction parameters in our Swendsen-Wang type algorithm). I developed the algorithm for sampling scheme A (corresponding to setting all algorithm interaction parameters to zero in our Swendsen-Wang type algorithm) and performed the comparison of the performance of the two sampling schemes. The final discussion in Chapter 6 and the direction for further research in the case study context are also my work.
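The change-point idea described in Chapter 4 can be illustrated in a much simpler conjugate setting (this sketch is not the thesis's Swendsen-Wang scheme, and the data are toy values): with a Gamma(a, b) prior on each segment's Poisson rate, each segment's marginal likelihood is available in closed form, and a single change point can be scored by the product of the two segments' marginals:

```python
from math import lgamma, log

a, b = 1.0, 1.0  # weak Gamma(a, b) prior on each segment's Poisson rate

def log_marginal(ys):
    """log of the Poisson likelihood integrated against a Gamma(a, b) prior:
    b^a / Gamma(a) * Gamma(a + S) / (b + m)^(a + S) / prod(y_i!)."""
    s, m = sum(ys), len(ys)
    out = a * log(b) - lgamma(a) + lgamma(a + s) - (a + s) * log(b + m)
    return out - sum(lgamma(y + 1) for y in ys)

# Toy counts with an obvious shift in level after the first 30 observations.
counts = [2] * 30 + [8] * 30
n = len(counts)

# Score every possible change point tau (size of the left segment).
scores = {tau: log_marginal(counts[:tau]) + log_marginal(counts[tau:])
          for tau in range(1, n)}
best_tau = max(scores, key=scores.get)
```

Exponentiating and normalising the scores gives the posterior over the change point; the thesis's model-selection machinery generalises this to multiple change points and covariate effects within the GLM framework.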