Academic literature on the topic 'Event data methods'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Event data methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Event data methods"

1

Schaubel, Douglas E., and Jianwen Cai. "Semiparametric Methods for Clustered Recurrent Event Data." Lifetime Data Analysis 11, no. 3 (2005): 405–25. http://dx.doi.org/10.1007/s10985-005-2970-y.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Schaubel, Douglas E., and Jianwen Cai. "Multiple imputation methods for recurrent event data with missing event category." Canadian Journal of Statistics 34, no. 4 (2006): 677–92. http://dx.doi.org/10.1002/cjs.5550340408.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Morháč, Miroslav, and Vladislav Matoušek. "Event sorting methods of γ-ray spectroscopic data." Computer Physics Communications 182, no. 3 (2011): 600–610. http://dx.doi.org/10.1016/j.cpc.2010.11.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kieser, Meinhard. "Statistical methods for the analysis of adverse event data." Pharmaceutical Statistics 15, no. 4 (2016): 290–91. http://dx.doi.org/10.1002/pst.1759.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Van Boxtel, Geert J. M. "Computational and statistical methods for analyzing event-related potential data." Behavior Research Methods, Instruments, & Computers 30, no. 1 (1998): 87–102. http://dx.doi.org/10.3758/bf03209419.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Galbraith, Christopher, Padhraic Smyth, and Hal S. Stern. "Statistical Methods for the Forensic Analysis of Geolocated Event Data." Forensic Science International: Digital Investigation 33 (July 2020): 301009. http://dx.doi.org/10.1016/j.fsidi.2020.301009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bok, Kyoungsoo, Daeyun Kim, and Jaesoo Yoo. "Complex Event Processing for Sensor Stream Data." Sensors 18, no. 9 (2018): 3084. http://dx.doi.org/10.3390/s18093084.

Full text
Abstract:
As a large amount of stream data are generated through sensors over the Internet of Things environment, studies on complex event processing have been conducted to detect information required by users or specific applications in real time. A complex event is made by combining primitive events through a number of operators. However, the existing complex event-processing methods take a long time because they do not consider similarity and redundancy of operators. In this paper, we propose a new complex event-processing method considering similar and redundant operations for stream data from sensors in real time. In the proposed method, a similar operation in common events is converted into a virtual operator, and redundant operations on the same events are converted into a single operator. The event query tree for complex event detection is reconstructed using the converted operators. Through this method, the cost of comparison and inspection of similar and redundant operations is reduced, thereby decreasing the overall processing cost. To demonstrate the superior performance of the proposed method, it is evaluated in comparison with existing methods.
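
A minimal Python sketch of the operator-sharing idea (a toy registry with a hypothetical SEQ operator; not the authors' implementation): identical operators submitted by different queries are detected by key and instantiated once, so redundant work is merged.

from collections import defaultdict

class OperatorPool:
    """Registers operators; identical (op, inputs) pairs share one node."""
    def __init__(self):
        self.nodes = {}                      # (op, inputs) -> node id
        self.subscribers = defaultdict(list) # node id -> subscribing queries

    def register(self, op, inputs, query_id):
        key = (op, tuple(sorted(inputs)))
        if key not in self.nodes:            # create the operator only once
            self.nodes[key] = len(self.nodes)
        node = self.nodes[key]
        self.subscribers[node].append(query_id)
        return node

pool = OperatorPool()
# Two queries ask for the same SEQ(A, B) pattern: one shared node results.
n1 = pool.register("SEQ", ["A", "B"], query_id="q1")
n2 = pool.register("SEQ", ["A", "B"], query_id="q2")
assert n1 == n2                              # redundant operation merged
print(len(pool.nodes), "operator node(s) serve 2 queries")
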
APA, Harvard, Vancouver, ISO, and other styles
8

Bakoyannis, Giorgos, and Giota Touloumi. "Practical methods for competing risks data: A review." Statistical Methods in Medical Research 21, no. 3 (2011): 257–72. http://dx.doi.org/10.1177/0962280210394479.

Full text
Abstract:
Competing risks data arise naturally in medical research, when subjects under study are at risk of more than one mutually exclusive event such as death from different causes. The competing risks framework also includes settings where different possible events are not mutually exclusive but the interest lies on the first occurring event. For example, in HIV studies where seropositive subjects are receiving highly active antiretroviral therapy (HAART), treatment interruption and switching to a new HAART regimen act as competing risks for the first major change in HAART. This article introduces competing risks data and critically reviews the widely used statistical methods for estimation and modelling of the basic (estimable) quantities of interest. We discuss the increasingly popular Fine and Gray model for subdistribution hazard of interest, which can be readily fitted using standard software under the assumption of administrative censoring. We present a simulation study, which explores the robustness of inference for the subdistribution hazard to the assumption of administrative censoring. This shows a range of scenarios within which the strictly incorrect assumption of administrative censoring has a relatively small effect on parameter estimates and confidence interval coverage. The methods are illustrated using data from HIV-1 seropositive patients from the collaborative multicentre study CASCADE (Concerted Action on SeroConversion to AIDS and Death in Europe).
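
A minimal numpy sketch of the basic estimable quantity the review discusses, the nonparametric cumulative incidence function for competing risks (Aalen-Johansen form; synthetic data, distinct event times assumed):

import numpy as np

def cumulative_incidence(times, causes, cause_of_interest):
    """times: event/censoring times; causes: 0 = censored, 1, 2, ... = event type."""
    times, causes = np.asarray(times, float), np.asarray(causes)
    order = np.argsort(times)
    times, causes = times[order], causes[order]
    n = len(times)
    surv = 1.0          # overall survival just before the current time
    cif = 0.0
    out_t, out_cif = [], []
    for i, (t, c) in enumerate(zip(times, causes)):
        at_risk = n - i
        if c == cause_of_interest:
            cif += surv / at_risk          # S(t-) * dN_k(t) / Y(t)
        if c != 0:
            surv *= 1.0 - 1.0 / at_risk    # overall KM drops at any event
        out_t.append(t)
        out_cif.append(cif)
    return np.array(out_t), np.array(out_cif)

t, cif = cumulative_incidence([2, 3, 5, 7, 8], [1, 2, 1, 0, 1], 1)
print(dict(zip(t, np.round(cif, 3))))
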
APA, Harvard, Vancouver, ISO, and other styles
9

Kataev, M. Yu, and V. V. Orlova. "Social media event data analysis." Proceedings of Tomsk State University of Control Systems and Radioelectronics 23, no. 4 (2020): 71–77. http://dx.doi.org/10.21293/1818-0442-2020-23-4-71-77.

Full text
Abstract:
Social media analysis has become ubiquitous at a quantitative and qualitative level due to the ability to study content from open social networks. This content is a rich source of data for the construction and analysis of the interaction of social network users when forming various groups, used not only for statistical calculations, social areas of analysis, but also in trade or for the development of recommendation systems. The large number of social media users results in a huge amount of unstructured data (by time, type of communication, type of message and geographic location). This article aims to discuss the problem of analyzing social networks and obtaining information from unstructured data. The article discusses information extraction methods, well-known software products and datasets.
APA, Harvard, Vancouver, ISO, and other styles
10

Xu, Tu, and Danting Zhu. "A review of statistical methods on testing time-to-event data." Biometrics & Biostatistics International Journal 7, no. 6 (2018): 570–72. http://dx.doi.org/10.15406/bbij.2018.07.00260.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Event data methods"

1

Reid, Jeffrey Gordon. "Event-by-event analysis methods and applications to relativistic heavy-ion collision data /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/9790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Devine, Jon. "Support Vector Methods for Higher-Level Event Extraction in Point Data." Fogler Library, University of Maine, 2009. http://www.library.umaine.edu/theses/pdf/DevineJ2009.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mason, Tracey. "Application of survival methods for the analysis of adverse event data." Thesis, Keele University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267646.

Full text
Abstract:
The concept of collecting Adverse Events (AEs) arose with the advent of the Thalidomide incident. Prior to this, the development and marketing of drugs was not regulated in any way. It was the teratogenic effects which raised people's awareness of the damage prescription drugs could cause. This thesis will begin by describing the background to the foundation of the Committee for the Safety of Medicines (CSM) and how AEs are collected today. This thesis will investigate survival analysis, discriminant analysis and logistic regression to identify prognostic indicators. These indicators will be developed to build, assess and compare predictor models produced to see if the factors identified are similar amongst the methodologies used and if so are the background assumptions valid in this case. ROC analysis will be used to classify the prognostic indices produced by a valid cut-off point; in many medical applications the emphasis is on creating the index - the cut-off points are chosen by clinical judgement. Here ROC analysis is used to give a statistical background to the decision. In addition neural networks will be investigated and compared to the other models. Two sets of data are explored within the thesis, firstly data from a Phase III clinical trial used to assess the efficacy and safety of a new drug used to repress the advance of Alzheimer's disease where AEs are collected routinely and secondly data from a drug monitoring system used by the Department of Rheumatology at the Haywood Hospital to identify patients likely to require a change in their medication based on their blood results.
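
A minimal Python sketch of ROC-based cut-off selection using Youden's J, one standard way to put the statistical backing the thesis describes behind the choice of cut-off (scikit-learn assumed; the index and labels are synthetic stand-ins, not the thesis data):

import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)                   # 1 = adverse event occurred
scores = labels * 0.8 + rng.normal(0, 0.6, 200)    # synthetic prognostic index

fpr, tpr, thresholds = roc_curve(labels, scores)
j = tpr - fpr                                      # Youden's J statistic
best = np.argmax(j)
print(f"cut-off={thresholds[best]:.3f}, sens={tpr[best]:.2f}, "
      f"spec={1 - fpr[best]:.2f}")
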
APA, Harvard, Vancouver, ISO, and other styles
4

Dextraze, Mathieu Francis. "Comparing Event Detection Methods in Single-Channel Analysis Using Simulated Data." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39729.

Full text
Abstract:
With more states revealed, and more reliable rates inferred, mechanistic schemes for ion channels have increased in complexity over the history of single-channel studies. At the forefront of single-channel studies we are faced with a temporal barrier delimiting the briefest event which can be detected in single-channel data. Despite improvements in single-channel data analysis, the use of existing methods remains sub-optimal. As existing methods in single-channel data analysis are unquantified, optimal conditions for data analysis are unknown. Here we present a modular single-channel data simulator with two engines; a Hidden Markov Model (HMM) engine, and a sampling engine. The simulator is a tool which provides the necessary a priori information to be able to quantify and compare existing methods in order to optimize analytic conditions. We demonstrate the utility of our simulator by providing a preliminary comparison of two event detection methods in single-channel data analysis; Threshold Crossing and Segmental k-means with Hidden Markov Modelling (SKM-HMM).
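
A minimal Python sketch in the spirit of the simulator and threshold-crossing detector described above: a two-state (closed/open) Markov model generates a noisy current trace, and events are detected by half-amplitude threshold crossing (all rates, amplitude, and noise values are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(1)
dt, n = 1e-4, 20000                    # 100 kHz sampling, 2 s of data
k_open, k_close = 50.0, 200.0          # transition rates (1/s), assumed
p_oc, p_co = k_open * dt, k_close * dt # per-sample transition probabilities

state = np.empty(n, dtype=int)         # 0 = closed, 1 = open
state[0] = 0
u = rng.random(n)
for i in range(1, n):
    if state[i - 1] == 0:
        state[i] = 1 if u[i] < p_oc else 0
    else:
        state[i] = 0 if u[i] < p_co else 1

amp, noise_sd = 2.0, 0.5               # pA, assumed
trace = amp * state + rng.normal(0, noise_sd, n)

# Half-amplitude threshold crossing: count closed -> open transitions.
idealized = (trace > amp / 2).astype(int)
openings = np.sum(np.diff(idealized) == 1)
print("true openings:", np.sum(np.diff(state) == 1), "| detected:", openings)
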
APA, Harvard, Vancouver, ISO, and other styles
5

Jin, Zhongnan. "Statistical Methods for Multivariate Functional Data Clustering, Recurrent Event Prediction, and Accelerated Degradation Data Analysis." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/102628.

Full text
Abstract:
In this dissertation, we introduce three projects in machine learning and reliability applications after the general introductions in Chapter 1. The first project concentrates on the multivariate sensory data, the second project is related to the bivariate recurrent process, and the third project introduces thermal index (TI) estimation in accelerated destructive degradation test (ADDT) data, in which an R package is developed. All three projects are related to and can be used to solve certain reliability problems. Specifically, in Chapter 2, we introduce a clustering method for multivariate functional data. In order to cluster the customized events extracted from multivariate functional data, we apply the functional principal component analysis (FPCA), and use a model based clustering method on a transformed matrix. A penalty term is imposed on the likelihood so that variable selection is performed automatically. In Chapter 3, we propose a covariate-adjusted model to predict the next event in a bivariate recurrent event system. Inspired by geyser eruptions in Yellowstone National Park, we consider two event types and model their event gap time relationship. External systematic conditions are taken into account in the model with covariates. The proposed covariate adjusted recurrent process (CARP) model is applied to the Yellowstone National Park geyser data. In Chapter 4, we compare estimation methods for TI. In ADDT, TI is an important index indicating the reliability of materials, when the accelerating variable is temperature. Three methods are introduced in TI estimations, which are least-squares method, parametric model and semi-parametric model. An R package is implemented for all three methods. Applications of R functions are introduced in Chapter 5 with publicly available ADDT datasets. Chapter 6 includes conclusions and areas for future work.
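
A simplified Python sketch of the Chapter 2 pipeline, with ordinary PCA standing in for FPCA and a Gaussian mixture for the model-based clustering; no penalty term is included and the curves are synthetic:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 50)
# Two synthetic groups of curves sampled on a common grid.
curves = np.vstack([np.sin(2 * np.pi * t) + rng.normal(0, .2, (30, 50)),
                    np.cos(2 * np.pi * t) + rng.normal(0, .2, (30, 50))])

scores = PCA(n_components=3).fit_transform(curves)   # FPCA stand-in
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(scores)
print(np.bincount(labels))       # roughly 30 / 30 expected
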
APA, Harvard, Vancouver, ISO, and other styles
6

Stewart, Catherine Helen. "Multilevel modelling of event history data : comparing methods appropriate for large datasets." Thesis, University of Glasgow, 2010. http://theses.gla.ac.uk/2007/.

Full text
Abstract:
When analysing medical or public health datasets, it may often be of interest to measure the time until a particular pre-defined event occurs, such as death from some disease. As it is known that the health status of individuals living within the same area tends to be more similar than for individuals from different areas, event times of individuals from the same area may be correlated. As a result, multilevel models must be used to account for the clustering of individuals within the same geographical location. When the outcome is time until some event, multilevel event history models must be used. Although software does exist for fitting multilevel event history models, such as MLwiN, computational requirements mean that the use of these models is limited for large datasets. For example, to fit the proportional hazards model (PHM), the most commonly used event history model for modelling the effect of risk factors on event times, in MLwiN a Poisson model is fitted to a person-period dataset. The person-period dataset is created by rearranging the original dataset so that each individual has a line of data corresponding to every risk set they survive until either censoring or the event of interest occurs. When time is treated as a continuous variable so that each risk set corresponds to a distinct event time, as is the case for the PHM, the size of the person-period dataset can be very large. This presents a problem for those working in public health as datasets used for measuring and monitoring public health are typically large. Furthermore, individuals may be followed-up for a long period of time and this can also contribute to a large person-period dataset. A further complication is that interest may be in modelling a rare event, resulting in a high proportion of censored observations. This can also be problematic when estimating multilevel event history models. Since multilevel event history models are important in public health, the aim of this thesis is to develop these models so they can be fitted to large datasets considering, in particular, datasets with long periods of follow-up and rare events. Two datasets are used throughout the thesis to investigate three possible alternatives to fitting the multilevel proportional hazards model in MLwiN in order to overcome the problems discussed. The first is a moderately-sized Scottish dataset, which will be the main focus of the thesis, and is used as a ‘training dataset’ to explore the limitations of existing software packages for fitting multilevel event history models and also for investigating alternative methods. The second dataset, from Sweden, is used to test the effectiveness of each alternative method when fitted to a much larger dataset. The adequacy of the alternative methods is assessed on the following criteria: how effective they are at reducing the size of the person-period dataset, how similar parameter estimates obtained from using methods are compared to the PHM and how easy they are to implement. The first alternative method involves defining discrete-time risk sets and then estimating discrete-time hazard models via multilevel logistic regression models fitted to a person-period dataset. The second alternative method involves aggregating the data of individuals within the same higher-level units who have the same values for the covariates in a particular model. Aggregating the data like this means that one line of data is used to represent all such individuals since these individuals are at risk of experiencing the event of interest at the same time. This method is termed ‘grouping according to covariates’. Both continuous-time and discrete-time event history models can be fitted to the aggregated person-period dataset. The ‘grouping according to covariates’ method and the first method, which involves defining discrete-time risk sets, are both implemented in MLwiN and pseudo-likelihood methods of estimation are used. The third and final method to be considered, however, involves fitting Bayesian event history (frailty) models and using Markov chain Monte Carlo (MCMC) methods of estimation. These models are fitted in WinBUGS, a software package specially designed to make practical MCMC methods available to applied statisticians. In WinBUGS, an additive frailty model is adopted and a Weibull distribution is assumed for the survivor function. Methodological findings were that the discrete-time method led to a successful reduction in the continuous-time person-period dataset; however, it was necessary to experiment with the length of time intervals in order to have the widest interval without influencing parameter estimates. The grouping according to covariates method worked best when there were, on average, a larger number of individuals per higher-level unit, there were few risk factors in the model and little or none of the risk factors were continuous. The Bayesian method could be favourable as no data expansion is required to fit the Weibull model in WinBUGS and time is treated as a continuous variable. However, models took a much longer time to run using MCMC methods of estimation as opposed to likelihood methods. This thesis showed that it was possible to use a re-parameterised version of the Weibull model, as well as a variance expansion technique, to overcome slow convergence by reducing correlation in the Markov chains. This may be a more efficient way to reduce computing time than running further iterations.
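
A minimal Python sketch of the central data structure above, the person-period expansion, with a discrete-time hazard fitted by logistic regression in statsmodels on simulated data (a stand-in for MLwiN, and a flat baseline hazard for brevity):

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)

# Simulate discrete event times over 5 periods; one row per person per
# period survived, with the event flag set on the final row.
rows = []
for i in range(n):
    h = 1.0 / (1.0 + np.exp(2.0 - 0.7 * x[i]))   # per-period hazard
    for period in range(1, 6):
        event = rng.random() < h
        rows.append({"id": i, "period": period, "x": x[i], "y": int(event)})
        if event:
            break
pp = pd.DataFrame(rows)                          # person-period dataset

X = sm.add_constant(pp[["period", "x"]].astype(float))
fit = sm.Logit(pp["y"], X).fit(disp=0)
print(fit.params.round(2))   # x coefficient should be near 0.7
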
APA, Harvard, Vancouver, ISO, and other styles
7

Huo, Zhao. "A Comparison of Multiple Imputation Methods for Missing Covariate Values in Recurrent Event Data." Thesis, Uppsala universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-256602.

Full text
Abstract:
Multiple imputation (MI) is a commonly used approach to impute missing data. This thesis studies missing covariates in recurrent event data, and discusses ways to include the survival outcomes in the imputation model. Some MI methods under consideration are the event indicator D combined with, respectively, the right-censored event times T, the logarithm of T and the cumulative baseline hazard H0(T). After imputation, we can then proceed to the complete data analysis. The Cox proportional hazards (PH) model and the PWP model are chosen as the analysis models, and the coefficient estimates are of substantive interest. A Monte Carlo simulation study is conducted to compare different MI methods, the relative bias and mean square error will be used in the evaluation process. Furthermore, an empirical study based on cardiovascular disease event data which contains missing values will be conducted. Overall, the results show that MI based on the Nelson-Aalen estimate of H0(T) is preferred in most circumstances.
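
A minimal numpy sketch of the Nelson-Aalen estimate of H0(T), the term the thesis finds preferable in the imputation model (synthetic data; distinct event times assumed):

import numpy as np

def nelson_aalen(times, events):
    """H(t) = sum over event times s <= t of d(s) / n(s)."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    at_risk = len(times) - np.arange(len(times))   # n(s) at each ordered time
    return times, np.cumsum(events / at_risk)

t, H = nelson_aalen([2, 3, 4, 6, 8], [1, 1, 0, 1, 1])
print(dict(zip(t, np.round(H, 3))))
# H(t) would then enter the imputation model as a covariate alongside the
# event indicator D when imputing the missing covariate values.
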
APA, Harvard, Vancouver, ISO, and other styles
8

Jenkins, J. Craig, and Thomas V. Maher. "What Should We Do about Source Selection in Event Data? Challenges, Progress, and Possible Solutions." ROUTLEDGE JOURNALS, TAYLOR & FRANCIS LTD, 2016. http://hdl.handle.net/10150/621502.

Full text
Abstract:
The prospect of using the Internet and other Big Data methods to construct event data promises to transform the field but is stymied by the lack of a coherent strategy for addressing the problem of selection. Past studies have shown that event data have significant selection problems. In terms of conventional standards of representativeness, all event data have some unknown level of selection no matter how many sources are included. We summarize recent studies of news selection and outline a strategy for reducing the risks of possible selection bias, including techniques for generating multisource event inventories, estimating larger populations, and controlling for nonrandomness. These build on a relativistic strategy for addressing event selection and the recognition that no event data set can ever be declared completely free of selection bias.
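
A minimal Python sketch of one standard technique for estimating the larger event population from multisource inventories: two-source capture-recapture with Chapman's variant of the Lincoln-Petersen estimator (event IDs are synthetic; the authors' exact techniques may differ):

newswire = {"e01", "e02", "e03", "e05", "e08", "e09", "e12"}
local_press = {"e02", "e03", "e04", "e08", "e10", "e12", "e14", "e15"}

n1, n2 = len(newswire), len(local_press)
m = len(newswire & local_press)        # events captured by both sources
# Chapman's bias-corrected variant of Lincoln-Petersen:
N_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
print(f"sources report {len(newswire | local_press)} distinct events; "
      f"estimated population: {N_hat:.1f}")
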
APA, Harvard, Vancouver, ISO, and other styles
9

Francis, Ben. "Stochastic control methods to individualise drug therapy by incorporating pharmacokinetic, pharmacodynamic and adverse event data." Thesis, University of Liverpool, 2013. http://livrepository.liverpool.ac.uk/11855/.

Full text
Abstract:
There are a number of methods available to clinicians for determining an individualised dosage regimen for a patient. However, often these methods are non-adaptive to the patient’s requirements and do not allow for changing clinical targets throughout the course of therapy. The drug dose algorithm constructed in this thesis, using stochastic control methods, harnesses information on the variability of the patient’s response to the drug, thus ensuring the algorithm is adapting to the needs of the patient. Novel research is undertaken to include process noise in the Pharmacokinetic/Pharmacodynamic (PK/PD) response prediction to better simulate the patient response to the dose by allowing values sampled from the individual PK/PD parameter distributions to vary over time. The Kalman filter is then adapted to use these predictions alongside measurements, feeding information back into the algorithm in order to better ascertain the current PK/PD response of the patient. From this, a dosage regimen is estimated to induce the desired future PK/PD response via an appropriately formulated cost function. Further novel work explores different formulations of this cost function by considering probabilities from a Markov model. In applied examples, previous methodology is adapted to allow patients who have missing covariate information to be appropriately dosed in warfarin therapy. Then, using the methodology introduced in the thesis, the drug dose algorithm is shown to be adaptive to patient needs for imatinib and simvastatin therapy. The differences between standard dosing and the estimated dosage regimens using the methodologies developed are wide-ranging, as some patients require no dose alterations whereas others required a substantial change in dosing to meet the PK/PD targets. The outdated paradigm of ‘one size fits all’ dosing is subject to debate and the research in this thesis adds to the evidence and also provides an algorithm for a better approach to the challenge of individualising drug therapy to treat the patient more effectively. The drug dose algorithm developed is applicable to many different drug therapy scenarios due to the enhancements made to the formulation of the cost functions. With this in mind, application of the drug dose algorithm in a wide range of clinical dosing decisions is possible.
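
A minimal Python sketch of the Kalman-filter feedback step at the core of such a dosing algorithm, reduced to a scalar response with process noise (the dynamics and variances are illustrative assumptions, not the thesis model):

import numpy as np

rng = np.random.default_rng(4)
a = 0.9           # per-step decay of drug response (assumed)
q, r = 0.05, 0.4  # process and measurement noise variances (assumed)

x_true, x_hat, p = 5.0, 4.0, 1.0   # true state, estimate, estimate variance
for step in range(5):
    # True response evolves with process noise; a measurement is taken.
    x_true = a * x_true + rng.normal(0, np.sqrt(q))
    z = x_true + rng.normal(0, np.sqrt(r))

    # Predict, then correct with the measurement; the dosing algorithm
    # would use the filtered x_hat to choose the next dose.
    x_pred, p_pred = a * x_hat, a * a * p + q
    k = p_pred / (p_pred + r)                  # Kalman gain
    x_hat = x_pred + k * (z - x_pred)
    p = (1 - k) * p_pred
    print(f"step {step}: measured {z:5.2f}, filtered {x_hat:5.2f}")
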
APA, Harvard, Vancouver, ISO, and other styles
10

Kawaguchi, Hirokazu. "Signal Extraction and Noise Removal Methods for Multichannel Electroencephalographic Data." 京都大学 (Kyoto University), 2014. http://hdl.handle.net/2433/188593.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Event data methods"

1

Interval-censored time-to-event data: Methods and applications. Chapman and Hall/CRC, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lemeshow, Stanley. Applied Survival Analysis: Regression Modeling of Time to Event Data. 2nd ed. Wiley-Interscience, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hosmer, David W. Applied survival analysis: Regression modeling of time-to-event data. 2nd ed. John Wiley & Sons, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hosmer, David W. Applied survival analysis: Regression modeling of time-to-event data. 2nd ed. Wiley-Interscience, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Stanley, Lemeshow, ed. Applied survival analysis: Regression modeling of time to event data. Wiley, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rosenbluth, William. Black box data from accident vehicles: Methods of retrieval, translation, and interpretation. ASTM International, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Luttmer, Erzo F. P. Measuring poverty dynamics and inequality in transition economies: Disentangling real events from noisy data. World Bank, Europe and Central Asia Region, Poverty Reduction and Economic Management Sector Unit, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Schneider, Jörg, and Ton Vrouwenvelder. Introduction to safety and reliability of structures. 3rd ed. International Association for Bridge and Structural Engineering (IABSE), 1997. http://dx.doi.org/10.2749/sed005.

Full text
Abstract:
Society expects that buildings and other structures are safe for the people who use them or who are near them. The failure of a building or structure is expected to be an extremely rare event. Thus, society implicitly relies on the expertise of the professionals involved in the planning, design, construction, operation and maintenance of the structures it uses. Structural engineers devote all their effort to meeting society’s expectations efficiently. Engineers and scientists work together to develop solutions to structural problems. Given that nothing is absolutely and eternally safe, the goal is to attain an acceptably small probability of failure for a structure, a facility, or a situation. Reliability analysis is part of the science and practice of engineering today, not only with respect to the safety of structures, but also for questions of serviceability and other requirements of technical systems that might be impacted by some probability. The present volume takes a rather broad approach to safety and reliability in Structural Engineering. It treats the underlying concepts of safety, reliability and risk and introduces the reader in a first chapter to the main concepts and strategies for dealing with hazards. The next chapter is devoted to the processing of data into information that is relevant for applying reliability theory. Two following chapters deal with the modelling of structures and with methods of reliability analysis. Another chapter focuses on problems related to establishing target reliabilities, assessing existing structures, and on effective strategies against human error. The last chapter presents an outlook to more advanced applications. The Appendix supports the application of the methods proposed and refers readers to a number of related computer programs. This book is aimed at both students and practicing engineers. It presents the concepts and procedures of reliability analysis in a straightforward, understandable way, making use of simple examples, rather than extended theoretical discussion. It is hoped that this approach serves to advance the application of safety and reliability analysis in engineering practice. The book is amended with free access to an educational version of a Variables Processor computer program. FreeVaP can be downloaded free of charge and supports the understanding of the subjects treated in this book.
APA, Harvard, Vancouver, ISO, and other styles
9

Rosenblatt, Alan J., ed. International relations: Using MicroCase ExplorIt. Wadsworth/Thomson Learning, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Event data methods"

1

Scharfstein, Daniel, Yuxin Zhu, and Anastasios Tsiatis. "Time to Event Data." In Handbook of Statistical Methods for Randomized Controlled Trials. Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781315119694-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ni, Ai, and Chi Song. "Variable Selection for Time-to-Event Data." In Methods in Molecular Biology. Springer US, 2020. http://dx.doi.org/10.1007/978-1-0716-0849-4_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Su, Eric Wen. "Drug Repositioning by Mining Adverse Event Data in ClinicalTrials.gov." In Methods in Molecular Biology. Springer New York, 2018. http://dx.doi.org/10.1007/978-1-4939-8955-3_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tekle, Fetene B., and Jeroen K. Vermunt. "Event history analysis." In APA handbook of research methods in psychology, Vol 3: Data analysis and research publication. American Psychological Association, 2012. http://dx.doi.org/10.1037/13621-013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chakrabartty, Shantanu. "Asynchronous Event-Based Self-Powering, Computation, and Data Logging." In Advances in Energy Harvesting Methods. Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-5705-3_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Hansheng, and Shein-Chung Chow. "Sample Size for Comparing Time-to-Event Data." In Methods and Applications of Statistics in Clinical Trials. John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781118596333.ch41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Michalopoulos, Kostas, Vasiliki Iordanidou, and Michalis Zervakis. "Application of Decomposition Methods in the Filtering of Event-Related Potentials." In Data Mining for Biomarker Discovery. Springer US, 2012. http://dx.doi.org/10.1007/978-1-4614-2107-8_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jacqmin-Gadda, Hélène, Loïc Ferrer, and Cécile Proust-Lima. "Joint Models for Longitudinal and Time to Event Data." In Handbook of Statistical Methods for Randomized Controlled Trials. Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781315119694-21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bautista, Oliver, and Keaven Anderson. "Sample Size Estimation and Power Analysis: Time to Event Data." In Handbook of Statistical Methods for Randomized Controlled Trials. Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781315119694-13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

van der Aalst, Wil M. P. "Object-Centric Process Mining: Dealing with Divergence and Convergence in Event Data." In Software Engineering and Formal Methods. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30446-1_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Event data methods"

1

Maciąg, Piotr. "A Framework for Discovering Frequent Event Graphs from Uncertain Event-based Spatio-temporal Data." In 8th International Conference on Pattern Recognition Applications and Methods. SCITEPRESS - Science and Technology Publications, 2019. http://dx.doi.org/10.5220/0007411206560663.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Punz, Thomas, and CMS Collaboration. "The CMS ECAL non-event data handling." In INTERNATIONAL CONFERENCE OF COMPUTATIONAL METHODS IN SCIENCES AND ENGINEERING 2009: (ICCMSE 2009). AIP, 2012. http://dx.doi.org/10.1063/1.4771867.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Balk, Deborah, Son V. Nghiem, Ernesto Rodriguez, and Christopher Small. "New Methods for Understanding Intra-urban Contours at a Global Scale: An Application of Dense Sampling Methods of QuikSCAT Scatterometer with Population and Housing Data." In 2007 Urban Remote Sensing Joint Event. IEEE, 2007. http://dx.doi.org/10.1109/urs.2007.371803.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Niu, Donglai, Mingming Wang, Hui Yuan, and Wei Xu. "Event-driven data mining methods for large-scale market prediction." In SIGSPATIAL'16: 24th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. ACM, 2016. http://dx.doi.org/10.1145/3017611.3017618.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Souto, L., S. Herraiz, and J. Melendez. "Performance Comparison of Quantitative Methods for PMU Data Event Detection with Noisy Data." In 2020 IEEE PES Innovative Smart Grid Technologies Europe (ISGT-Europe). IEEE, 2020. http://dx.doi.org/10.1109/isgt-europe47291.2020.9248826.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Congcong, Lei Wang, Peng Gong, Jie Wang, and Qing Ying. "Residential area extraction by integrating supervised/unsupervised/contextual/object-based methods with moderate resolution remotely sensed data." In 2011 Joint Urban Remote Sensing Event (JURSE). IEEE, 2011. http://dx.doi.org/10.1109/jurse.2011.5764747.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kabir, Md Faisal, Sheikh Md Rabiul Islam, and Xu Huang. "Towards the Appropriate Feature Extraction and Event Classification Methods for NIRS Data." In 2018 International Conference on Computer, Communication, Chemical, Material and Electronic Engineering (IC4ME2). IEEE, 2018. http://dx.doi.org/10.1109/ic4me2.2018.8465635.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sallis, Philip, and Sergio Hernández. "An Event-State Depiction Algorithm Using CPA Methods with Continuous Feed Data." In 2011 5th Asia Modelling Symposium (AMS 2011). IEEE, 2011. http://dx.doi.org/10.1109/ams.2011.36.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mauer, Daniel, Barry Lai, Jennifer Casper, et al. "Heuristic methods for automating event detection on sensor data in near real-time." In 2011 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA 2011). IEEE, 2011. http://dx.doi.org/10.1109/cogsima.2011.5753446.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kim, Wankyung, Wooyoung Soh, Theodore E. Simos, and George Maroulis. "Applying Association Rule of the Data Mining Method for the Network Event Analysis." In COMPUTATIONAL METHODS IN SCIENCE AND ENGINEERING: Theory and Computation: Old Problems and New Challenges. Lectures Presented at the International Conference on Computational Methods in Science and Engineering 2007 (ICCMSE 2007): VOLUME 1. AIP, 2007. http://dx.doi.org/10.1063/1.2836142.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Event data methods"

1

Karniadakis, George Em, Eric Vanden-Eijnden, Guang Lin, and Xiaoliang Wan. Final Technical Report - Stochastic Nonlinear Data-Reduction Methods with Detection and Prediction of Critical Rare Event. Office of Scientific and Technical Information (OSTI), 2013. http://dx.doi.org/10.2172/1107086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Downing, W. Logan, Howell Li, William T. Morgan, Cassandra McKee, and Darcy M. Bullock. Using Probe Data Analytics for Assessing Freeway Speed Reductions during Rain Events. Purdue University, 2021. http://dx.doi.org/10.5703/1288284317350.

Full text
Abstract:
Rain affects roadways through wet pavement, standing water, decreased visibility, and wind gusts, and can lead to hazardous driving conditions. This study investigates the use of high-fidelity Doppler data at 1 km spatial and 2-minute temporal resolution in combination with commercial probe speed data on freeways. Segment-based space-mean speeds were used, and drops in speeds during rainfall events of 5.5 mm/hour or greater over a one-month period on a section of four- to six-lane interstate were assessed. Speed reductions were evaluated as a time series over a 1-hour window with the rain data. Three interpolation methods for estimating rainfall rates were tested and seven metrics were developed for the analysis. The study found that sharp drops in speed of more than 40 mph occurred at estimated rainfall rates of 30 mm/hour or greater, but the drops did not become more severe beyond this threshold. The average time from first detected rainfall to an impact on speeds was 17 minutes. The bilinear method detected the greatest number of events during the 1-month period, with the most conservative rate of predicted rainfall. The range of rainfall intensities was estimated at 7.5 to 106 mm/hour for the 39 events. This range was much greater than the heavy rainfall categorization at 16 mm/hour in previous studies reported in the literature. The bilinear interpolation method for Doppler data is recommended because it detected the greatest number of events and had the longest rain duration and lowest estimated maximum rainfall out of the three methods tested, suggesting the method balanced awareness of the weather conditions around the roadway with isolated, localized rain intensities.
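
A minimal Python sketch of bilinear interpolation, the method the report recommends, estimating a rainfall rate at a road segment from the four surrounding grid nodes (grid values and query point are synthetic):

def bilinear(x, y, x0, x1, y0, y1, f00, f10, f01, f11):
    """Interpolate f at (x, y) from values at the four surrounding nodes."""
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    return (f00 * (1 - tx) * (1 - ty) + f10 * tx * (1 - ty)
            + f01 * (1 - tx) * ty + f11 * tx * ty)

# Rainfall rates (mm/hour) at four 1-km grid nodes around a road segment.
rate = bilinear(x=0.3, y=0.7, x0=0.0, x1=1.0, y0=0.0, y1=1.0,
                f00=6.0, f10=12.0, f01=9.0, f11=30.0)
print(f"interpolated rainfall rate: {rate:.1f} mm/hour")
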
APA, Harvard, Vancouver, ISO, and other styles
3

Rathinam, Francis, P. Thissen, and M. Gaarder. Using big data for impact evaluations. Centre of Excellence for Development Impact and Learning (CEDIL), 2021. http://dx.doi.org/10.51744/cmb2.

Full text
Abstract:
The amount of big data available has exploded with recent innovations in satellites, sensors, mobile devices, call detail records, social media applications, and digital business records. Big data offers great potential for examining whether programmes and policies work, particularly in contexts where traditional methods of data collection are challenging. During pandemics, conflicts, and humanitarian emergency situations, data collection can be challenging or even impossible. This CEDIL Methods Brief takes a step-by-step, practical approach to guide researchers designing impact evaluations based on big data. This brief is based on the CEDIL Methods Working Paper on ‘Using big data for evaluating development outcomes: a systematic map’.
APA, Harvard, Vancouver, ISO, and other styles
4

Puttanapong, Nattapong, Arturo M. Martinez Jr, Mildred Addawe, Joseph Bulan, Ron Lester Durante, and Marymell Martillan. Predicting Poverty Using Geospatial Data in Thailand. Asian Development Bank, 2020. http://dx.doi.org/10.22617/wps200434-2.

Full text
Abstract:
This study examines an alternative approach in estimating poverty by investigating whether readily available geospatial data can accurately predict the spatial distribution of poverty in Thailand. It also compares the predictive performance of various econometric and machine learning methods such as generalized least squares, neural network, random forest, and support vector regression. Results suggest that intensity of night lights and other variables that approximate population density are highly associated with the proportion of population living in poverty. The random forest technique yielded the highest level of prediction accuracy among the methods considered, perhaps due to its capability to fit complex association structures even with small and medium-sized datasets.
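
A minimal scikit-learn sketch of the study's best-performing approach, a random forest predicting a poverty rate from geospatial covariates (features and data are synthetic stand-ins for night-light intensity and population density):

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 500
night_lights = rng.gamma(2.0, 2.0, n)
pop_density = rng.gamma(3.0, 1.5, n)
poverty = np.clip(0.6 - 0.05 * night_lights - 0.02 * pop_density
                  + rng.normal(0, 0.05, n), 0, 1)

X = np.column_stack([night_lights, pop_density])
X_tr, X_te, y_tr, y_te = train_test_split(X, poverty, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out areas: {model.score(X_te, y_te):.2f}")
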
APA, Harvard, Vancouver, ISO, and other styles
5

Treadwell, Jonathan R., James T. Reston, Benjamin Rouse, Joann Fontanarosa, Neha Patel, and Nikhil K. Mull. Automated-Entry Patient-Generated Health Data for Chronic Conditions: The Evidence on Health Outcomes. Agency for Healthcare Research and Quality (AHRQ), 2021. http://dx.doi.org/10.23970/ahrqepctb38.

Full text
Abstract:
Background. Automated-entry consumer devices that collect and transmit patient-generated health data (PGHD) are being evaluated as potential tools to aid in the management of chronic diseases. The need exists to evaluate the evidence regarding consumer PGHD technologies, particularly for devices that have not gone through Food and Drug Administration evaluation. Purpose. To summarize the research related to automated-entry consumer health technologies that provide PGHD for the prevention or management of 11 chronic diseases. Methods. The project scope was determined through discussions with Key Informants. We searched MEDLINE and EMBASE (via EMBASE.com), In-Process MEDLINE and PubMed unique content (via PubMed.gov), and the Cochrane Database of Systematic Reviews for systematic reviews or controlled trials. We also searched ClinicalTrials.gov for ongoing studies. We assessed risk of bias and extracted data on health outcomes, surrogate outcomes, usability, sustainability, cost-effectiveness outcomes (quantifying the tradeoffs between health effects and cost), process outcomes, and other characteristics related to PGHD technologies. For isolated effects on health outcomes, we classified the results in one of four categories: (1) likely no effect, (2) unclear, (3) possible positive effect, or (4) likely positive effect. When we categorized the data as “unclear” based solely on health outcomes, we then examined and classified surrogate outcomes for that particular clinical condition. Findings. We identified 114 unique studies that met inclusion criteria. The largest number of studies addressed patients with hypertension (51 studies) and obesity (43 studies). Eighty-four trials used a single PGHD device, 23 used 2 PGHD devices, and the other 7 used 3 or more PGHD devices. Pedometers, blood pressure (BP) monitors, and scales were commonly used in the same studies. Overall, we found a “possible positive effect” of PGHD interventions on health outcomes for coronary artery disease, heart failure, and asthma. For obesity, we rated the health outcomes as unclear, and the surrogate outcomes (body mass index/weight) as likely no effect. For hypertension, we rated the health outcomes as unclear, and the surrogate outcomes (systolic BP/diastolic BP) as possible positive effect. For cardiac arrhythmias or conduction abnormalities we rated the health outcomes as unclear and the surrogate outcome (time to arrhythmia detection) as likely positive effect. The findings were “unclear” regarding PGHD interventions for diabetes prevention, sleep apnea, stroke, Parkinson’s disease, and chronic obstructive pulmonary disease. Most studies did not report harms related to PGHD interventions; the relatively few harms reported were minor and transient, with event rates usually comparable to harms in the control groups. Few studies reported cost-effectiveness analyses, and only for PGHD interventions for hypertension, coronary artery disease, and chronic obstructive pulmonary disease; the findings were variable across different chronic conditions and devices. Patient adherence to PGHD interventions was highly variable across studies, but patient acceptance/satisfaction and usability were generally fair to good. However, device engineers independently evaluated consumer wearable and handheld BP monitors and considered the user experience to be poor, while their assessment of smartphone-based electrocardiogram monitors found the user experience to be good. Student volunteers involved in device usability testing of the Weight Watchers Online app found it well-designed and relatively easy to use. Implications. Multiple randomized controlled trials (RCTs) have evaluated some PGHD technologies (e.g., pedometers, scales, BP monitors), particularly for obesity and hypertension, but health outcomes were generally underreported. We found evidence suggesting a possible positive effect of PGHD interventions on health outcomes for four chronic conditions. Lack of reporting of health outcomes and insufficient statistical power to assess these outcomes were the main reasons for “unclear” ratings. The majority of studies on PGHD technologies still focus on non-health-related outcomes. Future RCTs should focus on measurement of health outcomes. Furthermore, future RCTs should be designed to isolate the effect of the PGHD intervention from other components in a multicomponent intervention.
APA, Harvard, Vancouver, ISO, and other styles
6

Corriveau, Elizabeth, and Jay Clausen. Application of Incremental Sampling Methodology for subsurface sampling. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/40480.

Full text
Abstract:
Historically, researchers studying contaminated sites have used grab sampling to collect soil samples. However, this methodology can introduce error in the analysis because it does not account for the wide variations of contaminant concentrations in soil. An alternative method is the Incremental Sampling Methodology (ISM), which previous studies have shown more accurately captures the true concentration of contaminants over an area, even in heterogeneous soils. This report describes the methods and materials used with ISM to collect soil samples, specifically for the purpose of mapping subsurface contamination from site activities. The field data presented indicates that ISM is a promising methodology for collecting subsurface soil samples containing contaminants of concern, including metals and semivolatile organic compounds (SVOCs), for analysis. Ultimately, this study found ISM to be useful for supplying information to assist in the decisions needed for remediation activities.
APA, Harvard, Vancouver, ISO, and other styles
7

Edwards, Susan L., Marcus E. Berzofsky, and Paul P. Biemer. Addressing Nonresponse for Categorical Data Items Using Full Information Maximum Likelihood with Latent GOLD 5.0. RTI Press, 2018. http://dx.doi.org/10.3768/rtipress.2018.mr.0038.1809.

Full text
Abstract:
Full information maximum likelihood (FIML) is an important approach to compensating for nonresponse in data analysis. Unfortunately, only a few software packages implement FIML and even fewer have the capability to compensate for missing not at random (MNAR) nonresponse. One of these packages is Statistical Innovations’ Latent GOLD; however, the user documentation for Latent GOLD provides no mention of this capability. The purpose of this paper is to provide guidance for fitting MNAR FIML models for categorical data items using the Latent GOLD 5.0 software. By way of comparison, we also provide guidance on fitting FIML models for nonresponse missing at random (MAR) using the methods of Fuchs (1982) and Fay (1986), who incorporated item nonresponse indicators within a structural modeling framework. We compare both FIML for MAR and FIML for MNAR nonresponse models for independent and dependent variables. Also, we provide recommendations for future applications of FIML using Latent GOLD.
APA, Harvard, Vancouver, ISO, and other styles
8

Vas, Dragos, Steven Peckham, Carl Schmitt, Martin Stuefer, Ross Burgener, and Telayna Wong. Ice fog monitoring near Fairbanks, AK. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/40019.

Full text
Abstract:
Ice fog events, which occur during the Arctic winter, result in greatly decreased visibility and can lead to an increase of ice on roadways, aircraft, and airfields. The Fairbanks area is known for ice fog conditions, and previous studies have shown these events to be associated with moisture released from local power generation. Despite the identified originating mechanism of ice fog, there remains a need to quantify the environmental conditions controlling its origination, intensity, and spatial extent. This investigation focused on developing innovative methods of identifying and characterizing the environmental conditions that lead to ice fog formation near Fort Wainwright, Alaska. Preliminary data collected from December 2019 to March 2020 suggest that ice fog events occurred with temperatures below −34°C, up to 74% of the time ice fog emanated from the power generation facility, and at least 95% of ice particles during ice fog events were solid droxtals with diameters ranging from 7 to 50 μm. This report documents the need for frequent and detailed observations of the meteorological conditions in combination with photographic and ice particle observations. Datasets from these observations capture the environmental complexity and the impacts from energy generation in extremely cold weather conditions.
APA, Harvard, Vancouver, ISO, and other styles
9

Wright, Kirsten. Collecting Plant Phenology Data In Imperiled Oregon White Oak Ecosystems: Analysis and Recommendations for Metro. Portland State University, 2020. http://dx.doi.org/10.15760/mem.64.

Full text
Abstract:
Highly imperiled Oregon white oak ecosystems are a regional conservation priority of numerous organizations, including Oregon Metro, a regional government serving over one million people in the Portland area. Previously dominant systems in the Pacific Northwest, upland prairie and oak woodlands are now experiencing significant threat, with only 2% remaining in the Willamette Valley in small fragments (Hulse et al. 2002). These fragments are of high conservation value because of the rich biodiversity they support, including rare and endemic species, such as Delphinium leucophaeum (Oregon Department of Agriculture, 2020). Since 2010, Metro scientists and volunteers have collected phenology data on approximately 140 species of forbs and graminoids in regional oak prairie and woodlands. Phenology is the study of life-stage events in plants and animals, such as budbreak and senescence in flowering plants, and is widely acknowledged as a sensitive indicator of environmental change (Parmesan 2007). Indeed, shifts in plant phenology have been observed over the last few decades as a result of climate change (Parmesan 2006). In oak systems, these changes have profound implications for plant community composition and diversity, as well as trophic interactions and general ecosystem function (Willis 2008). While the original intent of Metro’s phenology data-collection was to track long-term phenology trends, limitations in data collection methods have made such analysis difficult. Rather, these data are currently used to inform seasonal management decisions on Metro properties, such as when to collect seed for propagation and when to spray herbicide to control invasive species. Metro is now interested in fine-tuning their data-collection methods to better capture long-term phenology trends to guide future conservation strategies. Addressing the regional and global conservation issues of our time will require unprecedented collaboration. Phenology data collected on Metro properties is not only an important asset for Metro’s conservation plan, but holds potential to support broader research on a larger scale. As a leader in urban conservation, Metro is poised to make a meaningful scientific contribution by sharing phenology data with regional and national organizations. Data-sharing will benefit the common goal of conservation and create avenues for collaboration with other scientists and conservation practitioners (Rosemartin 2013). In order to support Metro’s ongoing conservation efforts in Oregon white oak systems, I have implemented a three-part master’s project. Part one of the project examines Metro’s previously collected phenology data, providing descriptive statistics and assessing the strengths and weaknesses of the methods by which the data were collected. Part two makes recommendations for improving future phenology data-collection methods, and includes recommendations for data-sharing with regional and national organizations. Part three is a collection of scientific vouchers documenting key plant species in varying phases of phenology for Metro’s teaching herbarium. The purpose of these vouchers is to provide a visual tool for Metro staff and volunteers who rely on plant identification to carry out aspects of their job in plant conservation. Each component of this project addresses specific aspects of Metro’s conservation program, from day-to-day management concerns to long-term scientific inquiry.
APA, Harvard, Vancouver, ISO, and other styles
10

Gillen, Emily, Nicole M. Coomer, Christopher Beadles, and Amy Mills. Constructing a Measure of Anesthesia Intensity Using Cross-Sectional Claims Data. RTI Press, 2019. http://dx.doi.org/10.3768/rtipress.2019.mr.0040.1910.

Full text
Abstract:
With intensifying emphasis on episodes of care and bundled payments for surgical admissions, anesthesia expenditures are increasingly important in assessing variation in expenditures for surgical episodes. When comparing anesthesia expenditures across surgical settings, adjustment for anesthesia case complexity and duration of anesthesia services, also known as anesthesia service intensity, is desirable. A single anesthesia intensity measure allows researchers to make more direct comparisons between anesthesia outcomes across settings and services. We describe a process for creating a claims-based anesthesia intensity measure using Medicare claims. We create the measure using two fields: base units associated with American Medical Association Current Procedural Terminology codes on the anesthesia claim and time units associated with the service. We rescaled the time component of the anesthesia intensity measure to equally represent base units and time units. For illustration, we applied the measure to Medicare anesthesia expenditures stratified by rural/urban location. We found that adjustments for intensity were greater in urban settings because the level of intensity is greater. Compared with rural settings, unadjusted expenditures in urban settings are roughly 26 percent higher, whereas adjusted expenditures in urban settings are only 20 percent higher. Even absent longitudinal data, researchers can adjust anesthesia outcomes for intensity using our cross-sectional claims-based intensity method.
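
A minimal pandas sketch of the measure's construction, base units plus rescaled time units, with one plausible reading of "equally represent": rescale time so the two components contribute equally on average (claim values are synthetic; the report's exact rescaling may differ):

import pandas as pd

claims = pd.DataFrame({"base_units": [5, 7, 4, 10],
                       "time_units": [12, 30, 8, 45]})  # 15-min increments

# Rescale time so its mean matches the mean of base units.
w = claims["base_units"].mean() / claims["time_units"].mean()
claims["intensity"] = claims["base_units"] + w * claims["time_units"]
print(claims.round(2))
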
APA, Harvard, Vancouver, ISO, and other styles