To see the other types of publications on this topic, follow the link: Interval-based event.

Journal articles on the topic 'Interval-based event'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Interval-based event.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Kang, Man-Mo, Sang-Mu Park, Sank-Rak Kim, Kang-Hyun Kim, and Dong-Hyeong Lee. "Detection of Complex Event Patterns over Interval-based Events." Journal of the Institute of Webcasting, Internet and Telecommunication 12, no. 4 (2012): 201–9. http://dx.doi.org/10.7236/jiwit.2012.12.4.201.

2

Helmer, Sven, and Fabio Persia. "ISEQL, an Interval-based Surveillance Event Query Language." International Journal of Multimedia Data Engineering and Management 7, no. 4 (2016): 1–21. http://dx.doi.org/10.4018/ijmdem.2016100101.

Abstract:
The authors propose ISEQL, a language based on relational algebra extended by intervals for detecting high-level surveillance events from a video stream. The operators they introduce for describing temporal constraints are based on the well-known Allen interval relationships and were implemented on top of a PostgreSQL database system. The semantics of ISEQL are clearly defined, and the authors illustrate its usefulness by expressing typical events in it and showing the promising results of an experimental evaluation.
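For readers unfamiliar with Allen's interval relationships, the following minimal Python sketch shows a few of the thirteen relations; the `Interval` type and the surveillance example are illustrative only and are not ISEQL's actual API.

```python
from collections import namedtuple

# A closed interval [start, end]; the type and names are illustrative.
Interval = namedtuple("Interval", ["start", "end"])

def before(a, b):
    return a.end < b.start              # a ends strictly before b starts

def meets(a, b):
    return a.end == b.start             # a ends exactly where b begins

def overlaps(a, b):
    return a.start < b.start < a.end < b.end

def during(a, b):
    return b.start < a.start and a.end < b.end

def equals(a, b):
    return a.start == b.start and a.end == b.end

# Example: a person is visible only while a car is present in the scene.
car, person = Interval(0, 10), Interval(4, 8)
print(during(person, car))              # True
```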
3

Busany, Nimrod, Han Van Der Aa, Arik Senderovich, Avigdor Gal, and Matthias Weidlich. "Interval-based Queries over Lossy IoT Event Streams." ACM/IMS Transactions on Data Science 1, no. 4 (2020): 1–27. http://dx.doi.org/10.1145/3385191.

4

Govindasamy, V., and P. Thambidurai. "Heuristic Event Filtering Methodology for Interval based Temporal Semantics." International Journal of Computer Applications 70, no. 7 (2013): 16–20. http://dx.doi.org/10.5120/11974-7836.

5

Valente, Ana Rita S., Luis M. T. Jesus, Andreia Hall, and Margaret Leahy. "Event- and interval-based measurement of stuttering: a review." International Journal of Language & Communication Disorders 50, no. 1 (2014): 14–30. http://dx.doi.org/10.1111/1460-6984.12113.

6

Artikis, Alexander, Evangelos Makris, and Georgios Paliouras. "A probabilistic interval-based event calculus for activity recognition." Annals of Mathematics and Artificial Intelligence 89, no. 1-2 (2019): 29–52. http://dx.doi.org/10.1007/s10472-019-09664-4.

7

Smith, Jan. "Failure Interval Probabilistic Analysis for Risk-based Decisions - Concorde Crash Example." Journal of System Safety 56, no. 3 (2021): 46–55. http://dx.doi.org/10.56094/jss.v56i3.15.

Abstract:
The DC-6, DC-8, DC-10, Concorde, Boeing 787 and Boeing 737 MAX fatal crashes and near misses were analyzed with event interval probabilistic analysis methods. Fleet grounding decisions are the epitome of risk-based decisions, and the most important decision is the first opportunity to ground. The “first opportunity to ground” decision is retrospectively judged to be wrong if, in the immediate future, another accident or cause-and-effect findings lead to the original decision being reversed. Using only data available at the time of the significant events, the analysis examines these risk-based decisions as if they were made at the event’s instant in time.
The event interval method identified five out of six “first opportunity to ground” decisions correctly, including the Concorde. According to these analyses, the FAA and its predecessor organizations made one correct decision out of five. Use of this method based on statistics and probability would have avoided 503 actual fatalities, plus 9.45 expected-value fatalities from additional risk exposure due to flying statistically proven unreliable aircraft. In addition to the reversed-decision standard for judging whether these decisions were wrong, the data show that a grounding of the DC-8 and a second grounding of the DC-6 would have been statistically appropriate — but these groundings did not occur.
A specific objective of this paper is to lead the FAA and aircraft manufacturers to use event interval probabilistic analysis in grounding decisions and airworthiness certification. The cause-and-effect data necessary to identify issues and make corrections are often sparse or nonexistent at the time of the event. Cause-and-effect data can take days or months to acquire and analyze, but event interval timing data are simple because system performance data are available at the instant the event occurs.
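The paper's exact statistical procedure is not reproduced here, but the general flavour of event interval analysis can be sketched as follows, under the simplifying assumption of exponentially distributed intervals between fatal events (all numbers hypothetical):

```python
import math

# Hypothetical flight hours between successive fatal events of one fleet.
intervals = [120_000, 95_000, 30_000, 8_000]    # illustrative only

# Maximum-likelihood failure rate under an exponential interval model.
rate = len(intervals) / sum(intervals)          # events per flight hour

# How surprising is the latest, short interval if the fleet were as
# reliable as the earlier data suggested? Short intervals become probable
# only when the underlying rate has worsened.
latest = intervals[-1]
p_as_short = 1 - math.exp(-rate * latest)       # P(T <= latest) under the model
print(f"rate = {rate:.2e}/h, P(interval <= {latest} h) = {p_as_short:.3f}")
```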
8

Wang, Peng, Xiaocong Wang, Nü'e Xiao, and Boyang Li. "Thermal runaway safety analysis of eVTOL based on evidence theory and interval analysis." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 42, no. 3 (2024): 558–66. http://dx.doi.org/10.1051/jnwpu/20244230558.

Abstract:
To ensure the safety of eVTOL aircraft, a safety analysis of the thermal runaway problem of lithium batteries is needed. Because failure data are insufficient, a safety assessment method based on evidence theory and interval analysis is proposed. First, a fault tree model of the system is constructed. Second, to address the imprecise failure rates of bottom events in fault tree analysis, evidence theory is used to calculate the failure probability interval of each bottom event, and interval theory is then used to propagate these intervals to the top event of the fault tree. Finally, importance analysis is used to find the basic events that have the greatest impact on the top event. The proposed method is applied to the safety analysis of the eVTOL's battery thermal runaway, and the importance of each bottom event is calculated, providing a basis for reducing the system failure probability and improving the system safety level.
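As a rough sketch of the interval-arithmetic half of this approach (the evidence-theory step is omitted), the following Python fragment propagates bottom-event probability intervals through AND and OR gates, assuming independent events; the bottom events and numbers are invented for illustration:

```python
def and_gate(intervals):
    """AND gate (all inputs fail): interval product, assuming independence."""
    lo = hi = 1.0
    for a, b in intervals:
        lo, hi = lo * a, hi * b
    return lo, hi

def or_gate(intervals):
    """OR gate (any input fails): P = 1 - prod(1 - p), interval bounds."""
    surv_hi = surv_lo = 1.0
    for a, b in intervals:
        surv_hi *= 1 - a    # low failure probs  -> high survival -> low P
        surv_lo *= 1 - b    # high failure probs -> low survival  -> high P
    return 1 - surv_hi, 1 - surv_lo

# Illustrative bottom-event intervals (per flight hour), not the paper's data:
cell_short   = (1e-4, 5e-4)
bms_failure  = (2e-5, 8e-5)
cooling_loss = (1e-3, 2e-3)
top = or_gate([and_gate([cell_short, bms_failure]), cooling_loss])
print(f"top event failure probability in [{top[0]:.2e}, {top[1]:.2e}]")
```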
9

Zhou, Chunjie, Pengfei Dai, Zhenxing Zhang, and Tong Liu. "Interval-Based Out-of-Order Event Processing in Intelligent Manufacturing." Journal of Intelligent Learning Systems and Applications 10, no. 02 (2018): 21–35. http://dx.doi.org/10.4236/jilsa.2018.102002.

10

Iram, Kubra. "Sequential Temporal Pattern Mining in Time-Interval Based Event Data." International Journal of Research in Engineering and Technology 05, no. 16 (2016): 48–52. http://dx.doi.org/10.15623/ijret.2016.0516011.

11

Adaikkalavan, Raman, and Sharma Chakravarthy. "SnoopIB: Interval-based event specification and detection for active databases." Data & Knowledge Engineering 59, no. 1 (2006): 139–65. http://dx.doi.org/10.1016/j.datak.2005.07.009.

12

Wang, Yue-E., Yongfeng Gao, Di Wu, and Ben Niu. "Interval observer-based event-triggered control for switched linear systems." Journal of the Franklin Institute 357, no. 10 (2020): 5753–72. http://dx.doi.org/10.1016/j.jfranklin.2020.03.008.

13

Kim, Insoo, Junhee Seok, and Yoojoong Kim. "CTIVA: Censored time interval variable analysis." PLOS ONE 18, no. 11 (2023): e0294513. http://dx.doi.org/10.1371/journal.pone.0294513.

Abstract:
Traditionally, datasets with multiple censored time-to-events have not been utilized in multivariate analysis because of their high level of complexity. In this paper, we propose the Censored Time Interval Variable Analysis (CTIVA) method to address this issue. It estimates the joint probability distribution of actual event times in the censored dataset by implementing a statistical probability density estimation technique on the dataset. Based on the acquired event times, CTIVA investigates variables correlated with the interval time of events via statistical tests. The proposed method handles both categorical and continuous variables simultaneously—thus, it is suitable for application on real-world censored time-to-event datasets, which include both categorical and continuous variables. CTIVA outperforms traditional censored time-to-event data handling methods by 5% on simulation data. The average area under the curve (AUC) of the proposed method on the simulation dataset exceeds 0.9 under various conditions. Further, CTIVA yields novel results on the National Sample Cohort Demo (NSCD) and the proteasome inhibitor bortezomib dataset, real-world censored time-to-event datasets of the medical history of beneficiaries provided by the National Health Insurance Sharing Service (NHISS) and the National Center for Biotechnology Information (NCBI). We believe that the development of CTIVA is a milestone in the investigation of variables correlated with the interval time of events in the presence of censoring.
14

Kunchulia, Marina, Khatuna Parkosadze, and Roland Thomaschke. "Age-Related Differences in Time-Based Event Expectancies." Timing & Time Perception 7, no. 1 (2019): 71–85. http://dx.doi.org/10.1163/22134468-20181123.

Abstract:
The ability to form time-based event expectancies is one of the most important determinants of anticipative behavior. The aim of the present study was to determine whether healthy aging influences the formation of time-based event expectancies. Ten older adults with ages ranging between 60 and 73 years and ten younger adults with ages ranging between 20 and 32 years participated. We employed a binary choice response task mimicking a computer game, in which two target stimuli and two pre-target intervals appeared overall equally often. One of the targets was paired with the short interval and the other target with the long interval in 80% of the trials. Our results showed that younger adults responded more rapidly to frequent interval–target combinations than to infrequent combinations, suggesting that the young participants formed time-based event expectancies. In contrast, the ability to form time-based event expectancies was reduced for older participants. The formation of time-based event expectancies seems to change during healthy aging. We propose that this age-related difference is due to age-related expectation deficits or a reduction of attentional capacities, rather than to deficits in timing abilities.
15

Cordes, Anne K., Roger J. Ingham, Peter Frank, and Janis Costello Ingham. "Time-Interval Analysis of Interjudge and Intrajudge Agreement for Stuttering Event Judgments." Journal of Speech, Language, and Hearing Research 35, no. 3 (1992): 483–94. http://dx.doi.org/10.1044/jshr.3503.483.

Abstract:
In response to the recognized need for a valid and reliable way to measure stuttering, this study investigates a measurement methodology based on time-interval analyses of stuttering event judgments. Three groups of judges, differing in stuttering judgment experience, identified stuttering events in 12 repeated presentations of five 1-min speech samples. Fixed time intervals ranging from 0.2 sec to 7.0 sec were then superimposed on the event judgments by a data analysis program. Inter- and intrajudge interval-by-interval agreement, and agreement for total numbers of intervals containing stuttering event judgments, were calculated for each judge group. Results showed that agreement was higher for more experienced judges and for longer intervals. Agreement varied across speech samples but not across the repeated judgment opportunities. Agreement was maximized at greater than chance levels for an interval of approximately 3.0 sec, but even this best agreement did not exceed a mean of approximately 60% for experienced judges.
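The core of the interval-based methodology is easy to sketch: superimpose fixed-width intervals on each judge's event timestamps and compute interval-by-interval agreement. The following simplified Python illustration uses the roughly optimal 3-s interval; the judgments are hypothetical, and real studies typically compute additional agreement statistics as well.

```python
def to_intervals(event_times, total_sec, width):
    """Mark each fixed-width interval that contains >= 1 judged event."""
    n = int(total_sec / width)
    marked = [False] * n
    for t in event_times:
        marked[min(int(t / width), n - 1)] = True
    return marked

def interval_agreement(judge_a, judge_b):
    """Interval-by-interval percent agreement (both marked or both unmarked)."""
    same = sum(a == b for a, b in zip(judge_a, judge_b))
    return 100.0 * same / len(judge_a)

# Hypothetical judgments (seconds) on a 60-s sample with 3-s intervals.
a = to_intervals([2.1, 14.8, 15.2, 40.0], 60, 3.0)
b = to_intervals([2.4, 15.0, 41.5], 60, 3.0)
print(interval_agreement(a, b))   # 95.0
```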
16

Wong, Nathan D., Wenjun Fan, and Jonathan Pak. "Estimating the number of preventable cardiovascular disease events in the United States using the EMPA-REG OUTCOME trial results and National Health and Nutrition Examination Survey." Diabetes and Vascular Disease Research 17, no. 4 (2020): 147916412094567. http://dx.doi.org/10.1177/1479164120945674.

Abstract:
Aim: We examined eligibility and preventable cardiovascular disease events in US adults with diabetes mellitus from the Empagliflozin Cardiovascular Outcome Event Trial in Type 2 Diabetes Mellitus Patients (EMPA-REG OUTCOME). Methods: We identified adults with diabetes mellitus eligible for EMPA-REG OUTCOME based on trial eligibility criteria available from the National Health and Nutrition Examination Surveys, 2007–2016. We estimated composite cardiovascular disease endpoints, as well as all-cause deaths, death from cardiovascular disease and hospitalizations for heart failure from trial treatment and placebo event rates, the difference indicating the preventable events. Results: Among 29,629 US adults aged ⩾18 years (representing 231.9 million), 4672 (27.3 million) had diabetes mellitus, with 342 (1.86 million) meeting eligibility criteria of EMPA-REG OUTCOME. We estimated from trial primary endpoint event rates of 10.5% and 12.1% in the empagliflozin and placebo groups, respectively, that based on the ‘treatment’ of our 1.86 million estimated EMPA-REG OUTCOME eligible subjects, 12,066 (95% confidence interval: 10,352–13,780) cardiovascular disease events could be prevented annually. Estimated annual preventable deaths from any cause, cardiovascular causes and hospitalizations from heart failure were 17,078 (95% confidence interval: 14,652–19,504), 14,479 (95% confidence interval: 12,422–16,536) and 9467 (95% confidence interval: 8122–10,812), respectively. Conclusion: Empagliflozin, if provided to EMPA-REG OUTCOME eligible US adults, may prevent many cardiovascular disease events, cardiovascular and total deaths, as well as heart failure hospitalizations.
17

Lagnoux, Agnès. "Rare Event Simulation." Probability in the Engineering and Informational Sciences 20, no. 1 (2005): 45–66. http://dx.doi.org/10.1017/s0269964806060025.

Abstract:
This article deals with estimations of probabilities of rare events using fast simulation based on the splitting method. In this technique, the sample paths are split into multiple copies at various stages in the simulation. Our aim is to optimize the algorithm and to obtain a precise confidence interval of the estimator using branching processes. The numerical results presented suggest that the method is reasonably efficient.
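A minimal fixed-effort splitting sketch on a toy problem, estimating the probability that a negatively drifting random walk reaches a high level before absorption at zero; the article's optimization of the algorithm and its branching-process confidence intervals are not reproduced here:

```python
import random

def reaches(start, target, p_up=0.4):
    """Unit-step random walk from `start`; True if it hits `target` before 0."""
    x = start
    while 0 < x < target:
        x += 1 if random.random() < p_up else -1
    return x == target

def splitting_estimate(levels, n_per_stage=2000, p_up=0.4):
    """Fixed-effort multilevel splitting: the rare probability is the product
    of stage-wise conditional probabilities P(reach next level | at this one)."""
    prob = 1.0
    for lo, hi in zip(levels, levels[1:]):
        hits = sum(reaches(lo, hi, p_up) for _ in range(n_per_stage))
        if hits == 0:
            return 0.0
        prob *= hits / n_per_stage
    return prob

# P(walk started at 1 reaches 20 before 0); the exact value is ~1.5e-4.
print(splitting_estimate([1, 5, 10, 15, 20]))
```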
18

Arman, Nabil. "A Parallel Algorithm for Generating Maximal Interval Groups in Interval Databases Based on Schedule of Event Points." Information Technology Journal 6, no. 2 (2007): 263–66. http://dx.doi.org/10.3923/itj.2007.263.266.

19

Meseguer, Jordi, Vicenç Puig, and Teresa Escobet. "Fault Diagnosis using a Timed Discrete Event Approach based on Interval Observers." IFAC Proceedings Volumes 41, no. 2 (2008): 6914–19. http://dx.doi.org/10.3182/20080706-5-kr-1001.01172.

20

Plantevit, Marc, Céline Robardet, and Vasile-Marian Scuturici. "Graph dependency construction based on interval-event dependencies detection in data streams." Intelligent Data Analysis 20, no. 2 (2016): 223–56. http://dx.doi.org/10.3233/ida-160803.

21

Li, Haibo, and Yongbo Yu. "Detecting a multigranularity event in an unequal interval time series based on self-adaptive segmenting." Intelligent Data Analysis 25, no. 6 (2021): 1407–29. http://dx.doi.org/10.3233/ida-205480.

Abstract:
Analyzing the temporal behaviors and revealing the hidden rules of objects that produce time series data to detect the events that users are interested in have recently received a large amount of attention. Generally, in various application scenarios and most research works, the equal interval sampling of a time series is a requirement. However, this requirement is difficult to guarantee because of the presence of sampling errors in most situations. In this paper, a multigranularity event detection method for an unequal interval time series, called SSED (self-adaptive segmenting based event detection), is proposed. First, in view of the trend features of a time series, a self-adaptive segmenting algorithm is proposed to divide a time series into unfixed-length segmentations based on the trends. Then, by clustering the segmentations and mapping the clusters to different identical symbols, a symbol sequence is built. Finally, based on unfixed-length segmentations, the multigranularity events in the discrete symbol sequence are detected using a tree structure. The SSED is compared to two previous methods with ten public datasets. In addition, the SSED is applied to the public transport systems in Xiamen, China, using bus-speed time-series data. The experimental results show that the SSED can achieve higher efficiency and accuracy than existing algorithms.
22

Zhang, Feng, Shiwang Tan, Leilei Zhang, Yameng Wang, and Yang Gao. "Fault Tree Interval Analysis of Complex Systems Based on Universal Grey Operation." Complexity 2019 (January 1, 2019): 1–8. http://dx.doi.org/10.1155/2019/1046054.

Abstract:
The objective of this study is to propose a new operation method based on the universal grey number to overcome the shortcomings of typical interval operations in solving system fault trees. First, the failure probability ranges of the bottom events are described according to the conversion rules between the interval number and the universal grey number. A more accurate system reliability calculation is then obtained based on the logical relationship between the AND gates and OR gates of a fault tree and universal grey number arithmetic. Then, considering an aircraft landing gear retraction system as an example, the failure probability range of the top event is obtained through universal grey operations. Next, the reliability of the aircraft landing gear retraction system is evaluated despite insufficient statistical information describing failures. The example demonstrates that the proposed method provides many advantages in resolving the system reliability problem under poor information, preserves the function of the interval operation, and overcomes the drawback of solution-interval enlargement under different orders of interval operation.
23

Abbott, Sam, Joel Hellewell, James Munday, and Sebastian Funk. "The transmissibility of novel Coronavirus in the early stages of the 2019-20 outbreak in Wuhan: Exploring initial point-source exposure sizes and durations using scenario analysis." Wellcome Open Research 5 (February 3, 2020): 17. http://dx.doi.org/10.12688/wellcomeopenres.15718.1.

Abstract:
Background: The current novel coronavirus outbreak appears to have originated from a point-source exposure event at Huanan seafood wholesale market in Wuhan, China. There is still uncertainty around the scale and duration of this exposure event. This has implications for the estimated transmissibility of the coronavirus and as such, these potential scenarios should be explored. Methods: We used a stochastic branching process model, parameterised with available data where possible and otherwise informed by the 2002-2003 Severe Acute Respiratory Syndrome (SARS) outbreak, to simulate the Wuhan outbreak. We evaluated scenarios for the following parameters: the size and duration of the initial transmission event, the serial interval, and the reproduction number (R0). We restricted model simulations based on the number of observed cases on the 25th of January, accepting samples that were within a 5% interval on either side of this estimate. Results: Using a pre-intervention SARS-like serial interval suggested a larger initial transmission event and a higher R0 estimate. Using a SARS-like serial interval we found that the most likely scenario produced an R0 estimate between 2-2.7 (90% credible interval (CrI)). A pre-intervention SARS-like serial interval resulted in an R0 estimate between 2-3 (90% CrI). There were other plausible scenarios with smaller event sizes and longer durations that had comparable R0 estimates. There were very few simulations that were able to reproduce the observed data when R0 was less than 1. Conclusions: Our results indicate that an R0 of less than 1 was highly unlikely unless the size of the initial exposure event was much greater than currently reported. We found that R0 estimates were comparable across scenarios with decreasing event size and increasing duration. Scenarios with a pre-intervention SARS-like serial interval resulted in a higher R0 and were equally plausible to scenarios with SARS-like serial intervals.
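A crude sketch of this style of scenario analysis, simulating a negative-binomial branching process and keeping reproduction numbers whose outbreaks land within 5% of an observed count; all parameter values below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def outbreak_size(r0, k, n0, n_gens):
    """Total cases from a branching process in which each case produces
    NegBin(mean=r0, dispersion=k) secondary cases (Gamma-Poisson mixture)."""
    total = active = n0
    for _ in range(n_gens):
        if active == 0 or total > 100_000:
            break
        active = int(rng.poisson(rng.gamma(k, r0 / k, size=active)).sum())
        total += active
    return total

# Draw R0 values and accept those whose simulated outbreak lands within 5%
# of an (illustrative) observed case count.
observed, tol = 2000, 0.05
accepted = [r0 for r0 in rng.uniform(1.0, 4.0, size=3000)
            if abs(outbreak_size(r0, k=0.16, n0=40, n_gens=5) - observed)
            <= tol * observed]
if accepted:
    print("90% credible-style interval:", np.percentile(accepted, [5, 95]).round(2))
```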
24

LUI, Pun Ho. "Immediate anteriority construction in Cantonese." Cahiers de Linguistique Asie Orientale 52, no. 2 (2023): 190–213. http://dx.doi.org/10.1163/19606028-bja10034.

Abstract:
This study explores the immediate anteriority (“IMANTE”) construction in Cantonese, in which two events form an “as soon as” relationship. By collecting examples from corpora, three types of IMANTE construction are categorized, based on the subordinate events: (i) achievement, (ii) completion point of a durational event, and (iii) inchoative point of a durational event. This study argues that (iii) is the default usage of a subordinate event when it is durational because it is unmarked and historically emerged earlier than (ii). Second, (iii) is seemingly a defining property of IMANTE besides “zero-time interval between two events”.
25

Zeyu, Wang, Cheng Chu Zong, Chen Minghao, Zhang Yiqian, and Yang Rui. "An Asynchronous LLM Architecture for Event Stream Analysis with Cameras." Social Science Journal for Advanced Research 4, no. 5 (2024): 10–17. https://doi.org/10.5281/zenodo.13639724.

Abstract:
Event-based cameras, as bio-inspired vision sensors, record intensity changes asynchronously. The Dynamic and Active-pixel Vision Sensor (DAVIS) enhances information diversity by combining a standard camera with an event-based camera. However, current methods analyze event streams synchronously, contradicting their nature and introducing noise. To address this, most approaches accumulate events within a time interval to create synchronous frames, wasting sensitive intensity changes. This paper introduces a novel neural asynchronous approach for event stream analysis. Our method asynchronously extracts dynamic information by leveraging historical motion information and critical features of grayscale frames. Extensive experiments demonstrate our model’s significant improvements over state-of-the-art baselines.
26

Cordes, Anne K., and Roger J. Ingham. "Effects of Time-Interval Judgment Training on Real-Time Measurement of Stuttering." Journal of Speech, Language, and Hearing Research 42, no. 4 (1999): 862–79. http://dx.doi.org/10.1044/jslhr.4204.862.

Abstract:
The purpose of this study was to investigate whether a previously developed interval-based training program could improve judges' stuttering event judgments. Two groups of judges made real-time stuttering event judgments (computer-mouse button presses) in 3 to 6 trials before the response-contingent judgment training program and in another 3 to 6 trials after training, for recordings of 9 adults who stuttered. Their judgments were analyzed in terms of number of stuttering events, duration of stuttering, and 5-s intervals of speech that could be categorized as judged (or not judged) to contain stuttering. Results showed (a) changes in the amount of stuttering identified by the judges; (b) improved correspondence between the judges' identifications of stuttering events and interval-based standards previously developed from judgments made by experienced, authoritative judges; (c) improved correspondence between interval-based analyses of the judges' stuttering judgments and the previously developed standards; (d) improved intrajudge agreement; (e) improved interjudge agreement; and (f) convergence between the 2 judge groups, for samples and speakers used during training tasks and also for other speakers. Some implications of these findings for developing standardized procedures for the real-time measurement of stuttering are discussed.
27

Naganuma, Misa, Kohei Tahara, Shiori Hasegawa, et al. "Adverse event profiles of solvent-based and nanoparticle albumin-bound paclitaxel formulations using the Food and Drug Administration Adverse Event Reporting System." SAGE Open Medicine 7 (January 2019): 205031211983601. http://dx.doi.org/10.1177/2050312119836011.

Abstract:
Objectives: Paclitaxel is a highly effective antitumor agent with notable adverse events, including hypersensitivity reactions, peripheral neuropathy, arthralgia, myalgias, and neutropenia. Solvent-based paclitaxel causes severe allergic, hypersensitivity, and anaphylactic reactions. Nanoparticle albumin-bound paclitaxel was recently developed and provides an advantage over solvent-based paclitaxel in avoiding solvent/surfactant-related adverse events. The aim of this study was to assess the adverse event profiles of solvent-based paclitaxel and nanoparticle albumin-bound paclitaxel formulations using data from the spontaneous adverse event reporting system of the US Food and Drug Administration Adverse Event Reporting System database. Methods: This study relied on Medical Dictionary for Regulatory Activities preferred terms and standardized queries, and calculated the reporting ratio and reporting odds ratios of paclitaxel formulations. Results: Of 8,867,135 reports recorded in the US Food and Drug Administration Adverse Event Reporting System database from January 2004 to December 2016, 3469 and 4447 adverse events corresponded to solvent-based paclitaxel and nanoparticle albumin-bound paclitaxel, respectively. Reporting odds ratios (95% confidence interval) for anaphylactic reaction (standardized MedDRA query code: 20000021) associated with the use of solvent-based paclitaxel and nanoparticle albumin-bound paclitaxel were 1.69 (1.56–1.84) and 0.75 (0.68–0.83), respectively. A reporting odds ratio signal for anaphylactic reaction was not detected for nanoparticle albumin-bound paclitaxel. Reporting odds ratios (95% confidence interval) for acute renal failure (standardized MedDRA query code: 20000003) associated with the use of solvent-based paclitaxel and nanoparticle albumin-bound paclitaxel were 0.75 (0.58–0.98) and 1.60 (1.37–1.89), respectively. Conclusion: This is the first study to evaluate the adverse event profile of nanoparticle albumin-bound paclitaxel using US Food and Drug Administration Adverse Event Reporting System data. Considering that the US Food and Drug Administration Adverse Event Reporting System database does not allow one to infer causality or risk ranking, the different reporting frequencies of anaphylactic reaction and acute renal failure between solvent-based paclitaxel and nanoparticle albumin-bound paclitaxel must be further investigated via analytical observational research.
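The reporting odds ratio used in such disproportionality analyses is simple to compute from a 2x2 contingency table; a small Python sketch with invented counts:

```python
import math

def reporting_odds_ratio(a, b, c, d):
    """ROR with 95% CI from a 2x2 disproportionality table:
    a = target event, target drug;     b = other events, target drug
    c = target event, all other drugs; d = other events, all other drugs"""
    ror = (a / b) / (c / d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(ROR)
    lo = math.exp(math.log(ror) - 1.96 * se)
    hi = math.exp(math.log(ror) + 1.96 * se)
    return ror, lo, hi

# Hypothetical counts, not the paper's data:
print(reporting_odds_ratio(120, 3349, 21000, 800000))
```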
28

Savage, J. C. "The Parkfield prediction fallacy." Bulletin of the Seismological Society of America 83, no. 1 (1993): 1–6. http://dx.doi.org/10.1785/bssa0830010001.

Abstract:
The Parkfield earthquake prediction is generally stated as a 95% probability that the next moderate earthquake there should occur before January 1993. That time limit is based on a two-sided 95% confidence interval. Because at the time of the prediction (1985) it was already clear that the earthquake had not occurred prior to 1985, a one-sided 95% confidence interval would have been more appropriate. That confidence interval ended in October 1991. The Parkfield prediction was based on an extrapolation of five of the six events in the 1857 to 1966 earthquake sequence; the 1934 event was omitted because it did not fit the regularity exhibited by the other data. The fallacy in the prediction is that it did not take account of other less-contrived explanations of the Parkfield seismicity (e.g., not excluding the 1934 event). Even if the Parkfield earthquake should occur in the near future, it would be better explained by less-contrived hypotheses.
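Savage's central point, that a one-sided 95% bound ends sooner than the upper limit of a two-sided 95% interval, can be illustrated with a simple t-based prediction interval over the 1857-1966 Parkfield recurrence intervals; this sketch does not reproduce the original prediction's method:

```python
import numpy as np
from scipy import stats

# Recurrence intervals (years) between the 1857, 1881, 1901, 1922, 1934
# and 1966 Parkfield events; rounded, and used here purely for illustration.
intervals = np.array([24.0, 20.0, 21.0, 12.0, 32.0])
n, mean, sd = len(intervals), intervals.mean(), intervals.std(ddof=1)
pred_sd = sd * np.sqrt(1 + 1 / n)       # prediction SD for one new interval

two_sided = mean + stats.t.ppf([0.025, 0.975], df=n - 1) * pred_sd
one_sided_upper = mean + stats.t.ppf(0.95, df=n - 1) * pred_sd

print("two-sided 95%:", two_sided.round(1), "years after the last event")
print(f"one-sided 95% upper bound: {one_sided_upper:.1f} years")
# The one-sided bound ends sooner than the two-sided upper limit,
# which is the crux of the argument.
```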
29

Lu, Zhongda, Guangtao Ran, Guoliang Zhang, and Fengxia Xu. "Event-Based Nonfragile H∞ Filter Design for Networked Control Systems with Interval Time-Varying Delay." Journal of Control Science and Engineering 2018 (June 12, 2018): 1–15. http://dx.doi.org/10.1155/2018/4541586.

Abstract:
This paper first investigates the event-triggered nonfragile H∞ filter design for a class of nonlinear NCSs with interval time-varying delay. An event-triggered scheme is addressed to determine the sampled data to be transmitted so that network communication resources can be saved significantly. The nonfragile filter design is assumed to include multiplicative gain variations according to the filter's implementation. Under the event-triggered scheme, the filtering error system is modeled as a system with interval time-varying delay. By constructing a new Lyapunov-Krasovskii functional and employing the Wirtinger inequality, a sufficient condition is derived which guarantees that the filtering error system is asymptotically stable with the prescribed H∞ performance. The nonfragile filter parameters are obtained by solving a set of linear matrix inequalities. Two numerical examples are given to show the usefulness and the effectiveness of the proposed method.
30

Zhao, Lihui, Lu Tian, Hajime Uno, et al. "Utilizing the integrated difference of two survival functions to quantify the treatment contrast for designing, monitoring, and analyzing a comparative clinical study." Clinical Trials 9, no. 5 (2012): 570–77. http://dx.doi.org/10.1177/1740774512455464.

Abstract:
Background Consider a comparative, randomized clinical study with a specific event time as the primary end point. In the presence of censoring, standard methods of summarizing the treatment difference are based on Kaplan–Meier curves, the logrank test, and the point and interval estimates via Cox’s procedure. Moreover, for designing and monitoring the study, one usually utilizes an event-driven scheme to determine the sample sizes and interim analysis time points. Purpose When the proportional hazards (PHs) assumption is violated, the logrank test may not have sufficient power to detect the difference between two event time distributions. The resulting hazard ratio estimate is difficult, if not impossible, to interpret as a treatment contrast. When the event rates are low, the corresponding interval estimate for the ‘hazard ratio’ can be quite large due to the fact that the interval length depends on the observed numbers of events. This may indicate that there is not enough information for making inferences about the treatment comparison even when there is no difference between two groups. This situation is quite common for a postmarketing safety study. We need an alternative way to quantify the group difference. Methods Instead of quantifying the treatment group difference using the hazard ratio, we consider an easily interpretable and model-free parameter, the integrated survival rate difference over a prespecified time interval, as an alternative. We present the inference procedures for such a treatment contrast. This approach is purely nonparametric and does not need any model assumption such as the PHs. Moreover, when we deal with equivalence or noninferiority studies and the event rates are low, our procedure would provide more information about the treatment difference. We used a cardiovascular trial data set to illustrate our approach. Results The results using the integrated event rate differences have a heuristic interpretation for the treatment difference even when the PHs assumption is not valid. When the event rates are low, for example, for the cardiovascular study discussed in this article, the procedure for the integrated event rate difference provides tight interval estimates in contrast to those based on the event-driven inference method. Limitations The design of a trial with the integrated event rate difference may be more complicated than that using the event-driven procedure. One may use simulation to determine the sample size and the estimated duration of the study. Conclusions The procedure discussed in this article can be a useful alternative to the standard PHs method in the survival analysis.
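The proposed contrast, the survival difference integrated over a prespecified window (equivalently, the difference in restricted mean survival time), is straightforward to approximate once Kaplan-Meier curves are available; a sketch with hypothetical step curves:

```python
import numpy as np

def step_survival(curve, grid):
    """Evaluate a Kaplan-Meier-style right-continuous step function on `grid`.
    `curve` is (event_times, survival_after_each_time), both sorted."""
    times, surv = np.asarray(curve[0]), np.asarray(curve[1])
    idx = np.searchsorted(times, grid, side="right") - 1
    return np.where(idx < 0, 1.0, surv[np.clip(idx, 0, None)])

def integrated_difference(curve_a, curve_b, tau, n=20_000):
    """Integrated survival difference over [0, tau]: the difference in
    restricted mean survival time, a model-free treatment contrast."""
    grid = np.linspace(0, tau, n)
    diff = step_survival(curve_a, grid) - step_survival(curve_b, grid)
    return float(np.sum(diff[:-1] * np.diff(grid)))   # left Riemann sum

# Hypothetical step curves (event times in months, survival just after each):
treatment = ([3, 9, 15], [0.97, 0.93, 0.90])
control   = ([2, 7, 14], [0.95, 0.89, 0.84])
print(integrated_difference(treatment, control, tau=18))  # extra months alive
```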
31

Weatherley, D., and S. Abe. "Earthquake statistics in a Block Slider Model and a fully dynamic Fault Model." Nonlinear Processes in Geophysics 11, no. 5/6 (2004): 553–60. http://dx.doi.org/10.5194/npg-11-553-2004.

Abstract:
We examine the event statistics obtained from two differing simplified models for earthquake faults. The first model is a reproduction of the Block-Slider model of Carlson et al. (1991), a model often employed in seismicity studies. The second model is an elastodynamic fault model based upon the Lattice Solid Model (LSM) of Mora and Place (1994). We performed simulations in which the fault length was varied in each model and generated synthetic catalogs of event sizes and times. From these catalogs, we constructed interval event size distributions and inter-event time distributions. The larger, localised events in the Block-Slider model displayed the same scaling behaviour as events in the LSM; however, the distribution of inter-event times was markedly different. The analysis of both event size and inter-event time statistics is an effective method for comparative studies of differing simplified models for earthquake faults.
32

Murakami, Michio, Tsukasa Fujita, Pinqi Li, Seiya Imoto, and Tetsuo Yasutaka. "Development of a COVID-19 risk assessment model for participants at outdoor music festivals: evaluation of the validity and control measure effectiveness based on two actual events in Japan and Spain." PeerJ 10 (August 8, 2022): e13846. http://dx.doi.org/10.7717/peerj.13846.

Abstract:
We developed an environmental exposure model to estimate the coronavirus disease 2019 (COVID-19) risk among participants at outdoor music festivals and validated the model using two real events—one in Japan (Event 1) and one in Spain (Event 2). Furthermore, we considered a hypothetical situation in which Event 1 was held but enhanced measures were implemented to evaluate the extent to which the risk could be reduced by additional infection control measures, such as negative antigen tests on the day of the event, wearing of masks, disinfection of environmental surfaces, and vaccination. Among 7,392 participants, the total number of already- and newly-infected individuals who participated in Event 1 according to the new model was 47.0 (95% uncertainty interval: 12.5–185.5), which is in good agreement with the reported value (45). The risk of infection at Event 2 (1.98 × 10−2; 95% uncertainty interval: 0.55 × 10−2–6.39 × 10−2), calculated by the model in this study, was also similar to the estimated value in the previous epidemiological study (1.25 × 10−2). These results for the two events in different countries highlighted the validity of the model. Among the additional control measures in the hypothetical Event 1, vaccination, mask-wearing, and disinfection of surfaces were determined to be effective. Based on the combination of all measures, a 94% risk reduction could be achieved. In addition to setting a benchmark for an acceptable number of newly-infected individuals at the time of an event, the application of this model will enable us to determine whether it is necessary to implement additional measures, limit the number of participants, or refrain from holding an event.
33

Ekta, Soni, Arpita Nagpal, and Khyati Chopra. "Atrial Fibrillation Discrimination for Real-Time ECG Monitoring Based on QT Interval Variation." Indian Journal of Science and Technology 15, no. 17 (2022): 767–77. https://doi.org/10.17485/IJST/v15i17.53.

Abstract:
Background/Objectives: An occasional Atrial Fibrillation (AF) event in the heart rhythm should be monitored regularly, at continuous intervals. Timely detection of these anomalies in the heart rhythm is required to save patients from sudden cardiac arrest. Method: A long-duration ECG categorization algorithm named AFECOC is proposed. For this, 71 one-minute-long signals are obtained from PhysioNet's "MIT-BIH Arrhythmia (MA)" and "AF" databases. Two-stage filtering of noisy signals is employed before signal analysis. Four algorithms, i.e. Error-Correcting Output Code (ECOC), Naïve Bayes, Decision Tree, and K-Nearest Neighbor (K-NN), are applied to reduce the feature set, and the signals are then classified with the ECOC classifier. Findings: It was found that the ECOC algorithm gives the highest accuracy of 81.95% on the complete feature set. To exclude the irrelevant features, the highest-performing algorithm, ECOC, was used to extract the combination of feature sets most affected during AF. The combination of 'heart-beat' and 'mean QT-interval' was found to be the most relevant feature set affected during AF events. The accuracy of these two features was evaluated with four classifiers, namely ECOC, Naïve Bayes, decision-tree-based, and K-means, and the accuracy obtained was 89.6%, 76.19%, 76.19%, and 61%, respectively. This shows that the proposed methodology achieved the highest accuracy of 89.6% with the ECOC classifier. Finally, all the AF rhythms were checked using annotated labels for spontaneous changes in the QT-interval to verify the designed methodologies. Novelty: Instead of relying on missing P-waves and RR-interval variation, a mean QT-interval variation-based AF event detection algorithm gives better accuracy for longer signals. Hence, it can be implemented in real-time continuous monitoring. Keywords: Atrial Fibrillation (AF); Classification; Error Correcting Output Code (ECOC); Feature extraction; QT interval
34

王, 妍. "The Effects of Predictive Strategy, Time Interval and Feedback on Event-Based Prospective Memory." Advances in Psychology 11, no. 10 (2021): 2357–66. http://dx.doi.org/10.12677/ap.2021.1110268.

35

Meseguer, Jordi, Vicenç Puig, and Teresa Escobet. "Fault Isolation Module Implementation using a Timed Discrete-Event Approach based on Interval Observers." IFAC Proceedings Volumes 42, no. 8 (2009): 1563–68. http://dx.doi.org/10.3182/20090630-4-es-2003.00255.

36

Xiao, Qiang, Frank L. Lewis, and Zhigang Zeng. "Event-Based Time-Interval Pinning Control for Complex Networks on Time Scales and Applications." IEEE Transactions on Industrial Electronics 65, no. 11 (2018): 8797–808. http://dx.doi.org/10.1109/tie.2018.2813968.

37

Spalvieri, Arnaldo. "Event-based analysis of convergence to energy equipartition." American Journal of Physics 92, no. 3 (2024): 210–13. http://dx.doi.org/10.1119/5.0142173.

Abstract:
The paper presents a simple yet rigorous analysis of the transient of the spatial distribution of the kinetic energy of a classical ideal monoatomic dilute gas that passes from non-equilibrium to equilibrium. The proposed approach is event-based, in the sense that the evolution of the system is analyzed as a function of the random number of collisions in a given time interval. Taking a very simple yet realistic model of collision, the paper shows that collisions between atoms lead exponentially to energy equipartition.
38

Torrisi, Nunzio. "Statistical Deadband: A Novel Approach for Event-Based Data Reporting." Informatics 6, no. 1 (2019): 5. http://dx.doi.org/10.3390/informatics6010005.

Abstract:
Deadband algorithms are implemented inside industrial gateways to reduce the volume of data sent across different networks. By tuning the deadband sampling resolution by a preset interval Δ, it is possible to estimate the balance between the traffic rates of networks connected by industrial SCADA gateways. This work describes the design and implementation of two original deadband algorithms based on statistical concepts derived by John Bollinger in his financial technical analysis. The statistical algorithms proposed do not require the setup of a preset interval—this is required by non-statistical algorithms. All algorithms were evaluated and compared by computing the effectiveness and fidelity over a public collection of random pseudo-periodic signals. The overall performance measured in the simulations showed better results, in terms of effectiveness and fidelity, for the statistical algorithms, while the measured computing resources were not as efficient as for the non-statistical deadband algorithms.
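The two algorithm families can be caricatured in a few lines of Python: a classic deadband needs a preset Δ, while a Bollinger-style statistical deadband derives its band from a rolling mean and standard deviation. This is a generic sketch of the idea, not the paper's implementations:

```python
import math
import statistics
from collections import deque

def preset_deadband(samples, delta):
    """Classic deadband: report a sample only when it moves more than
    `delta` away from the last reported value."""
    out, last = [], None
    for t, x in samples:
        if last is None or abs(x - last) > delta:
            out.append((t, x))
            last = x
    return out

def statistical_deadband(samples, window=20, k=2.0):
    """Bollinger-style deadband: report when a sample leaves the rolling
    mean +/- k * std band; no preset delta is needed."""
    out, hist = [], deque(maxlen=window)
    for t, x in samples:
        if len(hist) < 2 or abs(x - statistics.fmean(hist)) > k * statistics.stdev(hist):
            out.append((t, x))
        hist.append(x)
    return out

# A drifting pseudo-periodic test signal.
signal = [(t, math.sin(t / 5) + 0.02 * t) for t in range(200)]
print(len(preset_deadband(signal, delta=0.2)), len(statistical_deadband(signal)))
```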
39

Piatov, Danila, Sven Helmer, Anton Dignös, and Fabio Persia. "Cache-efficient sweeping-based interval joins for extended Allen relation predicates." VLDB Journal 30, no. 3 (2021): 379–402. http://dx.doi.org/10.1007/s00778-020-00650-5.

Abstract:
We develop a family of efficient plane-sweeping interval join algorithms for evaluating a wide range of interval predicates such as Allen's relationships and parameterized relationships. Our technique is based on a framework, components of which can be flexibly combined in different manners to support the required interval relation. In temporal databases, our algorithms can exploit a well-known and flexible access method, the Timeline Index, thus expanding the set of operations it supports even further. Additionally, employing a compact data structure, the gapless hash map, we utilize the CPU cache efficiently. In an experimental evaluation, we show that our approach is several times faster and scales better than state-of-the-art techniques, while being much better suited for real-time event processing.
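The basic plane-sweep idea behind such joins (without the paper's Timeline Index or cache-conscious gapless hash map) can be sketched as follows, reporting all overlapping pairs of half-open intervals:

```python
import heapq

def overlap_join(R, S):
    """Sweep-based interval join: yields index pairs (i, j) with R[i] and S[j]
    overlapping. Intervals are half-open (start, end) with start <= end."""
    events = sorted([(s, e, 'R', i) for i, (s, e) in enumerate(R)] +
                    [(s, e, 'S', j) for j, (s, e) in enumerate(S)])
    active_R, active_S = [], []   # min-heaps of (end, index)
    for start, end, side, idx in events:
        # Expire intervals that ended at or before this start.
        while active_R and active_R[0][0] <= start:
            heapq.heappop(active_R)
        while active_S and active_S[0][0] <= start:
            heapq.heappop(active_S)
        if side == 'R':
            for _, j in active_S:
                yield (idx, j)
            heapq.heappush(active_R, (end, idx))
        else:
            for _, i in active_R:
                yield (i, idx)
            heapq.heappush(active_S, (end, idx))

print(sorted(overlap_join([(1, 5), (4, 9)], [(2, 3), (8, 12)])))
# [(0, 0), (1, 1)]
```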
40

Terzi, Elissavet, Ariadni Skari, Stefanos Nikolaidis, Konstantinos Papadimitriou, Athanasios Kabasakalis, and Vassilis Mougios. "Relevance of a Sprint Interval Swim Training Set to the 100‐Meter Freestyle Event Based on Blood Lactate and Kinematic Variables." Journal of Human Kinetics 80, no. 1 (2021): 153–61. http://dx.doi.org/10.2478/hukin-2021-0091.

Abstract:
Sprint interval training (SIT) sets are commonly used by coaches in the training routine of swimmers competing in short-distance events; however, data regarding their relevance to competitive events are scarce. The aim of this study was to examine whether performance variables differed or correlated between a 4 × 50-m maximal swimming set (with a work-to-rest ratio of 1:4) and the 100-m freestyle event. Eleven male and 16 female competitive swimmers aged 16.1 ± 1.1 years participated in the study. All swimmers trained at least six times a week and had training experience of more than 4 years. They completed the two freestyle tests on different days, in random and counterbalanced order. In each test, speed, blood lactate, stroke rate (SR), and stroke index (SI) were measured. Speed, blood lactate, and SR were higher at the 4 × 50 m compared to the 100 m and were positively correlated between tests (p < 0.001). The SI did not differ significantly, but was positively correlated between tests. Males were faster and had a higher SI than females, but genders did not differ in lactate. Since performance variables were better in the SIT set and correlated with those in the 100-m bout, we suggest that the 4 × 50-m set can be used to improve performance in the 100-m freestyle event. Moreover, this set can help coaches identify which swimmers will swim fastest in the event.
41

Wu, Chun-Yi, and Po-Kai Chou. "Prediction of total landslide volume in watershed scale under rainfall events using a probability model." Open Geosciences 13, no. 1 (2021): 944–62. http://dx.doi.org/10.1515/geo-2020-0284.

Abstract:
This study established a probability model based on the landslide spatial and size probabilities to predict the possible volume and locations of landslides at watershed scale under rainfall events. First, we assessed the landslide spatial probability using a random forest landslide susceptibility model including intrinsic causative factors and extrinsic rainfall factors. Second, we calculated the landslide volume probability using the Pearson type V distribution. Lastly, these probabilities were joined to predict possible landslide volumes and locations in the study area, the Taipei Water Source Domain, under rainfall events. The possible total landslide volume in the watershed changed from 1.7 million cubic meters under the event with a 2-year recurrence interval to 18.2 million cubic meters under the event with a 20-year recurrence interval. Approximately 62% of the total landslide volume triggered by the rainfall events was concentrated in 20% of the slope units. As the recurrence interval of the events increased, the slope units with large landslide volumes tended to concentrate in the midstream of the Nanshi River subwatershed. The results indicated that the posited probability model can be used not only to predict total landslide volume at watershed scale, but also to determine the possible locations of the slope units with large landslide volumes.
42

Wu, Hongrun, Jun Huang, Yong Qin, and Yuan Sun. "Hybrid Dynamic Event-Triggered Interval Observer Design for Nonlinear Cyber–Physical Systems with Disturbance." Fractal and Fractional 9, no. 2 (2025): 86. https://doi.org/10.3390/fractalfract9020086.

Abstract:
This paper investigates the state estimation problem for nonlinear cyber–physical systems (CPSs). To conserve system resources, we propose a novel hybrid dynamic event-triggered mechanism (ETM) that prevents the occurrence of Zeno behavior. This work is based on designing an interval observer under the hybrid dynamic ETM to solve the state reconstruction problem of Lipschitz nonlinear CPSs subject to disturbances. That is, the designed triggering mechanism is integrated into the design of the Interval Observer (IO), resulting in a hybrid dynamic event-triggered interval observer (HDETIO), and the system stability and robustness are proved using a Lyapunov function, demonstrating that the observer can effectively provide interval estimation for CPSs with nonlinearity and disturbances. Compared to existing work, the primary contribution of this work is its ability to pre-specify the minimum inter-event time (MIET) and apply it to interval state estimation, enhancing its practicality for real-world physical systems. Finally, the correctness and effectiveness of the designed hybrid dynamic ETM and IO framework are validated with an example.
43

Murtiyasa, Budi, Afifah Ma'rufi, and Mohd Asrul Affendi bin Abdullah. "Undergraduate students’ errors on interval estimation based on variance neglect." Jurnal Elemen 8, no. 1 (2022): 161–74. http://dx.doi.org/10.29408/jel.v8i1.4529.

Abstract:
Interval estimation is an important topic, especially in drawing conclusions on an event. Mathematics education students must possess the skill to formulate and use interval estimation. The errors of mathematics education students in formulating wrong interval estimates indicate a low understanding of interval estimation. This study explores the errors of mathematics education students in interpreting the variance in questions regarding selecting the proper test statistic to formulate the interval estimation of the mean accurately. Respondents in this study involved 36 students of mathematics education (N = 9 males, N = 27 females). This research is qualitative research with a qualitative descriptive approach. Data collection was carried out using the respondents' ability test and interviews. The respondents' ability test instrument was tested on 36 students and declared valid, with r-count > r-table (r-table = 0.3291), and declared reliable, with a Cronbach's alpha of 0.876 > 0.6. Through an exploratory approach, data were analyzed by categorizing, reducing, and interpreting to conclude students' abilities and thinking methods in formulating interval estimation of the mean based on the variance in questions. The results showed that mathematics education students neglected the variance, so they could not determine the test statistics correctly, resulting in erroneous interval estimates. This study provides insight into the thinking methods of mathematics education students on variance in interval estimation problems in the hope of anticipating errors in formulating interval estimation problems.
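The statistical point at issue, that the treatment of the variance decides between a z and a t statistic, is captured by the following sketch (function name and data invented):

```python
import numpy as np
from scipy import stats

def mean_interval(sample, conf=0.95, pop_variance=None):
    """Interval estimate of the mean. The choice of statistic hinges on the
    variance: known population variance -> z statistic; variance estimated
    from the sample -> t statistic (the distinction the students neglected)."""
    x = np.asarray(sample, dtype=float)
    n, m = len(x), x.mean()
    if pop_variance is not None:                  # sigma^2 known: z interval
        half = stats.norm.ppf((1 + conf) / 2) * np.sqrt(pop_variance / n)
    else:                                         # sigma^2 estimated: t interval
        half = stats.t.ppf((1 + conf) / 2, df=n - 1) * x.std(ddof=1) / np.sqrt(n)
    return m - half, m + half

scores = [72, 65, 80, 77, 71, 69, 74, 78]
print(mean_interval(scores))                      # t interval
print(mean_interval(scores, pop_variance=25.0))   # z interval
```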
44

Xu, Fuyu, and Kate Beard. "A Unifying Framework for Analysis of Spatial-Temporal Event Sequence Similarity and Its Applications." ISPRS International Journal of Geo-Information 10, no. 9 (2021): 594. http://dx.doi.org/10.3390/ijgi10090594.

Abstract:
Measures of similarity or differences between data objects are applied frequently in geography, biology, computer science, linguistics, logic, business analytics, and statistics, among other fields. This work focuses on event sequence similarity among event sequences extracted from time series observed at spatially deployed monitoring locations with the aim of enhancing the understanding of process similarity over time and geospatial locations. We present a framework for a novel matrix-based spatiotemporal event sequence representation that unifies punctual and interval-based representation of events. This unified representation of spatiotemporal event sequences (STES) supports different event data types and provides support for data mining and sequence classification and clustering. The similarity measure is based on the Jaccard index with temporal order constraints and accommodates different event data types. The approach is demonstrated through simulated data examples and the performance of the similarity measures is evaluated with a k-nearest neighbor algorithm (k-NN) classification test on synthetic datasets. As a case study, we demonstrate the use of these similarity measures in a spatiotemporal analysis of event sequences extracted from space time series of a water quality monitoring system.
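One plausible reading of a Jaccard index with temporal order constraints is to count only order-preserving matches (a longest common subsequence) in the union; the paper's matrix-based STES formulation differs in detail, so the sketch below is illustrative only:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence (order-preserving matches)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def ordered_jaccard(a, b):
    """Jaccard-style similarity in which shared events must also appear in
    the same temporal order: |matches| / |union|."""
    m = lcs_length(a, b)
    return m / (len(a) + len(b) - m)

# Hypothetical event sequences from two water-quality monitoring stations.
print(ordered_jaccard(["bloom", "hypoxia", "fishkill"],
                      ["bloom", "rain", "hypoxia"]))  # 0.5
```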
45

Lentle, R. G., K. J. Stafford, M. A. Potter, B. P. Springett, and S. Haslett. "The temporal character of feeding behaviour in captive tammar wallabies (Macropus eugenii Desmarest)." Australian Journal of Zoology 46, no. 6 (1998): 579. http://dx.doi.org/10.1071/zo98027.

Abstract:
Four tammar wallabies, maintained in a fixed 12 : 12 light : dark cycle, were fed ad libitum one of three foods of differing nutrient density and fibre content, consecutively, each for a period of two weeks. During the second week, food consumption was assessed daily and the temporal feeding pattern was monitored by visible and infrared video recording. Apart from a short rest period around noon, feeding continued throughout the 24-hour cycle, peaking crepuscularly. Total daily feeding time corrected to metabolic body weight was significantly longer, but dry-matter intake corrected to metabolic body weight was significantly lower than that of larger macropod species, indicating greater investment in chewing. Feed-event duration, inter-feed-event interval, rate of feeding, and dry matter intake all increased significantly on pelleted foods of low nutritional density. Rate of feeding and feed-event duration increased significantly on diced carrot such that dry-matter intake was not significantly different to that on high-quality pelleted food. Survivorship curves of inter-feed-event intervals were predominantly linear. This and the consistently higher positive correlations between the duration of individual feed events and inter-feed-event intervals than between meals and inter-meal intervals indicated a nibbling rather than a meal-based feeding strategy. Levels of correlation of feed-event duration with inter-feed-event interval were generally low but there was a significant increase in positive correlation when food of lower quality was given. The duration of successive feed events tended to increase on low-quality and decrease on high-quality food more consistently than did successive inter-feed-event intervals.
46

Mohsin Saeed, Noora, and Faiz Ahmed Mohamed Elfaki. "Parametric Weibull Model Based on Imputations Techniques for Partly Interval Censored Data." Austrian Journal of Statistics 49, no. 3 (2020): 30–37. http://dx.doi.org/10.17713/ajs.v49i3.1027.

Abstract:
The term survival analysis has been used in a broad sense to describe a collection of statistical procedures for data analysis in which the outcome variable of interest is the time until an event occurs. The time to failure of an experimental unit might be censored, and the censoring can be right, left, interval, or partly interval censored (PIC). In this paper, the analysis of this model is conducted based on a parametric Weibull model via PIC data. Moreover, two imputation techniques are used: left point and right point. The effectiveness of the proposed model is tested through numerical analysis on simulated and secondary data sets.
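Left- and right-point imputation for partly interval censored data is simple: exact observations have equal bounds, so either endpoint returns them unchanged, while censoring intervals collapse to the chosen endpoint before the Weibull model is fitted. A sketch with invented data:

```python
import numpy as np
from scipy import stats

def impute_pic(lower, upper, how="left"):
    """Left/right point imputation for partly interval-censored (PIC) data:
    exact observations have lower == upper, so either endpoint returns them
    intact; censoring intervals collapse to the chosen endpoint."""
    return np.asarray(lower if how == "left" else upper, dtype=float)

# Invented PIC observations: (lower, upper) bounds on event times in months.
lo = [2.0, 5.0, 3.0, 7.0, 1.5]
hi = [2.0, 8.0, 3.0, 10.0, 4.0]

for how in ("left", "right"):
    t = impute_pic(lo, hi, how)
    shape, loc, scale = stats.weibull_min.fit(t, floc=0)   # location fixed at 0
    print(f"{how:>5}: shape={shape:.2f}, scale={scale:.2f}")
```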
47

Naganuma, Misa, Yumi Motooka, Sayaka Sasaoka, et al. "Analysis of adverse events of renal impairment related to platinum-based compounds using the Japanese Adverse Drug Event Report database." SAGE Open Medicine 6 (January 1, 2018): 205031211877247. http://dx.doi.org/10.1177/2050312118772475.

Abstract:
Objectives: Platinum compounds cause several adverse events, such as nephrotoxicity, gastrointestinal toxicity, myelosuppression, ototoxicity, and neurotoxicity. We evaluated the incidence of renal impairment as an adverse event related to the administration of platinum compounds using the Japanese Adverse Drug Event Report database. Methods: We analyzed adverse events associated with the use of platinum compounds reported from April 2004 to November 2016. The reporting odds ratio and its 95% confidence interval were used to detect the signal for each renal impairment incidence. We evaluated the time-to-onset profile of renal impairment, assessed the hazard type using the Weibull shape parameter, and applied the association rule mining technique to discover undetected relationships, such as possible risk factors. Results: In total, 430,587 reports in the Japanese Adverse Drug Event Report database were analyzed. The reporting odds ratios (95% confidence interval) for renal impairment resulting from the use of cisplatin, oxaliplatin, carboplatin, and nedaplatin were 2.7 (2.5–3.0), 0.6 (0.5–0.7), 0.8 (0.7–1.0), and 1.3 (0.8–2.1), respectively. The lower limit of the reporting odds ratio (95% confidence interval) for cisplatin was >1. The median (lower–upper quartile) onset time of renal impairment following the use of platinum-based compounds was 6.0–8.0 days. The Weibull shape parameter β and its 95% confidence interval upper limit for oxaliplatin were <1. In the association rule mining, the lift score for patients who were treated with cisplatin and co-administered furosemide, loxoprofen, or pemetrexed was high. Similarly, the scores for patients with hypertension or diabetes mellitus were high. Conclusion: Our findings suggest a potential risk of renal impairment during cisplatin use in real-world settings. The present findings demonstrate that the incidence of renal impairment following cisplatin use should be closely monitored when patients are hypertensive or diabetic, or when they are co-administered furosemide, loxoprofen, or pemetrexed. In addition, healthcare professionals should closely assess a patient's background prior to treatment.
48

Wei, Guoliang, Linlin Liu, Licheng Wang, and Derui Ding. "Event-triggered control for discrete-time systems with unknown nonlinearities: an interval observer-based approach." International Journal of Systems Science 51, no. 6 (2020): 1019–31. http://dx.doi.org/10.1080/00207721.2020.1746441.

49

Terenzi, Roberto. "n-Fold Time Coincidences on IGEC Gravitational Wave Events Data Using Data-Base Technology." International Journal of Modern Physics D 09, no. 03 (2000): 347–51. http://dx.doi.org/10.1142/s0218271800000402.

Abstract:
Gravitational events in IGEC data are marked with their detection time t_d. Although t_d takes into account phase shifts due to hardware and software, it could be statistically retarded or anticipated by noise with respect to the arrival time t_a of a gravitational wave. We can therefore estimate a time interval Δ for each event such that the t_a of the event is equally likely anywhere in Δ. Consequently we can define an n-fold coincidence as an n-tuple of events whose associated time intervals have a non-empty intersection. Using this definition and taking into account some time interval properties, we show that an n-fold coincidence set can be computed by an appropriate selection of events from the 2-fold coincidence sets. Furthermore, we stress that a database management system (DBMS) is a very effective tool to store IGEC data (i.e. events) and to implement n-fold coincidence analysis based on the above considerations. Finally we propose to use a DBMS to build a database of both events and n-fold coincidence analysis data in order to have an efficient and powerful tool for data retrieval and further analysis.
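The paper's definition reduces neatly to an interval-intersection test: n intervals share a point iff the largest lower bound does not exceed the smallest upper bound. The brute-force sketch below illustrates the definition (a DBMS implementation would instead build n-fold sets from the 2-fold coincidence sets, as the paper proposes):

```python
from itertools import product

def interval(ev):
    """Uncertainty interval for an event: (t_d - dt, t_d + dt)."""
    t, dt = ev
    return (t - dt, t + dt)

def n_fold_coincidences(detectors):
    """All n-tuples (one event per detector) whose intervals share a point:
    the intersection is non-empty iff max(lows) <= min(highs)."""
    hits = []
    for combo in product(*detectors):
        ivs = [interval(e) for e in combo]
        lo, hi = max(a for a, _ in ivs), min(b for _, b in ivs)
        if lo <= hi:
            hits.append(combo)
    return hits

# Three hypothetical detectors; events are (detection time, half-width) in s.
d1 = [(100.0, 0.5), (250.0, 0.5)]
d2 = [(100.3, 0.4)]
d3 = [(99.9, 0.6), (400.0, 0.5)]
print(n_fold_coincidences([d1, d2, d3]))
```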
50

Asada, Saori, Hiroshi Morita, Atsuyuki Watanabe, et al. "Indication and prognostic significance of programmed ventricular stimulation in asymptomatic patients with Brugada syndrome." EP Europace 22, no. 6 (2020): 972–79. http://dx.doi.org/10.1093/europace/euaa003.

Abstract:
Aims: To establish the indication for programmed ventricular stimulation (PVS) for asymptomatic patients with Brugada syndrome (BrS), we evaluated the prognostic significance of PVS based on abnormal electrocardiogram (ECG) markers. Methods and results: One hundred and twenty-five asymptomatic patients with BrS were included. We performed PVS at two sites of the right ventricle with up to three extrastimuli [two pacing cycle lengths and minimum coupling interval (MCI) of 180 ms]. We followed the patients for 133 months and evaluated ventricular fibrillation (VF) events. Fragmented QRS (fQRS) and Tpeak-Tend (Tpe) interval were evaluated as ECG markers for identifying high-risk patients. Fragmented QRS and long Tpe interval (≥100 ms) were observed in 66 and 37 patients, respectively. Ventricular fibrillation was induced by PVS in 60 patients. During follow-up, 10 patients experienced VF events. Fragmented QRS, long Tpe interval, and PVS-induced VF with an MCI of 180 ms or up to two extrastimuli were associated with future VF events (fQRS: P = 0.015, Tpe ≥ 100 ms: P = 0.038, VF induction: P < 0.001). However, PVS-induced VF with an MCI of 200 ms was less specific (P = 0.049). The frequencies of ventricular tachyarrhythmia events during follow-up were 0%/year with no ECG markers and 0.1%/year with no VF induction. The existence of two ECG factors with induced VF was strongly associated with future VF events (event rate: 4.4%/year, P < 0.001), and the existence of one ECG factor with induced VF was also associated (event rate: 1.3%/year, P = 0.011). Conclusion: We propose PVS with a strict protocol for asymptomatic patients with fQRS and/or long Tpe interval to identify high-risk patients.