Journal articles on the topic 'Predictive Complex Event Processing'

Consult the top 50 journal articles for your research on the topic 'Predictive Complex Event Processing.'

1

Jia, Yunsong, Shuaiqi Huang, and Xiang Li. "Complex event processing system for IoT greenhouse." E3S Web of Conferences 267 (2021): 01048. http://dx.doi.org/10.1051/e3sconf/202126701048.

Abstract:
The greenhouse is an important part of facility agriculture and a typical application scenario for modern agricultural technology. The greenhouse environment is nonlinear, strongly coupled, subject to large inertia and multiple disturbances, and involves many environmental factors, making it a typical complex system [7]. In smart greenhouses, control commands are mostly triggered by complex events carrying multi-dimensional information. In this paper, we build an aggregation structure for complex events in the greenhouse and apply the technology to the greenhouse as a whole. The core contributions are as follows. First, through analysis of the information transmission process in the greenhouse, combined with the characteristics of the scene, we form a CEP information structure with predictive modules, which supports the wider adoption of CEP technology in agriculture. Second, we point out the importance of extreme conditions in greenhouse environment prediction for model evaluation; by improving the loss function of the machine learning algorithms, the prediction performance of several algorithms under such conditions is improved. Third, applying CEP technology to intelligent greenhouse control scenarios, we obtain a practical complex event processing system for greenhouse control.
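Editorial note: the abstract does not specify how the loss function was modified. One common way to emphasize extreme conditions in regression is to up-weight squared errors in the distribution tails; the sketch below illustrates that idea only. The quantile thresholds and the tail weight are illustrative assumptions, not values from the paper.

```python
import numpy as np

def extreme_weighted_mse(y_true, y_pred, low_q=0.1, high_q=0.9, tail_weight=5.0):
    """MSE that up-weights samples whose target lies in the distribution tails,
    penalizing a model more for missing extreme greenhouse conditions."""
    lo, hi = np.quantile(y_true, [low_q, high_q])
    weights = np.where((y_true < lo) | (y_true > hi), tail_weight, 1.0)
    return float(np.mean(weights * (y_true - y_pred) ** 2))

# Toy example: the heat spike the model underestimates dominates the loss.
y_true = np.array([20.0, 21.0, 22.0, 35.0])
y_pred = np.array([20.5, 21.2, 21.8, 28.0])
print(extreme_weighted_mse(y_true, y_pred))
```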
2

Wang, Yongheng, Hui Gao, and Guidan Chen. "Predictive complex event processing based on evolving Bayesian networks." Pattern Recognition Letters 105 (April 2018): 207–16. http://dx.doi.org/10.1016/j.patrec.2017.05.008.

3

Nawaz, Falak, Naeem Khalid Janjua, and Omar Khadeer Hussain. "PERCEPTUS: Predictive complex event processing and reasoning for IoT-enabled supply chain." Knowledge-Based Systems 180 (September 2019): 133–46. http://dx.doi.org/10.1016/j.knosys.2019.05.024.

4

Mdhaffar, Afef, Ismael Bouassida Rodriguez, Khalil Charfi, Leila Abid, and Bernd Freisleben. "CEP4HFP: Complex Event Processing for Heart Failure Prediction." IEEE Transactions on NanoBioscience 16, no. 8 (December 2017): 708–17. http://dx.doi.org/10.1109/tnb.2017.2769671.

5

Zámečníková, Eva, and Jitka Kreslíková. "Performance Measurement of Complex Event Platforms." Journal of Information and Organizational Sciences 40, no. 2 (December 9, 2016): 237–54. http://dx.doi.org/10.31341/jios.40.2.5.

Abstract:
The aim of this paper is to find and compare existing complex event processing (CEP) platforms. CEP platforms generally serve for processing and/or predicting high-frequency data. We intend to use a CEP platform for processing complex time series and to integrate a solution for a newly proposed decision-making method. The decision-making process will be described by a formal grammar. As there are many CEP solutions, we take the following characteristics into consideration: real-time processing, the ability to process high-volume data from multiple sources, platform independence, the possibility of integration with a user solution, and an open license. First, we discuss existing CEP tools and their specific uses in practice. Then we present the design of a method for formalizing the business rules used for decision making. Afterwards, we focus on the two platforms that seem best suited for integrating our solution and list the main pros and cons of each approach. The next part is devoted to benchmark platforms for CEP. The final part is devoted to experimental measurements of a platform with the integrated decision-support method.
6

Terroso-Sáenz, Fernando, Jesús Cuenca-Jara, Aurora González-Vidal, and Antonio F. Skarmeta. "Human Mobility Prediction Based on Social Media with Complex Event Processing." International Journal of Distributed Sensor Networks 12, no. 9 (September 2016): 5836392. http://dx.doi.org/10.1177/155014775836392.

7

Arwan, Achmad. "Prediction of Increasing Production Activities using Combination of Query Aggregation on Complex Events Processing and Neural Network." Register: Jurnal Ilmiah Teknologi Sistem Informasi 2, no. 2 (July 1, 2016): 79. http://dx.doi.org/10.26594/r.v2i2.550.

Abstract:
Productions, orders, sales, and shipments are series of interrelated events within the manufacturing industry, and these events are recorded in an event log. Complex event processing is a method used to analyze whether certain combinations of events (opportunities/threats) occur in a system, so that they can be addressed quickly and appropriately. An artificial neural network is used to classify production-increase activities. The series of events that caused production increases is used as training data to obtain the network weights that produce the activation value. Aggregated event-log counts are fed into the neural network inputs to compute the activation value. When the value exceeds a threshold obtained from training, the system issues a signal to increase production; otherwise, it keeps monitoring the events. Experimental results show that the accuracy of this method is 77% on 39 series of event streams. Keywords: complex event processing, event, neural networks, process, production increase prediction.
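Editorial note: a minimal sketch of the decision step this abstract describes, in which aggregated event-log counts feed a trained network and a signal fires when the activation exceeds a threshold. The feature choice, weights, bias, and threshold below are placeholders, not the paper's trained values.

```python
import numpy as np

# Hypothetical aggregated counts from one time window of the event log:
# [orders, sales, shipments, productions]. In the paper these weights would
# come from training on windows labelled "production increase".
weights = np.array([0.8, 0.9, 0.4, -0.3])
bias = -1.5
THRESHOLD = 0.7

def activation(window_counts):
    """Logistic activation over aggregated event counts (single-layer net)."""
    return 1.0 / (1.0 + np.exp(-(window_counts @ weights + bias)))

window = np.array([3, 4, 2, 1])  # counts aggregated by the CEP queries
if activation(window) > THRESHOLD:
    print("signal: increase production")
else:
    print("keep monitoring events")
```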
8

Fu, Bin Bin, and Jie Zhu. "A Research on Complex Event Processing Technology Based on Smart Logistic System." Applied Mechanics and Materials 722 (December 2014): 430–35. http://dx.doi.org/10.4028/www.scientific.net/amm.722.430.

Abstract:
With IoT technology developing and costs falling, its application in the supply chain is only a matter of time. A smart logistic system is one IoT application in the supply chain that solves difficult problems such as acquiring underlying data and transferring information. To achieve higher-level applications and solve more complex problems, such as improving inventory management accuracy, reducing supply chain management costs, improving the accuracy of supply and demand prediction, and enabling the supply chain to react rapidly, complex event processing technology is needed. This paper introduces how to apply complex event processing technology to an IoT-based supply chain system. In this way, valuable information can be extracted by processing large numbers of simple events.
9

Cannon, Jonathan. "Expectancy-based rhythmic entrainment as continuous Bayesian inference." PLOS Computational Biology 17, no. 6 (June 9, 2021): e1009025. http://dx.doi.org/10.1371/journal.pcbi.1009025.

Abstract:
When presented with complex rhythmic auditory stimuli, humans are able to track underlying temporal structure (e.g., a “beat”), both covertly and with their movements. This capacity goes far beyond that of a simple entrained oscillator, drawing on contextual and enculturated timing expectations and adjusting rapidly to perturbations in event timing, phase, and tempo. Previous modeling work has described how entrainment to rhythms may be shaped by event timing expectations, but sheds little light on any underlying computational principles that could unify the phenomenon of expectation-based entrainment with other brain processes. Inspired by the predictive processing framework, we propose that the problem of rhythm tracking is naturally characterized as a problem of continuously estimating an underlying phase and tempo based on precise event times and their correspondence to timing expectations. We present two inference problems formalizing this insight: PIPPET (Phase Inference from Point Process Event Timing) and PATIPPET (Phase and Tempo Inference). Variational solutions to these inference problems resemble previous “Dynamic Attending” models of perceptual entrainment, but introduce new terms representing the dynamics of uncertainty and the influence of expectations in the absence of sensory events. These terms allow us to model multiple characteristics of covert and motor human rhythm tracking not addressed by other models, including sensitivity of error corrections to inter-event interval and perceived tempo changes induced by event omissions. We show that positing these novel influences in human entrainment yields a range of testable behavioral predictions. Guided by recent neurophysiological observations, we attempt to align the phase inference framework with a specific brain implementation. We also explore the potential of this normative framework to guide the interpretation of experimental data and serve as building blocks for even richer predictive processing and active inference models of timing.
10

Wang, Yongheng, Xiaozan Zhang, and Zengwang Wang. "A Proactive Decision Support System for Online Event Streams." International Journal of Information Technology & Decision Making 17, no. 06 (November 2018): 1891–913. http://dx.doi.org/10.1142/s0219622018500463.

Abstract:
In-stream big data processing is an important part of big data processing. Proactive decision support systems can predict future system states and execute actions to avoid unwanted states. In this paper, we propose a proactive decision support system for online event streams. Based on Complex Event Processing (CEP) technology, this method uses a structure-varying dynamic Bayesian network to predict future events and system states. Different Bayesian network structures are learned and used according to the event context. A networked distributed Markov decision process model with predicted states is proposed as the sequential decision-making model, and a Q-learning method is investigated to find the optimal joint policy. Experimental evaluations show that this method works well for congestion control in transportation systems.
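Editorial note: the paper investigates Q-learning over a networked distributed MDP; as a reference point, here is the standard tabular Q-learning backup such methods build on, with an invented stand-in environment. States could be discretized congestion levels and actions signal-timing choices; all dynamics and rewards below are made up for illustration.

```python
import numpy as np

n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def step(state, action):
    """Stand-in environment: congestion drifts randomly; action 2 relieves it.
    Reward penalizes high congestion."""
    next_state = int(np.clip(state + rng.integers(-1, 2) - (action == 2),
                             0, n_states - 1))
    return next_state, -float(next_state)

state = 2
for _ in range(10_000):
    # Epsilon-greedy action selection.
    action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Standard Q-learning backup toward the greedy value of the next state.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q.round(2))
```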
11

Joch, Michael, Mathias Hegele, Heiko Maurer, Hermann Müller, and Lisa Katharina Maurer. "Brain negativity as an indicator of predictive error processing: the contribution of visual action effect monitoring." Journal of Neurophysiology 118, no. 1 (July 1, 2017): 486–95. http://dx.doi.org/10.1152/jn.00036.2017.

Abstract:
The error (related) negativity (Ne/ERN) is an event-related potential in the electroencephalogram (EEG) correlating with error processing. Its conditions of appearance before terminal external error information suggest that the Ne/ERN is indicative of predictive processes in the evaluation of errors. The aim of the present study was to specifically examine the Ne/ERN in a complex motor task and to particularly rule out other explaining sources of the Ne/ERN aside from error prediction processes. To this end, we focused on the dependency of the Ne/ERN on visual monitoring about the action outcome after movement termination but before result feedback (action effect monitoring). Participants performed a semi-virtual throwing task by using a manipulandum to throw a virtual ball displayed on a computer screen to hit a target object. Visual feedback about the ball flying to the target was masked to prevent action effect monitoring. Participants received a static feedback about the action outcome (850 ms) after each trial. We found a significant negative deflection in the average EEG curves of the error trials peaking at ~250 ms after ball release, i.e., before error feedback. Furthermore, this Ne/ERN signal did not depend on visual ball-flight monitoring after release. We conclude that the Ne/ERN has the potential to indicate error prediction in motor tasks and that it exists even in the absence of action effect monitoring. NEW & NOTEWORTHY In this study, we are separating different kinds of possible contributors to an electroencephalogram (EEG) error correlate (Ne/ERN) in a throwing task. We tested the influence of action effect monitoring on the Ne/ERN amplitude in the EEG. We used a task that allows us to restrict movement correction and action effect monitoring and to control the onset of result feedback. We ascribe the Ne/ERN to predictive error processing where a conscious feeling of failure is not a prerequisite.
12

Worsnop, Rochelle P., Michael Scheuerer, Thomas M. Hamill, and Julie K. Lundquist. "Generating wind power scenarios for probabilistic ramp event prediction using multivariate statistical post-processing." Wind Energy Science 3, no. 1 (June 14, 2018): 371–93. http://dx.doi.org/10.5194/wes-3-371-2018.

Abstract:
Wind power forecasting is gaining international significance as more regions promote policies to increase the use of renewable energy. Wind ramps, large variations in wind power production during a period of minutes to hours, challenge utilities and electrical balancing authorities. A sudden decrease in wind-energy production must be balanced by other power generators to meet energy demands, while a sharp increase in unexpected production results in excess power that may not be used in the power grid, leading to a loss of potential profits. In this study, we compare different methods to generate probabilistic ramp forecasts from the High Resolution Rapid Refresh (HRRR) numerical weather prediction model with up to 12 h of lead time at two tall-tower locations in the United States. We validate model performance using 21 months of 80 m wind speed observations from towers in Boulder, Colorado, and near the Columbia River gorge in eastern Oregon. We employ four statistical post-processing methods, three of which are not currently used in the literature for wind forecasting. These procedures correct biases in the model and generate short-term wind speed scenarios which are then converted to power scenarios. This probabilistic enhancement of HRRR point forecasts provides valuable uncertainty information of ramp events and improves the skill of predicting ramp events over the raw forecasts. We compute Brier skill scores for each method with regard to predicting up- and down-ramps to determine which method provides the best prediction. We find that the Standard Schaake shuffle method yields the highest skill at predicting ramp events for these datasets, especially for up-ramp events at the Oregon site. Increased skill for ramp prediction is limited at the Boulder, CO, site using any of the multivariate methods because of the poor initial forecasts in this area of complex terrain. These statistical methods can be implemented by wind farm operators to generate a range of possible wind speed and power scenarios to aid and optimize decisions before ramp events occur.
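Editorial note: the Brier skill score used here to rank post-processing methods is standard; a compact sketch follows, assuming the usual climatological base-rate forecast as the reference. The toy forecasts and outcomes are invented.

```python
import numpy as np

def brier_score(prob_forecasts, outcomes):
    """Brier score for binary events: mean squared difference between
    forecast probability and observed outcome (1 = ramp occurred)."""
    p, o = np.asarray(prob_forecasts, dtype=float), np.asarray(outcomes, dtype=float)
    return float(np.mean((p - o) ** 2))

def brier_skill_score(prob_forecasts, outcomes):
    """Skill relative to a climatological reference that always forecasts the
    observed base rate; positive values mean the forecast beats climatology."""
    o = np.asarray(outcomes, dtype=float)
    bs = brier_score(prob_forecasts, o)
    bs_ref = brier_score(np.full_like(o, o.mean()), o)
    return 1.0 - bs / bs_ref

# Toy example: five up-ramp probability forecasts against observed ramps.
print(brier_skill_score([0.9, 0.2, 0.7, 0.1, 0.8], [1, 0, 1, 0, 1]))
```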
13

Chen, Yangkang, and Jitao Ma. "Random noise attenuation by f-x empirical-mode decomposition predictive filtering." GEOPHYSICS 79, no. 3 (May 1, 2014): V81–V91. http://dx.doi.org/10.1190/geo2013-0080.1.

Abstract:
Random noise attenuation has always played an important role in seismic data processing. One of the most widely used methods for suppressing random noise is f-x predictive filtering. When the subsurface structure becomes complex, this method suffers from higher prediction errors owing to the large number of different dip components that need to be predicted. We develop a novel denoising method termed f-x empirical-mode decomposition (EMD) predictive filtering. This new scheme solves the problem that makes f-x EMD ineffective with complex seismic data and, by making the prediction more precise, removes the limitation of conventional f-x predictive filtering when dealing with multidip seismic profiles. In this method, we first apply EMD to each frequency slice in the f-x domain and obtain several intrinsic mode functions (IMFs). Then, an autoregressive model is applied to the sum of the first few IMFs, which contain the high-dip-angle components, to predict the useful steeper events. Finally, the predicted events are added to the sum of the remaining IMFs. This process improves the prediction precision by using an EMD-based dip filter to reduce the dip components before f-x predictive filtering. Synthetic and real data sets demonstrate the performance of the proposed method in preserving more useful energy.
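Editorial note: a sketch of the f-x EMD stage the abstract describes, assuming the PyEMD package provides the 1-D decomposition. This shows only the EMD-based dip separation (dropping the first IMF of each frequency slice); the paper's full method additionally fits an autoregressive prediction filter to the first IMFs to recover useful steep events before adding them back.

```python
import numpy as np
from PyEMD import EMD  # assumption: the PyEMD (EMD-signal) package is available

def fx_emd(section, n_drop=1):
    """f-x EMD sketch: FFT each trace into the f-x domain, run EMD along the
    spatial axis of every frequency slice (real and imaginary parts treated
    separately), drop the first IMF(s), which carry the steeply dipping,
    noise-dominated components, and inverse FFT back to t-x.
    `section` is a 2-D array (time samples x traces)."""
    fx = np.fft.rfft(section, axis=0)
    emd = EMD()
    out = np.zeros_like(fx)
    for i, freq_slice in enumerate(fx):
        for part, unit in ((freq_slice.real, 1.0), (freq_slice.imag, 1j)):
            imfs = emd.emd(part.copy())
            if imfs.shape[0] > n_drop:  # keep only the smoother remaining IMFs
                out[i] += unit * imfs[n_drop:].sum(axis=0)
    return np.fft.irfft(out, n=section.shape[0], axis=0)
```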
14

Bao, Jie, Uldis Bojars, Tanzeem Choudhury, Li Ding, Mark Greaves, Ashish Kapoor, Sandy Louchart, et al. "Reports of the AAAI 2009 Spring Symposia." AI Magazine 30, no. 3 (July 7, 2009): 89. http://dx.doi.org/10.1609/aimag.v30i3.2253.

Abstract:
The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, was pleased to present the 2009 Spring Symposium Series, held Monday through Wednesday, March 23–25, 2009 at Stanford University. The titles of the nine symposia were Agents that Learn from Human Teachers, Benchmarking of Qualitative Spatial and Temporal Reasoning Systems, Experimental Design for Real-World Systems, Human Behavior Modeling, Intelligent Event Processing, Intelligent Narrative Technologies II, Learning by Reading and Learning to Read, Social Semantic Web: Where Web 2.0 Meets Web 3.0, and Technosocial Predictive Analytics. The goal of the Agents that Learn from Human Teachers symposium was to investigate how we can enable software and robotics agents to learn from real-time interaction with an everyday human partner. The aim of the Benchmarking of Qualitative Spatial and Temporal Reasoning Systems symposium was to initiate the development of a problem repository in the field of qualitative spatial and temporal reasoning and identify a graded set of challenges for future midterm and long-term research. The Experimental Design symposium discussed the challenges of evaluating AI systems. The Human Behavior Modeling symposium explored reasoning methods for understanding various aspects of human behavior, especially in the context of designing intelligent systems that interact with humans. The Intelligent Event Processing symposium discussed the need for more AI-based approaches in event processing and defined a kind of research agenda for the field, coined as intelligent complex event processing (iCEP). The Intelligent Narrative Technologies II AAAI symposium discussed innovations, progress, and novel techniques in the research domain. The Learning by Reading and Learning to Read symposium explored two aspects of making natural language texts semantically accessible to, and processable by, machines. The Social Semantic Web symposium focused on the real-world grand challenges in this area. Finally, the Technosocial Predictive Analytics symposium explored new methods for anticipatory analytical thinking that provide decision advantage through the integration of human and physical models.
15

Joch, Michael, Mathias Hegele, Heiko Maurer, Hermann Müller, and Lisa K. Maurer. "Online Movement Monitoring Modulates Feedback Processing in Motor Learning: An Analysis of Event-Related Potentials." Journal of Motor Learning and Development 6, s1 (April 2018): S138–S153. http://dx.doi.org/10.1123/jmld.2016-0075.

Abstract:
Motor learning can be monitored by observing the development of neural correlates of error processing. Among these neural correlates, the error- and feedback-related negativity (Ne/ERN and FRN) represent error processing mechanisms. While the Ne/ERN is more related to error prediction, the FRN is found after an error is manifested. The questions the current study strives to answer are: What information is needed by the system to make error predictions and how is this represented by the Ne/ERN and FRN in a complex motor task? We reduced the information and increased the difficulty level for the prediction in a semivirtual throwing task and found no Ne/ERN but a large FRN when the action result was finally observed (hitting or missing a target). We assume that uncertainty for error prediction was too high (either due to insufficient information or due to lacking prerequisites for prediction), such that error processing had to be mainly based on feedback. The finding is in line with the reinforcement theory of learning, after which Ne/ERN and FRN should behave complementary.
16

Vera-Constán, Fátima, Irune Fernández-Prieto, Joel García-Morera, and Jordi Navarra. "Predictable variations in auditory pitch modulate the spatial processing of visual stimuli: An ERP study." Seeing and Perceiving 25 (2012): 25. http://dx.doi.org/10.1163/187847612x646488.

Abstract:
We investigated whether perceiving predictable ‘ups and downs’ in acoustic pitch (as can be heard in musical melodies) can influence the spatial processing of visual stimuli as a consequence of a ‘spatial recoding’ of sound (see Foster and Zatorre, 2010; Rusconi et al., 2006). Event-related potentials (ERPs) were recorded while participants performed a color discrimination task of a visual target that could appear either above or below a centrally-presented fixation point. Each experimental trial started with an auditory isochronous stream of 11 tones including a high- and a low-pitched tone. The visual target appeared isochronously after the last tone. In the ‘non-predictive’ condition, the tones were presented in an erratic fashion (e.g., ‘high-low-low-high-high-low-high …’). In the ‘predictive condition’, the melodic combination of high- and low-pitched tones was highly predictable (e.g., ‘low-high-low-high-low …’). Within the predictive condition, the visual stimuli appeared congruently or incongruently with respect to the melody (‘… low-high-low-high-low-UP’ or ‘… low-high-low-high-low-DOWN’, respectively). Participants showed faster responses when the visual target appeared after a predictive melody. Electrophysiologically, early (25–150 ms) amplitude effects of predictability were observed in frontal and parietal regions, spreading to central regions (N1) afterwards. Predictability effects were also found in the P2–N2 complex and the P3 in central and parietal regions. Significant auditory-to-visual congruency effects were also observed in the parieto-occipital P3 component. Our findings reveal the existence of crossmodal effects of perceiving auditory isochronous melodies on visual temporal orienting. More importantly, our results suggest that pitch information can be transformed into a spatial code that shapes the spatial processing in other modalities such as vision.
17

Wilson, Ryan, Patrick H. J. Mercier, Bussaraporn Patarachao, and Alessandro Navarra. "Partial Least Squares Regression of Oil Sands Processing Variables within Discrete Event Simulation Digital Twin." Minerals 11, no. 7 (June 26, 2021): 689. http://dx.doi.org/10.3390/min11070689.

Abstract:
Oil remains a major contributor to global primary energy supply and is, thus, fundamental to the continued functioning of modern society and related industries. Conventional oil and gas reserves are finite and are being depleted at a relatively rapid pace. With alternative fuels and technologies still unable to fill the gap, research and development of unconventional petroleum resources have accelerated markedly in the past 20 years. With some of the largest bitumen deposits in the world, Canada has an active oil mining and refining industry. Bitumen deposits, also called oil sands, are formed in complex geological environments and subject to a host of syn- and post-depositional processes. As a result, some ores are heterogeneous, at both individual reservoir and regional scales, which poses significant problems in terms of extractive processing. Moreover, with increased environmental awareness and enhanced governmental regulations and industry best practices, it is critical for oil sands producers to improve process efficiencies across the spectrum. Discrete event simulation (DES) is a computational paradigm to develop dynamic digital twins, including the interactions of critical variables and processes. In the case of mining systems, the digital twin includes aspects of geological uncertainty. The resulting simulations include alternate operational modes that are characterized by separate operational policies and tactics. The current DES framework has been customized to integrate predictive modelling data, generated via partial least squares (PLS) regression, in order to evaluate system-wide response to geological uncertainty. Sample computations that are based on data from Canada’s oil sands are presented, showing the framework to be a powerful tool to assess and attenuate operational risk factors in the extractive processing of bitumen deposits. Specifically, this work addresses blending control strategies prior to bitumen extraction and provides a pathway to incorporate geological variation into decision-making processes throughout the value chain.
18

Zhou, Yinglian, and Jifeng Chen. "Traffic Change Forecast and Decision Based on Variable Structure Dynamic Bayesian Network." International Journal of Decision Support System Technology 13, no. 2 (April 2021): 45–61. http://dx.doi.org/10.4018/ijdsst.2021040103.

Abstract:
The rapid development of the internet of things (IoT) and in-stream big data processing technology has brought new opportunities for research on intelligent transportation systems. Traffic forecasting has always been a key issue in smart transportation systems. Aiming at the problems that a fixed model cannot adapt to multiple environments in traffic flow prediction and that models must be updated for streaming data, a traffic flow prediction method based on a variable-structure dynamic Bayesian network is proposed. Based on complex event processing and event context, this method divides historical data through context clustering and supports cluster updates through online clustering of event streams. For each cluster, a search-and-score method is used to learn the corresponding Bayesian network structure, and the Bayesian network is approximated with a Gaussian mixture model. When forecasting online, a suitable model or combination of models is selected according to the current context.
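Editorial note: a sketch of the context-to-model routing this abstract describes. Historical windows are clustered into contexts, one predictor is fitted per context, and the model is chosen online from the current window's context. Plain scikit-learn regressors stand in for the paper's per-cluster Bayesian networks; all data here are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))          # features of historical traffic windows
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=300)

# Offline: cluster windows into contexts, fit one model per context.
contexts = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
models = {
    c: LinearRegression().fit(X[contexts.labels_ == c], y[contexts.labels_ == c])
    for c in range(3)
}

# Online: route an incoming window to the model of its context.
x_new = rng.normal(size=(1, 4))
c = int(contexts.predict(x_new)[0])
print(models[c].predict(x_new))
```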
19

Lee, Jae Yun, Young Geun Yoon, Tae Keun Oh, Seunghee Park, and Sang Il Ryu. "A Study on Data Pre-Processing and Accident Prediction Modelling for Occupational Accident Analysis in the Construction Industry." Applied Sciences 10, no. 21 (November 9, 2020): 7949. http://dx.doi.org/10.3390/app10217949.

Abstract:
In the construction industry, it is difficult to predict occupational accidents because various accident characteristics arise simultaneously and organically in different types of work. Furthermore, even when analyzing occupational accident data, it is difficult to deduce meaningful results because the data recorded by the incident investigator are qualitative and include a wide variety of data types and categories. Recently, numerous studies have used machine learning to analyze the correlations in such complex construction accident data; however, the focus heretofore has been on predicting severity from various variables, and several limitations remain in deriving correlations between features. Thus, this paper proposes a data processing procedure that can efficiently manipulate accident data using optimal machine learning techniques and derive and systematize meaningful variables to rationally approach such complex problems. In particular, among the various variables, the most influential variables are derived through methods such as clustering, chi-square, Cramer's V, and predictor importance; then, the analysis is simplified by optimally grouping the variables. For accident data with optimal variables and elements, a predictive model is constructed between variables, using a support vector machine and a decision-tree-based ensemble; then, the correlation between the dependent and independent variables is analyzed through an alluvial flow diagram for several cases. In summary, a new procedure for data preprocessing and accident prediction modelling is introduced to overcome the difficulties posed by complex and diverse construction occupational accident data, and effective accident prevention becomes possible by deriving correlations of construction accidents through this process.
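Editorial note: of the variable-screening statistics the abstract lists, Cramér's V has a compact closed form; a short sketch using scipy's chi-square test follows. The toy contingency table is invented.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramér's V for a contingency table of two categorical variables:
    sqrt(chi2 / (n * (min(r, c) - 1))), ranging from 0 (no association) to 1."""
    table = np.asarray(table)
    chi2 = chi2_contingency(table)[0]
    n = table.sum()
    r, c = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, c) - 1))))

# Toy table: accident type (rows) vs. work type (columns).
print(cramers_v([[30, 10, 5], [10, 25, 10], [5, 10, 20]]))
```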
20

Lingvay, Iosif, Victorin Emilian Toader, Ovidiu Ciogescu, Adriana Borș, and Andrei Mihai. "Complex System for Earthquake Prediction, Warning and Local Assessment of Seismic Events." Electrotehnica, Electronica, Automatica 69, no. 1 (February 15, 2021): 80–89. http://dx.doi.org/10.46904/eea.21.69.1.1108011.

Abstract:
A complex system for zonal earthquake prediction, warning, and local assessment of seismic events has been designed, implemented, and experimentally validated. The system was designed to simultaneously ensure: the reception of warning signals following earthquakes with epicentres within a radius of 1000 km; the acquisition of local precursor data for a possible prediction of seismic events with epicentres in the perimeter of the targeted locality, and/or the improvement of the Earth-physics database acquired and processed centrally at the national seismic dispatcher; and the acquisition of data on the intensity of local seismic movements, based on which, when a predetermined threshold considered dangerous is exceeded, a real-time action order is issued to protect high-risk equipment and installations in operation. The system is structured around the national seismic dispatcher (DSN, with the role of acquiring seismic data from the territory) connected by a bidirectional communication system to a local dispatcher (DL), which is provided with a system for acquiring and storing local seismic data (a 3D vibration detector and temperature transducer mounted in a 40 m deep drilled well, and a radon detector with associated air temperature, pressure, and humidity sensors mounted at the mouth of the well). Through its specialized software, the implemented system can take over warning signals received from the national seismic dispatcher, process the locally acquired data, and, after local validation of the seismic event, issue a real-time action command (when pre-established major-risk thresholds are exceeded) to the protections of high-risk installations operating in the targeted perimeter. The system, its interconnection networks, and the specialized software were tested and validated both by continuously recording the local seismic parameters and verifying the communication between the DSN and DL, and by processing two warnings for seismic events (on 30.10.2020, Mw = 7, Greece, and on 22.10.2020 at 20:22, ML = 4 R, Vrancea, RO). By processing the data recorded during these events, the speeds of the seismic waves in the respective directions were calculated. For the event of 30.10.2020 in Greece, a seismic wave speed of 7.418 km/s was determined; for the event of 22.10.2020 in Vrancea, the secondary waves were found to travel at 12.686 km/s and the surface seismic waves at 5.063 km/s. Comparing the locally recorded acceleration intensities with the pre-set threshold level for potentially dangerous events showed that these events were felt in Râmnicu Vâlcea at a level below the pre-set danger threshold, and consequently the application software did not generate a control signal to actuate the protection of high-risk equipment in operation.
21

Suboh, Mohd Zubir, Rosmina Jaafar, Nazrul Anuar Nayan, and Noor Hasmiza Harun. "ECG-based Detection and Prediction Models of Sudden Cardiac Death: Current Performances and New Perspectives on Signal Processing Techniques." International Journal of Online and Biomedical Engineering (iJOE) 15, no. 15 (December 17, 2019): 110. http://dx.doi.org/10.3991/ijoe.v15i15.11688.

Abstract:
Heart disease remains the leading cause of death globally, and around 50% of cardiac patients die of sudden cardiac death (SCD). Early detection and prediction of SCD have become an important research topic and are crucial for cardiac patients' survival. Electrocardiography (ECG) has always been the first screening method for patients with cardiac complaints, and it is a proven predictor of SCD. ECG parameters such as the RR interval, QT duration, QRS complex, J-point elevation, and T-wave alternans are effective in differentiating normal and SCD subjects. The objectives of this paper are to give an overview of SCD and to analyze several important ECG-based SCD detection and prediction models in terms of processing techniques and performance. Detailed discussions cover the four major stages of the models: ECG data, signal pre-processing, processing techniques, and classification methods. Heart rate variability (HRV) is an important SCD predictor and is widely used in detecting or predicting SCD. Studies show that SCD can be detected as early as one hour before the event using linear and nonlinear HRV features, and analyses of up to 3 hours have been carried out. However, the best prediction models can only detect SCD 6 minutes before the event, with an acceptable accuracy of 92.77%. Several arguments and recommendations regarding data preparation, processing and classification techniques, and the use of photoplethysmography together with ECG are presented so that future analyses can achieve better SCD detection accuracy.
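Editorial note: as background for the HRV features this review discusses, two standard time-domain measures are straightforward to compute from an RR series; a small sketch follows. The RR values are toy numbers, not data from any of the reviewed studies.

```python
import numpy as np

def hrv_time_features(rr_ms):
    """Two standard time-domain HRV features from RR intervals (ms):
    SDNN (overall variability) and RMSSD (beat-to-beat variability)."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return {"SDNN": float(sdnn), "RMSSD": float(rmssd)}

# Toy RR series (ms); reduced variability is one pattern studied as an SCD marker.
print(hrv_time_features([812, 790, 805, 798, 820, 801, 795]))
```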
22

Martinez-Caro, J. M., and M. D. Cano. "On the Identification and Prediction of Stalling Events to Improve QoE in Video Streaming." Electronics 10, no. 6 (March 22, 2021): 753. http://dx.doi.org/10.3390/electronics10060753.

Abstract:
Monitoring the quality of user experience is a challenge for video streaming services. Models for Quality of User Experience (QoE) evaluation such as ITU-T Rec. P.1203 are very promising. Among the input data they require are the occurrence and duration of stalling events. A stalling event is an interruption in the playback of multimedia content, and its negative impact on QoE is immense. Given the idiosyncrasy of this type of event, counting events and their durations is a complex task to automate, i.e., without the participation of the user who visualizes the events or without direct access to the final device. In this work, we propose two methods to overcome these limitations in video streaming using the DASH framework. The first method detects stalling events; for simplicity, it is based on the behavior of transport-layer data and is able to classify an IP packet as belonging (or not) to a stalling event. The second method predicts whether the next IP packet of a multimedia stream will belong to a stalling event, using a recurrent neural network with a variant of Long Short-Term Memory (LSTM). Our results show that the detection model can spot the occurrence of a stalling event before it is experienced by the user, and the prediction model can forecast whether the next packet will belong to a stalling event with an error rate of 10.83%, achieving an F1 score of 0.923.
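Editorial note: a sketch of the second method's core idea, an LSTM mapping a window of per-packet features to the probability that the next packet belongs to a stalling event. The feature choice, layer sizes, and 0.5 cutoff are assumptions for illustration, not the paper's configuration; PyTorch is used here.

```python
import torch
import torch.nn as nn

class StallPredictor(nn.Module):
    """LSTM over per-packet features (e.g. size, inter-arrival time) that
    outputs the probability the *next* packet belongs to a stalling event."""
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                              # x: (batch, seq, features)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))    # probability per flow

model = StallPredictor()
packets = torch.randn(8, 50, 4)       # 8 flows, 50 recent packets each (dummy data)
p_stall = model(packets)              # (8, 1) probabilities
print((p_stall > 0.5).squeeze(1))     # predicted stalling flags
```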
23

Baggio, Giosué, and André Fonseca. "Complex dynamics of semantic memory access in reading." Journal of The Royal Society Interface 9, no. 67 (June 29, 2011): 328–38. http://dx.doi.org/10.1098/rsif.2011.0289.

Abstract:
Understanding a word in context relies on a cascade of perceptual and conceptual processes, starting with modality-specific input decoding, and leading to the unification of the word's meaning into a discourse model. One critical cognitive event, turning a sensory stimulus into a meaningful linguistic sign, is the access of a semantic representation from memory. Little is known about the changes that activating a word's meaning brings about in cortical dynamics. We recorded the electroencephalogram (EEG) while participants read sentences that could contain a contextually unexpected word, such as ‘cold’ in ‘In July it is very cold outside’. We reconstructed trajectories in phase space from single-trial EEG time series, and we applied three nonlinear measures of predictability and complexity to each side of the semantic access boundary, estimated as the onset time of the N400 effect evoked by critical words. Relative to controls, unexpected words were associated with larger prediction errors preceding the onset of the N400. Accessing the meaning of such words produced a phase transition to lower entropy states, in which cortical processing becomes more predictable and more regular. Our study sheds new light on the dynamics of information flow through interfaces between sensory and memory systems during language processing.
24

Marshall, Robert E., and Katherine L. Horgan. "Multi-wavelength radar target detection in an extreme advection duct event." International Journal of Microwave and Wireless Technologies 3, no. 3 (April 4, 2011): 373–81. http://dx.doi.org/10.1017/s1759078711000225.

Abstract:
Near sea surface radio frequency (RF) refraction is four dimensional (4D) and can significantly impact the performance of radar systems. The refractivity field is dictated by the vertical thermodynamic structure of the constantly evolving marine atmospheric boundary layer (MABL). Logistical and budgetary constraints on meteorological measurements over water to capture the spatio-temporal structure of refractivity fields influencing radar performance have limited the knowledge of how and why radar performance is azimuth, range, and time dependent. Rapidly increasing computer processing speeds and decreasing memory capacity costs have supported the horizontal and vertical resolution requirements for mesoscale numerical weather prediction (NWP) models to resolve the thermodynamic structure in the MABL. Once modeled, refractivity structure is easily calculated from the thermodynamic structure. Mesoscale NWP models coupled with modern parabolic equation radar performance models can support the prediction of 4D radar performance in challenging non-homogeneous, near surface refractivity fields at the time and location of the modeler's choice. The NWP modeling presented in this paper demonstrates how large-scale offshore flow of warm and dry air over colder seas produces strong near surface RF trapping. Large land-sea temperature differences can produce near shore sea breezes and surface-based ducts. This paper describes modeled radar performance in such a complex ducting structure over the Persian Gulf during large-scale northwest atmospheric flow. The refractivity field was resolved by the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS® is a registered trademark of the Naval Research Laboratory) and the notional radar performance was modeled by the advanced refractive effects prediction system (AREPS). The results indicate strong spatial and wavelength-dependent enhancements and degradations in radar performance relative to a standard atmosphere.
25

Hale, Kelly S., Leah M. Reeves, Par Axelsson, and Kay M. Stanney. "Validation of Predictive Workload Component of the Multimodal Information Design Support (Mids) System." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 12 (September 2005): 1162–66. http://dx.doi.org/10.1177/154193120504901214.

Abstract:
Operators in military C4ISR environments are required to rapidly assess and respond to critical events accurately while monitoring ongoing operations. In order to assist in designing complex display systems to support C4ISR operators, it is critical to understand when and why the information displayed exceeds human capacity. Common metrics for evaluating operator overload are subjective reports, which rely on self-reporting techniques (e.g., NASA-TLX, SART). A new design tool, the Multimodal Information Design Support (MIDS) system, predicts times of operator overload and offers multimodal design guidelines to streamline cognitive processing, thus alleviating operator workload and optimizing situation awareness. This paper empirically validates MIDS' predictive power in determining situations that may cause operator overload by comparing MIDS output to subjective reports of workload and SA during C4ISR operations. Future studies will validate MIDS' design capabilities through redesign and evaluation of performance, workload, and SA in the optimized C4ISR task environment.
26

Pérez-Valero, Jesús, M. Victoria Caballero Pintado, Francisco Melgarejo, Antonio-Javier García-Sánchez, Joan Garcia-Haro, Francisco García Córdoba, José A. García Córdoba, et al. "Symbolic Recurrence Analysis of RR Interval to Detect Atrial Fibrillation." Journal of Clinical Medicine 8, no. 11 (November 2, 2019): 1840. http://dx.doi.org/10.3390/jcm8111840.

Abstract:
Atrial fibrillation (AF) is a sustained cardiac arrhythmia associated with stroke, heart failure, and related health conditions. Though easily diagnosed upon presentation in a clinical setting, the transient and/or intermittent emergence of AF episodes present diagnostic and clinical monitoring challenges that would ideally be met with automated ambulatory monitoring and detection. Current approaches to address these needs, commonly available both in smartphone applications and dedicated technologies, combine electrocardiogram (ECG) sensors with predictive algorithms to detect AF. These methods typically require extensive preprocessing, preliminary signal analysis, and the integration of a wide and complex array of features for the detection of AF events, and are consequently vulnerable to over-fitting. In this paper, we introduce the application of symbolic recurrence quantification analysis (SRQA) for the study of ECG signals and detection of AF events, which requires minimal pre-processing and allows the construction of highly accurate predictive algorithms from relatively few features. In addition, this approach is robust against commonly-encountered signal processing challenges that are expected in ambulatory monitoring contexts, including noisy and non-stationary data. We demonstrate the application of this method to yield a highly accurate predictive algorithm, which at optimal threshold values is 97.9% sensitive, 97.6% specific, and 97.7% accurate in classifying AF signals. To confirm the robust generalizability of this approach, we further evaluated its performance in the implementation of a 10-fold cross-validation paradigm, yielding 97.4% accuracy. In sum, these findings emphasize the robust utility of SRQA for the analysis of ECG signals and detection of AF. To the best of our knowledge, the proposed model is the first to incorporate symbolic analysis for AF beat detection.
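Editorial note: SRQA as used in the paper involves richer recurrence measures, but its two basic ingredients, symbolizing the RR series and counting symbol recurrences, can be sketched compactly. The quantile binning and the toy RR values below are assumptions for illustration.

```python
import numpy as np

def symbolize(rr, n_symbols=3):
    """Map each RR interval to a discrete symbol by quantile binning."""
    edges = np.quantile(rr, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(rr, edges)

def symbolic_recurrence_rate(rr, n_symbols=3):
    """Fraction of time pairs (i, j) sharing the same symbol: the simplest
    quantity read off a symbolic recurrence plot."""
    s = symbolize(np.asarray(rr, dtype=float), n_symbols)
    recurrence = (s[:, None] == s[None, :])
    return float(recurrence.mean())

# Irregular (AF-like) RR series scatter symbols and reduce structured
# recurrence; these values are toy numbers, not clinical data.
print(symbolic_recurrence_rate([810, 640, 930, 700, 880, 610, 900, 750]))
```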
27

Fuster, Joaquín M., and Steven L. Bressler. "Past Makes Future: Role of pFC in Prediction." Journal of Cognitive Neuroscience 27, no. 4 (April 2015): 639–54. http://dx.doi.org/10.1162/jocn_a_00746.

Abstract:
The pFC enables the essential human capacities for predicting future events and preadapting to them. These capacities rest on both the structure and dynamics of the human pFC. Structurally, pFC, together with posterior association cortex, is at the highest hierarchical level of cortical organization, harboring neural networks that represent complex goal-directed actions. Dynamically, pFC is at the highest level of the perception–action cycle, the circular processing loop through the cortex that interfaces the organism with the environment in the pursuit of goals. In its predictive and preadaptive roles, pFC supports cognitive functions that are critical for the temporal organization of future behavior, including planning, attentional set, working memory, decision-making, and error monitoring. These functions have a common future perspective and are dynamically intertwined in goal-directed action. They all utilize the same neural infrastructure: a vast array of widely distributed, overlapping, and interactive cortical networks of personal memory and semantic knowledge, named cognits, which are formed by synaptic reinforcement in learning and memory acquisition. From this cortex-wide reservoir of memory and knowledge, pFC generates purposeful, goal-directed actions that are preadapted to predicted future events.
28

Pazzaglia, Mariella. "Action and language grounding in the sensorimotor cortex." Language and Cognition 5, no. 2-3 (September 2013): 211–23. http://dx.doi.org/10.1515/langcog-2013-0015.

Abstract:
In this article, I will comment on recent advances in the research on the intersection between language and action. On the basis of the argument proposed by Arbib, I will consider an evolutionary scenario according to which language emerged from a basic imitation mechanism devoted to action representation. I will review relevant data from patients who present with gesture and language disorders, together with behavioral, neurophysiological, and neuroimaging evidence suggesting that specialized sensorimotor circuits underlie action processing and may ultimately ground complex aspects of language. Finally, in the last part of the article, I will discuss some of the future research on the interwoven processes of production and comprehension that are involved in the predictive mechanisms of action and language.
29

Rajendran, Vani G., Nicol S. Harper, and Jan W. H. Schnupp. "Auditory cortical representation of music favours the perceived beat." Royal Society Open Science 7, no. 3 (March 2020): 191194. http://dx.doi.org/10.1098/rsos.191194.

Abstract:
Previous research has shown that musical beat perception is a surprisingly complex phenomenon involving widespread neural coordination across higher-order sensory, motor and cognitive areas. However, the question of how low-level auditory processing must necessarily shape these dynamics, and therefore perception, is not well understood. Here, we present evidence that the auditory cortical representation of music, even in the absence of motor or top-down activations, already favours the beat that will be perceived. Extracellular firing rates in the rat auditory cortex were recorded in response to 20 musical excerpts diverse in tempo and genre, for which musical beat perception had been characterized by the tapping behaviour of 40 human listeners. We found that firing rates in the rat auditory cortex were on average higher on the beat than off the beat. This ‘neural emphasis’ distinguished the beat that was perceived from other possible interpretations of the beat, was predictive of the degree of tapping consensus across human listeners, and was accounted for by a spectrotemporal receptive field model. These findings strongly suggest that the ‘bottom-up’ processing of music performed by the auditory system predisposes the timing and clarity of the perceived musical beat.
30

Kettner, R. E., S. Mahamud, H. C. Leung, N. Sitkoff, J. C. Houk, B. W. Peterson, and A. G. Barto. "Prediction of Complex Two-Dimensional Trajectories by a Cerebellar Model of Smooth Pursuit Eye Movement." Journal of Neurophysiology 77, no. 4 (April 1, 1997): 2115–30. http://dx.doi.org/10.1152/jn.1997.77.4.2115.

Abstract:
A neural network model based on the anatomy and physiology of the cerebellum is presented that can generate both simple and complex predictive pursuit, while also responding in a feedback mode to visual perturbations from an ongoing trajectory. The model allows the prediction of complex movements by adding two features that are not present in other pursuit models: an array of inputs distributed over a range of physiologically justified delays, and a novel, biologically plausible learning rule that generated changes in synaptic strengths in response to retinal slip errors that arrive after long delays. To directly test the model, its output was compared with the behavior of monkeys tracking the same trajectories. There was a close correspondence between model and monkey performance. Complex target trajectories were created by summing two or three sinusoidal components of different frequencies along horizontal and/or vertical axes. Both the model and the monkeys were able to track these complex sum-of-sines trajectories with small phase delays that averaged 8 and 20 ms in magnitude, respectively. Both the model and the monkeys showed a consistent relationship between the high- and low-frequency components of pursuit: high-frequency components were tracked with small phase lags, whereas low-frequency components were tracked with phase leads. The model was also trained to track targets moving along a circular trajectory with infrequent right-angle perturbations that moved the target along a circle meridian. Before the perturbation, the model tracked the target with very small phase differences that averaged 5 ms. After the perturbation, the model overshot the target while continuing along the expected nonperturbed circular trajectory for 80 ms, before it moved toward the new perturbed trajectory. Monkeys showed similar behaviors with an average phase difference of 3 ms during circular pursuit, followed by a perturbation response after 90 ms. In both cases, the delays required to process visual information were much longer than delays associated with nonperturbed circular and sum-of-sines pursuit. This suggests that both the model and the eye make short-term predictions about future events to compensate for visual feedback delays in receiving information about the direction of a target moving along a changing trajectory. In addition, both the eye and the model can adjust to abrupt changes in target direction on the basis of visual feedback, but do so after significant processing delays.
31

Wang, Fusheng, Shaorong Liu, and Peiya Liu. "Complex RFID event processing." VLDB Journal 18, no. 4 (March 19, 2009): 913–31. http://dx.doi.org/10.1007/s00778-009-0139-0.

32

Wang, Di, Elke A. Rundensteiner, Han Wang, and Richard T. Ellison. "Active complex event processing." Proceedings of the VLDB Endowment 3, no. 1-2 (September 2010): 1545–48. http://dx.doi.org/10.14778/1920841.1921034.

33

Luckham, David. "Complex event processing (CEP)." ACM SIGSOFT Software Engineering Notes 25, no. 1 (January 2000): 99. http://dx.doi.org/10.1145/340855.341080.

34

Eckert, Michael, and François Bry. "Complex Event Processing (CEP)." Informatik-Spektrum 32, no. 2 (March 5, 2009): 163–67. http://dx.doi.org/10.1007/s00287-009-0329-6.

35

Wang, Yibing, Xianhong Xie, Shanshan Meng, Dandan Wu, Yuchao Chen, Fuxiao Jiang, and Bowen Zhu. "Magnitude Agreement, Occurrence Consistency, and Elevation Dependency of Satellite-Based Precipitation Products over the Tibetan Plateau." Remote Sensing 12, no. 11 (May 29, 2020): 1750. http://dx.doi.org/10.3390/rs12111750.

Abstract:
Satellite remote sensing is a practical technique to estimate global precipitation with adequate spatiotemporal resolution in ungauged regions. However, the performance of satellite-based precipitation products is variable and uncertain for the Tibetan Plateau (TP) because of its complex terrain and climate conditions. In this study, we evaluated the abilities of nine widely used satellite-based precipitation products over the Eastern Tibetan Plateau (ETP) and quantified precipitation dynamics over the entire TP. The evaluation was carried out from three aspects, i.e., magnitude agreement, occurrence consistency, and elevation dependency, from grid-cell to regional scales. The results show that the nine satellite-based products exhibited different agreement with gauge-based reference data, with median correlation coefficients ranging from 0.15 to 0.95. Three products (climate hazards group infrared precipitation with stations (CHIRPS), multi-source weighted-ensemble precipitation (MSWEP), and tropical rainfall measuring mission multi-satellite precipitation analysis (TMPA)) generally presented the best performance against the reference data, even in complex terrain regions, given their root mean square errors (RMSE) of less than 25 mm/month. The climate prediction center merged analysis of precipitation (CMAP) product has relatively coarse spatial resolution, but it also exhibited good performance, with a bias of less than 20% at the watershed scale. Two other products (precipitation estimation from remotely sensed information using artificial neural networks-cloud classification system (PERSIANN-CCS) and climate prediction center morphing technique-raw (CMORPH-RAW)) overestimated precipitation, with median RMSEs of 87 mm/month and 45 mm/month, respectively. All the precipitation products generally exhibited better agreement with the reference data for the rainy season and lower-elevation regions. All of the products captured precipitation occurrence well, with hit rates over 60% and similar percentages of missed and false events. According to the evaluation, the four products (CHIRPS, MSWEP, TMPA, and CMAP) revealed that annual precipitation over the TP fluctuated between 333 mm/yr and 488 mm/yr during the period 2003 to 2015. The study indicates the importance of integrating multiple data sources and post-processing (e.g., gauge data fusion and elevation correction) for satellite-based products and has implications for selecting suitable precipitation products for hydrological modeling and water resources assessment for the TP.
36

Hermle, Doris, Markus Keuschnig, Ingo Hartmeyer, Robert Delleske, and Michael Krautblatter. "Timely prediction potential of landslide early warning systems with multispectral remote sensing: a conceptual approach tested in the Sattelkar, Austria." Natural Hazards and Earth System Sciences 21, no. 9 (September 8, 2021): 2753–72. http://dx.doi.org/10.5194/nhess-21-2753-2021.

Abstract:
Abstract. While optical remote sensing has demonstrated its capabilities for landslide detection and monitoring, the spatial and temporal demands of landslide early warning systems (LEWSs) had not been met until recently. We introduce a novel conceptual approach to structure and quantitatively assess lead time for LEWSs. We analysed “time to warning” as a sequence: (i) time to collect, (ii) time to process and (iii) time to evaluate relevant optical data. The difference between the time to warning and the “forecasting window” (i.e. the time from the hazard becoming predictable until the event) is the lead time for reactive measures. We tested digital image correlation (DIC) on two well-suited spatiotemporal data sources, i.e. 3 m resolution PlanetScope daily imagery and 0.16 m resolution unmanned aerial system (UAS)-derived orthophotos, to reveal fast ground displacement and acceleration of a deep-seated, complex alpine mass movement leading to massive debris flow events. The time to warning for the UAS/PlanetScope totals 31/21 h and comprises the time to (i) collect – 12/14 h, (ii) process – 17/5 h and (iii) evaluate – 2/2 h, which is well below the forecasting window for recent benchmarks and thus leaves lead time for reactive measures. We show that optical remote sensing data can support LEWSs with sufficiently fast processing times, demonstrating the feasibility of optical sensors for LEWSs.
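
The lead-time bookkeeping described above is simple arithmetic: time to warning is the sum of the collection, processing and evaluation times, and lead time is the forecasting window minus that sum. A minimal sketch reproducing the quoted totals; the 48 h forecasting window is a placeholder for illustration, not a figure from the paper:

    def lead_time(collect_h, process_h, evaluate_h, forecasting_window_h):
        # Time to warning = collect + process + evaluate; lead time is what remains.
        time_to_warning = collect_h + process_h + evaluate_h
        return forecasting_window_h - time_to_warning, time_to_warning

    # UAS: 12 + 17 + 2 = 31 h; PlanetScope: 14 + 5 + 2 = 21 h (from the abstract).
    for name, (c, p, e) in {"UAS": (12, 17, 2), "PlanetScope": (14, 5, 2)}.items():
        lead, ttw = lead_time(c, p, e, forecasting_window_h=48)
        print(f"{name}: time to warning {ttw} h, lead time {lead} h")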
37

Wang, Di, Elke A. Rundensteiner, and Richard T. Ellison. "Active complex event processing over event streams." Proceedings of the VLDB Endowment 4, no. 10 (July 2011): 634–45. http://dx.doi.org/10.14778/2021017.2021021.

38

Jayasekara, Sachini, Sameera Kannangara, Tishan Dahanayakage, Isuru Ranawaka, Srinath Perera, and Vishaka Nanayakkara. "Wihidum: Distributed complex event processing." Journal of Parallel and Distributed Computing 79-80 (May 2015): 42–51. http://dx.doi.org/10.1016/j.jpdc.2015.03.002.

39

Fassan, Matteo. "Molecular Diagnostics in Pathology: Time for a Next-Generation Pathologist?" Archives of Pathology & Laboratory Medicine 142, no. 3 (March 1, 2018): 313–20. http://dx.doi.org/10.5858/arpa.2017-0269-ra.

Abstract:
Context.— Comprehensive molecular investigations of mainstream carcinogenic processes have led to the use of effective molecular targeted agents in most cases of solid tumors in clinical settings. Objective.— To update readers regarding the evolving role of the pathologist in the therapeutic decision-making process and the introduction of next-generation technologies into pathology practice. Data Sources.— Current literature on the topic, primarily sourced from the PubMed (National Center for Biotechnology Information, Bethesda, Maryland) database, was reviewed. Conclusions.— Adequate evaluation of cytologic-based and tissue-based predictive diagnostic biomarkers largely depends on both proper pathologic characterization and customized processing of biospecimens. Moreover, increased requests for molecular testing have paralleled the recent, sharp decrease in the tumor material available for analysis, which currently comprises cytology specimens or, at minimum, small biopsies in most cases of metastatic/advanced disease. Traditional diagnostic pathology has been completely revolutionized by the introduction of next-generation technologies, which provide multigene, targeted mutational profiling, even in the most complex clinical cases. Combining traditional and molecular knowledge, pathologists integrate the morphological, clinical, and molecular dimensions of a disease, leading to a proper diagnosis and, therefore, the most appropriate tailored therapy.
40

Xiangsheng, Kong. "RFID Event Analysis Based on Complex Event Processing." International Journal of Online Engineering (iJOE) 10, no. 1 (February 1, 2014): 5. http://dx.doi.org/10.3991/ijoe.v10i1.3049.

41

Wang, Y. H., K. Cao, and X. M. Zhang. "Complex event processing over distributed probabilistic event streams." Computers & Mathematics with Applications 66, no. 10 (December 2013): 1808–21. http://dx.doi.org/10.1016/j.camwa.2013.06.032.

42

Escolme, Angela, Ron F. Berry, Julie Hunt, Scott Halley, and Warren Potma. "Predictive Models of Mineralogy from Whole-Rock Assay Data: Case Study from the Productora Cu-Au-Mo Deposit, Chile." Economic Geology 114, no. 8 (December 1, 2019): 1513–42. http://dx.doi.org/10.5382/econgeo.2019.4650.

Abstract:
Abstract Mineralogy is a fundamental characteristic of a rock mass throughout the mining value chain. Understanding bulk mineralogy is critical when predicting processing performance. However, current methods for estimating complex bulk mineralogy are typically slow and expensive. Whole-rock geochemical data can be used to estimate bulk mineralogy in two ways: qualitatively, through alteration mapping, which classifies alteration assemblages using a combination of ternary diagrams and bivariate plots; or quantitatively, through calculated mineralogy. Both techniques were tested on a data set of multielement geochemistry and mineralogy measured by semiquantitative X-ray diffraction from the Productora Cu-Au-Mo deposit, Chile. Using geochemistry, samples from Productora were classified into populations based on their dominant alteration assemblage, including quartz-rich, Fe oxide, sodic, potassic, muscovite (sericite)-altered, clay-altered, and least-altered populations. Samples were also classified by their dominant sulfide mineralogy. The results indicate that alteration mapping through a range of graphical plots provides a rapid and simple appraisal of the dominant mineral assemblage that closely matches the measured mineralogy. In this study, calculated mineralogy using linear programming also generated robust quantitative estimates for major mineral phases, including quartz and total feldspars as well as pyrite, iron oxides, chalcopyrite, and molybdenite, which matched the measured mineralogy extremely well (R2 values greater than 0.78, low to moderate root mean square error). The results demonstrate that calculated mineralogy can be applied in the mining environment to significantly expand bulk mineralogy data and to quantitatively map mineralogical variability. The approach remained useful even though several minerals were challenging to model because of compositional similarities, and clays and carbonates could not be predicted accurately.
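
Calculated mineralogy by linear programming amounts to finding non-negative mineral proportions whose mixed composition best reproduces the whole-rock assay. The sketch below is a generic L1-misfit formulation, not the authors' implementation, and the two-mineral composition matrix is purely illustrative:

    import numpy as np
    from scipy.optimize import linprog

    def calculated_mineralogy(compositions, assay):
        """Estimate mineral weight fractions x (x >= 0, sum(x) = 1) minimising
        the L1 misfit |compositions @ x - assay| via linear programming.
        compositions: (m oxides, n minerals) in wt%; assay: (m,) whole-rock wt%."""
        m, n = compositions.shape
        # Decision vector z = [x (n mineral fractions), t (m residual bounds)].
        c = np.concatenate([np.zeros(n), np.ones(m)])       # minimise sum of t
        A_ub = np.block([[compositions, -np.eye(m)],
                         [-compositions, -np.eye(m)]])      # |Ax - b| <= t
        b_ub = np.concatenate([assay, -assay])
        A_eq = np.concatenate([np.ones(n), np.zeros(m)])[None, :]  # sum(x) = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (n + m), method="highs")
        return res.x[:n]

    # Illustrative two-mineral system: SiO2/Al2O3/Fe2O3 rows for nominal
    # quartz and K-feldspar compositions (wt%); values are not from the paper.
    A = np.array([[100.0, 64.8], [0.0, 18.3], [0.0, 0.0]])
    assay = np.array([82.4, 9.15, 0.1])
    print(calculated_mineralogy(A, assay))  # ~[0.5, 0.5]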
43

Li, Lifeng, and Jhon Silva-Castro. "Synthesis of single-hole signatures by group delay for ground vibration control in rock blasting." Journal of Vibration and Control 26, no. 13-14 (December 24, 2019): 1273–84. http://dx.doi.org/10.1177/1077546319892435.

Abstract:
Prediction and control of ground vibrations become essential with the development of neighborhoods in the proximity of active mining operations and the need for new infrastructure in urban centers, both of which require blasting. Novel ground vibration prediction models attempt to reproduce the whole vibration waveform from a blast and are based, in most cases, on vibrational information collected from a single blasthole. The single blasthole should have the same characteristics (geometry and weights of explosives) as the blastholes used in production shots. In some cases, collecting this fundamental information (the signature) is straightforward. In more complex cases, the fundamental information must be extracted from ground vibration data recorded during previous production shots. This study presents a novel methodology to recover the fundamental ground vibration information (the signature) from known information: one event waveform (a production-shot waveform) and the timing sequence used for the shot (the comb function). The methodology is based on the analysis of group delay, a concept widely used in signal processing, which is adapted here to the analysis of ground vibration waveforms. The methodology is developed using real data collected in coal and quarry mining operations, and a case study with step-by-step calculations is presented at the end of the document to show its benefits.
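
The model behind this methodology is that a production-shot waveform is the single-hole signature convolved with the comb (timing) function. The authors work with group delay; the sketch below instead uses plain water-level spectral division as a rough stand-in to illustrate the recovery problem, not their algorithm:

    import numpy as np

    def recover_signature(production, fire_times_s, fs, water_level=1e-2):
        """Estimate a single-hole signature s from a production waveform p,
        assuming p = s convolved with a comb of unit spikes at the firing
        times. Simple water-level spectral division, not group-delay analysis."""
        n = len(production)
        comb = np.zeros(n)
        comb[(np.asarray(fire_times_s) * fs).astype(int)] = 1.0
        P, C = np.fft.rfft(production), np.fft.rfft(comb)
        eps = water_level * np.max(np.abs(C))        # stabilise near-zero bins
        S = P * np.conj(C) / np.maximum(np.abs(C) ** 2, eps ** 2)
        return np.fft.irfft(S, n)

    # Toy check: 8 holes on 25 ms delays at 1 kHz sampling (values illustrative).
    fs = 1000.0
    t = np.arange(0, 1.0, 1 / fs)
    delays = np.arange(8) * 0.025
    true_sig = np.exp(-t / 0.05) * np.sin(2 * np.pi * 40 * t)
    comb = np.zeros(t.size)
    comb[(delays * fs).astype(int)] = 1.0
    production = np.convolve(true_sig, comb)[: t.size]
    estimate = recover_signature(production, delays, fs)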
44

Bruns, Julian, Florian Micklich, Johannes Kutterer, Andreas Abecker, and Philipp Zehnder. "Spatial Operators for Complex Event Processing." GI_Forum 1 (2020): 107–23. http://dx.doi.org/10.1553/giscience2020_02_s107.

45

Pi, Ming-feng, and Fei-qi Deng. "Manufacturing-oriented RFID complex event processing." Journal of Computer Applications 30, no. 10 (December 28, 2010): 2768–70. http://dx.doi.org/10.3724/sp.j.1087.2010.02768.

46

Cugola, Gianpaolo, and Alessandro Margara. "Complex event processing with T-REX." Journal of Systems and Software 85, no. 8 (August 2012): 1709–28. http://dx.doi.org/10.1016/j.jss.2012.03.056.

47

Leavitt, Neal. "Complex-Event Processing Poised for Growth." Computer 42, no. 4 (April 2009): 17–20. http://dx.doi.org/10.1109/mc.2009.109.

48

Scheevel, J. R., and K. Payrazyan. "Principal Component Analysis Applied to 3D Seismic Data for Reservoir Property Estimation." SPE Reservoir Evaluation & Engineering 4, no. 01 (February 1, 2001): 64–72. http://dx.doi.org/10.2118/69739-pa.

Abstract:
Summary We apply a common statistical tool, principal component analysis (PCA), to the problem of direct property estimation from three-dimensional (3D) seismic-amplitude data. We use PCA in a novel way to successfully make detailed effective-porosity predictions in channelized sand and shale. The novelty of this use revolves around the sampling method, which consists of a small vertical sampling window slid along each vertical trace in a cube of seismic-amplitude data. The window captures multiple, vertically adjacent amplitude samples, which are then treated as vectors for purposes of the PCA analysis. All vectors from all sample-window locations within the seismic-data volume form the set of input vectors for the PCA algorithm. The final output from the PCA algorithm can be a cube of assigned classes, whose clustering is based on the values of the most significant principal components (PCs). The clusters are used as a categorical variable when predicting reservoir properties away from well control. The novelty in this approach is that PCA is used to analyze covariance relationships between all vector elements (neighboring amplitude values) by using the statistical mass of the large number of vectors sampled in the seismic data set. Our approach results in a powerful signal-analysis method that is statistical in nature. We believe it offers data-driven objectivity and a potential for property extraction not easily achieved in model-driven, Fourier-based time-series methods of analysis (digital signal processing). We evaluate the effectiveness of our method by applying a cross-validation technique, alternately withholding each of the three wells drilled in the area and computing predicted effective porosity (PHIE) estimates at the withheld location by using the remaining two wells as hard data. This process is repeated three times, each time excluding only one of the wells as a blind control case. In each of the three blind control wells, our method predicts accurate estimates of the sand/shale distribution in the well and of effective porosity-thickness product values. The method properly predicts a low sand-to-shale ratio at the blind well location, even when the remaining two hard-data wells contain only high sand-to-shale ratios. Good predictive results from this study area make us optimistic that the method is valuable for general reservoir-property prediction from 3D seismic data, especially in areas of rapid lateral variation of the reservoir. We feel that this method of predicting properties from 3D seismic data is preferable to traditional, solely variogram-based geostatistical estimation methods. Such methods have difficulty capturing the detailed lithology distribution when limited by the hard-data control's sampling bias. This problem is especially acute in areas where rapid lateral geological variation is the rule. Our method effectively overcomes this limitation because it provides a deterministic soft template for reservoir-property distributions. Introduction: Reservoir Prediction from Seismic. The use of reflection seismic-attribute data for the prediction of detailed reservoir properties began at least as early as 1969.1 The use of seismic attributes for reservoir prediction has accelerated in recent years, especially with the advent of widely available high-quality 3D seismic data. In practice, a seismic attribute is any property derived from the seismic reflection (amplitude) signal during or after final processing.
Any attribute may be compared with a primary reservoir property or lithology in an attempt to devise a method of attribute-guided prediction of the primary property away from well control. The prediction method can vary from something as simple as a linear multiplier (single attribute) to multi-attribute analysis with canonical correlation techniques,2 geostatistical methods,3 or fully nonlinear, fuzzy methods.4 The pace of growth in prediction methodologies using seismic attributes seems to be outpaced only by the proliferation in the number and types of seismic attributes reported in the literature.5 As more researchers find predictive success with one or more new attributes, the list of viable reservoir-predictive attributes continues to grow. Chen and Sidney6 have cataloged more than 60 common seismic attributes along with descriptions of their apparent significance and use. Despite the rich history of seismic attributes in reservoir prediction, the practice remains difficult and uncertain. The bulk of this uncertainty arises from the unclear nature of the physics connecting many demonstrably useful attributes to a corresponding reservoir property. Because of the complex and varied physical processes responsible for various attributes, the unambiguous use of attributes for direct reservoir prediction will likely remain a challenge for years to come. In addition to the questions about the physical origin of some attributes, there is the possibility of encountering statistical pitfalls while using multiple attributes for empirical reservoir-property prediction. For example, it has been demonstrated that as the number of attributes used in an evaluation increases, the potential arises that one or more attributes will produce a false correlation with well data.7 Also, many attributes are derived with similar signal-processing methods and can, in some cases, be considered largely redundant with respect to their description of the seismic signal. Lendzionowski et al.8 maintain that the maximum number of independent attributes required to fully describe a trace segment is 2BT, where B = bandwidth (Hz) and T = trace-segment length (sec). If this is supportable, it suggests that most of the more common attributes are at least partially redundant. The danger of such redundancy is that it falsely enhances statistical correlation with the well property, which may suggest that many seemingly independent seismic attributes display similar well-property trends. Finally, applying a particular attribute-based approach successfully and reproducibly requires some subjectivity and prior experience on the part of the practitioner. This is a source of potential error that cannot be quantified but also, in most cases, cannot be avoided. Not coincidentally, the most successful workers in the field of reservoir prediction from seismic data are also the most experienced in the field.
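
The sliding-window sampling described in the summary maps directly onto off-the-shelf PCA and clustering. The following is a minimal sketch under those assumptions, not the authors' code; window length, number of components and number of classes are arbitrary choices here, not values from the study:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    def classify_cube(amplitude, window=9, n_components=3, n_classes=6):
        """Assign a class to every sample of a (nx, ny, nt) amplitude cube by
        PCA of sliding vertical windows followed by clustering of the PCs."""
        nx, ny, nt = amplitude.shape
        half = window // 2
        traces = amplitude.reshape(-1, nt)
        idx = np.arange(half, nt - half)
        # One window-length vector per vertical sample position on each trace.
        vectors = np.stack([traces[:, i - half:i + half + 1] for i in idx],
                           axis=1).reshape(-1, window)
        # PCA over the whole vector population, then cluster the leading PCs.
        scores = PCA(n_components=n_components).fit_transform(vectors)
        labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(scores)
        classes = np.full((nx * ny, nt), -1, dtype=int)  # -1 marks edge samples
        classes[:, idx] = labels.reshape(nx * ny, idx.size)
        return classes.reshape(nx, ny, nt)

The resulting class cube can then serve as the categorical soft template described in the summary when estimating reservoir properties away from well control.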
49

Jaffe, C. Carl. "Measures of Response: RECIST, WHO, and New Alternatives." Journal of Clinical Oncology 24, no. 20 (July 10, 2006): 3245–51. http://dx.doi.org/10.1200/jco.2006.06.5599.

Abstract:
RECIST (Response Evaluation Criteria in Solid Tumors) is a widely employed method, introduced in 2000, for assessing change in tumor size in response to therapy. The simplicity of the technique, however, contrasts sharply with the increasing sophistication of imaging instrumentation. Anatomically based imaging measurement, although supportive of drug development and key to some accelerated drug approvals, is being pressed to improve its methodologic robustness, particularly in light of more functionally based imaging, such as fluorodeoxyglucose positron emission tomography, that is sensitive to tissue molecular response. Nevertheless, the ready availability of computed tomography and magnetic resonance imaging machines largely assures anatomically based imaging a continuing role in clinical trials for the foreseeable future. Recent advances in image processing, enabled by the computational power of modern clinical scanners, open a considerable opportunity to characterize tumor response to therapy as a complement to image acquisition. Various alternative quantitative volumetric approaches have been proposed but have yet to gain wide acceptance in the clinical and regulatory communities, nor have these more complex techniques shown incontrovertible evidence of greater reproducibility or predictive value for clinical events and outcomes. Unless clinical trials are designed to prove the added value and unique clinical utility of these novel approaches, any theoretical benefit of these more elaborate methods could remain unfulfilled.
50

Mayer, Ruben, Boris Koldehofe, and Kurt Rothermel. "Predictable Low-Latency Event Detection With Parallel Complex Event Processing." IEEE Internet of Things Journal 2, no. 4 (August 2015): 274–86. http://dx.doi.org/10.1109/jiot.2015.2397316.
