Dissertations / Theses on the topic 'Predictive Complex Event Processing'

Consult the top 50 dissertations / theses for your research on the topic 'Predictive Complex Event Processing.'


1

Kammoun, Abderrahmen. "Enhancing Stream Processing and Complex Event Processing Systems." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSES012.

Abstract:
As more and more connected objects and sensory devices become part of our daily lives, the sea of high-velocity information flow keeps growing. This massive amount of data, produced at high rates, requires rapid insight to be useful in applications such as the Internet of Things, health care, and energy management. Traditional data storage and processing techniques have proven inefficient for this setting, giving rise to Data Stream Management and Complex Event Processing (CEP) systems. This thesis aims to provide optimal solutions for complex and proactive queries. Our proposed techniques, in addition to being CPU- and memory-efficient, enhance the capabilities of existing CEP systems by adding a predictive feature through real-time learning. The main contributions of this thesis are as follows. First, we propose various techniques to reduce the CPU and memory requirements of expensive queries, whose operators exhibit exponential complexity in both CPU and memory. Our recomputation-based and heuristic-based algorithms reduce these costs by enabling efficient multidimensional indexing using space-filling curves and by clustering events into batches to reduce the cost of pair-wise joins. Second, we design a novel predictive CEP system that employs historical information to predict future complex events. To use historical information efficiently, we store it in an N-dimensional historical sequence space, so that prediction can be performed by answering range queries over this space; we propose a compressed index structure, range-query processing techniques, and an approximate summarization technique over the historical space. The applicability of our techniques to real-world problems, notably in international challenges, demonstrates the viability of the proposed methods.
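The thesis's predictive component (answering queries over a space of historical event sequences) can be illustrated with a much simpler, hypothetical sketch: learn fixed-length historical contexts and predict the likeliest next event for a given context. This shows the general idea only, not the compressed index or range-query machinery proposed in the thesis.

```python
from collections import defaultdict, Counter

class HistoricalPredictor:
    """Toy predictive-CEP sketch: learn fixed-length historical
    contexts and predict the most likely next event."""

    def __init__(self, n=2):
        self.n = n                            # length of the context window
        self.history = defaultdict(Counter)   # context tuple -> next-event counts
        self.recent = []                      # the last n events seen

    def observe(self, event):
        # Record "context -> event" once a full context is available,
        # then slide the context window forward by one event.
        if len(self.recent) == self.n:
            self.history[tuple(self.recent)][event] += 1
        self.recent.append(event)
        if len(self.recent) > self.n:
            self.recent.pop(0)

    def predict(self, context):
        # Query the learned history: most frequent successor of the
        # given context, or None if the context was never seen.
        counts = self.history.get(tuple(context))
        return counts.most_common(1)[0][0] if counts else None
```

After observing a stream that repeats the pattern a, b, c, the predictor returns "c" for the context (a, b).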
2

Eckert, Michael. "Complex Event Processing with XChangeEQ." Diss., lmu, 2008. http://nbn-resolving.de/urn:nbn:de:bvb:19-94051.

3

Sazegarnejad, Mohammad Ali. "A model for complex event processing." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 2009. http://digitalcommons.auctr.edu/dissertations/1510.

Abstract:
Advances in sensor technology will revolutionize the way that real-world events are collected and interpreted. The ability to ubiquitously capture data will generate an unprecedented amount of data, making distributed data management and decision making key challenges in the deployment of this technology. The demands of intelligently managing real-time data and integrating it into applicable business processes have propelled the emergence of a new breed of distributed software systems. The challenges are broader than simply creating a software platform to manage and integrate the sheer volume of sensor data. Mechanisms that permit the application of contextual and application knowledge in the distributed decision-making infrastructure are required. The design of such software is based on a theory of events that permits events to be states or processes. In managing real-time data and information from distributed heterogeneous sensors, the notion of the event is attractive for several reasons. First, modeling data in terms of events parallels the way humans conceptualize and relate information. Second, the notion of events, especially the differentiation between significant and non-significant events, may be used to filter data. Third, the definition of an event provides an implicit data wrapper that may be used to link sensor data through event relationships. These relationships may be used to reason in an enterprise application context. Finally, the event-based approach is well suited to associating autonomous, heterogeneous sensor nodes by means of the inherent properties of events such as time and space. Thus these sensor nodes may be integrated into complex decision-making networks through event-based communication. In this thesis, the design and development of a distributed software platform that can acquire data from heterogeneous sensors, integrate it, and provide distributed decision support is described.
Raw data is processed at multiple levels of abstraction and, using context information, combined to form higher-level events that enable real-time decision making. A multi-layered event representation and reasoning model is implemented that feeds sensory data derived from low-level sensors into higher-level event structures, which can then be exploited by appropriate event handlers. Alternate approaches to the "sense making" problem are discussed and the advantages of the proposed model are explained.
4

Keskisärkkä, Robin. "Towards Semantically Enabled Complex Event Processing." Licentiate thesis, Linköpings universitet, Interaktiva och kognitiva system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-141554.

Abstract:
The Semantic Web provides a framework for semantically annotating data on the web, and the Resource Description Framework (RDF) supports the integration of structured data represented in heterogeneous formats. Traditionally, the Semantic Web has focused primarily on more or less static data, but information on the web today is becoming increasingly dynamic. RDF Stream Processing (RSP) systems address this issue by adding support for streaming data and continuous query processing. To some extent, RSP systems can be used to perform complex event processing (CEP), where meaningful high-level events are generated based on low-level events from multiple sources; however, there are several challenges with respect to using RSP in this context. Event models designed to represent static event information lack several features required for CEP, and are typically not well suited for stream reasoning. The dynamic nature of streaming data also greatly complicates the development and validation of RSP queries. Therefore, reusing queries that have been prepared ahead of time is important to be able to support real-time decision-making. Additionally, there are limitations in existing RSP implementations in terms of both scalability and expressiveness, where some features required in CEP are not supported by any of the current systems. The goal of this thesis work has been to address some of these challenges and the main contributions of the thesis are: (1) an event model ontology targeted at supporting CEP; (2) a model for representing parameterized RSP queries as reusable templates; and (3) an architecture that allows RSP systems to be integrated for use in CEP. The proposed event model tackles issues specifically related to event modeling in CEP that have not been sufficiently covered by other event models, includes support for event encapsulation and event payloads, and can easily be extended to fit specific use-cases. 
The model for representing RSP query templates was designed as an extension to SPIN, a vocabulary that supports modeling of SPARQL queries as RDF. The extended model supports the current version of the RSP Query Language (RSP-QL) developed by the RDF Stream Processing Community Group, along with some of the most popular RSP query languages. Finally, the proposed architecture views RSP queries as individual event processing agents in a more general CEP framework. Additional event processing components can be integrated to provide support for operations that are not supported in RSP, or to provide more efficient processing for specific tasks. We demonstrate the architecture in implementations for scenarios related to traffic-incident monitoring, criminal-activity monitoring, and electronic healthcare monitoring.
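The template idea in this abstract can be sketched in miniature: a parameterized query text with named placeholders, instantiated at registration time. The RSP-QL-like text below is schematic only (the prefixes, stream name, and query body are invented), and the thesis's actual model represents templates in RDF via an extension of SPIN rather than as strings.

```python
from string import Template

# Hypothetical, schematic RSP-QL-like query text; $window and $threshold
# are the parameters a reusable template would expose to its users.
RSP_TEMPLATE = Template("""
SELECT ?sensor ?temp
FROM NAMED WINDOW :w ON :tempStream [RANGE $window STEP PT1S]
WHERE { WINDOW :w { ?sensor :temp ?temp . FILTER(?temp > $threshold) } }
""")

# Instantiating the template yields a concrete continuous query.
query = RSP_TEMPLATE.substitute(window="PT10S", threshold="35")
```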
5

Wang, Di. "Extending Complex Event Processing for Advanced Applications." Digital WPI, 2013. https://digitalcommons.wpi.edu/etd-dissertations/235.

Abstract:
Recently, numerous emerging applications, ranging from on-line financial transactions, RFID-based supply chain management, and traffic monitoring to real-time object monitoring, generate high-volume event streams. To meet the need to process event data streams in real time, Complex Event Processing (CEP) technology has been developed, with a focus on detecting occurrences of particular composite patterns of events. By analyzing and constructing several real-world CEP applications, we found that CEP needs to be extended with advanced services beyond detecting pattern queries. We summarize these emerging needs in three orthogonal directions. First, for applications that require access to both streaming and stored data, we need to provide clear semantics and efficient schedulers in the face of concurrent access and failures. Second, when a CEP system is deployed in a sensitive environment such as health care, we wish to mitigate possible privacy leaks. Third, when input events do not carry the identification of the object being monitored, we need to infer the probabilistic identification of events before feeding them to a CEP engine. This dissertation therefore discusses the construction of a framework for extending CEP to support these critical services. First, existing CEP technology is limited in its capability of reacting to opportunities and risks detected by pattern queries. We propose to tackle this unsolved problem by embedding active rule support within the CEP engine. The main challenge is to handle interactions between queries and reactions to queries in high-volume stream execution. We hence introduce a novel stream-oriented transactional model along with a family of stream transaction scheduling algorithms that ensure the correctness of concurrent stream execution.
We then demonstrate the proposed technology by applying it to a real-world healthcare system and evaluate the stream transaction scheduling algorithms extensively using real-world workloads. Second, we are the first to study the privacy implications of CEP systems. Specifically, we consider how to suppress events on a stream to reduce the disclosure of sensitive patterns, while ensuring that non-sensitive patterns continue to be reported by the CEP engine. We formally define the problem of utility-maximizing event suppression for privacy preservation. We then design a suite of real-time solutions that eliminate private pattern matches while maximizing the overall utility. Our first solution optimally solves the problem at the event-type level. The second solution, at the event-instance level, further optimizes the event-type-level solution by exploiting runtime event distributions using advanced pattern-match cardinality estimation techniques. Our experimental evaluation over both real-world and synthetic event streams shows that our algorithms are effective in maximizing utility yet still efficient enough to offer near-real-time system responsiveness. Third, we observe that in many real-world object monitoring applications where CEP technology is adopted, not all sensed events carry the identification of the object whose action they report on; we call these "non-ID-ed" events. Such non-ID-ed events prevent us from performing object-based analytics, such as tracking, alerting, and pattern matching. We propose a probabilistic inference framework to tackle this problem by inferring the missing object identification associated with an event. Specifically, as a foundation we design a time-varying graphical model to capture correspondences between sensed events and objects. Upon this model, we elaborate how to adapt the state-of-the-art forward-backward inference algorithm to continuously infer probabilistic identifications for non-ID-ed events.
More importantly, we propose a suite of strategies for optimizing the performance of inference. Our experimental results, using large-volume streams from a real-world health care application, demonstrate the accuracy, efficiency, and scalability of the proposed technology.
6

Qi, Yingmei. "High Performance Analytics in Complex Event Processing." Digital WPI, 2013. https://digitalcommons.wpi.edu/etd-theses/2.

Abstract:
Complex Event Processing (CEP) is the technology of choice for high-performance analytics in time-critical decision-making applications. Although current CEP systems support sequence pattern detection on continuous event streams, they do not support the computation of aggregated values over the matched sequences of a query pattern. Instead, aggregation is typically applied as a post-processing step after CEP pattern detection, leading to an extremely inefficient solution for sequence aggregation. Meanwhile, state-of-the-art aggregation techniques over traditional stream data are not directly applicable in the context of the sequence semantics of CEP. In this paper, we propose an approach, called A-Seq, that successfully pushes the aggregation computation into the sequence pattern detection process. A-Seq computes aggregation online by dynamically recording compact partial sequence aggregates without ever constructing the to-be-aggregated matched sequences. Techniques are devised to tackle all the key CEP-specific challenges for aggregation, including sliding-window semantics, event purging, and sequence negation. For scalability, we further introduce the Chop-Connect methodology, which enables sequence aggregation sharing among queries with arbitrary substring relationships. Lastly, our cost-driven optimizer selects a shared execution plan for effectively processing a workload of CEP aggregation queries. Our experimental study using real data sets demonstrates over four orders of magnitude efficiency improvement for a wide range of tested scenarios of our proposed A-Seq approach compared to state-of-the-art solutions, thus achieving high-performance CEP aggregation analytics.
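The core A-Seq idea, maintaining compact partial aggregates instead of enumerating matched sequences, can be sketched for the simplest case: counting matches of SEQ(first, second). Every "second" event closes one match per "first" event seen so far, so a single running counter suffices. This toy sketch ignores windows, purging, and negation, which the actual work handles.

```python
def count_seq_matches(stream, first, second):
    """Count matches of the pattern SEQ(first, second) online, without
    constructing the matched pairs: every 'second' event closes one
    match per 'first' event seen earlier in the stream."""
    seen_first = 0   # partial aggregate: 'first' events observed so far
    total = 0
    for event in stream:
        if event == first:
            seen_first += 1
        elif event == second:
            total += seen_first
    return total
```

On the stream A, A, B, A, B this reports 5 matches (2 closed by the first B, 3 by the second), in linear time rather than by enumerating the 5 pairs.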
7

Zhang, Dazhi. "NEEL+: Supporting Predicates for Nested Complex Event Processing." Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-theses/991.

Abstract:
Complex event processing (CEP) has become increasingly important in modern applications, ranging from supply chain management for RFID tracking to real-time intrusion detection. These monitoring applications must detect complex event pattern sequences in event streams. However, state-of-the-art CEP systems such as SASE, ZStream, and Cayuga either do not support the specification of nesting for pattern queries at all, or they limit the nesting of non-occurrence expressions over composite event types. A recent work by Liu et al. proposed a nested complex event pattern expression language, called NEEL (Nested Complex Event Language), that supports the specification of non-occurrence over complex expressions. However, their work did not carefully consider predicate handling in these nested queries, especially in the context of complex negation. Yet it is well known that predicate specification is a critical component of any query language. To close this gap, we design a nested complex event pattern expression language called NEEL+, an extension of NEEL for specifying nested CEP queries with predicates. We rigorously define the syntax and semantics of the NEEL+ language, with particular focus on predicate scoping and predicate placement. Accordingly, we introduce a top-down execution paradigm that recursively computes a nested NEEL+ query from the outermost query component to the innermost one, integrating predicate evaluation into the overall query evaluation process. Moreover, we design two optimization techniques that reduce the computation costs of processing NEEL+ queries.
The first, an intra-query method called predicate push-in, optimizes each individual component of a nested query by pushing predicate evaluation into the process of computing that component rather than evaluating predicates at the end of its computation. The second, an inter-query method called predicate shortcutting, optimizes inter-query predicate evaluation: it evaluates the predicates that correlate different query components within a nested query by exploiting a lightweight predicate shortcut. The NEEL+ system caches values of the equivalence attributes from the incoming data stream. When the computation starts, the system checks for the attribute value of the outer query component in the cache, and the predicate acts as a shortcut to terminate the computation early. Lastly, we conduct experimental studies evaluating the CPU processing costs of the NEEL+ system with and without the optimization techniques using real-world stock trading data streams. Our results confirm that our optimization techniques, applied to NEEL+ in a rich variety of cases, yield tenfold faster query processing than the NEEL+ system without optimization.
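The intuition behind predicate push-in can be sketched for a flat equi-join predicate (a hypothetical simplification of the nested NEEL+ setting): partition open partial matches by the predicate's equivalence attribute as they are built, so incompatible combinations are never formed, instead of filtering complete matches afterwards.

```python
from collections import defaultdict

def count_equi_seq_matches(stream):
    """Count matches of SEQ(A a, B b) WHERE a.x = b.x with the predicate
    pushed into matching: open A-matches are partitioned by x, so each B
    joins only compatible partial matches instead of all of them."""
    open_a = defaultdict(int)   # x value -> number of open A partial matches
    matches = 0
    for etype, x in stream:
        if etype == "A":
            open_a[x] += 1
        elif etype == "B":
            matches += open_a[x]   # predicate satisfied by construction
    return matches
```

Without the push-in, every A would be paired with every B and most pairs discarded afterwards; here, only predicate-compatible combinations are ever counted.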
8

Ray, Medhabi. "Optimized Nested Complex Event Processing Using Continuous Caching." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/1060.

Abstract:
Complex Event Processing (CEP) has become increasingly important for tracking and monitoring anomalies and trends in event streams emitted from business processes, from supply chain management to online stores in e-commerce. These monitoring applications submit complex event queries to track sequences of events that match a given pattern. While state-of-the-art CEP systems mostly focus on the execution of flat sequence queries, we instead support the execution of nested CEP queries specified in NEEL (Nested Event Language). However, iterative execution often results in the repeated recomputation of similar or even identical results for nested sub-expressions as the window slides over the event stream. This work proposes to optimize NEEL execution performance by caching intermediate results. In particular, a selective caching method for intermediate results, the Continuous Sliding Caching technique, has been designed. Further optimizations of this technique, which we call Semantic Caching and Continuous Semantic Caching, are then proposed. Techniques for incrementally loading, purging, and exploiting the cache content are described. Our experimental study using real-world stock trades evaluates the performance of the proposed caching strategies for different query types.
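The sliding-window caching idea can be sketched generically (this is not the thesis's Continuous Sliding Caching implementation): keep each inner sub-expression result with its timestamp, serve overlapping windows from the cache, and purge entries once they fall out of the window.

```python
class SubExprCache:
    """Sliding-window cache sketch: keep each intermediate result with
    its timestamp, reuse it across overlapping windows, purge on expiry."""

    def __init__(self, window):
        self.window = window
        self.entries = []   # list of (timestamp, result)

    def add(self, ts, result):
        self.entries.append((ts, result))

    def query(self, now):
        # Purge results that have slid out of the window, return the rest
        # instead of recomputing the sub-expression from scratch.
        self.entries = [(t, r) for t, r in self.entries if now - t < self.window]
        return [r for _, r in self.entries]
```

As the window slides, each cached sub-expression result is reused by every window that still covers its timestamp, which is exactly the recomputation the abstract aims to avoid.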
9

Rozet, Allison M. "Shared Complex Event Trend Aggregation." Digital WPI, 2020. https://digitalcommons.wpi.edu/etd-theses/1379.

Abstract:
Streaming analytics deploy Kleene pattern queries to detect and aggregate event trends against high-rate data streams. Despite increasing workloads, most state-of-the-art systems process each query independently, thus missing cost-saving sharing opportunities. Sharing complex event trend aggregation poses several technical challenges. First, the execution of nested and diverse Kleene patterns is difficult to share. Second, we must share aggregate computation without the exponential costs of constructing the event trends. Third, not all sharing opportunities are beneficial because sharing aggregation introduces overhead. We propose a novel framework, Muse (Multi-query Snapshot Execution), that shares aggregation queries with Kleene patterns while avoiding expensive trend construction. It adopts an online sharing strategy that eliminates re-computations for shared sub-patterns. To determine the beneficial sharing plan, we introduce a cost model to estimate the sharing benefit and design the Muse refinement algorithm to efficiently select robust sharing candidates from the search space. Finally, we explore optimization decisions to further improve performance. Our experiments over a wide range of scenarios demonstrate that Muse increases throughput by 4 orders of magnitude compared to state-of-the-art approaches with negligible memory requirements.
10

Gillani, Syed. "Semantically-enabled stream processing and complex event processing over RDF graph streams." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSES055/document.

Abstract:
French abstract not provided by the author.
There is a paradigm shift in the nature and processing means of today's data: data used to be mostly static, stored in large databases to be queried. Today, with the advent of new applications and means of collecting data, most applications on the Web and in enterprises produce data in a continuous manner in the form of streams. The users of these applications thus expect to process a large volume of data with fresh, low-latency results. This has led to Data Stream Management Systems (DSMSs) and the Complex Event Processing (CEP) paradigm, each with a distinctive aim: DSMSs are mostly employed to process traditional (mostly stateless) query operators, while CEP systems focus on temporal pattern matching (stateful operators) to detect changes in the data that can be thought of as events. In the past decade or so, a number of scalable and performance-intensive DSMSs and CEP systems have been proposed. Most of them, however, are based on relational data models, which begs the question of support for heterogeneous data sources, i.e., the variety of the data. Work on RDF Stream Processing (RSP) systems partly addresses the challenge of variety by promoting the RDF data model. Nonetheless, challenges like volume and velocity are overlooked by existing approaches. These challenges require customised optimisations that treat RDF as a first-class citizen and scale the process of continuous graph pattern matching. To gain insights into these problems, this thesis focuses on developing scalable RDF graph stream processing and semantically-enabled CEP systems (i.e., Semantic Complex Event Processing, SCEP). In addition to our optimised algorithmic and data-structure methodologies, we also contribute the design of a new query language for SCEP. Our contributions in these two fields are as follows. (1) RDF Graph Stream Processing:
we first propose an RDF graph stream model, where each data item/event within a stream comprises an RDF graph (a set of RDF triples); we then implement customised indexing techniques and data structures to continuously process RDF graph streams in an incremental manner. (2) Semantic Complex Event Processing: we extend the idea of RDF graph stream processing to enable SCEP over such RDF graph streams, i.e., temporal pattern matching. Our first contribution in this context is a new query language that encompasses the RDF graph stream model and employs a set of expressive temporal operators such as sequencing, Kleene-+, negation, optional, conjunction, disjunction, and event selection strategies. Based on this, we implement a scalable system that employs a non-deterministic finite automaton model to evaluate these operators in an optimised manner. We leverage techniques from diverse fields, such as relational query optimisation, incremental query processing, and sensor and social networks, in order to solve real-world problems. We have applied our proposed techniques to a wide range of real-world and synthetic datasets to extract knowledge from RDF-structured data in motion. Our experimental evaluations confirm our theoretical insights and demonstrate the viability of our proposed methods.
11

O'Keeffe, Daniel Brendan. "Distributed complex event detection for pervasive computing." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609012.

12

Teymourian, Kia [Verfasser]. "A Framework for Knowledge-Based Complex Event Processing / Kia Teymourian." Berlin : Freie Universität Berlin, 2014. http://d-nb.info/1063331803/34.

13

Li, Ming. "Robust Complex Event Pattern Detection over Streams." Digital WPI, 2010. https://digitalcommons.wpi.edu/etd-dissertations/90.

Abstract:
Event stream processing (ESP) has become increasingly important in modern applications. In this dissertation, I focus on providing a robust ESP solution by meeting three major research challenges regarding the robustness of ESP systems: (1) applying semantic information during event processing when event constraints on the input stream are available; (2) handling event streams with out-of-order data arrival; and (3) handling event streams with interval-based temporal semantics. The following are the three corresponding research tasks completed in the dissertation. Task I: Constraint-Aware Complex Event Pattern Detection over Streams. In this task, a framework for constraint-aware pattern detection over event streams is designed, which checks query satisfiability/unsatisfiability on the fly using a lightweight reasoning mechanism and adjusts the processing strategy dynamically by producing early feedback, releasing unnecessary system resources, and terminating the corresponding pattern monitors. Task II: Complex Event Pattern Detection over Streams with Out-of-Order Data Arrival. In this task, a mechanism is studied for processing event queries specified over streams that may contain out-of-order data, providing new physical implementation strategies for the core stream algebra operators such as sequence scan, pattern construction, and negation filtering. Task III: Complex Event Pattern Detection over Streams with Interval-Based Temporal Semantics. In this task, an expressive language to represent the required temporal patterns among streaming interval events is introduced and the corresponding temporal operator ISEQ is designed.
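One common way to handle out-of-order arrival, buffering with a fixed slack before pattern evaluation, can be sketched as below. This K-slack approach is a generic illustration only; the dissertation instead proposes new physical operator implementations rather than a reordering buffer.

```python
import heapq

def kslack_reorder(stream, k):
    """K-slack reordering sketch: hold (timestamp, payload) events in a
    min-heap and release an event only once an event with a timestamp
    more than k units newer has arrived, tolerating lateness up to k."""
    heap = []
    for ts, payload in stream:
        heapq.heappush(heap, (ts, payload))
        # Anything older than ts - k can no longer be preceded by a
        # late arrival (under the k-slack assumption), so emit it.
        while heap and heap[0][0] + k < ts:
            yield heapq.heappop(heap)
    while heap:   # flush remaining events at end of stream
        yield heapq.heappop(heap)
```

Downstream pattern operators then see a timestamp-ordered stream, at the cost of up to k units of added latency.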
14

Poppe, Olga. "Event stream analytics." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-dissertations/530.

Abstract:
Advances in hardware, software, and communication networks have enabled applications to generate data at unprecedented volume and velocity. An important type of this data is event streams generated from financial transactions, health sensors, web logs, social media, mobile devices, and vehicles. The world is thus poised for a sea change in time-critical applications, from financial fraud detection to health care analytics, empowered by inferring insights from event streams in real time. Event processing systems continuously evaluate massive workloads of Kleene queries to detect and aggregate event trends of interest. Examples of these trends include check kites in financial fraud detection, irregular heartbeats in health care analytics, and vehicle trajectories in traffic control. These trends can be of any length. Worse yet, their number may grow exponentially in the number of events. State-of-the-art systems do not offer practical solutions for trend analytics and thus suffer from long delays and high memory costs. In this dissertation, we propose the following event trend detection and aggregation techniques. First, we solve the trade-off between CPU processing time and memory usage while computing event trends over high-rate event streams. Namely, our event trend detection approach guarantees minimal CPU processing time given limited memory. Second, we compute online event trend aggregation at multiple granularity levels, from fine (per matched event), to medium (per event type), to coarse (per pattern). Thus, we minimize the number of aggregates, reducing both time and space complexity compared to state-of-the-art approaches. Third, we share intermediate aggregates among multiple event sequence queries while avoiding the expensive construction of matched event sequences.
In several comprehensive experimental studies, we demonstrate the superiority of the proposed strategies over the state-of-the-art techniques with respect to latency, throughput, and memory costs.
15

Liu, Mo. "Extending Event Sequence Processing: New Models and Optimization Techniques." Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-dissertations/167.

Abstract:
Many modern applications, including online financial feeds, tag-based mass transit systems and RFID-based supply chain management systems transmit real-time data streams. There is a need for event stream processing technology to analyze this vast amount of sequential data to enable online operational decision making. This dissertation focuses on innovating several techniques at the core of a scalable E-Analytic system to achieve efficient, scalable and robust methods for in-memory multi-dimensional nested pattern analysis over high-speed event streams. First, I address the problem of processing flat pattern queries on event streams with out-of-order data arrival. I design two alternate solutions: aggressive and conservative strategies respectively. The aggressive strategy produces maximal output under the optimistic assumption that out-of-order event arrival is rare. The conservative method works under the assumption that out-of-order data may be common, and thus produces output only when its correctness can be guaranteed. Second, I design the integration of CEP and OLAP techniques (ECube model) for efficient multi-dimensional event pattern analysis at different abstraction levels. Strategies of drill-down (refinement from abstract to specific patterns) and of roll-up (generalization from specific to abstract patterns) are developed for the efficient workload evaluation. I design a cost-driven adaptive optimizer called Chase that exploits reuse strategies for optimal E-Cube hierarchy execution. Then, I explore novel optimization techniques to support the high- performance processing of powerful nested CEP patterns. A CEP query language called NEEL, is designed to express nested CEP pattern queries composed of sequence, negation, AND and OR operators. To allow flexible execution ordering, I devise a normalization procedure that employs rewriting rules for flattening a nested complex event expression. 
To conserve CPU and memory consumption, I propose several strategies for efficient shared processing of groups of normalized NEEL subexpressions. Our comprehensive experimental studies, using both synthetic and real data streams, demonstrate the superiority of our proposed strategies over alternate methods in the literature in both effectiveness and efficiency.
APA, Harvard, Vancouver, ISO, and other styles
16

Reiche, Martin. "Characterizing predictive auditory processing with EEG." Doctoral thesis, Universitätsbibliothek Chemnitz, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-226275.

Full text
Abstract:
Predictive coding theorizes the capacity of neural structures to form predictions about forthcoming sensory events based on previous sensory input. This concept increasingly gains attention within experimental psychology and cognitive neuroscience. In auditory research, predictive coding has become a useful model that elegantly explains different aspects of auditory sensory processing and auditory perception. Many of these aspects are backed up by experimental evidence. However, certain fundamental features of predictive auditory processing have not been addressed so far by experimental investigations, like correlates of neural predictions that show up before the onset of an expected event. Four experiments were designed to investigate the proposed mechanism under more realistic conditions as compared to previous studies by manipulating different aspects of predictive (un)certainty, thereby examining the ecological validity of predictive processing in audition. Moreover, predictive certainty was manipulated gradually across five conditions from unpredictable to fully predictable in linearly increasing steps which drastically decreases the risk of discovering incidental findings. The results obtained from the conducted experiments partly confirm the results from previous studies by demonstrating effects of predictive certainty on ERPs in response to omissions of potentially predictable stimuli. Furthermore, results partly suggest that the auditory system actively engages in stimulus predictions in a literal sense as evidenced by gradual modulations of pre-stimulus ERPs associated with different degrees of predictive certainty. However, the current results remain inconsistent because the observed effects were relatively small and could not consistently be replicated in all follow-up experiments. The observed effects could be regained after accumulating the data across all experiments in order to increase statistical power. 
However, certain questions remain unanswered regarding a valid interpretation of the results in terms of predictive coding. Based on the current state of results, recommendations for future investigations are provided at the end of the current thesis in order to improve certain methodological aspects of investigating predictive coding in audition, including considerations on the design of experiments, possible suitable measures to investigate predictive coding in audition, recommendations for data acquisition and data analysis as well as recommendations for publication of results.
APA, Harvard, Vancouver, ISO, and other styles
17

Hermosillo, Gabriel. "Towards Creating Context-Aware Dynamically-Adaptable Business Processes Using Complex Event Processing." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2012. http://tel.archives-ouvertes.fr/tel-00709303.

Full text
Abstract:
In addition to the ever-growing use of ubiquitous devices, we have access to more and more so-called contextual information. This information tells us about the state of our environment and helps us make our everyday decisions according to the context we are in. IT-supported business processes keep expanding as the activities they handle become automated. However, when it comes to business processes in a particular domain, there is a lack of integration with this contextual information. We can currently consider a given situation in a well-defined part of the process at a given time and make a decision based on that information, but we cannot monitor this contextual information in real time and adapt the process accordingly, as we do in everyday life. Moreover, the static nature of business processes does not allow them to be modified dynamically, thus making them less useful in a new context. If we want to change the behaviour of a business process, we must stop it, modify it and redeploy it entirely, which loses all ongoing executions and the associated information. To address these problems, in this thesis we present the CEVICHE framework. We propose an approach for representing context-aware business processes in which context information is treated as events monitored in real time. To this end, we build upon a novel approach called Complex Event Processing (CEP). By using an external tool to monitor the context in real time, we are able to go beyond the limitation of accessing information only at specific points of the process. However, knowing about these events is not enough.
We also need to be able to adapt our processes accordingly at runtime. With CEVICHE, we integrate the information obtained from the context with the ability to adapt running business processes. Moreover, one of the original features of the CEVICHE framework is the definition of an adaptation-undoing operation and its implementation. Undoing an adaptation can easily go wrong, lead to undesired states and make processes unstable. Naively regarded as a trivial task, this issue has received little attention in current dynamic approaches. We therefore propose a formal model of this mechanism in CEVICHE. The implementation of the CEVICHE framework provides business processes with flexibility and dynamicity by relying on a component-based approach, thus allowing bindings to be modified at runtime. Furthermore, with CEVICHE we bring stability to complex event processing. Like any new approach, complex event processing is not standardized and is still evolving, with each tool using its own language to express events. By defining our own language, the Adaptive Business Process Language (ABPL), as a pivot language, CEVICHE facilitates the use of CEP without the drawbacks of early adoption of the approach. We use a plug-in technique that allows events defined in ABPL to be used in virtually any CEP engine. This approach makes event processing rules easier to maintain, since updates are centralized in the plug-in when the CEP language evolves or if we decide to use another tool, instead of updating all the event definitions.
Finally, we validated our approach by implementing a nuclear crisis scenario, with use cases involving many actors, context information and adaptation conditions.
APA, Harvard, Vancouver, ISO, and other styles
18

Kaya, Muammer Ozge. "A Complex Event Processing Framework Implementation Using Heterogeneous Devices In Smart Environments." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614152/index.pdf.

Full text
Abstract:
Significant developments in microprocessor and sensor technology make wirelessly connected small computing devices widely available; hence they are being used frequently to collect data from the environment. In this study, we construct a framework in order to extract high level information in an environment containing such pervasive computing devices. In the framework, raw data originating from wireless sensors are collected using an event driven system and converted to simple events for transmission over a network to a central processing unit. We also utilize a complex event processing approach incorporating temporal constraints, aggregation and sequencing of events in order to define complex events for extracting high level information from the collected simple events. We develop a prototype using easily accessible hardware and set it up in a classroom within our university. The results demonstrate the feasibility of our approach, ease of deployment and successful application of the complex event processing framework.
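The complex-event definitions described above combine sequencing of simple events with temporal constraints. As an illustrative sketch only (not the framework's actual implementation; event types and the window semantics are assumptions), a minimal windowed sequence matcher over simple (timestamp, type) events could look like:

```python
def detect_sequence(events, pattern, window):
    """Detect occurrences of `pattern` (a list of event types) whose
    matched events all fall within `window` time units of the first one.
    `events` is an iterable of (timestamp, event_type) pairs, assumed
    time-ordered. Returns the list of matched timestamp tuples."""
    partial = []   # each entry: (next_index_to_match, [timestamps so far])
    matches = []
    for ts, etype in events:
        new_partial = []
        for idx, stamps in partial:
            if ts - stamps[0] > window:
                continue  # window expired, drop this partial match
            if etype == pattern[idx]:
                if idx + 1 == len(pattern):
                    matches.append(tuple(stamps + [ts]))  # sequence complete
                else:
                    new_partial.append((idx + 1, stamps + [ts]))
            else:
                new_partial.append((idx, stamps))  # keep waiting
        if etype == pattern[0]:
            new_partial.append((1, [ts]))  # a new candidate sequence starts
        partial = new_partial
    return matches
```

A real CEP engine would add aggregation and negation operators on top of this basic sequencing-within-a-window primitive.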
APA, Harvard, Vancouver, ISO, and other styles
19

Ottenwälder, Beate [Verfasser], and Kurt [Akademischer Betreuer] Rothermel. "Mobility-awareness in complex event processing systems / Beate Ottenwälder ; Betreuer: Kurt Rothermel." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2016. http://d-nb.info/1118368347/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Sejdovic, Suad [Verfasser], and Y. [Akademischer Betreuer] Sure-Vetter. "Situation Management with Complex Event Processing / Suad Sejdovic ; Betreuer: Y. Sure-Vetter." Karlsruhe : KIT-Bibliothek, 2018. http://d-nb.info/1167309049/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Gao, Feng. "Complex medical event detection using temporal constraint reasoning." Thesis, University of Aberdeen, 2010. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=153271.

Full text
Abstract:
The Neonatal Intensive Care Unit (NICU) is a hospital ward specializing in looking after premature and ill newborn babies. Working in such a busy and complex environment is not easy and sophisticated equipment is used to help the daily work of the medical staff. Computers are used to analyse the large amount of monitored data and extract hidden information, e.g. to detect interesting events. Unfortunately, one group of important events lacks features that are recognizable by computers. This group includes the actions taken by the medical staff, for example two actions related to the respiratory system: inserting an endotracheal tube into a baby’s trachea (ET Intubating) or sucking out the tube (ET Suctioning). These events are very important building blocks for other computer applications aimed at helping the staff. In this research, a strategy for detecting these medical actions based on contextual knowledge is proposed. This contextual knowledge specifies what other events normally occur with each target event and how they are temporally related to each other. The idea behind this strategy is that all medical actions are taken for different purposes and hence may have different procedures (contextual knowledge) for performing them. This contextual knowledge is modelled using a point-based framework with special attention given to various types of uncertainty. Event detection consists in searching for a consistent matching between a model based on the contextual knowledge and the observed event instances - a Temporal Constraint Satisfaction Problem (TCSP). The strategy is evaluated by detecting ET Intubating and ET Suctioning events, using a specially collected NICU monitoring dataset. The results of this evaluation are encouraging and show that the strategy is capable of detecting complex events in an NICU.
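The TCSP-based matching described above can be illustrated with a toy sketch (the event names, bounds and brute-force search here are hypothetical illustrations, not the thesis's actual formulation): a point-based constraint network bounds the time differences between pairs of events, and detection searches for an assignment of observed timestamps that satisfies all constraints.

```python
from itertools import product

def satisfies(assignment, constraints):
    """Check a candidate event-time assignment against a point-based
    temporal constraint network. `assignment` maps event names to
    timestamps; `constraints` maps (a, b) pairs to (lo, hi) bounds
    meaning lo <= time(b) - time(a) <= hi."""
    for (a, b), (lo, hi) in constraints.items():
        if a in assignment and b in assignment:
            delta = assignment[b] - assignment[a]
            if not (lo <= delta <= hi):
                return False
    return True

def match(model_events, constraints, observed):
    """Enumerate assignments of observed timestamps to model events and
    return the first consistent one (brute force, for illustration only).
    `observed` maps event names to lists of candidate timestamps."""
    names = list(model_events)
    for combo in product(*(observed.get(n, []) for n in names)):
        candidate = dict(zip(names, combo))
        if satisfies(candidate, constraints):
            return candidate
    return None
```

In practice a TCSP solver would prune the search with consistency techniques rather than enumerate all combinations.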
APA, Harvard, Vancouver, ISO, and other styles
22

Mayer, Ruben [Verfasser], and Kurt [Akademischer Betreuer] Rothermel. "Window-based data parallelization in complex event processing / Ruben Mayer ; Betreuer: Kurt Rothermel." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2018. http://d-nb.info/1156604192/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Martin, André. "Minimizing Overhead for Fault Tolerance in Event Stream Processing Systems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-210251.

Full text
Abstract:
Event Stream Processing (ESP) is a well-established approach for low-latency data processing enabling users to quickly react to relevant situations in soft real-time. In order to cope with the sheer amount of data being generated each day and to cope with fluctuating workloads originating from data sources such as Twitter and Facebook, such systems must be highly scalable and elastic. Hence, ESP systems are typically long-running applications deployed on several hundreds of nodes in either dedicated data-centers or cloud environments such as Amazon EC2. In such environments, nodes are likely to fail due to software aging, process or hardware errors, whereas the unbounded stream of data asks for continuous processing. In order to cope with node failures, several fault tolerance approaches have been proposed in the literature. Active replication and rollback recovery based on checkpointing and in-memory logging (upstream backup) are two commonly used approaches to cope with such failures in the context of ESP systems. However, these approaches suffer either from a high resource footprint, low throughput or unresponsiveness due to long recovery times. Moreover, in order to recover applications in a precise manner using exactly-once semantics, the use of deterministic execution is required, which adds another layer of complexity and overhead. The goal of this thesis is to lower the overhead for fault tolerance in ESP systems. We first present StreamMine3G, our ESP system we built entirely from scratch in order to study and evaluate novel approaches for fault tolerance and elasticity. We then present an approach to reduce the overhead of deterministic execution by using a weak, epoch-based rather than strict ordering scheme for commutative and tumbling windowed operators that allows applications to recover precisely using active or passive replication.
Since most applications are running in cloud environments nowadays, we furthermore propose an approach to increase the system availability by efficiently utilizing spare but paid resources for fault tolerance. Finally, in order to free users from the burden of choosing the correct fault tolerance scheme for their applications that guarantees the desired recovery time while still saving resources, we present a controller-based approach that adapts fault tolerance at runtime. We furthermore showcase the applicability of our StreamMine3G approach using real world applications and examples.
APA, Harvard, Vancouver, ISO, and other styles
24

Luthra, Manisha [Verfasser], Ralf [Akademischer Betreuer] Steinmetz, Boris [Akademischer Betreuer] Koldehofe, Carsten [Akademischer Betreuer] Binnig, and Heinz [Akademischer Betreuer] Köppl. "Network-centric Complex Event Processing / Manisha Luthra ; Ralf Steinmetz, Boris Koldehofe, Carsten Binnig, Heinz Köppl." Darmstadt : Universitäts- und Landesbibliothek, 2021. http://nbn-resolving.de/urn:nbn:de:tuda-tuprints-192857.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Cabanillas, Macias Cristina, Anne Baumgrass, and Ciccio Claudio Di. "A Conceptual Architecture for an Event-based Information Aggregation Engine in Smart Logistics." Gesellschaft für Informatik e.V, 2015. https://dl.gi.de/handle/20.500.12116/2040.

Full text
Abstract:
The field of Smart Logistics is attracting interest in several areas of research, including Business Process Management. A wide range of research works are carried out to enhance the capability of monitoring the execution of ongoing logistics processes and predicting their likely evolvement. In order to do this, it is crucial to have in place an IT infrastructure that provides the capability of automatically intercepting the digitalised transportation-related events stemming from widespread sources, along with their elaboration, interpretation and dispatching. In this context, we present here the service-oriented software architecture of such an event-based information engine. In particular, we describe the requisites that it must meet. Thereafter, we present the interfaces and subsequently the service-oriented components that are in charge of realising them. The outlined architecture is being utilised as the reference model for an ongoing European research project on Smart Logistics, namely GET Service.
APA, Harvard, Vancouver, ISO, and other styles
26

Wermund, Rahul [Verfasser], Ralf [Akademischer Betreuer] Steinmetz, and Bernd [Akademischer Betreuer] Freisleben. "Privacy-Aware and Reliable Complex Event Processing in the Internet of Things - Trust-Based and Flexible Execution of Event Processing Operators in Dynamic Distributed Environments / Rahul Wermund ; Ralf Steinmetz, Bernd Freisleben." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2018. http://d-nb.info/1151638897/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Fidalgo, André Filipe dos Santos Pinto. "IPTV data reduction strategy to measure real users’ behaviours." Master's thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/8448.

Full text
Abstract:
Dissertation submitted for the degree of Master in Computer Engineering
The digital IPTV service has evolved in terms of features, technology and accessibility of its contents. However, the rapid evolution of features and services has brought customers a more complex offering, whose features are often not enjoyed or even perceived. Therefore, it is important to measure the real advantage of those features and understand how they are used by customers. In this work, we present a strategy that deals directly with real IPTV data, which results from customers' interactions with their set-top boxes. However, this data has a very low granularity level, which makes it complex and difficult to interpret. The approach is to transform the clicking actions to a more conceptual level that is representative of the running activities. Furthermore, the data cardinality is significantly reduced, while the quality of the information is enhanced. More than a transformation, this approach aims to be iterative: at each level, we obtain more accurate information, in order to characterize a particular behaviour. As experimental results, we present some application areas regarding the main features offered in this digital service. In particular, a study is made of zapping behaviour, along with an evaluation of DVR service usage. We also discuss the possibility of integrating the devised strategy at a particular carrier, aiming to analyse the consumption rate of its services, in order to adjust them to customers' real usage profiles, and to study the feasibility of introducing new services.
APA, Harvard, Vancouver, ISO, and other styles
28

Sanli, Ozgur. "Rule-based In-network Processing For Event-driven Applications In Wireless Sensor Networks." Phd thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613389/index.pdf.

Full text
Abstract:
Wireless sensor networks are application-specific networks that necessitate the development of specific network and information processing architectures that can meet the requirements of the applications involved. The most important challenge related to wireless sensor networks is the limited energy and computational resources of the battery powered sensor nodes. Although the central processing of information produces the most accurate results, it is not an energy-efficient method because it requires a continuous flow of raw sensor readings over the network. As communication operations are the most expensive in terms of energy usage, the distributed processing of information is indispensable for viable deployments of applications in wireless sensor networks. This method not only helps in reducing the total amount of packets transmitted and the total energy consumed by sensor nodes, but also produces scalable and fault-tolerant networks. Another important challenge associated with wireless sensor networks is the high possibility of sensory data being imperfect and imprecise. The requirement of precision necessitates employing expensive mechanisms such as redundancy or the use of sophisticated equipment. Therefore, approximate computing may need to be used instead of precise computing to conserve energy. This thesis presents two schemes that distribute information processing for event-driven reactive applications, which are interested in higher-level information rather than the raw sensory data of individual nodes, to appropriate nodes in sensor networks. Furthermore, based on these schemes, a fuzzy rule-based system is proposed that handles imprecision, inherently present in sensory data.
APA, Harvard, Vancouver, ISO, and other styles
29

VASCONCELOS, IGOR OLIVEIRA. "A MOBILE AND ONLINE OUTLIER DETECTION OVER MULTIPLE DATA STREAMS: A COMPLEX EVENT PROCESSING APPROACH FOR DRIVING BEHAVIOR DETECTION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2017. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=30648@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
PROGRAMA DE EXCELENCIA ACADEMICA
Dirigir é uma tarefa diária que permite uma locomoção mais rápida e mais confortável, no entanto, mais da metade dos acidentes fatais estão relacionados à imprudência. Manobras imprudentes podem ser detectadas com boa precisão, analisando dados relativos à interação motorista-veículo, por exemplo, curvas, aceleração e desaceleração abruptas. Embora existam algoritmos para detecção online de anomalias, estes normalmente são projetados para serem executados em computadores com grande poder computacional. Além disso, geralmente visam escala através da computação paralela, computação em grid ou computação em nuvem. Esta tese apresenta uma abordagem baseada em complex event processing para a detecção online de anomalias e classificação do comportamento de condução. Além disso, objetivamos identificar se dispositivos móveis com poder computacional limitado, como os smartphones, podem ser usados para uma detecção online do comportamento de condução. Para isso, modelamos e avaliamos três algoritmos de detecção online de anomalia no paradigma de processamento de fluxos de dados, que recebem os dados dos sensores do smartphone e dos sensores à bordo do veículo como entrada. As vantagens que o processamento de fluxos de dados proporciona reside no fato de que este reduz a quantidade de dados transmitidos do dispositivo móvel para servidores/nuvem, bem como se reduz o consumo de energia/bateria devido à transmissão de dados dos sensores e possibilidade de operação mesmo se o dispositivo móvel estiver desconectado. Para classificar os motoristas, um mecanismo estatístico utilizado na mineração de documentos que avalia a importância de uma palavra em uma coleção de documentos, denominada frequência de documento inversa, foi adaptado para identificar a importância de uma anomalia em um fluxo de dados, e avaliar quantitativamente o grau de prudência ou imprudência das manobras dos motoristas. 
Finalmente, uma avaliação da abordagem (usando o algoritmo que obteve melhor resultado na primeira etapa) foi realizada através de um estudo de caso do comportamento de condução de 25 motoristas em cenário real. Os resultados mostram uma acurácia de classificação de 84 por cento e um tempo médio de processamento de 100 milissegundos.
Driving is a daily task that allows individuals to travel faster and more comfortably; however, more than half of fatal crashes are related to reckless driving behaviors. Reckless maneuvers can be detected with accuracy by analyzing data related to driver-vehicle interactions, such as abrupt turns, acceleration, and deceleration. Although there are algorithms for online anomaly detection, they are usually designed to run on computers with high computational power. In addition, they typically target scale through parallel computing, grid computing, or cloud computing. This thesis presents an online anomaly detection approach based on complex event processing to enable driving behavior classification. In addition, we investigate whether mobile devices with limited computational power, such as smartphones, can be used for online detection of driving behavior. To do so, we first model and evaluate three online anomaly detection algorithms in the data stream processing paradigm, which receive data from the smartphone and the in-vehicle embedded sensors as input. The advantage that stream processing provides lies in the fact that it reduces the amount of data transmitted from the mobile device to servers/the cloud, reduces the energy/battery usage due to the transmission of sensor data, and allows operation even if the mobile device is disconnected. To classify the drivers, a statistical mechanism used in document mining that evaluates the importance of a word in a collection of documents, called inverse document frequency, has been adapted to identify the importance of an anomaly in a data stream, and then quantitatively evaluate how cautious or reckless drivers' maneuvers are. Finally, an evaluation of the approach (using the algorithm that achieved the best result in the first step) was carried out through a case study of the driving behavior of 25 drivers. The results show an accuracy of 84 percent and an average processing time of 100 milliseconds.
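The inverse-document-frequency adaptation described above can be sketched generically as follows (a simplified illustration under assumed data shapes; the thesis's exact scoring may differ): treating each driver's stream as a "document" and each anomaly type as a "word", rare anomaly types receive higher weight.

```python
import math

def anomaly_idf(streams):
    """Given a mapping driver -> list of anomaly types observed in that
    driver's stream, weight each anomaly type by how rare it is across
    drivers: idf(t) = log(N / df(t)), where df(t) is the number of
    drivers whose stream contains t and N is the number of drivers."""
    n = len(streams)
    df = {}
    for anomalies in streams.values():
        for t in set(anomalies):  # count each type once per driver
            df[t] = df.get(t, 0) + 1
    return {t: math.log(n / d) for t, d in df.items()}

def recklessness(streams):
    """Score each driver as the idf-weighted count of their anomalies:
    many rare anomalies imply a more reckless driving style."""
    idf = anomaly_idf(streams)
    return {driver: sum(idf[t] for t in anomalies)
            for driver, anomalies in streams.items()}
```

An anomaly type exhibited by every driver thus contributes nothing, while one exhibited by a single driver contributes the most to that driver's score.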
APA, Harvard, Vancouver, ISO, and other styles
30

JUNIOR, MARCOS PAULINO RORIZ. "DG2CEP: AN ON-LINE ALGORITHM FOR REAL-TIME DETECTION OF SPATIAL CLUSTERS FROM LARGE DATA STREAMS THROUGH COMPLEX EVENT PROCESSING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2017. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=30249@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
FUNDAÇÃO DE APOIO À PESQUISA DO ESTADO DO RIO DE JANEIRO
BOLSA NOTA 10
Clusters (ou concentrações) de objetos móveis, como veículos e seres humanos, é um padrão de mobilidade relevante para muitas aplicações. Uma detecção rápida deste padrão e de sua evolução, por exemplo, se o cluster está encolhendo ou crescendo, é útil em vários cenários, como detectar a formação de engarrafamentos ou detectar uma rápida dispersão de pessoas em um show de música. A detecção on-line deste padrão é uma tarefa desafiadora porque requer algoritmos que sejam capazes de processar de forma contínua e eficiente o alto volume de dados enviados pelos objetos móveis em tempo hábil. Atualmente, a maioria das abordagens para a detecção destes clusters operam em lote. As localizações dos objetos móveis são armazenadas durante um determinado período e depois processadas em lote por uma rotina externa, atrasando o resultado da detecção do cluster até o final do período ou do próximo lote. Além disso, essas abordagem utilizam extensivamente estruturas de dados e operadores espaciais, o que pode ser problemático em cenários de grande fluxos de dados. Com intuito de abordar estes problemas, propomos nesta tese o DG2CEP, um algoritmo que combina o conhecido algoritmo de aglomeração por densidade (DBSCAN) com o paradigma de processamento de fluxos de dados (Complex Event Processing) para a detecção contínua e rápida dos aglomerados. Nossos experimentos com dados reais indicam que o DG2CEP é capaz de detectar a formação e dispersão de clusters rapidamente, em menos de alguns segundos, para milhares de objetos móveis. Além disso, os resultados obtidos indicam que o DG2CEP possui maior similaridade com DBSCAN do que abordagens baseadas em lote.
Spatial concentrations (or spatial clusters) of moving objects, such as vehicles and humans, is a mobility pattern that is relevant to many applications. A fast detection of this pattern and its evolution, e.g., if the cluster is shrinking or growing, is useful in numerous scenarios, such as detecting the formation of traffic jams or detecting a fast dispersion of people in a music concert. An on-line detection of this pattern is a challenging task because it requires algorithms that are capable of continuously and efficiently processing the high volume of position updates in a timely manner. Currently, the majority of approaches for spatial cluster detection operate in batch mode, where moving objects location updates are recorded during time periods of certain length and then batch-processed by an external routine, thus delaying the result of the cluster detection until the end of the time period. Further, they extensively use spatial data structures and operators, which can be troublesome to maintain or parallelize in on-line scenarios. To address these issues, in this thesis we propose DG2CEP, an algorithm that combines the well-known density-based clustering algorithm DBSCAN with the data stream processing paradigm Complex Event Processing (CEP) to achieve continuous and timely detection of spatial clusters. Our experiments with real world data streams indicate that DG2CEP is able to detect the formation and dispersion of clusters with small latency while having a higher similarity to DBSCAN than batch-based approaches.
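The core idea of combining density-based clustering with CEP windows can be sketched as follows (an illustrative grid-density filter under assumed parameters, not the published DG2CEP algorithm): position updates inside a sliding time window are binned into grid cells, and cells whose count reaches a density threshold are flagged as potential cluster cores.

```python
from collections import defaultdict

def dense_cells(updates, cell_size, window, now, min_count):
    """Flag grid cells that are dense inside the current time window.
    `updates` is a list of (timestamp, x, y) position updates; a cell is
    dense if at least `min_count` updates fall into it within the last
    `window` time units before `now`."""
    counts = defaultdict(int)
    for ts, x, y in updates:
        if now - ts <= window:  # keep only updates inside the window
            cell = (int(x // cell_size), int(y // cell_size))
            counts[cell] += 1
    return {cell for cell, c in counts.items() if c >= min_count}
```

A DBSCAN-like step would then merge adjacent dense cells into clusters; re-evaluating on each window slide is what makes the detection continuous.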
APA, Harvard, Vancouver, ISO, and other styles
31

Pflügler, Christoph M. [Verfasser]. "Measuring Purchasing and Supply Management Efficiency : A Complex Event Processing Approach based on Total Cost of Ownership and Activity-based Costing / Christoph M. Pflügler." Aachen : Shaker, 2012. http://d-nb.info/1067735097/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Bougueng, Tchemeube Renaud. "Location-Aware Business Process Management for Real-time Monitoring of Patient Care Processes." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/24336.

Full text
Abstract:
Long wait times are a global issue in the healthcare sector, particularly in Canada. Despite numerous research findings on wait time management, the issue persists. This is partly because, for a given hospital, the data required to conduct wait time analysis is currently scattered across various information systems. Moreover, such data is usually inaccurate (because of possible human errors), imprecise and late. The whole situation contributes to the current state of wait times. This thesis proposes a location-aware business process management system for real-time care process monitoring. More precisely, the system enables improved visibility of process execution by gathering, as processes execute, accurate and granular process information, including wait time measurements. The major contributions of this thesis include an architecture for the system, a prototype taking advantage of a commercial real-time location system combined with a business process management system to accurately measure wait times, as well as a case study based on a real cardiology process from an Ontario hospital.
APA, Harvard, Vancouver, ISO, and other styles
33

Lillethun, David. "ssIoTa: A system software framework for the internet of things." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53531.

Full text
Abstract:
Sensors are widely deployed in our environment, and their number is increasing rapidly. In the near future, billions of devices will all be connected to each other, creating an Internet of Things. Furthermore, computational intelligence is needed to make applications involving these devices truly exciting. In IoT, however, the vast amounts of data will not be statically prepared for batch processing, but rather continually produced and streamed live to data consumers and intelligent algorithms. We refer to applications that perform live analysis on live data streams, bringing intelligence to IoT, as the Analysis of Things. However, the Analysis of Things also comes with a new set of challenges. The data sources are not collected in a single, centralized location, but rather distributed widely across the environment. AoT applications need to be able to access (consume, produce, and share with each other) this data in a way that is natural considering its live streaming nature. The data transport mechanism must also allow easy access to sensors, actuators, and analysis results. Furthermore, analysis applications require computational resources on which to run. We claim that system support for AoT can reduce the complexity of developing and executing such applications. 
To address this, we make the following contributions: - A framework for system support of Live Streaming Analysis in the Internet of Things, which we refer to as the Analysis of Things (AoT), including a set of requirements for system design - A system implementation that validates the framework by supporting Analysis of Things applications at a local scale, and a design for a federated system that supports AoT on a wide geographical scale - An empirical system evaluation that validates the system design and implementation, including simulation experiments across a wide-area distributed system. We present five broad requirements for the Analysis of Things and discuss one set of specific system support features that can satisfy these requirements. We have implemented a system, called ssIoTa, which implements these features and supports AoT applications running on local resources. The programming model for the system allows applications to be specified simply as operator graphs, by connecting operator inputs to operator outputs and sensor streams. Operators are code components that run arbitrary continuous analysis algorithms on streaming data. By conforming to a provided interface, operators may be developed that can be composed into operator graphs and executed by the system. The system consists of an Execution Environment, in which a Resource Manager manages the available computational resources and the applications running on them, a Stream Registry, in which available data streams can be registered so that they may be discovered and used by applications, and an Operator Store, which serves as a repository for operator code so that components can be shared and reused. Experimental results for the system implementation validate its performance. Many applications are also widely distributed across a geographic area.
To support such applications, ssIoTa must be able to run them on infrastructure resources that are also widely distributed. We have designed a system that does so by federating each of the three system components: Operator Store, Stream Registry, and Resource Manager. The Operator Store is distributed using a distributed hash table (DHT); however, since temporal locality can be expected and data churn is low, caching may be employed to further improve performance. Since sensors exist at particular locations in physical space, queries on the Stream Registry will be based on location. We also introduce the concept of geographical locality. Therefore, range queries in two dimensions must be supported by the federated Stream Registry, while taking advantage of geographical locality for improved average-case performance. To accomplish these goals, we present a design sketch for SkipCAN, a modification of the SkipNet and Content Addressable Network DHTs. Finally, the fundamental issue in the federated Resource Manager is how to distribute the operators of multiple applications across the geographically distributed sites where computational resources can execute them. To address this, we introduce DistAl, a fully distributed algorithm that assigns operators to sites. DistAl also respects the system resource constraints and application preferences for performance and quality of results (QoR), using application-specific utility functions to allow applications to express their preferences. DistAl is validated by simulation results.
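The operator-graph programming model this abstract describes (applications specified by connecting operator inputs to operator outputs and sensor streams) can be illustrated with a minimal sketch. All class names, operators, and values below are invented for the example; they are not the actual ssIoTa API.

```python
class Operator:
    """An operator runs a continuous function over items of a stream."""
    def __init__(self, fn):
        self.fn = fn
        self.downstream = []

    def connect(self, other):
        # Wire this operator's output to another operator's input.
        self.downstream.append(other)
        return other

    def push(self, item):
        result = self.fn(item)
        if result is not None:  # None means "filtered out"
            for op in self.downstream:
                op.push(result)

class Sink(Operator):
    """Collects final results so they can be inspected."""
    def __init__(self):
        super().__init__(lambda x: x)
        self.results = []

    def push(self, item):
        self.results.append(item)

# Example graph: Fahrenheit sensor stream -> Celsius conversion -> threshold filter -> sink
to_celsius = Operator(lambda f: (f - 32) * 5 / 9)
too_hot = Operator(lambda c: c if c > 30 else None)
sink = Sink()
to_celsius.connect(too_hot).connect(sink)

for reading in [68.0, 104.0, 95.0]:  # simulated sensor readings
    to_celsius.push(reading)
```

In this sketch the 68.0 °F reading is converted and then dropped by the filter, while the two hotter readings reach the sink, mirroring how composed operators transform a live stream item by item.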
APA, Harvard, Vancouver, ISO, and other styles
34

Braik, William. "Détection d'évènements complexes dans les flux d'évènements massifs." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0596/document.

Full text
Abstract:
La détection d’évènements complexes dans les flux d’évènements est un domaine qui a récemment fait surface dans le ecommerce. Notre partenaire industriel Cdiscount, parmi les sites ecommerce les plus importants en France, vise à identifier en temps réel des scénarios de navigation afin d’analyser le comportement des clients. Les objectifs principaux sont la performance et la mise à l’échelle : les scénarios de navigation doivent être détectés en moins de quelques secondes, alors que des millions de clients visitent le site chaque jour, générant ainsi un flux d’évènements massif. Dans cette thèse, nous présentons Auros, un système permettant l’identification efficace et à grande échelle de scénarios de navigation, conçu pour le eCommerce. Ce système s’appuie sur un langage dédié pour l’expression des scénarios à identifier. Les règles de détection définies sont ensuite compilées en automates déterministes, qui sont exécutés au sein d’une plateforme Big Data adaptée au traitement de flux. Notre évaluation montre qu’Auros répond aux exigences formulées par Cdiscount, en étant capable de traiter plus de 10,000 évènements par seconde, avec une latence de détection inférieure à une seconde.
Pattern detection over streams of events is gaining more and more attention, especially in the field of eCommerce. Our industrial partner Cdiscount, which is one of the largest eCommerce companies in France, aims to use pattern detection for real-time customer behavior analysis. The main challenges to consider are efficiency and scalability, as the detection of customer behaviors must be achieved within a few seconds, while millions of unique customers visit the website every day, thus producing a large event stream. In this thesis, we present Auros, a system for large-scale and efficient pattern detection for eCommerce. It relies on a domain-specific language to define behavior patterns. Patterns are then compiled into deterministic finite automata, which are run on a Big Data streaming platform. Our evaluation shows that our approach is efficient and scalable, and fits the requirements of Cdiscount.
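The compilation of behavior patterns into deterministic finite automata, as the abstract describes, can be sketched as follows. The pattern, the event names, and the skip-unrelated-events semantics are illustrative assumptions, not Auros's actual language or Cdiscount's rules.

```python
def compile_pattern(pattern):
    """Compile a sequence of event types into DFA transitions.
    State i means: the first i pattern events have been matched."""
    transitions = {}
    for state, event in enumerate(pattern):
        transitions[(state, event)] = state + 1
    return transitions, len(pattern)

def detect(pattern, stream):
    """Run the automaton over the stream; unrelated events leave the state unchanged."""
    transitions, accepting = compile_pattern(pattern)
    state = 0
    for event in stream:
        state = transitions.get((state, event), state)
        if state == accepting:
            return True
    return False

# Two hypothetical navigation sessions (click streams)
clicks = ["home", "search", "product_view", "add_to_cart", "checkout"]
abandoned = ["home", "product_view", "add_to_cart", "home"]
```

A purchase scenario such as `["product_view", "add_to_cart", "checkout"]` matches the first stream but not the second; once compiled, each incoming event costs a single dictionary lookup, which is what makes the automaton approach attractive at high event rates.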
APA, Harvard, Vancouver, ISO, and other styles
35

Epal, Njamen Orleant. "NETAH, un framework pour la composition distribuée de flux d'événements." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM065/document.

Full text
Abstract:
La réduction de la taille des équipements et l’avènement des communications sans fil ont fortement contribué à l’avènement d’une informatique durable. La plupart des applications informatiques sont aujourd’hui construites en tenant compte de cet environnement ambiant dynamique. Leur développement et leur exécution nécessitent des infrastructures logicielles autorisant des entités à s’exécuter, à interagir à travers divers modes (synchrone et asynchrone), à s’adapter à leur(s) environnement(s) notamment en termes : - de consommation de ressources (calcul, mémoire, support de stockage, bases de données, connexions réseaux, ...), - de multiplicité des sources de données (illustrée par le Web, les capteurs, compteurs intelligents, satellites, les bases de données existantes, ...) - des formats multiples des objets statiques ou en flux (images, son, vidéos). Notons que dans beaucoup de cas, les objets des flux doivent être homogénéisés, enrichis, croisés, filtrés et agrégés pour constituer in fine des produits informationnels riches en sémantique et stratégiques pour les applications ou utilisateurs. Les systèmes à base d'événements sont particulièrement bien adaptés à la programmation de ce type d’applications. Ils peuvent offrir des communications anonymes et asynchrones (émetteurs/serveurs et récepteurs/clients ne se connaissent pas) qui facilitent l'interopération et la collaboration entre des services autonomes et hétérogènes. Les systèmes d’événements doivent être capables d'observer, transporter, filtrer, agréger, corréler et analyser de nombreux flux d’événements produits de manière distribuée. Ces services d’observation doivent pouvoir être déployés sur des architectures distribuées telles que les réseaux de capteurs, les smart-grids et le cloud pour contribuer à l’observation des systèmes complexes et à leur contrôle autonome grâce à des processus réactifs de prise de décision.
L’objectif de la thèse est de proposer un modèle de composition distribuée de flux d’événements et de spécifier un service d’événements capable de réaliser efficacement l’agrégation, la corrélation temporelle et causale, et l’analyse de flux d’événements dans des plateformes distribuées à base de services. TRAVAIL A REALISER : (i) Etat de l’art - Systèmes de gestion de flux d’événements - Services et infrastructures d’événements distribués - Modèles d’événements (ii) Définition d’un scénario d’expérimentation et de comparaison des approches existantes. (iii) Définition d’un modèle de composition distribuée de flux d’événements à base de souscriptions. (iv) Spécification et implantation d’un service distribué de composition de flux d’événements.
The reduction in the size of equipment and the advent of wireless communications have greatly contributed to the advent of sustainable IT. Most computer applications today are built taking into account the dynamic ambient environment. Their development and execution need software infrastructures allowing entities to execute, to interact through a variety of modes (synchronous and asynchronous), and to adapt to their environment(s), particularly in terms of: - resource consumption (computation, memory, storage media, databases, network connections, ...) - the multiplicity of data sources (illustrated by the Web, sensors, smart meters, satellites, existing databases, ...) - multiple formats of static objects or streams (images, sounds, videos). Note that in many cases, stream objects have to be homogenized, enriched, filtered and aggregated to form information products rich in semantics and strategic for applications or end users. Event-based systems are particularly well suited to the programming of such applications. They can offer anonymous and asynchronous communications (transmitters/servers and receivers/clients do not know each other) that facilitate interoperation and cooperation between autonomous and heterogeneous services. Event systems should be able to observe, transport, filter, aggregate, correlate and analyze many event streams produced in a distributed way. These observation services must be able to be deployed on distributed architectures, such as sensor networks, smart grids and the cloud, to contribute to the observation of complex systems and their self-control via reactive decision-making processes. The aim of the thesis is to propose a model for distributed event-flow composition and to specify an event service that can effectively perform the aggregation, temporal and causal correlation, and analysis of event flows in distributed service-based platforms.
WORK TO BE PERFORMED: (i) State of the art - event-flow management systems - distributed event services - event models (ii) Definition of a scenario for experimentation and comparison of existing approaches. (iii) Definition of a model of distributed, subscription-based event-stream composition. (iv) Specification and implementation of a distributed event-flow composition service.
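The subscription-based composition of event flows outlined in the work plan can be sketched with a minimal publish/subscribe broker, where consumers subscribe to event types anonymously and a composed flow is derived by merging source streams. The broker API and the event types are hypothetical, not the NETAH framework's actual interface.

```python
class EventBroker:
    """Routes published events to anonymous subscribers by event type."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, event_type, handler):
        self.subscribers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        # Producers and consumers never reference each other directly.
        for handler in self.subscribers.get(event_type, []):
            handler(payload)

broker = EventBroker()
composed = []  # the composed flow, merging two source streams

broker.subscribe("temperature", lambda e: composed.append(("temp", e["room"], e["value"])))
broker.subscribe("humidity", lambda e: composed.append(("hum", e["room"], e["value"])))

broker.publish("temperature", {"room": "lab", "value": 22.5})
broker.publish("humidity", {"room": "lab", "value": 40})
broker.publish("pressure", {"room": "lab", "value": 1013})  # no subscriber: dropped
```

The anonymity property mentioned in the abstract shows up here directly: the temperature producer never knows which (or how many) consumers receive its events, so streams can be composed or re-routed without touching the producers.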
APA, Harvard, Vancouver, ISO, and other styles
36

Carteron, Adrien. "Une approche événementielle pour le développement de services multi-métiers dédiés à l’assistance domiciliaire." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0963/document.

Full text
Abstract:
La notion de contexte est fondamentale dans le champ de l’informatique ubiquitaire. En particulier lorsque des services assistent un utilisateur dans ses activités quotidiennes. Parce qu’elle implique plusieurs disciplines, une maison équipée d’informatique ubiquitaire dédiée au maintien à domicile de personnes âgées demande l’implication d’une variété d’intervenants, tant pour concevoir et développer des services d’assistance, que pour déployer et maintenir l’infrastructure sous-jacente. Cette grande diversité d’intervenants correspond à une diversité de contextes. Ces différents contextes sont généralement étudiés séparément, empêchant toute synergie. Cette thèse présente une méthodologie permettant d’unifier la conception et le développement de services sensibles au contexte et de répondre aux besoins de tout type d’intervenant. Dans un premier temps, nous traitons les besoins des intervenants concernant l’infrastructure de capteurs/actionneurs : installation, maintenance et exploitation. Le modèle d’infrastructure de capteurs et un ensemble de règles en résultant permettent de superviser en continu l’infrastructure et de détecter des dysfonctionnements. Cette supervision simplifie le processus de développement d’applications, en faisant abstraction des problèmes d’infrastructure. Dans un second temps, nous analysons un large éventail de services d’assistance domiciliaire dédié aux personnes âgées, en considérant la variété des besoins des intervenants. Grâce à cette analyse, nous généralisons l’approche de modèle d’infrastructure à tout type de services. Notre méthodologie permet de définir des services de façon unifiée, à travers un langage dédié, appelé Maloya, exprimant des règles manipulant les concepts d’état et d’évènement. Nous avons développé un compilateur de notre langage vers un langage événementiel dont l’exécution s’appuie sur un moteur de traitement d’évènements complexes (CEP). 
Nous avons validé notre approche en définissant un large éventail de services d’assistance à la personne, à partir de services existants, et concernant l’ensemble des intervenants du domaine. Nous avons compilé et exécuté les services Maloya sur un moteur de traitement d’évènements complexes. Les performances obtenues en termes de latence et d’occupation mémoire sont satisfaisantes pour le domaine et compatibles avec une exécution 24 heures sur 24 sur le long terme.
The notion of context is fundamental to the field of pervasive computing, in particular when services are dedicated to assisting a user in his daily activities. Being at the crossroads of various fields, a context-aware home dedicated to aging in place involves a variety of stakeholders to design and develop assistive services, as well as to deploy and maintain the underlying infrastructure. This considerable diversity of stakeholders raises correspondingly diverse context dimensions: each service relies on specific contexts (e.g., sensor status for a maintenance service, fridge usage for a meal activity recognition service). Typically, these contexts are considered separately, preventing any synergy. This dissertation presents a methodology for unifying the design and development of various domestic context-aware services, which addresses the requirements of all the stakeholders. In a first step, we handle the needs of stakeholders concerned by the sensor infrastructure: installers, maintainers and operators. We define an infrastructure model of a home and a set of rules to continuously monitor the sensor infrastructure and raise failures when appropriate. This continuous monitoring simplifies application development by abstracting it from infrastructure concerns. In a second step, we analyze a range of services for aging in place, considering the whole diversity of stakeholders. Based on this analysis, we generalize the approach developed for the infrastructure to all assistive services. Our methodology makes it possible to define services in a unified way, in the form of rules processing events and states. To express such rules, we define a domain-specific design language, named Maloya. We developed a compiler from our language that targets an event processing language, which is executed on a complex event processing (CEP) engine.
To validate our approach, we define a wide range of assistive services with our language, which reimplement existing deployed services belonging to all of the stakeholders. These Maloya services were deployed and successfully tested for their effectiveness in performing the specific tasks of the stakeholders. Latency and memory consumption performance turned out to be fully compatible with a 24/7 execution in the long run
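The rule style the abstract describes, services defined as rules manipulating events and states and executed by an event-processing engine, can be illustrated with a small sketch. The rule below (a simplified fridge-inactivity alert in the spirit of meal-activity monitoring) and every name in it are invented for the example; this is not actual Maloya syntax.

```python
class HomeState:
    """State maintained across events for one home."""
    def __init__(self):
        self.last_fridge_open = None

def fridge_inactivity_rule(state, event, alert_after=12 * 3600):
    """Fire an alert when the fridge has not been opened for `alert_after` seconds."""
    if event["type"] == "fridge_opened":
        state.last_fridge_open = event["time"]  # update state
        return None
    if event["type"] == "clock_tick" and state.last_fridge_open is not None:
        if event["time"] - state.last_fridge_open > alert_after:
            return "meal_activity_alert"
    return None

state = HomeState()
events = [
    {"type": "fridge_opened", "time": 0},
    {"type": "clock_tick", "time": 6 * 3600},   # 6 h of inactivity: no alert
    {"type": "clock_tick", "time": 13 * 3600},  # 13 h of inactivity: alert
]
alerts = [a for a in (fridge_inactivity_rule(state, e) for e in events) if a]
```

The separation between the event stream (fridge openings, clock ticks) and the persistent state (last opening time) is the point of the sketch: the same two concepts, events and states, are what the abstract's rules manipulate.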
APA, Harvard, Vancouver, ISO, and other styles
37

Oztarak, Hakan. "An Energy-efficient And Reactive Remote Surveillance Framework Using Wireless Multimedia Sensor Networks." Phd thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614328/index.pdf.

Full text
Abstract:
With the introduction of Wireless Multimedia Sensor Networks, large-scale remote outdoor surveillance applications where the majority of the cameras will be battery-operated are envisioned. These are applications where the frequency of incidents is too low to employ permanent staffing, such as monitoring of land and marine borders, critical infrastructures, bridges, water supplies, etc. Given the inexpensive costs of wireless resource-constrained camera sensors, the size of these networks will be significantly larger than that of traditional multi-camera systems. While a large number of cameras may increase the coverage of the network, such a large size along with resource constraints poses new challenges, e.g., localization, classification, tracking or reactive behavior. This dissertation proposes a framework that transforms current multi-camera networks into low-cost and reactive systems which can be used in large-scale remote surveillance applications. Specifically, a remote surveillance system framework with three components is proposed: 1) localization and tracking of objects; 2) classification and identification of objects; and 3) reactive behavior at the base station. For each component, novel lightweight, storage-efficient and real-time algorithms, at both the computation and communication level, are designed, implemented and tested under a variety of conditions. The results have indicated the feasibility of this framework working with limited energy while achieving high object localization/classification accuracies. The results of this research will facilitate the design and development of very large-scale remote border surveillance systems and improve such systems' effectiveness in dealing with intrusions, with reduced human involvement and labor costs.
APA, Harvard, Vancouver, ISO, and other styles
38

Angsuchotmetee, Chinnapong. "Un framework de traitement semantic d'événement dans les réseaux des capteurs multimedias." Thesis, Pau, 2017. http://www.theses.fr/2017PAUU3034/document.

Full text
Abstract:
Les progrès de la technologie des capteurs, des communications sans fil et de l'Internet des Objets ont favorisé le développement des réseaux de capteurs multimédias. Ces derniers sont composés de capteurs interconnectés capables de fournir de façon omniprésente un suivi fin d’un espace connecté. Grâce à leurs propriétés, les réseaux de capteurs multimédias ont suscité un intérêt croissant ces dernières années des secteurs académiques et industriels et ont été adoptés dans de nombreux domaines d'application (tels que la maison intelligente, le bureau intelligent, ou la ville intelligente). L'un des avantages de l'adoption des réseaux de capteurs multimédias est le fait que les données collectées (vidéos, audios, images, etc.) à partir de capteurs connexes contiennent des informations sémantiques riches (en comparaison avec des capteurs uniquement scalaires) qui permettent de détecter des événements complexes et de mieux gérer les exigences du domaine d'application. Toutefois, la modélisation et la détection des événements dans les réseaux de capteurs multimédias restent une tâche difficile à réaliser, car la traduction de toutes les données multimédias collectées en événements n'est pas simple. Dans cette thèse, un framework complet pour le traitement des événements complexes dans les réseaux de capteurs multimédias est proposé, pour éviter les algorithmes codés en dur et pour permettre une meilleure adaptation aux évolutions des besoins d'un domaine d'application. Le framework est appelé CEMiD et se compose de :
• MSSN-Onto : une ontologie nouvellement proposée pour la modélisation des réseaux de capteurs,
• CEMiD-Language : un langage original pour la modélisation des réseaux de capteurs multimédias et des événements à détecter, et
• GST-CEMiD : un moteur de traitement d'événements complexes basé sur un pipeline sémantique.
Le framework CEMiD aide les utilisateurs à modéliser leur propre infrastructure de réseau de capteurs et les événements à détecter via le langage CEMiD.
Le moteur de détection du framework prend en entrée le modèle fourni par les utilisateurs pour initier un pipeline de détection d'événements afin d'extraire les caractéristiques des données multimédias, d'en traduire les informations sémantiques et de les interpréter automatiquement en événements. Notre framework est validé par des prototypes et des simulations. Les résultats montrent que notre framework peut détecter correctement les événements multimédias complexes dans un scénario de charge de travail élevée (avec une latence de détection moyenne inférieure à une seconde).
The dramatic advancement of low-cost hardware technology, wireless communications, and digital electronics has fostered the development of multifunctional (wireless) Multimedia Sensor Networks (MSNs). The latter are composed of interconnected devices able to ubiquitously sense multimedia content (video, image, audio, etc.) from the environment. Thanks to their interesting features, MSNs have gained increasing attention in recent years from both academic and industrial sectors and have been adopted in a wide range of application domains (such as smart home, smart office, and smart city, to mention a few). One of the advantages of adopting MSNs is the fact that data gathered from related sensors contains rich semantic information (in comparison with using solely scalar sensors), which makes it possible to detect complex events and to cope better with application domain requirements. However, modeling and detecting events in MSNs remain a difficult task to carry out, because translating all gathered multimedia data into events is not straightforward. In this thesis, a full-fledged framework for processing complex events in MSNs is proposed to avoid hard-coded algorithms. The framework is called the Complex Event Modeling and Detection (CEMiD) framework. Its core components are:
• MSSN-Onto: a newly proposed ontology for modeling MSNs,
• CEMiD-Language: an original language for modeling multimedia sensor networks and events to be detected, and
• GST-CEMiD: a semantic pipelining-based complex event processing engine.
The CEMiD framework helps users model their own sensor network infrastructure and the events to be detected through the CEMiD language. The detection engine takes the model provided by users to initiate an event detection pipeline that extracts multimedia data features, translates them into semantic information, and interprets them into events automatically. Our framework is validated by means of prototyping and simulations.
The results show that our framework can properly detect complex multimedia events in a high-workload scenario (with an average detection latency of less than one second).
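The detection pipeline this abstract describes (extracting multimedia features, translating them into semantic information, interpreting them into events) can be sketched in miniature. Every stage, feature, threshold, and the event model below are assumptions for illustration only, not the actual GST-CEMiD engine.

```python
def extract_features(frame):
    """Stand-in for a low-level feature extractor over one video frame."""
    return {"motion": frame["pixels_changed"] > 1000, "faces": frame["faces"]}

def to_semantics(features):
    """Translate low-level features into semantic observations."""
    observations = []
    if features["motion"]:
        observations.append("movement_detected")
    if features["faces"] >= 1:
        observations.append("person_present")
    return observations

def detect_event(observations, event_model):
    """An event fires when every observation required by the model is present."""
    return all(required in observations for required in event_model)

frame = {"pixels_changed": 5400, "faces": 2}  # simulated camera frame
pipeline_output = to_semantics(extract_features(frame))
intrusion = detect_event(pipeline_output, event_model={"movement_detected", "person_present"})
```

The key idea carried over from the abstract is the staging: raw multimedia data is never matched against events directly; it passes through a feature layer and a semantic layer first, so the event model can be changed without touching the extraction code.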
APA, Harvard, Vancouver, ISO, and other styles
39

Idris, Muhammad. "Real-time Business Intelligence through Compact and Efficient Query Processing Under Updates." Doctoral thesis, Universite Libre de Bruxelles, 2019. https://dipot.ulb.ac.be/dspace/bitstream/2013/284705/5/contratMI.pdf.

Full text
Abstract:
Responsive analytics are rapidly taking over the traditional data analytics dominated by the post-fact approaches in traditional data warehousing. Recent advancements in analytics demand placing analytical engines at the forefront of the system to react to updates occurring at high speed and detect patterns, trends, and anomalies. These kinds of solutions find applications in Financial Systems, Industrial Control Systems, Business Intelligence, and online Machine Learning, among others. These applications are usually associated with Big Data and require the ability to react to constantly changing data in order to obtain timely insights and take proactive measures. Generally, these systems specify the analytical results or their basic elements in a query language, where the main task then is to maintain query results efficiently under frequent updates. The task of reacting to updates and analyzing changing data has been addressed in two ways in the literature: traditional business intelligence (BI) solutions focus on historical data analysis, where the data is refreshed periodically and in batches, while stream processing solutions process streams of data from transient sources as flows of data items. Both kinds of systems share the niche of reacting to updates (known as dynamic evaluation); however, they differ in architecture, query languages, and processing mechanisms. In this thesis, we investigate the possibility of a reactive and unified framework to model queries that appear in both kinds of systems. In traditional BI solutions, evaluating queries under updates has been studied under the umbrella of incremental evaluation of queries, which is based on the relational incremental view maintenance model and mostly focuses on queries that feature equi-joins. Streaming systems, in contrast, generally follow automaton-based models to evaluate queries under updates, and they generally process queries that mostly feature comparisons of temporal attributes (e.g.
timestamp attributes) along with comparisons of non-temporal attributes over streams of bounded sizes. Temporal comparisons constitute inequality constraints, while non-temporal comparisons can be either equality or inequality constraints. Hence these systems mostly process inequality joins. As a starting point for our research, we postulate the thesis that queries in streaming systems can also be evaluated efficiently based on the paradigm of incremental evaluation, just like in BI systems, in a main-memory model. The efficiency of such a model is measured in terms of runtime memory footprint and update processing cost. The existing approaches to dynamic evaluation in both kinds of systems present a trade-off between memory footprint and update processing cost. More specifically, systems that avoid materialization of query (sub)results incur high update latency, and systems that materialize (sub)results incur a high memory footprint. We are interested in investigating the possibility of building a model that can address this trade-off. In particular, we overcome this trade-off by investigating the possibility of a practical dynamic evaluation algorithm for queries that appear in both kinds of systems, and we present a main-memory data representation that allows enumerating query (sub)results without materialization and can be maintained efficiently under updates. We call this representation the Dynamic Constant Delay Linear Representation (DCLR). We devise DCLRs with the following properties: 1) they allow, without materialization, enumeration of query results with bounded delay (and with constant delay for a sub-class of queries), 2) they allow tuple lookup in query results with logarithmic delay (and with constant delay for conjunctive queries with equi-joins only), 3) they take space linear in the size of the database, 4) they can be maintained efficiently under updates.
We first study DCLRs with the above-described properties for the class of acyclic conjunctive queries featuring equi-joins with projections, and we present the dynamic evaluation algorithm called the Dynamic Yannakakis (DYN) algorithm. Then, we present the generalization of the DYN algorithm to the class of acyclic queries featuring multi-way Theta-joins with projections, called Generalized DYN (GDYN). We devise DCLRs with the above properties for acyclic conjunctive queries, and the working of DYN and GDYN over DCLRs is based on a particular variant of join trees, called Generalized Join Trees (GJTs), that guarantees the above-described properties of DCLRs. We define GJTs and present algorithms to test a conjunctive query featuring Theta-joins for acyclicity and to generate GJTs for such queries. We extend the classical GYO algorithm from testing a conjunctive query with equalities for acyclicity to testing a conjunctive query featuring multi-way Theta-joins with projections for acyclicity. We further extend the GYO algorithm to generate GJTs for queries that are acyclic. GDYN is hence a unified framework based on DCLRs that enables processing of queries that appear in streaming systems as well as in BI systems in a unified main-memory model and addresses the space-time trade-off. We instantiate GDYN for the particular case where all Theta-joins involve only equalities and inequalities and call this instantiation IEDYN. We implement DYN and IEDYN as query compilers that generate executable programs in the Scala programming language and provide all the necessary data structures, along with their maintenance and enumeration methods, in a continuous stream processing model. We evaluate DYN and IEDYN against state-of-the-art BI and streaming systems on both industrial and synthetically generated benchmarks. We show that DYN and IEDYN outperform the existing systems by over an order of magnitude in both memory footprint and update processing time.
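The general paradigm of incremental (delta) evaluation under updates, which the abstract builds on, can be illustrated with a naive materializing equi-join: each insert joins only the new tuple against existing matches. Note that the thesis's contribution is precisely to avoid the materialized `result` list this sketch maintains (via DCLRs over generalized join trees); the relation names and tuples below are invented.

```python
from collections import defaultdict

class IncrementalJoin:
    """Maintain R(a, b) JOIN S(b, c) on attribute b under single-tuple inserts."""
    def __init__(self):
        self.r_by_b = defaultdict(list)
        self.s_by_b = defaultdict(list)
        self.result = []  # materialized join result (what DCLRs avoid storing)

    def insert_r(self, a, b):
        self.r_by_b[b].append(a)
        # delta step: join only the new R-tuple with matching S-tuples
        self.result.extend((a, b, c) for c in self.s_by_b[b])

    def insert_s(self, b, c):
        self.s_by_b[b].append(c)
        self.result.extend((a, b, c) for a in self.r_by_b[b])

j = IncrementalJoin()
j.insert_r("r1", 1)
j.insert_s(1, "s1")  # joins with r1
j.insert_r("r2", 1)  # joins with s1
j.insert_s(2, "s2")  # no matching R-tuple yet
```

The update cost here is proportional to the delta, not to the full relations, which captures the "react to each update" idea; the space-time trade-off the abstract discusses arises because `result` can grow much larger than the base data.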
Doctorat en Sciences de l'ingénieur et technologie
info:eu-repo/semantics/nonPublished
APA, Harvard, Vancouver, ISO, and other styles
40

Garnier, Alexandre. "Langage dédié au traitement des événements complexes et modélisation des usages pour les réseaux de capteurs." Thesis, Nantes, Ecole des Mines, 2016. http://www.theses.fr/2016EMNA0287/document.

Full text
Abstract:
On assiste ces dernières années à une explosion des usages dans l’Internet des objets. La démocratisation de ce monde de capteurs est le fruit, d’une part de la baisse drastique des coûts dans l’informatique embarquée, d’autre part d’un support logiciel toujours plus mature. Que ce soit au niveau des protocoles et des réseaux (CoAP, IPv6, etc) ou de la standardisation des supports de développement, notamment sur microprocesseurs ATMEL, les outils à disposition permettent chaque jour une plus grande homogénéisation dans la communication entre des capteurs toujours plus variés. Cette diversification rassemble chaque jour des utilisateurs aux attentes et aux domaines de compétence différents, avec chacun leur propre compréhension des objets connectés. La complexification des réseaux de capteurs, confrontée à cette nécessité d’adresser des usages fondamentalement différents, pose problème. Sur la base d’un même réseau de capteurs hétéroclite, il est crucial de pouvoir répondre aux besoins de chacun des utilisateurs, sans réclamer d’eux une maîtrise du réseau de capteurs dépassant exagérément leur domaine de compétence. L’outil décrit dans ce document se propose d’adresser cette problématique au travers d’un moteur de requête dédié au traitement des données issues des capteurs. Pour ce faire, il repose sur une modélisation des capteurs au sein de différents contextes, chacun à même de répondre à un besoin utilisateur précis. Sur la base de ce modèle est mis à disposition un langage dédié pour le traitement des événements complexes issus des données mesurées par les capteurs. L’implémentation de cet outil permet en outre d’interagir avec d’éventuelles fonctionnalités d’actuation du réseau de capteurs.
Usages of the internet of things have experienced exponential growth these last few years. This is the result of, on one hand, the significantly lower costs of embedded computing systems, and on the other hand, the maturing of the software layers. From protocols and networks (CoAP, IPv6, etc) to the standardization of ATMEL microcontrollers, the tools at hand allow better communication between more and more various sensors. This diversification gathers, every day, users with different needs, expectations and fields of expertise, each with their own approach and understanding of connected things. The main issue concerns the complexity of sensor networks, with regard to the necessity of addressing deeply different usages. Based on a single heterogeneous sensor network, it is critical to be able to meet the needs of each user, without requiring them to master the network beyond their own field of expertise. The tool described in this document aims at addressing this issue via a query engine dedicated to the processing of data collected from the sensors. To this end, it relies on a modelling of the sensors within several contexts, each of them reflecting a specific usage. On this basis a domain-specific language is provided, allowing complex event processing over the data monitored by the sensors. Furthermore, the implementation of this tool allows interaction with optional actuation functionalities of the sensor network.
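The context-based modelling this abstract describes, where the same sensor network is exposed through different contexts, each reflecting one usage and one field of expertise, can be sketched as follows. The sensor model, the two contexts, and the query form are illustrative assumptions, not the actual language of the thesis.

```python
# A shared, heterogeneous sensor network (hypothetical sensors)
sensors = [
    {"id": "t1", "kind": "temperature", "room": "kitchen", "value": 24.0},
    {"id": "t2", "kind": "temperature", "room": "garage", "value": 12.0},
    {"id": "d1", "kind": "door", "room": "garage", "value": "open"},
]

# Each context exposes only the sensors relevant to one usage
contexts = {
    "comfort": lambda s: s["kind"] == "temperature" and s["room"] != "garage",
    "security": lambda s: s["kind"] == "door",
}

def query(context, predicate):
    """Evaluate a predicate over the sensors visible in a given context."""
    return [s["id"] for s in sensors if contexts[context](s) and predicate(s)]

warm = query("comfort", lambda s: s["value"] > 20)
open_doors = query("security", lambda s: s["value"] == "open")
```

A comfort-oriented user querying warm rooms never sees the garage door sensor, and a security operator never sees temperature readings: the context, not the user, carries the knowledge of which part of the network is relevant.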
APA, Harvard, Vancouver, ISO, and other styles
41

Glaab, Markus. "A distributed service delivery platform for automotive environments : enhancing communication capabilities of an M2M service platform for automotive application." Thesis, University of Plymouth, 2018. http://hdl.handle.net/10026.1/11249.

Full text
Abstract:
The automotive domain is changing. On the way to more convenient, safe, and efficient vehicles, the role of electronic controllers and particularly software has increased significantly for many years, and vehicles have become software-intensive systems. Furthermore, vehicles are connected to the Internet to enable Advanced Driver Assistance Systems and enhanced In-Vehicle Infotainment functionalities. This widens the automotive software and system landscape beyond the physical vehicle boundaries, which now also include external backend servers in the cloud. Moreover, the connectivity facilitates new kinds of distributed functionalities, making the vehicle a part of an Intelligent Transportation System (ITS) and thus an important example of a future Internet of Things (IoT). Manufacturers, however, are confronted with the challenging task of integrating this ever-increasing range of functionalities with heterogeneous or even contradictory requirements into a homogeneous overall system. This requires new software platforms and architectural approaches. In this regard, the connectivity to fixed-side backend systems not only introduces additional challenges, but also enables new approaches for addressing them. The vehicle-to-backend approaches currently emerging are dominated by proprietary solutions, in clear contradiction to the requirements of ITS scenarios, which call for interoperability across the broad range of vehicles and manufacturers. Therefore, this research aims at the development and propagation of a new concept of a universal distributed Automotive Service Delivery Platform (ASDP), as an enabler for future automotive functionalities, not limited to ITS applications. Since Machine-to-Machine (M2M) communication is considered a primary building block for the IoT, emergent standards such as the oneM2M service platform are selected as the initial architectural hypothesis for the realisation of an ASDP.
Accordingly, this project describes a oneM2M-based ASDP as a reference configuration of the oneM2M service platform for automotive environments. The research shows the general applicability of the oneM2M service platform for the proposed ASDP. However, it also identifies shortcomings of the current oneM2M platform with respect to the capabilities needed for efficient communication and data exchange policies. It is pointed out that, for example, distributed traffic efficiency or vehicle maintenance functionalities are not treated efficiently by the standard. This may also have negative privacy impacts. Following this analysis, this research proposes novel enhancements to the oneM2M service platform, such as application-data-dependent criteria for data exchange and policy aggregation. The feasibility and advancements of the newly proposed approach are evaluated by means of a proof-of-concept implementation and experiments with selected automotive scenarios. The results show the benefits of the proposed enhancements for a oneM2M-based ASDP, without neglecting to indicate their advantages for other domains of the oneM2M landscape where they could be applied as well.
APA, Harvard, Vancouver, ISO, and other styles
42

Khalfallah, Malik. "A Formal Framework for Process Interoperability in Dynamic Collaboration Environments." Thesis, Lyon 1, 2014. http://www.theses.fr/2014LYO10272/document.

Full text
Abstract:
Concevoir les produits complexes tels que les avions, les hélicoptères, et les lanceurs requière l'utilisation de processus standardisés ayant des fondements robustes. Ces processus doivent être exécutés dans le contexte d'environnements collaboratifs interorganisationnels souvent dynamiques. Dans ce manuscrit, nous présentons un cadre formel qui assure une interopérabilité continue dans le temps pour les processus inter-organisationnels dans les environnements dynamiques. Nous proposons un langage de modélisation déclaratif pour définir des contrats qui capturent les objectifs de chaque partenaire intervenant dans la collaboration. Les modèles de contrats construits avec ce langage sous-spécifient les objectifs de la collaboration en limitant les détails capturés durant la phase de construction du contrat. Cette sous-spécification réduit le couplage entre les partenaires de la collaboration. Néanmoins, moins de couplage implique l'apparition de certaines inadéquations quand les processus des partenaires vont s'échanger des messages lors de la phase d'exécution. Par conséquent, nous développons un algorithme de médiation automatique qui est bien adapté pour les environnements dynamiques. Nous conduisons des évaluations de performance sur cet algorithme qui vont démontrer son efficience par rapport aux approches de médiation existantes. Ensuite, nous étendons notre cadre avec un ensemble d'opérations d'administration qui permettent la réalisation de modifications sur l'environnement collaboratif. Nous développons un algorithme qui évalue l'impact des modifications sur les partenaires. Cet algorithme va ensuite décider si la modification doit être réalisée à l'instant ou bien retardée en attendant que des conditions appropriées sur la configuration de l'environnement dynamique soient satisfaites. Pour savoir comment atteindre ces conditions, nous utilisons l'algorithme de planning à base de graphe. 
Cet algorithme détermine l'ensemble des opérations qui doivent être exécutées pour atteindre ces conditions
Designing complex products such as aircraft, helicopters and launchers must rely on well-founded and standardized processes. These processes should be executed in the context of dynamic cross-organizational collaboration environments. In this dissertation, we present a formal framework that ensures sustainable interoperability for cross-organizational processes in dynamic environments. We propose a declarative modeling language to define contracts that capture the objectives of each partner in the collaboration. Contract models built using this language under-specify the objectives of the collaboration by limiting the details captured at design time. This under-specification decreases the coupling between partners in the collaboration. Nevertheless, less coupling leads to mismatches when partners' processes exchange messages at run time. Accordingly, we develop an automatic mediation algorithm that is well adapted to dynamic environments. We conduct a thorough evaluation of this algorithm in the context of dynamic environments and compare it with existing mediation approaches, demonstrating its efficiency. We then extend our framework with a set of management operations that help realize modifications of the collaboration environment at run time. We develop an algorithm that assesses the impact of modifications on the partners in the collaboration environment. This algorithm then decides whether the modification can be realized immediately or should be postponed until appropriate conditions are met. In order to figure out how to reach these conditions, we use the planning graph algorithm, which determines the raw set of management operations that should be executed to realize them. A raw set of management operations cannot be executed by an engine unless its operations are encapsulated in the right workflow patterns.
Accordingly, we extend this planning algorithm to generate an executable workflow from the raw set of operations. We evaluate our extension against existing approaches with respect to the number and nature of workflow patterns considered when generating the executable workflow. Finally, we believe that monitoring contributes to decreasing the coupling between partners in a collaboration environment
APA, Harvard, Vancouver, ISO, and other styles
43

DINIZ, Herbertt Barros Mangueira. "Linguagem específica de domínio para abstração de solução de processamento de eventos complexos." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18030.

Full text
Abstract:
Cada vez mais se evidencia uma maior escassez de recursos e uma disputa por espaços físicos, em decorrência da crescente e demasiada concentração populacional nas grandes cidades. Nesse âmbito, surge a necessidade de soluções que vão de encontro à iniciativa de “Cidades Inteligentes" (Smart Cities). Essas soluções buscam centralizar o monitoramento e controle, para auxiliar no apoio à tomada de decisão. No entanto, essas fontes de TICs formam estruturas complexas e geram um grande volume de dados, que apresentam enormes desafios e oportunidades. Uma das principais ferramentas tecnológicas utilizadas nesse contexto é o Complex Event Processing (CEP), o qual pode ser considerado uma boa solução, para lidar com o aumento da disponibilidade de grandes volumes de dados, em tempo real. CEPs realizam captação de eventos de maneira simplificada, utilizando linguagem de expressão, para definir e executar regras de processamento. No entanto, apesar da eficiência comprovada dessas ferramentas, o fato das regras serem expressas em baixo nível, torna o seu uso exclusivo para usuários especialistas, dificultando a criação de soluções. Com intuito de diminuir a complexidade das ferramentas de CEP, em algumas soluções, tem-se utilizado uma abordagem de modelos Model-Driven Development (MDD), a fim de se produzir uma camada de abstração, que possibilite criar regras, sem que necessariamente seja um usuário especialista em linguagem de CEP. No entanto, muitas dessas soluções acabam tornando-se mais complexas no seu manuseio do que o uso convencional da linguagem de baixo nível. Este trabalho tem por objetivo a construção de uma Graphic User Interface (GUI) para criação de regras de CEP, utilizando MDD, a fim de tornar o desenvolvimento mais intuitivo, através de um modelo adaptado as necessidades do usuário não especialista.
Nowadays, increasing resource scarcity and competition for physical space are evident, as a result of the growing population concentration in large cities. In this context, the need arises for solutions in line with the Smart Cities initiative. These solutions seek to centralize monitoring and control in order to support decision making. However, these ICT sources form complex structures and generate a huge volume of data, which presents major challenges and opportunities. One of the main technological tools used in this context is Complex Event Processing (CEP), which can be considered a good solution for dealing with the increased availability of large volumes of data in real time. CEP engines capture events in a simplified way, using expression languages to define and execute processing rules. Despite the proven efficiency of these tools, the fact that the rules are expressed at a low level restricts their use to specialist users and makes it harder to build solutions. With the aim of reducing the complexity of CEP tools, some solutions have adopted a Model-Driven Development (MDD) approach in order to produce an abstraction layer that makes it possible to create rules without being a specialist in CEP languages. However, many of these solutions end up being harder to handle than the conventional low-level language approach. This work aims to build a Graphical User Interface (GUI) for creating CEP rules using MDD, in order to make development more intuitive, through a model adapted to the needs of non-specialist users.
APA, Harvard, Vancouver, ISO, and other styles
44

Moreira, Helder. "Sensor data integration and management of smart environments." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/17884.

Full text
Abstract:
Master's degree in Computer and Telematics Engineering
Num mundo de constante desenvolvimento tecnológico e acelerado crescimento populacional, observa-se um aumento da utilização de recursos energéticos. Sendo os edifícios responsáveis por uma grande parte deste consumo energético, desencadeiam-se vários esforços de investigações de forma a criarem-se edifícios energeticamente eficientes e espaços inteligentes. Esta dissertação visa, numa primeira fase, apresentar uma revisão das atuais soluções que combinam sistemas de automação de edifícios e a Internet das Coisas. Posteriormente, é apresentada uma solução de automação para edifícios, com base em princípios da Internet das Coisas e explorando as vantagens de sistemas de processamento complexo de eventos, de forma a fornecer uma maior integração dos múltiplos sistemas existentes num edifício. Esta solução é depois validada através de uma implementação, baseada em protocolos leves desenhados para a Internet das Coisas, plataformas de alto desempenho, e métodos complexos para análise de grandes fluxos de dados. Esta implementação é ainda aplicada num cenário real, e será usada como a solução padrão para gestão e automação num edifício existente.
In a world of constant technological development and accelerated population growth, increased use of energy resources is being observed. With buildings responsible for a large share of this energy consumption, many research activities pursue the goal of creating energy-efficient buildings and smart spaces. This dissertation aims, in a first stage, to present a review of the current solutions combining Building Automation Systems (BAS) and the Internet of Things (IoT). Then, a solution for building automation is presented, based on IoT principles and exploiting the advantages of Complex Event Processing (CEP) systems to provide higher integration of the multiple building subsystems. This solution was validated through an implementation based on standard lightweight protocols designed for the IoT, high-performance real-time platforms, and complex methods for the analysis of large streams of data. The implementation is also applied to a real-world scenario, and will be used as a standard solution for the management and automation of an existing building.
APA, Harvard, Vancouver, ISO, and other styles
45

Zachau, S. (Swantje). "Signs in the brain: Hearing signers’ cross-linguistic semantic integration strategies." Doctoral thesis, Oulun yliopisto, 2016. http://urn.fi/urn:isbn:9789526213293.

Full text
Abstract:
Abstract Audio-oral speech and the visuo-manual sign language used by the Deaf community are two very different realizations of the human linguistic communication system. Sign language is used not only by the hearing impaired but also by different groups of hearing individuals. To date, there is a great discrepancy in scientific knowledge about signed and spoken languages. Particularly little is known about the integration of the two systems, even though the vast majority of deaf and hearing signers also have a command of some form of speech. This neurolinguistic study aimed to establish basic knowledge about semantic integration mechanisms across speech and sign language in hearing native and non-native signers. Basic principles of sign processing as reflected in electrocortical brain activation and behavioral decisions were examined in three groups of study participants: hearing native signers (children of deaf adults, CODAs), hearing late-learned signers (professional sign language interpreters), and hearing non-signing controls. Event-related brain potentials (ERPs) and behavioral response frequencies were recorded while the participants performed a semantic decision task for priming lexeme pairs. The lexeme pairs were presented either within speech (spoken prime-spoken target) or across speech and sign language (spoken prime-signed target). Target-related ERP responses were subjected to temporal principal component analyses (tPCA). The neurocognitive basis of semantic integration processes was assessed by analyzing different ERP components (N170, N400, late positive complex) in response to the antonymic and unrelated targets. Behavioral decision sensitivity to the target lexemes is discussed in relation to the measured brain activity. Behaviorally, all three groups of study participants performed above chance level when making semantic decisions about the primed targets. Different result patterns, however, hinted at three different processing strategies.
As the target-locked electrophysiological data was analyzed by PCA for the first time in the context of sign language processing, objectively allocated ERP components of interest could be explored. Somewhat surprisingly, the overall results from the sign-naïve control group showed that they performed in a more content-guided way than expected. This suggests that even non-experts in the field of sign language are equipped with basic skills to process the cross-linguistically primed signs. Together, the behavioral and electrophysiological results further revealed qualitative differences in processing between the native and late-learned signers, which raises the question: can a unitary model of sign processing do justice to different groups of sign language users?
Tiivistelmä Kuuloaistiin ja ääntöelimistön motoriikkaan perustuva puhe ja kuurojen yhteisön käyttämä, näköaistiin ja käsien liikkeisiin perustuva viittomakieli ovat kaksi varsin erilaista ihmisen kielellisen viestintäjärjestelmän toteutumismuotoa. Viittomakieltä käyttävät kuulovammaisten ohella myös monet kuulevat ihmisryhmät. Tähänastinen tutkimustiedon määrä viittomakielistä ja puhutuista kielistä eroaa huomattavasti. Erityisen vähän on tiedetty näiden kahden järjestelmän yhdistämisestä, vaikka valtaosa kuuroista ja kuulevista viittomakielen käyttäjistä hallitsee myös puheen jossain muodossa. Tämän neurolingvistisen tutkimuksen tarkoituksena oli hankkia perustietoja puheen ja viittomakielen välisistä semanttisista yhdistämismekanismeista kuulevilla, viittomakieltä äidinkielenään tai muuna kielenä käyttävillä henkilöillä. Viittomien prosessoinnin perusperiaatteita, jotka ilmenevät aivojen sähköisen toiminnan muutoksina ja valintapäätöksinä, tutkittiin kolmessa koehenkilöryhmässä: kuulevilla viittomakieltä äidinkielenään käyttävillä henkilöillä (kuurojen aikuisten kuulevilla ns. CODA-lapsilla, engl. children of deaf adults), kuulevilla viittomakielen myöhemmin oppineilla henkilöillä (viittomakielen ammattitulkeilla) sekä kuulevilla viittomakieltä osaamattomilla verrokkihenkilöillä. Tapahtumasidonnaiset herätepotentiaalit (ERP:t) ja käyttäytymisvasteen frekvenssit rekisteröitiin koehenkilöiden tehdessä semanttisia valintoja viritetyistä (engl. primed) lekseemipareista. Lekseemiparit esitettiin joko puheena (puhuttu viritesana – puhuttu kohdesana) tai puheen ja viittomakielen välillä (puhuttu viritesana – viitottu kohdesana). Kohdesidonnaisille ERP-vasteille tehtiin temporaaliset pääkomponenttianalyysit (tPCA). Semanttisten yhdistämisprosessien neurokognitiivista perustaa arvioitiin analysoimalla erilaisia ERP-komponentteja (N170, N400, myöhäinen positiivinen kompleksi) vastineina antonyymisiin ja toisiinsa liittymättömiin kohteisiin. 
Käyttäytymispäätöksen herkkyyttä kohdelekseemeille tarkastellaan suhteessa mitattuun aivojen aktiviteettiin. Käyttäytymisen osalta kaikki kolme koehenkilöryhmää suoriutuivat satunnaistasoa paremmin tehdessään semanttisia valintoja viritetyistä kohdelekseemeistä. Erilaiset tulosmallit viittaavat kuitenkin kolmeen erilaiseen prosessointistrategiaan. Kun kohdelukittua elektrofysiologista dataa analysoitiin pääkomponenttianalyysin avulla ensimmäistä kertaa viittomakielen prosessoinnin yhteydessä, voitiin tutkia tarkkaavaisuuden objektiivisesti allokoituja ERP-komponentteja. Oli jossain määrin yllättävää, että viittomakielellisesti natiivin verrokkiryhmän tulokset osoittivat sen jäsenten toimivan odotettua sisältölähtöisemmin. Tämä viittaa siihen, että viittomakieleen perehtymättömilläkin henkilöillä on perustaidot lingvistisesti ristiin viritettyjen viittomien prosessointiin. Yhdessä käyttäytymisperäiset ja elektrofysiologiset tutkimustulokset toivat esiin laadullisia eroja prosessoinnissa viittomakieltä äidinkielenään puhuvien henkilöiden ja kielen myöhemmin oppineiden henkilöiden välillä. Tämä puolestaan johtaa kysymykseen, voiko yksi viittomien prosessointimalli soveltua erilaisille viittomakielen käyttäjäryhmille?
APA, Harvard, Vancouver, ISO, and other styles
46

Mousheimish, Raef. "Combinaison de l’Internet des objets, du traitement d’évènements complexes et de la classification de séries temporelles pour une gestion proactive de processus métier." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLV073/document.

Full text
Abstract:
L’internet des objets est au coeur des processus industriels intelligents grâce à la capacité de détection d’évènements à partir de données de capteurs. Cependant, beaucoup reste à faire pour tirer le meilleur parti de cette technologie récente et la faire passer à l’échelle. Cette thèse vise à combler le gap entre les flux massifs de données collectées par les capteurs et leur exploitation effective dans la gestion des processus métier. Elle propose une approche globale qui combine le traitement de flux de données, l’apprentissage supervisé et/ou l’utilisation de règles sur des évènements complexes permettant de prédire (et donc éviter) des évènements indésirables, et enfin la gestion des processus métier étendue par ces règles complexes. Les contributions scientifiques de cette thèse se situent dans différents domaines : les processus métiers plus intelligents et dynamiques ; le traitement d’évènements complexes automatisé par l’apprentissage de règles ; et enfin et surtout, dans le domaine de la fouille de données de séries temporelles multivariées par la prédiction précoce de risques. L’application cible de cette thèse est le transport instrumenté d’oeuvres d’art
The Internet of Things is at the core of smart industrial processes thanks to its capacity for event detection from data conveyed by sensors. However, much remains to be done to make the most out of this recent technology and make it scale. This thesis aims at filling the gap between the massive data flow collected by sensors and its effective exploitation in business process management. It proposes a global approach, which combines stream data processing, supervised learning and/or the use of complex event processing rules that allow predicting (and thereby avoiding) undesirable events, and finally business process management extended with these complex rules. The scientific contributions of this thesis lie in several topics: making business processes more intelligent and more dynamic; automation of complex event processing by learning the rules; and last but not least, data mining for multivariate time series by early prediction of risks. The target application of this thesis is the instrumented transportation of artworks
APA, Harvard, Vancouver, ISO, and other styles
47

Baouab, Aymen. "Gouvernance et supervision décentralisée des chorégraphies inter-organisationnelles." Phd thesis, Université de Lorraine, 2013. http://tel.archives-ouvertes.fr/tel-00843420.

Full text
Abstract:
Over the last decade, service-oriented architectures (SOA) on the one hand and business process management (BPM) on the other have evolved considerably and now seem to be converging toward a common goal: allowing completely heterogeneous organizations to share their resources flexibly in order to achieve common objectives, through advanced collaboration schemes. These schemes make it possible to specify the interconnection of the business processes of different organizations. The dynamic nature and complexity of these processes pose major challenges to their correct execution. Choreography description languages certainly help reduce this complexity by providing means to describe complex systems at an abstract level. However, nothing guarantees that erroneous situations will not occur as a result of, for example, badly specified interactions or dishonest behavior by one of the partners. In this dissertation, we propose a decentralized approach that enables the monitoring of choreographies at run time and the instantaneous detection of violations of interaction sequences. We define a hierarchical propagation model for the exchange of external notifications between the partners. Our approach enables an optimized generation of monitoring queries in an event-driven environment, automatically and from any choreography model.
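The run-time detection of interaction-sequence violations described in this abstract can be illustrated with a minimal sketch. The choreography specification, partner names, and message labels below are invented for illustration and are not taken from the thesis:

```python
class ChoreographyMonitor:
    """Minimal run-time monitor: checks observed interactions against
    an expected sequence given as (sender, receiver, message) steps."""

    def __init__(self, expected_sequence):
        self.expected = expected_sequence
        self.position = 0          # next step we expect to see
        self.violations = []       # out-of-order or unknown interactions

    def observe(self, interaction):
        if (self.position < len(self.expected)
                and interaction == self.expected[self.position]):
            self.position += 1
        else:
            self.violations.append(interaction)

# hypothetical choreography: order -> ship request -> delivery
spec = [("buyer", "seller", "order"),
        ("seller", "shipper", "ship_request"),
        ("shipper", "buyer", "delivery")]

monitor = ChoreographyMonitor(spec)
monitor.observe(("buyer", "seller", "order"))
monitor.observe(("shipper", "buyer", "delivery"))      # arrives too early: violation
monitor.observe(("seller", "shipper", "ship_request"))
```

In a decentralized setting as proposed in the thesis, each partner would run such a monitor locally and propagate notifications of detected violations up a hierarchy, rather than reporting to a central observer.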
APA, Harvard, Vancouver, ISO, and other styles
48

Carvalho, Danilo Codeco. "Obtenção de padrões sequenciais em data streams atendendo requisitos do Big Data." Universidade Federal de São Carlos, 2016. https://repositorio.ufscar.br/handle/ufscar/8280.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
The growing amount of data produced daily, by both businesses and individuals on the web, has increased the demand for analysis and knowledge extraction from this data. While over the last two decades the solution was to store the data and run data mining algorithms over it, this has now become unviable even for supercomputers. In addition, the requirements of the Big Data age go far beyond the large amount of data to analyze: response time requirements and data complexity carry more weight in many real-world areas. New models have been researched and developed, often proposing distributed computing or different ways to handle data stream mining. Current research shows that one alternative in data stream mining is to combine a real-time event handling mechanism with a classic algorithm for mining association rules or sequential patterns. This work presents a data stream mining approach that meets the Big Data response time requirement by combining the real-time event handling mechanism Esper with the Incremental Miner of Stretchy Time Sequences (IncMSTS) algorithm. The results show that it is possible to bring a static data mining algorithm to the data stream environment and preserve the pattern tendencies, even though it is not possible to continuously read all the data arriving in the stream.
O crescimento da quantidade de dados produzidos diariamente, tanto por empresas como por indivíduos na web, aumentou a exigência para a análise e extração de conhecimento sobre esses dados. Enquanto nas duas últimas décadas a solução era armazenar e executar algoritmos de mineração de dados, atualmente isso se tornou inviável mesmo em super computadores. Além disso, os requisitos da chamada era do Big Data vão muito além da grande quantidade de dados a se analisar. Requisitos de tempo de resposta e complexidade dos dados adquirem maior peso em muitos domínios no mundo real. Novos modelos têm sido pesquisados e desenvolvidos, muitas vezes propondo computação distribuída ou diferentes formas de se tratar a mineração de fluxo de dados. Pesquisas atuais mostram que uma alternativa na mineração de fluxo de dados é unir um mecanismo de tratamento de eventos em tempo real com algoritmos clássicos de mineração de regras de associação ou padrões sequenciais. Neste trabalho é mostrada uma abordagem de mineração de fluxo de dados (data stream) para atender ao requisito de tempo de resposta do Big Data, que une o mecanismo de manipulação de eventos em tempo real Esper e o algoritmo Incremental Miner of Stretchy Time Sequences (IncMSTS). Os resultados mostram ser possível levar um algoritmo de mineração de dados estático para o ambiente de fluxo de dados e manter as tendências de padrões encontrados, mesmo não sendo possível ler todos os dados vindos continuamente no fluxo de dados.
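The idea of joining a real-time event handling mechanism with incremental sequential-pattern mining can be sketched as follows. This is an illustrative toy, not the actual IncMSTS algorithm (and Esper itself is a Java EPL engine): it incrementally counts adjacent length-2 patterns over a sliding window of the stream, updating counts as items enter and leave instead of re-mining the stored data:

```python
from collections import Counter, deque

class IncrementalSequenceMiner:
    """Toy incremental miner: maintains support counts of length-2
    sequential patterns over a sliding window of the event stream."""

    def __init__(self, window_size, min_support):
        self.window = deque(maxlen=window_size)
        self.min_support = min_support
        self.counts = Counter()

    def push(self, item):
        if len(self.window) == self.window.maxlen:
            # the oldest item is about to be evicted: retire its pattern
            self.counts[(self.window[0], self.window[1])] -= 1
        if self.window:
            # new adjacent pattern formed by the incoming item
            self.counts[(self.window[-1], item)] += 1
        self.window.append(item)

    def frequent(self):
        return {p for p, c in self.counts.items() if c >= self.min_support}

miner = IncrementalSequenceMiner(window_size=4, min_support=2)
for item in ["a", "b", "a", "b"]:
    miner.push(item)
print(miner.frequent())   # -> {('a', 'b')}
```

In the approach the abstract describes, the windowing and event delivery would be handled by Esper's EPL engine, with the miner consuming the resulting event stream; here both roles are collapsed into one class for brevity.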
APA, Harvard, Vancouver, ISO, and other styles
49

Luthra, Manisha. "Network-centric Complex Event Processing." Phd thesis, 2021. https://tuprints.ulb.tu-darmstadt.de/19285/1/2021-09-01_Luthra_Manisha.pdf.

Full text
Abstract:
Complex Event Processing (CEP) is a widely used paradigm to detect and react to events of interest for various applications. Numerous companies, including Twitter and Google, build on CEP in a broad spectrum of applications to perform real-time data analytics. Many of these applications need to adapt efficiently to dynamic environmental conditions and changes in quality requirements. An essential building block for performing adaptations in CEP is the operator, which encapsulates the event detection logic in the form of a query, often coupled with an execution state. Despite significant contributions to concepts for operator specification, placement, and execution, multiple research gaps remain concerning adaptivity, efficiency, and interoperability in CEP. This thesis therefore identifies these fundamental research gaps and contributes appropriate methods and their analysis to overcome them: (i) the lack of adaptivity between CEP mechanisms hinders meeting the changing quality requirements of applications; (ii) the absence of suitable network-centric abstractions hinders efficient event processing; (iii) the absence of suitable programming abstractions hinders reuse of CEP mechanisms across multiple programming models. To close the first gap, we contribute a novel programming model, named TCEP, that enables transitions between so-called operator placement mechanisms. The programming model provides methods for the research questions of "when" and "how" to perform a transition while ensuring crucial properties of transitions such as seamlessness. In particular, we propose transition strategies that minimize the costs of operator migrations and ensure seamlessness in performing adaptations. A learning-based selection algorithm guarantees a well-suited operator placement mechanism for given quality requirements.
By integrating and evaluating six operator placement mechanisms, we showed that the programming model allows the use of distinct mechanisms for adaptations, and it provides a better understanding of their cost and performance characteristics. Our extensive evaluation study using a real-world workload and implementation shows that TCEP can adapt to the dynamic quality requirements of applications in a quick, seamless, and low-cost manner. To close the second gap, we propose a novel unified communication model named INETCEP. The proposed concepts of INETCEP contribute to the research question of "how" to enable efficient continuous event stream processing and network-centric CEP query execution. We build INETCEP using the concepts of Information-centric Networking, which has been proven to facilitate in-network programmability. As part of the unified communication model, we propose an expressive meta query language and query execution algorithms for CEP that efficiently place operators over Information-centric Networks. Our detailed evaluation study of INETCEP shows that event forwarding can be achieved in a very short time of a few microseconds. Similarly, using our network-centric abstractions, CEP queries can be resolved at very high incoming event rates in a few milliseconds while incurring no event loss. Finally, we propose a novel unified CEP middleware, named CEPLESS, based on the serverless computing principles to close the third gap. The middleware provides concepts for the research question of "how" to specify CEP queries independent of their programming and execution environment. Specifically, the middleware contributes a programming abstraction that hides away the complexity of heterogeneous CEP programming models from the application developers. Moreover, we propose mechanisms for an efficient exchange of events using so-called in-memory queues and allow event processing across different execution models. 
By extending the CEPLESS middleware's programming abstraction with five different programming languages, we demonstrate the extensibility as well as the platform and language independence of the concept. Our evaluation using a real-world workload and implementation shows that event processing with the CEPLESS middleware performs on par with native CEP systems. Overall, this thesis contributes (i) a novel programming model and methods for transitions in CEP systems to support changing quality requirements, (ii) a novel unified communication model and efficient algorithms that accelerate query execution using the concepts of Information-centric Networking, and (iii) a novel serverless middleware with programming abstractions to achieve efficient execution and reuse of multiple, heterogeneous CEP execution environments.
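The serverless idea of decoupling a user-defined operator from the engine via in-memory queues can be sketched as follows. The class and interface below are hypothetical, intended only to illustrate the decoupling, and are not the CEPLESS API:

```python
import queue
import threading


class UserDefinedOperator:
    """Serverless-style CEP operator sketch: the user supplies only a
    processing function; the middleware wires it to in-memory queues,
    hiding the underlying engine's programming model (names and
    interfaces here are illustrative)."""

    def __init__(self, fn):
        self.fn = fn                 # user-defined event processing logic
        self.inbox = queue.Queue()   # events arriving from the engine
        self.outbox = queue.Queue()  # derived events returned to the engine

    def start(self):
        """Run the operator loop in a background thread."""
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            event = self.inbox.get()
            if event is None:        # sentinel value shuts the operator down
                break
            self.outbox.put(self.fn(event))
```

Because the engine only ever touches the two queues, the same user function could in principle be hosted by different CEP systems or written in a different language behind the same queue interface.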
APA, Harvard, Vancouver, ISO, and other styles
50

Wu, Chia-Chih, and 吳佳芷. "Complex Event Processing in IoT Middleware." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/41429849781646249957.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Networking and Multimedia
104 (ROC academic year, 2015/2016)
In recent years, the Internet of Things has been attracting enormous attention not only from the research community but also from industry. However, software development for the Internet of Things faces many problems; in particular, the lack of complex event processing makes it hard to analyze data derived from transducers. To solve this problem, we propose a complex event processing system that can read data from sensors, transform it into atomic events, feed the events into a complex event processing engine, and output the predefined complex events. We developed the complex event processing engine: users can define complex event patterns, transducer data collected from the gateway is transformed into events, and finally a tree-based complex event processing engine detects the complex events. Lastly, we present scenario verifications and performance evaluations.
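The core detection step, matching a predefined pattern over the stream of atomic events, can be illustrated with a minimal sequence matcher. This sketch uses an invented event representation and is far simpler than the thesis' tree-based engine:

```python
def match_sequence(pattern, events):
    """Detect the complex event SEQ(pattern...) in a stream of atomic
    events, each a (type, payload) tuple. Returns the indices of the
    first occurrence of the pattern's event types in order, or None
    if the pattern never completes (illustrative sketch only)."""
    idx, out = 0, []
    for i, (etype, _payload) in enumerate(events):
        if etype == pattern[idx]:
            out.append(i)
            idx += 1
            if idx == len(pattern):
                return out  # pattern fully matched
    return None
```

A tree-based engine generalizes this by composing such binary/sequence operators into a tree, so that partial matches are shared across overlapping patterns instead of being recomputed per query.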
APA, Harvard, Vancouver, ISO, and other styles
