Academic literature on the topic 'Dataflow patterns'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Dataflow patterns.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Dataflow patterns"

1

Meddah, Ishak H. A., Khaled Belkadi, and Mohamed Amine Boudia. "Efficient Implementation of Hadoop MapReduce based Business Process Dataflow." International Journal of Decision Support System Technology 9, no. 1 (2017): 49–60. http://dx.doi.org/10.4018/ijdsst.2017010104.

Full text
Abstract:
Hadoop MapReduce is one solution for processing large and big data: with it, the authors can analyze and process data by distributing the computation across a large set of machines. Process mining provides an important bridge between data mining and business process analysis; its techniques allow data information to be mined from event logs. First, the work mines small patterns from log traces; these patterns are the workflow of the execution traces of a business process. The authors' work improves on existing techniques, which mine only one general workflow representing the general traces of two web applications. Using existing techniques, the patterns are represented by finite state automata, and the final model is the combination of only two types of patterns, which are represented by regular expressions. Second, the authors compute these patterns in parallel and then combine them using MapReduce. There are two parts: in the map step, they mine patterns from execution traces; in the reduce step, they combine these small patterns. The results are promising: they show that the approach is scalable, general, and precise, and that it reduces execution time through the use of the Hadoop MapReduce framework.
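The map/reduce split the abstract describes (mining small patterns per trace, then merging them into one model) can be sketched in plain Python with a toy log format. This is only an illustration of the general approach; the trace data and the choice of adjacent task pairs as "small patterns" are invented stand-ins, not the authors' actual mining technique.

```python
from collections import Counter
from functools import reduce

# Toy event log: each trace is a sequence of task identifiers.
traces = [
    ["login", "browse", "buy", "logout"],
    ["login", "browse", "browse", "logout"],
    ["login", "buy", "logout"],
]

def map_step(trace):
    """Mine 'small patterns' from one execution trace: here, adjacent
    task pairs stand in for the mined finite-state patterns."""
    return Counter(zip(trace, trace[1:]))

def reduce_step(left, right):
    """Combine per-trace patterns into one model by summing counts."""
    return left + right

model = reduce(reduce_step, map(map_step, traces))
print(model[("login", "browse")])  # pair observed in 2 traces
```

Running the map step over traces in parallel and folding the results is exactly the shape that lets Hadoop distribute the work across machines.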
APA, Harvard, Vancouver, ISO, and other styles
2

Rostami, Mohammad, Somayyeh Ehteshami, Fatemeh Yaghoobi, Farid Saghari, and Samaneh Dezhdar. "Proposing a Algorithm for Finding Repetitive Patterns in Web Dataflow." International Journal of Software Engineering and Its Applications 9, no. 7 (2015): 181–92. http://dx.doi.org/10.14257/ijseia.2015.9.7.19.

Full text
3

Sane, Nimish, Hojin Kee, Gunasekaran Seetharaman, and Shuvra S. Bhattacharyya. "Topological Patterns for Scalable Representation and Analysis of Dataflow Graphs." Journal of Signal Processing Systems 65, no. 2 (2011): 229–44. http://dx.doi.org/10.1007/s11265-011-0610-1.

Full text
4

Wang, Lai-Huei, Chung-Ching Shen, Shenpei Wu, and Shuvra S. Bhattacharyya. "Parameterized Scheduling of Topological Patterns in Signal Processing Dataflow Graphs." Journal of Signal Processing Systems 71, no. 3 (2012): 275–86. http://dx.doi.org/10.1007/s11265-012-0719-x.

Full text
5

Tsoeunyane, Lekhobola, Simon Winberg, and Michael Inggs. "Automatic Configurable Hardware Code Generation for Software-Defined Radios." Computers 7, no. 4 (2018): 53. http://dx.doi.org/10.3390/computers7040053.

Full text
Abstract:
The development of software-defined radio (SDR) systems using field-programmable gate arrays (FPGAs) compels designers to reuse pre-existing Intellectual Property (IP) cores in order to meet time-to-market and design efficiency requirements. However, the low-level development difficulties associated with FPGAs hinder productivity, even when the designer is experienced with hardware design. These low-level difficulties include non-standard interfacing methods, component communication and synchronization challenges, complicated timing constraints and processing blocks that need to be customized through time-consuming design tweaks. In this paper, we present a methodology for automated and behavioral integration of dedicated IP cores for rapid prototyping of SDR applications. To maintain high performance of the SDR designs, our methodology integrates IP cores using characteristics of the dataflow model of computation (MoC), namely the static dataflow with access patterns (SDF-AP). We show how the dataflow is mapped onto the low-level model of hardware by efficiently applying low-level based optimizations and using a formal analysis technique that guarantees the correctness of the generated solutions. Furthermore, we demonstrate the capability of our automated hardware design approach by developing eight SDR applications in VHDL. The results show that well-optimized designs are generated and that this can improve productivity while also conserving the hardware resources used.
6

Hentrich, David, Erdal Oruklu, and Jafar Saniie. "Program diagramming and fundamental programming patterns for a polymorphic computing dataflow processor." Journal of Computer Languages 65 (August 2021): 101052. http://dx.doi.org/10.1016/j.cola.2021.101052.

Full text
7

Bispo, J., J. Cardoso, and J. Monteiro. "Hardware Pipelining of Repetitive Patterns in Processor Instruction Traces." Journal of Integrated Circuits and Systems 8, no. 1 (2013): 22–31. http://dx.doi.org/10.29292/jics.v8i1.373.

Full text
Abstract:
Dynamic partitioning is a promising technique where computations are transparently moved from a General Purpose Processor (GPP) to a coprocessor during application execution. To be effective, the mapping of computations to the coprocessor needs to consider aggressive optimizations. One of the mapping optimizations is loop pipelining, a technique extensively studied and known to allow substantial performance improvements. This paper describes a technique for pipelining Megablocks, a type of runtime loop developed for dynamic partitioning. The technique transforms the body of Megablocks into an acyclic dataflow graph which can be fully pipelined and is based on the atomic execution of loop iterations. For a set of 9 benchmarks without memory operations, we generated pipelined hardware versions of the loops and estimate that the presented loop pipelining technique increases the average speedup of non-pipelined coprocessor accelerated designs from 1.6× to 2.2×. For a larger set of 61 benchmarks which include memory operations, we estimate through simulation a speedup increase from 2.5× to 5.6× with this technique.
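The gain from loop pipelining can be illustrated with the standard timing model: a loop of N atomic iterations with per-iteration latency L takes N·L cycles, whereas a pipelined version with initiation interval II takes roughly L + (N−1)·II cycles. The concrete numbers below are hypothetical, chosen only to show the arithmetic, and are unrelated to the paper's benchmarks.

```python
def loop_cycles(n_iters, latency, ii=None):
    """Cycle count for a loop: sequential (atomic iterations) when
    ii is None, otherwise pipelined with initiation interval ii."""
    if ii is None:
        return n_iters * latency           # iterations run back-to-back
    return latency + (n_iters - 1) * ii    # iterations overlap in the pipeline

# Hypothetical loop: 100 iterations, 8-cycle body, initiation interval 2.
sequential = loop_cycles(100, 8)           # 100 * 8 = 800 cycles
pipelined = loop_cycles(100, 8, ii=2)      # 8 + 99 * 2 = 206 cycles
print(f"speedup: {sequential / pipelined:.2f}x")
```

The smaller the initiation interval relative to the body latency, the closer the pipeline comes to one iteration per II cycles, which is where the reported speedup increases come from.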
8

Wang, G. Q., R. Allen, H. A. Andrade, and A. Sangiovanni-Vincentelli. "Communication storage optimization for static dataflow with access patterns under periodic scheduling and throughput constraint." Computers & Electrical Engineering 40, no. 6 (2014): 1858–73. http://dx.doi.org/10.1016/j.compeleceng.2014.05.002.

Full text
9

Alam, A. K. M. Mubashwir, Sagar Sharma, and Keke Chen. "SGX-MR: Regulating Dataflows for Protecting Access Patterns of Data-Intensive SGX Applications." Proceedings on Privacy Enhancing Technologies 2021, no. 1 (2021): 5–20. http://dx.doi.org/10.2478/popets-2021-0002.

Full text
Abstract:
Intel SGX has been a popular trusted execution environment (TEE) for protecting the integrity and confidentiality of applications running on untrusted platforms such as the cloud. However, the access patterns of SGX-based programs can still be observed by adversaries, which may leak important information for successful attacks. Researchers have been experimenting with Oblivious RAM (ORAM) to address the privacy of access patterns. ORAM is a powerful low-level primitive that provides application-agnostic protection for any I/O operations, but at a high cost. We find that some application-specific access patterns, such as sequential block I/O, do not provide additional information to adversaries, while others, such as sorting, can be replaced with specific oblivious algorithms that are more efficient than ORAM. The challenge is that developers may need to look into all the details of application-specific access patterns to design suitable solutions, which is time-consuming and error-prone. In this paper, we present the lightweight SGX-based MapReduce (SGX-MR) approach, which regulates the dataflow of data-intensive SGX applications for easier application-level access-pattern analysis and protection. It uses the MapReduce framework to cover a large class of data-intensive applications, and the entire framework can be implemented with a small memory footprint. With this framework, we have examined the stages of data processing, identified the access patterns that need protection, and designed corresponding efficient protection methods. Our experiments show that SGX-MR based applications are much more efficient than the ORAM-based implementations.
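The abstract's point about replacing ORAM with algorithm-specific oblivious operations can be illustrated with odd-even transposition sort, a classic data-oblivious algorithm: the sequence of compare-and-swap positions depends only on the input length, never on the values, so an adversary observing memory accesses learns nothing about the data. This is a generic textbook example, not SGX-MR's actual sorting routine.

```python
def oblivious_sort(a):
    """Odd-even transposition sort. The indices touched in each phase
    are fixed by len(a), so the access pattern is data-independent."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        # Alternate between even-indexed and odd-indexed pairs.
        for i in range(phase % 2, n - 1, 2):
            # Unconditional compare-and-swap: both cells are always written.
            lo, hi = min(a[i], a[i + 1]), max(a[i], a[i + 1])
            a[i], a[i + 1] = lo, hi
    return a

print(oblivious_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```

Because every run over inputs of the same length touches the same cells in the same order, no ORAM layer is needed to hide this particular access pattern.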
10

Miller, Julian, Lukas Trümper, Christian Terboven, and Matthias S. Müller. "A Theoretical Model for Global Optimization of Parallel Algorithms." Mathematics 9, no. 14 (2021): 1685. http://dx.doi.org/10.3390/math9141685.

Full text
Abstract:
With the quickly evolving hardware landscape of high-performance computing (HPC) and its increasing specialization, the implementation of efficient software applications becomes more challenging. This is especially prevalent for domain scientists and may hinder the advances in large-scale simulation software. One idea to overcome these challenges is through software abstraction. We present a parallel algorithm model that allows for global optimization of their synchronization and dataflow and optimal mapping to complex and heterogeneous architectures. The presented model strictly separates the structure of an algorithm from its executed functions. It utilizes a hierarchical decomposition of parallel design patterns as well-established building blocks for algorithmic structures and captures them in an abstract pattern tree (APT). A data-centric flow graph is constructed based on the APT, which acts as an intermediate representation for rich and automated structural transformations. We demonstrate the applicability of this model to three representative algorithms and show runtime speedups between 1.83 and 2.45 on a typical heterogeneous CPU/GPU architecture.
More sources

Dissertations / Theses on the topic "Dataflow patterns"

1

Borges, Grace Anne Pontes. "Fluxo de dados em redes de Petri coloridas e em grafos orientados a atores." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-30102008-193432/.

Full text
Abstract:
Three decades ago, business information systems were designed to support the execution of individual tasks. Today's information systems also need to support an organization's workflows and business processes. In scientific communities composed of physicists, astronomers, biologists, geologists, and others, information systems have characteristics different from those found in business environments, such as repetitive procedures (e.g., re-execution of the same experiment), the transformation of raw data into publishable results, and the coordination of experiments across several different software and hardware environments. These differing characteristics of business and scientific environments have led existing tools and formalisms to emphasize either control flow or dataflow. However, there are situations in which data transfer and control flow must be handled simultaneously. This work aims to characterize and define the representation and control of dataflow in business processes and scientific workflows. To this end, two tools are compared: CPN Tools and KEPLER, based on the formalisms of colored Petri nets and actor-oriented workflow graphs, respectively. The comparison is carried out through implementations of practical cases, using the dataflow patterns as the basis of comparison between the tools.
2

Arumí, Albó Pau. "Real-time multimedia on off-the-shelf operating systems: from timeliness dataflow models to pattern languages." Doctoral thesis, Universitat Pompeu Fabra, 2009. http://hdl.handle.net/10803/7558.

Full text
Abstract:
Software-based multimedia systems that deal with real-time audio, video, and graphics processing are pervasive today, not only in desktop workstations but also in ultra-light devices such as smartphones. The fact that most of the processing is done in software, using the high-level hardware abstractions and services offered by the underlying operating systems and library stacks, enables quick application development. In addition to this flexibility and immediacy (compared to hardware-oriented platforms), such platforms also offer soft real-time capabilities with appropriate latency bounds. Nevertheless, experts in the multimedia domain face a serious challenge: the features and complexity of their applications are growing rapidly, while real-time requirements (such as low latency) and reliability standards increase. This thesis focuses on providing multimedia domain experts with a workbench of tools they can use to model and prototype multimedia processing systems. Such tools contain platforms and constructs that reflect the requirements of the domain and application, not accidental properties of the implementation (such as thread synchronization and buffer management). In this context, we address two distinct but related problems: the lack of models of computation that can deal with continuous multimedia stream processing in real time, and the lack of appropriate abstractions and systematic development methods that support such models. Many actor-oriented models of computation exist, and they offer better abstractions than prevailing software engineering techniques (such as object orientation) for building real-time multimedia systems.
The family of Process Networks and Dataflow models based on networks of connected signal-processing actors is the best suited for continuous stream processing. Such models allow designs to be expressed close to the problem domain (instead of focusing on implementation details such as thread synchronization) and enable better modularization and hierarchical composition. This is possible because the model does not over-specify how the actors must run, but only imposes data dependencies in a declarative-language fashion. These models support multi-rate processing and hence complex periodic schedules of actor executions. The problem is that the models do not incorporate the concept of time in a useful way and, consequently, the periodic schedules do not guarantee real-time, low-latency behavior. This dissertation overcomes this shortcoming by formally describing a new model we named Time-Triggered Synchronous Dataflow (TTSDF), whose periodic schedules can be interleaved by several time-triggered "activations" so that the inputs and outputs of the processing graph are regularly serviced. The TTSDF model has the same expressiveness (or equivalent computability) as the Synchronous Dataflow (SDF) model, with the advantage that it guarantees real-time operation with minimum latency and the absence of gaps and jitter in the output. Additionally, it enables run-time load balancing between callback activations and the parallelization of actors.
Actor-oriented models are not off-the-shelf solutions, and they do not suffice for building multimedia systems with a systematic engineering methodology. We also address this problem by proposing a catalog of domain-specific design patterns organized in a pattern language. This pattern language provides design reuse, paying special attention to the context in which a design solution is applicable, the competing forces it needs to balance, and the implications of its application. The proposed patterns focus on how to organize different kinds of actor connections, transfer tokens between actors, enable human interaction with the dataflow engine, and, finally, rapidly prototype user interfaces on top of the dataflow engine, creating complete and extensible applications. As a case study, we present an object-oriented framework (CLAM), and specific applications built upon it, that make extensive use of the contributed TTSDF model and patterns.
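The periodic schedules that TTSDF interleaves with time-triggered activations come from the classic SDF balance equations: for an edge where actor A produces `p` tokens per firing and actor B consumes `c`, the minimal repetition counts solve p·rA = c·rB. The sketch below illustrates that standard SDF computation, not the thesis's own TTSDF algorithm.

```python
from math import gcd

def repetitions(produce, consume):
    """Minimal firing counts (rA, rB) for one periodic schedule of an
    SDF edge A -> B: smallest integers with produce*rA == consume*rB."""
    g = gcd(produce, consume)
    return consume // g, produce // g

# A produces 2 tokens per firing, B consumes 3 per firing.
rA, rB = repetitions(2, 3)
assert 2 * rA == 3 * rB   # token balance over one schedule period
print(rA, rB)             # A fires 3 times, B fires 2 times
```

Because a period returns every buffer to its initial state, the schedule can repeat forever with bounded memory; TTSDF's contribution, per the abstract, is pinning such periods to time-triggered I/O activations so real-time latency is also guaranteed.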
3

Farabet, Clément. "Analyse sémantique des images en temps-réel avec des réseaux convolutifs." Phd thesis, Université Paris-Est, 2013. http://tel.archives-ouvertes.fr/tel-00965622.

Full text
Abstract:
One of the central questions of computer vision is the design and learning of representations of the visual world. What type of representation can enable an artificial vision system to detect and classify objects into categories, independently of their pose, scale, illumination, and occlusion? More interestingly, how can such a system learn this representation automatically, in the same way that animals and humans come to form a representation of the world around them? A related question is computational feasibility, and more precisely computational efficiency: given a visual model, how efficiently can it be trained and applied to new sensory data? This efficiency has several dimensions: energy consumed, computation speed, and memory usage. In this thesis I present three contributions to computer vision: (1) a new multi-scale deep convolutional network architecture that captures long-range relations between input variables in image-like data, (2) a tree-based algorithm for exploring multiple segmentation candidates in order to produce a semantic segmentation with maximum confidence, and (3) a dataflow processor architecture optimized for computing deep convolutional networks. These three contributions aim to advance the state of the art in semantic image analysis, with an emphasis on computational efficiency. Scene parsing consists of labeling each pixel of an image with the category of the object to which it belongs.
In the first part of this thesis, I propose a method that uses a deep convolutional network, trained directly on pixels, to extract feature vectors that encode regions of several resolutions centered on each pixel. This method avoids the use of hand-crafted features. Because these features are multi-scale, they allow the model to capture relations both local and global to the scene. In parallel, a tree of segmentation components is computed from a pixel dissimilarity graph. The feature vectors associated with each node of the tree are aggregated and used to train an estimator of the distribution of object categories present in that segment. A subset of the tree's nodes, covering the image, is then selected so as to maximize the average purity of the class distributions. By maximizing this purity, the probability that each component contains only one object is maximized. The overall system achieves record accuracy on several public benchmarks. The computation of deep convolutional networks depends on only a few basic operators, which are particularly well suited to a dedicated hardware implementation. In the second part of this thesis, I present a dedicated dataflow processor architecture optimized for computing convolutional-network-based vision systems, neuFlow, and a compiler, luaFlow, whose role is to compile a high-level (graph-like) description of convolutional networks into a flow of data and computations optimal for the architecture. This system was developed for real-time object detection, categorization, and localization in complex scenes, consuming only 10 watts, with a standard FPGA implementation.
4

Gable, George M. IV. "Spatio-temporal patterns of biophysical parameters in a microtidal, bar-built, subtropical estuary of the Gulf of Mexico." 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1637.

Full text
Abstract:
Plankton communities are influenced, in part, by water exchange with adjacent estuarine and oceanic ecosystems. Reduced advective transport through tidal passes or with adjacent bay systems can affect chemical processes and biological interactions, such as nutrient cycling, phytoplankton abundance and productivity, community respiration, and zooplankton biovolume. The most threatened estuarine ecosystems are shallow, bar-built, microtidal estuaries with small water volumes and restricted connections through tidal passes and other water exchange points. This research explored spatio-temporal trends in plankton communities and the physicochemical environment in Mesquite Bay, Texas, a microtidal, bar-built, subtropical estuary of the Gulf of Mexico. The research couples sampling at fixed stations for multiple physical and biological parameters with high-resolution spatial mapping of physicochemical parameters. Spatial trends were smaller in magnitude and affected fewer parameters in both the fixed-station and spatial data. Two-dimensional ordination plots indicated spatial heterogeneity with a more pronounced temporal trend affecting parameters including temperature, salinity as a function of inflow timing, and seasonal wind direction affecting primary production and zooplankton biovolume. Temperature was positively correlated with gross production and respiration rates during spring and late summer, with sporadic positive and negative correlations with phytoplankton biomass. The timing and magnitude of freshwater inflow affected various physicochemical and biological parameters. Inflow rates above the 71-year record resulted in low salinity system-wide, with spatial heterogeneity increasing over the course of the study, as confirmed by spatial maps. Additionally, high inflow rates led to two periods of increased inorganic nutrients and dissolved organic matter. Low-salinity periods coincided with the persistence of higher turbidity, likely because of decreased sediment flocculation. Gross production was low at this time, likely due to light limitation. Additionally, wind magnitude and direction created spatial heterogeneity in turbidity levels and phytoplankton biomass. Zooplankton biovolume was highest during spring and late summer, with high species diversity among total rotifers. Copepod biovolume and phytoplankton biomass were positively correlated. Other zooplankton taxonomic groups exhibited variable correlations with phytoplankton biomass and with one another. Further long-term studies are needed to determine the interactions of various components of trophic food webs and to account for interannual variability in all system parameters.
5

"Real-time multimedia on off-the-shelf operating systems: from timeliness dataflow models to pattern languages." Universitat Pompeu Fabra, 2009. http://www.tesisenxarxa.net/TDX-1016109-093316/.

Full text

Book chapters on the topic "Dataflow patterns"

1

Bruno, Giorgio. "Handling the Dataflow in Business Process Models." In Sustainable Business. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-5225-9615-8.ch076.

Full text
Abstract:
This chapter stresses the importance of the dataflow in business process models and illustrates a notation called DMA that is meant to fulfill two major goals: promoting the integration between business processes and information systems and leveraging the dataflow to provide flexibility in terms of human decisions. The first goal is fulfilled by considering both tasks and business entities as first-class citizens in process models. Business entities form the dataflow that interconnects the tasks: tasks take the input entities from the input dataflow and deliver the output entities to the output dataflow. Human decisions encompass the selection of the input entities when a task needs more than one, and the selection of the task with which to handle the input entities when two or more tasks are admissible. DMA provides a number of patterns that indicate how tasks affect the dataflow. In addition, two compound patterns, called macro tasks, can be used to represent task selection issues. An example related to an order handling process illustrates the notation.
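The task-centred view of the dataflow that DMA takes (tasks pull input entities from an input dataflow and deliver output entities to an output dataflow) can be sketched as a toy order-handling step; the entity fields and task name below are invented for illustration and are not the chapter's notation.

```python
from collections import deque

# Toy dataflow: queues of business entities flowing between tasks.
incoming_orders = deque([{"id": 1, "items": 2}, {"id": 2, "items": 5}])
shipped = deque()

def handle_order(inputs, outputs):
    """A task takes an input entity from its input dataflow and
    delivers the output entity to its output dataflow."""
    order = inputs.popleft()
    order["status"] = "shipped"
    outputs.append(order)

while incoming_orders:
    handle_order(incoming_orders, shipped)

print([o["id"] for o in shipped])  # [1, 2]
```

In DMA terms, the human-decision points the abstract mentions would sit where this sketch mechanically pops the next entity: choosing which input entities to take, or which admissible task should handle them.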
2

Bruno, Giorgio. "Handling the Dataflow in Business Process Models." In Multidisciplinary Perspectives on Human Capital and Information Technology Professionals. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5297-0.ch008.

Full text
Abstract:
This chapter stresses the importance of the dataflow in business process models and illustrates a notation called DMA that is meant to fulfill two major goals: promoting the integration between business processes and information systems and leveraging the dataflow to provide flexibility in terms of human decisions. The first goal is fulfilled by considering both tasks and business entities as first-class citizens in process models. Business entities form the dataflow that interconnects the tasks: tasks take the input entities from the input dataflow and deliver the output entities to the output dataflow. Human decisions encompass the selection of the input entities when a task needs more than one, and the selection of the task with which to handle the input entities when two or more tasks are admissible. DMA provides a number of patterns that indicate how tasks affect the dataflow. In addition, two compound patterns, called macro tasks, can be used to represent task selection issues. An example related to an order handling process illustrates the notation.
APA, Harvard, Vancouver, ISO, and other styles
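The DMA view described in the abstract, in which tasks take business entities from an input dataflow and deliver results to an output dataflow, can be rendered as a toy sketch. The class and entity names below are invented for illustration; the chapter defines a graphical notation, not an API.

```python
from collections import deque

# Toy sketch of the DMA idea: business entities flow between tasks as
# dataflows; a task takes entities from its input dataflow and delivers
# the output entities to its output dataflow.

class Task:
    def __init__(self, name, handle):
        self.name, self.handle = name, handle
        self.inflow, self.outflow = deque(), deque()

    def run(self):
        while self.inflow:
            entity = self.inflow.popleft()            # take from input dataflow
            self.outflow.append(self.handle(entity))  # deliver to output dataflow

# Hypothetical order-handling task, echoing the chapter's running example.
accept = Task("AcceptOrder", lambda order: {**order, "state": "accepted"})
accept.inflow.extend([{"id": 1}, {"id": 2}])
accept.run()
# accept.outflow now holds the two accepted orders
```

Chaining one task's outflow into another task's inflow would model the interconnecting dataflow between tasks.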
3

Meddah, Ishak H. A., and Khaled Belkadi. "Efficient Implementation of Hadoop MapReduce-Based Dataflow." In Handbook of Research on Biomimicry in Information Retrieval and Knowledge Management. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-3004-6.ch020.

Full text
Abstract:
MapReduce is a solution for processing large data. With it, data can be analyzed and processed by distributing the computation across a large set of machines. Process mining provides an important bridge between data mining and business process analysis; its techniques allow information to be extracted from event logs. Firstly, the chapter mines small patterns from log traces. Those patterns represent the execution traces of a business process. The authors use existing techniques; the patterns are represented by finite state automata, and the final model is the combination of only two types of patterns, which are represented by regular expressions. Secondly, the authors compute these patterns in parallel and then combine them using MapReduce. The approach has two parts. The first is the map step, in which the authors mine patterns from execution traces. The second is the combination of these small patterns as the reduce step. The results are promising; they show that the approach is scalable, general, and precise, and that it minimizes execution time by the use of MapReduce.
APA, Harvard, Vancouver, ISO, and other styles
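The map/reduce split described in the abstract can be illustrated with a minimal in-memory analogue: the map step mines small patterns (here, consecutive event pairs) from each execution trace, and the reduce step combines them into one model. The trace data and the choice of pattern are assumptions for illustration, not the chapter's actual implementation or a Hadoop job.

```python
from collections import Counter
from itertools import chain

def map_step(trace):
    """Map step: emit (pattern, 1) pairs, one per consecutive event pair."""
    return [((a, b), 1) for a, b in zip(trace, trace[1:])]

def reduce_step(pairs):
    """Reduce step: combine the small patterns mined by all mappers."""
    combined = Counter()
    for pattern, count in pairs:
        combined[pattern] += count
    return combined

# Hypothetical execution traces of a web application.
traces = [["login", "browse", "buy"], ["login", "browse", "logout"]]
model = reduce_step(chain.from_iterable(map_step(t) for t in traces))
# ("login", "browse") occurs in both traces
```

In an actual Hadoop deployment the two functions would run as distributed Mapper and Reducer tasks, which is where the speedup reported in the abstract comes from.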
4

Schiffel, Jeffrey A. "Using Organizational Semiotics and Conceptual Graphs in a Two-Step Method for Knowledge Management Process Improvement Measurement." In Information Resources Management. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-61520-965-1.ch216.

Full text
Abstract:
The semantic normal forms of organizational semiotics extract structures from natural language texts that may be stored electronically. In themselves, the SNFs are only canonic descriptions of the patterns of behavior observed in a culture. Conceptual graphs and dataflow graphs, their dynamic variety, provide means to reason over propositions in first order logics. Conceptual graphs, however, do not of themselves capture the ontological entities needed for such reasoning. The culture of an organization contains natural language entities that can be extracted for use in knowledge representation and reasoning. Together in a rigorous, two-step process, ontology charting from organizational semiotics and dataflow graphs from knowledge engineering provide a means to extract entities of interest from a subject domain such as the culture of organizations and then to represent these entities in formal logic reasoning. This paper presents this process, and concludes with an example of how process improvement in an IT organization may be measured in this two-step process.
APA, Harvard, Vancouver, ISO, and other styles
5

Schiffel, Jeffrey A. "Organizational Semiotics Complements Knowledge Management." In Intelligent, Adaptive and Reasoning Technologies. IGI Global, 2011. http://dx.doi.org/10.4018/978-1-60960-595-7.ch006.

Full text
Abstract:
Inserting the human element into an Information System leads to interpreting the Information System as an information field. Organizational semiotics provides a means to analyze this alternate interpretation. The semantic normal forms of organizational semiotics extract structures from natural language texts that may be stored electronically. In themselves, the SNFs are only canonic descriptions of the patterns of behavior observed in a culture. Conceptual graphs and dataflow graphs, their dynamic variety, provide means to reason over propositions in first order logics. Conceptual graphs, however, do not of themselves capture the ontological entities needed for such reasoning. The culture of an organization contains natural language entities that can be extracted for use in knowledge representation and reasoning. Together in a rigorous, two-step process, ontology charting from organizational semiotics and dataflow graphs from knowledge engineering provide a means to extract entities of interest from a subject domain such as the culture of organizations and then to represent these entities in formal logic reasoning. This chapter presents this process, and concludes with an example of how process improvement in an IT organization may be measured in this two-step process.
APA, Harvard, Vancouver, ISO, and other styles
6

Aridoss, Manimaran. "Defensive Mechanism Against DDoS Attack to Preserve Resource Availability for IoT Applications." In Securing the Internet of Things. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-5225-9866-4.ch065.

Full text
Abstract:
The major challenge of Internet of Things (IoT) generated data is its hypervisor-level vulnerabilities. Malicious VM deployment and termination are simple due to the cloud's multitenant shared nature and distributed elastic features. These features enable attackers to launch Distributed Denial of Service attacks to degrade cloud server performance. Attack detection techniques are applied to the VMs that are used by malicious tenants to hold cloud resources by launching DDoS attacks at data center subnets. Traditional dataflow-based attack detection methods rely on the similarities of incoming requests, which consist of IP and TCP header information flows. The proposed approach classifies the status patterns of malicious VMs and ideal VMs to identify the attackers. In this article, information theory is used to calculate the entropy value of the malicious virtual machines for detecting attack behaviors. Experimental results prove that the proposed system works well against DDoS attacks in IoT applications.
APA, Harvard, Vancouver, ISO, and other styles
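The entropy idea in the abstract can be sketched by computing the Shannon entropy of the source-IP distribution of requests reaching a VM: a sharp drop in entropy (traffic concentrated on a few sources) can flag flooding behavior. The data, the threshold direction, and the feature choice are assumptions for illustration, not the article's parameters.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (in bits) of the empirical distribution of values."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical request logs: diverse sources vs. a concentrated flood.
normal = ["10.0.0.%d" % i for i in range(16)]
attack = ["10.0.0.1"] * 15 + ["10.0.0.2"]

assert shannon_entropy(normal) > shannon_entropy(attack)
```

A detector along these lines would compare a window's entropy against a baseline learned from ideal-VM traffic rather than a fixed constant.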
7

Liu, Jian-Xun, and Jiping Wen. "Identifying Batch Processing Features in Workflows." In Handbook of Research on Complex Dynamic Process Management. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-669-3.ch021.

Full text
Abstract:
The employment of batch processing in workflow is to model and enact the batch processing logic for multiple cases of a workflow in order to optimize business process execution dynamically. Our previous work has preliminarily investigated the model and its implementation. However, it does not determine precisely which workflow activities, alone or in combination, can gain execution efficiency from batch processing. Inspired by workflow mining and functional dependency inference, this chapter proposes a method for mining batch processing patterns in workflows from process dataflow logs. We first introduce a new concept, batch dependency, which is a specific type of functional dependency in databases. The theoretical foundation of batch dependency, as well as its mining algorithms, is analyzed and investigated. Based on batch dependency and its discovery technique, the activities meriting batch processing and their batch processing features are identified. With the batch processing features discovered, the batch processing areas in the workflow are then recognized. Finally, an experiment is demonstrated to show the effectiveness of our method.
APA, Harvard, Vancouver, ISO, and other styles
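Batch dependency is built on the database notion of functional dependency, which can be checked over dataflow log records: attribute X determines attribute Y if no two records share an X value but differ on Y. The records and attribute names below are illustrative assumptions, not the chapter's dataset or its batch-dependency algorithm.

```python
def holds_fd(records, lhs, rhs):
    """Return True if attribute lhs functionally determines attribute rhs."""
    seen = {}
    for rec in records:
        key, val = rec[lhs], rec[rhs]
        if key in seen and seen[key] != val:
            return False  # same lhs value maps to two rhs values
        seen[key] = val
    return True

# Hypothetical dataflow log of an order-shipping activity.
log = [
    {"order": 1, "customer": "A", "warehouse": "W1"},
    {"order": 2, "customer": "A", "warehouse": "W1"},
    {"order": 3, "customer": "B", "warehouse": "W2"},
]
# customer -> warehouse holds, so cases sharing a customer could be
# grouped into one batch-processed activity instance.
assert holds_fd(log, "customer", "warehouse")
```

A batch-dependency miner would then use such dependencies to pick out which activities merit batching and over which grouping attribute.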
8

Arndt, T., S. K. Chang, A. Guerico, and P. Maresca. "An XML-Based Approach to Multimedia Engineering for Distance Learning." In Advances in Distance Education Technologies. IGI Global, 2007. http://dx.doi.org/10.4018/978-1-59904-376-0.ch006.

Full text
Abstract:
Multimedia software engineering (MSE) is a new frontier for both software engineering (SE) and visual languages (VL). In fact, multimedia software engineering can be considered as the discipline for systematic specification, design, substitution, and verification of visual patterns. Visual languages contribute to MSE such concepts as visual notations for software specification, design, and verification (flow charts, ER diagrams, Petri nets, UML visualizations, visual programming languages, etc.). Multimedia software engineering and software engineering are like two sides of the same coin. On the one hand, we can apply software engineering principles to the design of multimedia systems. On the other hand, we can apply multimedia technologies to the software engineering practice. In this chapter, we concentrate on the first of these possibilities. One of the promising application areas for multimedia software engineering is distance learning. One aim of this chapter is to demonstrate how it is possible to design and to implement complex multimedia software systems for distance learning using a tele-action object transformer based on XML technology, applying a component-based multimedia software engineering approach. The chapter shows a complete process of dataflow transformation that represents TAO in different ways (text, TAOML, etc.) and at different levels of abstraction. The transformation process is a reversible one. A component-based tool architecture is also discussed. We also show the first experiments conducted jointly using the TAOML_T tool. The use of an XML-based approach in the distance learning field has other advantages as well. It facilitates reuse of the teaching resources produced in preceding decades by universities, schools, research institutions, and companies by using metadata. The evolution of the technologies and methodologies underlying the Internet has provided the means to transport this material.
On the other hand, standards for representing multimedia distance learning materials are currently evolving. Such standards are necessary in order to allow a representation which is independent of hardware and software platforms so that this material can be examined, for example, in a Web browser or so that it may be reused in whole or in part in other chapters of a book or sections of a course distinct from that for which it was originally developed. Initial experiments in reuse of distance learning carried out at the University of Naples, Kent State University, and Cleveland State University are described. The authors have also developed a collaboration environment through which the resources can be visualized and exchanged.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Dataflow patterns"

1

Ghosal, Arkadeb, Rhishikesh Limaye, Kaushik Ravindran, et al. "Static dataflow with access patterns." In the 49th Annual Design Automation Conference. ACM Press, 2012. http://dx.doi.org/10.1145/2228360.2228479.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sane, Nimish, Hojin Kee, Gunasekaran Seetharaman, and Shuvra S. Bhattacharyya. "Scalable representation of dataflow graph structures using topological patterns." In 2010 IEEE Workshop On Signal Processing Systems (SiPS). IEEE, 2010. http://dx.doi.org/10.1109/sips.2010.5624821.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Jetley, Pritish, Adarsh Keshany, and Laxmikant V. Kale. "Incorporating Dynamic Communication Patterns in a Static Dataflow Notation." In 2012 Data-Flow Execution Models for Extreme Scale Computing (DFM). IEEE, 2012. http://dx.doi.org/10.1109/dfm.2012.13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Osmari, Daniel K., Huy T. Vo, Claudio T. Silva, Joao L. D. Comba, and Lauro Lins. "Visualization and Analysis of Parallel Dataflow Execution with Smart Traces." In 2014 27th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). IEEE, 2014. http://dx.doi.org/10.1109/sibgrapi.2014.2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Tsoeunyane, Lekhobola J., Simon Winberg, and Michael Inggs. "An IP core integration tool-flow for prototyping software-defined radios using static dataflow with access patterns." In 2017 International Conference on Field Programmable Technology (ICFPT). IEEE, 2017. http://dx.doi.org/10.1109/fpt.2017.8280125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Arumí, Pau, David García, and Xavier Amatriain. "A dataflow pattern catalog for sound and music computing." In the 2006 conference. ACM Press, 2006. http://dx.doi.org/10.1145/1415472.1415503.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Guiling, Sai Zhang, Chen Liu, and Yanbo Han. "A Dataflow-Pattern-Based Recommendation Approach for Data Service Mashups." In 2014 IEEE International Conference on Services Computing (SCC). IEEE, 2014. http://dx.doi.org/10.1109/scc.2014.30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Farabet, Clement, Berin Martini, Benoit Corda, Polina Akselrod, Eugenio Culurciello, and Yann LeCun. "NeuFlow: A runtime reconfigurable dataflow processor for vision." In 2011 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops). IEEE, 2011. http://dx.doi.org/10.1109/cvprw.2011.5981829.

Full text
APA, Harvard, Vancouver, ISO, and other styles