Journal articles on the topic 'Stream processing comparison'

Consult the top 50 journal articles for your research on the topic 'Stream processing comparison.'


1

Osborn, Wendy. "Unbounded Spatial Data Stream Query Processing using Spatial Semijoins." Journal of Ubiquitous Systems and Pervasive Networks 15, no. 2 (2021): 33–41. http://dx.doi.org/10.5383/juspn.15.02.005.

Abstract:
In this paper, the problem of query processing in spatial data streams is explored, with a focus on the spatial join operation. Although the spatial join has been utilized in many proposed centralized and distributed query processing strategies, its application to spatial data streams has received very little attention. One identified limitation of existing strategies is that a bounded region of space (i.e., spatial extent) from which the spatial objects are generated needs to be known in advance. However, this information may not be available. Therefore, two strategies for spatial data stream join processing are proposed in which the spatial extent of the spatial object stream is not required to be known in advance. Both strategies estimate the common region that is shared by two or more spatial data streams in order to process the spatial join. An evaluation of both strategies includes a comparison with a recently proposed approach in which the spatial extent of the data set is known. Experimental results show that one of the strategies performs very well at estimating the common region of space using only incoming objects on the spatial data streams. Other limitations of this work are also identified.
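The common-region idea in this abstract can be illustrated with a minimal sketch (not the paper's algorithm): each stream maintains a running minimum bounding rectangle (MBR) over its incoming objects, and the estimated common region is the intersection of the streams' MBRs.

```python
# Minimal sketch (not the paper's algorithm): estimate the common region
# of two spatial streams as the intersection of running bounding boxes
# built incrementally from incoming objects.

def update_mbr(mbr, point):
    """Grow a minimum bounding rectangle [min_x, max_x, min_y, max_y]."""
    x, y = point
    if mbr is None:
        return [x, x, y, y]
    return [min(mbr[0], x), max(mbr[1], x), min(mbr[2], y), max(mbr[3], y)]

def common_region(a, b):
    """Intersection of two MBRs, or None if they do not overlap."""
    lo_x, hi_x = max(a[0], b[0]), min(a[1], b[1])
    lo_y, hi_y = max(a[2], b[2]), min(a[3], b[3])
    return None if lo_x > hi_x or lo_y > hi_y else [lo_x, hi_x, lo_y, hi_y]

mbr_a = mbr_b = None
for p in [(0, 0), (4, 4)]:
    mbr_a = update_mbr(mbr_a, p)
for p in [(2, 2), (6, 6)]:
    mbr_b = update_mbr(mbr_b, p)
print(common_region(mbr_a, mbr_b))  # [2, 4, 2, 4]
```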
2

Stenroth, Karolina, Trent M. Hoover, Jan Herrmann, Irene Bohman, and John S. Richardson. "A model-based comparison of organic matter dynamics between riparian-forested and open-canopy streams." Riparian Ecology and Conservation 2, no. 1 (2014): 1–13. http://dx.doi.org/10.2478/remc-2014-0001.

Abstract:
The food webs of forest streams are primarily based upon inputs of organic matter from adjacent terrestrial ecosystems. However, streams that run through open landscapes generally lack closed riparian canopies, and an increasing number of studies indicate that terrestrial organic matter may be an important resource in these systems as well. Combining key abiotically controlled factors (stream discharge, water temperature, and litter input rate) with relevant biotic processes (e.g., macroinvertebrate CPOM consumption, microbial processing), we constructed a model to predict and contrast organic matter dynamics (including temporal variation in CPOM standing crop, CPOM processing rate, FPOM production, and detritivore biomass) in small riparian-forested and open-canopy streams. Our modeled results showed that the standing crop of CPOM was similar between riparian-forested and open-canopy streams, despite considerable differences in litter input rate. This unexpected result was partly due to linkages between CPOM supply and consumer abundance that produced higher detritivore biomass in the forest stream than in the open-canopy stream. CPOM standing crop in the forest stream was mainly regulated by top-down consumer control, depressing it to a level similar to that of the open-canopy stream. In contrast, CPOM standing crop in the open-canopy stream was primarily controlled by physical factors (litter input rates and discharge), not consumption. This suggests that abiotic processes (e.g., discharge) may play a greater role in limiting detrital resource availability and consumer biomass in open-canopy streams than in forest streams. These model results give insight into functional differences that exist among streams, and they can be used to predict the effects of anthropogenic influences such as forestry, agriculture, urbanization, and climate change on streams, and how riparian management and conservation tools can be employed to mitigate undesirable effects.
3

Bok, Kyoungsoo, Daeyun Kim, and Jaesoo Yoo. "Complex Event Processing for Sensor Stream Data." Sensors 18, no. 9 (2018): 3084. http://dx.doi.org/10.3390/s18093084.

Abstract:
As a large amount of stream data are generated through sensors over the Internet of Things environment, studies on complex event processing have been conducted to detect information required by users or specific applications in real time. A complex event is made by combining primitive events through a number of operators. However, the existing complex event-processing methods take a long time because they do not consider similarity and redundancy of operators. In this paper, we propose a new complex event-processing method considering similar and redundant operations for stream data from sensors in real time. In the proposed method, a similar operation in common events is converted into a virtual operator, and redundant operations on the same events are converted into a single operator. The event query tree for complex event detection is reconstructed using the converted operators. Through this method, the cost of comparison and inspection of similar and redundant operations is reduced, thereby decreasing the overall processing cost. To prove the superior performance of the proposed method, its performance is evaluated in comparison with existing methods.
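The redundant-operator conversion this abstract describes can be sketched as common-subexpression elimination over event operators. This is a hedged sketch of the general idea, not the paper's query-tree reconstruction; the operator names (AND, OR, SEQ) and the canonical-key scheme are illustrative.

```python
# Redundant operators over the same events are detected by a canonical
# key and evaluated once, so shared subexpressions are computed a single
# time. Sorting operands is valid only for commutative operators.

COMMUTATIVE = {"AND", "OR"}  # operand order is irrelevant for these

def canonical(op, operands):
    """Key identifying an operator applied to the same primitive events."""
    ops = tuple(sorted(operands)) if op in COMMUTATIVE else tuple(operands)
    return (op, ops)

def deduplicate(operators):
    """Return the unique operators; redundant copies map to one node."""
    plan, seen = [], set()
    for op, operands in operators:
        key = canonical(op, operands)
        if key not in seen:
            seen.add(key)
            plan.append(key)
    return plan

queries = [("AND", ["A", "B"]), ("AND", ["B", "A"]), ("SEQ", ["A", "C"])]
print(deduplicate(queries))  # the two equivalent ANDs collapse into one
```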
4

Short, Robert A., and Stephen L. Smith. "Seasonal Comparison of Leaf Processing in a Texas Stream." American Midland Naturalist 121, no. 2 (1989): 219. http://dx.doi.org/10.2307/2426025.

5

Ye, Qian, and Minyan Lu. "SPOT: Testing Stream Processing Programs with Symbolic Execution and Stream Synthesizing." Applied Sciences 11, no. 17 (2021): 8057. http://dx.doi.org/10.3390/app11178057.

Abstract:
Adoption of distributed stream processing (DSP) systems such as Apache Flink in real-time big data processing is increasing. However, DSP programs are prone to be buggy, especially when a programmer neglects some DSP features (e.g., source data reordering), which motivates the development of approaches for testing and verification. In this paper, we focus on the test data generation problem for DSP programs. Currently, there is a lack of an approach that generates test data for DSP programs with both high path coverage and coverage of different stream reordering situations. We present a novel solution, SPOT (i.e., Stream Processing Program Test), to achieve these two goals simultaneously. At first, SPOT generates a set of individual test data representing each path of one DSP program through symbolic execution. Then, SPOT composes these independent data into various time series data (a.k.a. streams) with diverse reorderings. Finally, we can perform a test by feeding the DSP program with these streams continuously. To automatically support symbolic analysis, we also developed JPF-Flink, a JPF (i.e., Java Pathfinder) extension to coordinate the execution of Flink programs. We present four case studies to illustrate that: (1) SPOT can support symbolic analysis for the commonly used DSP operators; (2) test data generated by SPOT can more efficiently achieve high JDU (i.e., Joint Dataflow and UDF) path coverage than two recent DSP testing approaches; (3) test data generated by SPOT can more easily trigger software failures when compared with those two DSP testing approaches; and (4) the data randomly generated by those two test techniques are highly skewed in terms of stream reordering, as measured by the entropy metric, whereas the distribution is even for test data from SPOT.
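The entropy metric mentioned at the end of this abstract can be sketched directly: the more evenly a set of test streams covers distinct reordering patterns, the higher the Shannon entropy. Keying a pattern by the tuple of element order is our simplification for illustration.

```python
# Shannon entropy over observed reordering patterns: skewed coverage of
# reorderings yields low entropy, even coverage yields the maximum.
import math
from collections import Counter

def pattern_entropy(streams):
    counts = Counter(tuple(s) for s in streams)
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

skewed = [(1, 2, 3)] * 7 + [(3, 2, 1)]                    # mostly one order
even = [(1, 2, 3), (1, 3, 2), (2, 1, 3),
        (2, 3, 1), (3, 1, 2), (3, 2, 1)]                  # all 3! orders
print(pattern_entropy(skewed) < pattern_entropy(even))    # True
```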
6

Gulis, Vladislav, Keller Suberkropp, and Amy D. Rosemond. "Comparison of Fungal Activities on Wood and Leaf Litter in Unaltered and Nutrient-Enriched Headwater Streams." Applied and Environmental Microbiology 74, no. 4 (2007): 1094–101. http://dx.doi.org/10.1128/aem.01903-07.

Abstract:
Fungi are the dominant organisms decomposing leaf litter in streams and mediating energy transfer to other trophic levels. However, less is known about their role in decomposing submerged wood. This study provides the first estimates of fungal production on wood and compares the importance of fungi in the decomposition of submerged wood versus that of leaves at the ecosystem scale. We determined fungal biomass (ergosterol) and activity associated with randomly collected small wood (<40 mm diameter) and leaves in two southern Appalachian streams (reference and nutrient enriched) over an annual cycle. Fungal production (from rates of radiolabeled acetate incorporation into ergosterol) and microbial respiration on wood (per gram of detrital C) were about an order of magnitude lower than those on leaves. Microbial activity (per gram of C) was significantly higher in the nutrient-enriched stream. Despite a standing crop of wood two to three times higher than that of leaves in both streams, fungal production on an areal basis was lower on wood than on leaves (4.3 and 15.8 g C m−2 year−1 in the reference stream; 5.5 and 33.1 g C m−2 year−1 in the enriched stream). However, since the annual input of wood was five times lower than that of leaves, the proportion of organic matter input directly assimilated by fungi was comparable for these substrates (15.4 [wood] and 11.3% [leaves] in the reference stream; 20.0 [wood] and 20.2% [leaves] in the enriched stream). Despite a significantly lower fungal activity on wood than on leaves (per gram of detrital C), fungi can be equally important in processing both leaves and wood in streams.
7

Khettabi, Karima, Zineddine Kouahla, Brahim Farou, Hamid Seridi, and Mohamed Amine Ferrag. "Efficient Method for Continuous IoT Data Stream Indexing in the Fog-Cloud Computing Level." Big Data and Cognitive Computing 7, no. 2 (2023): 119. http://dx.doi.org/10.3390/bdcc7020119.

Abstract:
Internet of Things (IoT) systems include many smart devices that continuously generate massive spatio-temporal data, which can be difficult to process. These continuous data streams need to be stored smartly so that query searches are efficient. In this work, we propose an efficient method, in the fog-cloud computing architecture, to index continuous and heterogeneous data streams in metric space. This method divides the fog layer into three levels: clustering, clusters processing and indexing. The Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm is used to group the data from each stream into homogeneous clusters at the clustering fog level. Each cluster in the first data stream is stored in the clusters processing fog level and indexed directly in the indexing fog level in a Binary tree with Hyperplane (BH tree). The indexing of clusters in the subsequent data stream is determined by the coefficient of variation (CV) value of the union of the new cluster with the existing clusters in the cluster processing fog layer. An analysis and comparison of our experimental results with other results in the literature demonstrated the effectiveness of the CV method in reducing energy consumption during BH tree construction, as well as reducing the search time and energy consumption during a k Nearest Neighbor (kNN) parallel query search.
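The coefficient-of-variation test this abstract describes can be sketched in a few lines: CV = standard deviation / mean, computed over the union of a new cluster with an existing one, decides whether they are homogeneous enough to merge. The 0.3 threshold and the merge rule below are our assumptions for illustration, not the paper's calibrated values.

```python
# Coefficient of variation (CV = std / mean) as a merge criterion: a low
# CV over the union means the combined cluster is still homogeneous.
import statistics

def cv(values):
    m = statistics.mean(values)
    return statistics.pstdev(values) / m if m else float("inf")

def should_merge(existing, incoming, threshold=0.3):
    """Merge the clusters only if their union stays homogeneous."""
    return cv(existing + incoming) <= threshold

print(should_merge([10.0, 10.5, 9.8], [10.2, 10.1]))   # homogeneous: True
print(should_merge([10.0, 10.5, 9.8], [55.0, 60.0]))   # heterogeneous: False
```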
8

Rodrigo, Arosha, Miyuru Dayarathna, and Sanath Jayasena. "Latency-Aware Secure Elastic Stream Processing with Homomorphic Encryption." Data Science and Engineering 4, no. 3 (2019): 223–39. http://dx.doi.org/10.1007/s41019-019-00100-5.

Abstract:
Increasingly, organizations are elastically scaling their stream processing applications into infrastructure-as-a-service clouds. However, state-of-the-art approaches for elastic stream processing do not consider the potential threats of exposing their data to third parties in cloud environments. We present the design and implementation of an Elastic Switching Mechanism for data stream processing which is based on homomorphic encryption (HomoESM). The HomoESM not only elastically scales data stream processing applications into public clouds but also preserves the privacy of such applications. Using a real-world test setup, which includes an E-mail Filter benchmark and a Web server access log processor benchmark (EDGAR), we demonstrate the effectiveness of our approach. Experiments on Amazon EC2 indicate that the proposed approach to homomorphic encryption yields a significant result: a 10–17% improvement in average latency for the E-mail Filter and EDGAR benchmarks, respectively. Furthermore, EDGAR add/subtract, multiplication, and comparison operations showed up to 6.13%, 7.81%, and 26.17% average latency improvements, respectively. Finally, we evaluate the potential of scaling the homomorphic stream processor in the public cloud. These results indicate the potential for real-world deployments of secure elastic data stream processing applications.
9

Hrytsko, T. L., D. Lenskiy, and V. S. Hlukhov. "REVIEW OF THE CAPABILITIES OF THE JPEG-LS ALGORITHM FOR ITS USE WITH EARTH SURFACE SCANNERS." Computer systems and network 6, no. 2 (2024): 14–24. https://doi.org/10.23939/csn2024.02.014.

Abstract:
The article explores the possibilities of implementing the JPEG-LS image compression algorithm on Field Programmable Gate Arrays (FPGA) for processing monochrome video streams from Earth surface scanners. A comparison of software implementations of the algorithms, their compression ratio, and execution time is conducted. Methods for improving FPGA performance are considered, using parallel data processing and optimized data structures to accelerate compression and decompression processes. Test results of the software implementation of the algorithm show an average processing speed of 179.2 Mbit/s during compression and 169.6 Mbit/s during decompression. A compression ratio from 1.2 to 7.4 can be achieved depending on the complexity of the image. Key words: FPGA, JPEG-LS, Field-programmable gate arrays, Image compression, Image processing, Video compression, Video stream processing.
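The reported throughput and compression-ratio figures can be put in perspective with a back-of-the-envelope calculation. The 4096×4096 8-bit frame size below is an assumed example, not a figure from the article.

```python
# Back-of-the-envelope check of the reported JPEG-LS figures: time to
# compress one assumed monochrome frame at 179.2 Mbit/s, and output size
# at the two ends of the reported 1.2-7.4 compression-ratio range.
frame_bits = 4096 * 4096 * 8              # one assumed 8-bit frame
seconds = frame_bits / 179.2e6            # at 179.2 Mbit/s compression
print(f"one frame compresses in about {seconds:.2f} s")
for ratio in (1.2, 7.4):                  # reported compression-ratio range
    print(f"ratio {ratio}: {frame_bits / ratio / 8 / 1e6:.1f} MB output")
```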
10

Hrytsko, T. L., D. Lenskiy, and V. S. Hlukhov. "REVIEW OF THE CAPABILITIES OF THE JPEG-LS ALGORITHM FOR ITS USE WITH EARTH SURFACE SCANNERS." Computer systems and network 6, no. 2 (2024): 15–25. https://doi.org/10.23939/csn2024.02.015.

11

Bhatt, Nirav, and Amit Thakkar. "An efficient approach for low latency processing in stream data." PeerJ Computer Science 7 (March 10, 2021): e426. http://dx.doi.org/10.7717/peerj-cs.426.

Abstract:
Stream data is data that is generated continuously from different data sources, and is ideally defined as data that has no discrete beginning or end. Processing stream data is a part of big data analytics that aims at querying the continuously arriving data and extracting meaningful information from the stream. Although such streams were earlier processed with batch analytics, there are now applications, such as stock market analysis, patient monitoring, and traffic analysis, where producing output at the level of hours or minutes makes a drastic difference. The primary goal of any real-time stream processing system is to process the stream data as soon as it arrives. Correspondingly, analytics of the stream data also needs to consider surrounding dependent data. For example, stock market analytics results are often useless if we do not consider the associated or dependent parameters that affect the result. In a real-world application, these dependent stream data usually arrive from a distributed environment. Hence, the stream processing system has to be designed so that it can deal with delays in the arrival of such data from distributed sources. We have designed a stream processing model that can deal with all the possible latency and provide an end-to-end low-latency system. We have performed stock market prediction by considering affecting parameters, such as the USD rate, oil price, and gold price, with an equal arrival rate. We have calculated the Normalized Root Mean Square Error (NRMSE), which simplifies the comparison among models with different scales. A comparative analysis of the experiment presented in the report shows a significant improvement in the result when considering the affecting parameters. In this work, we have used a statistical approach to forecast the probability of possible data latency arising from distributed sources.
Moreover, we have performed preprocessing of stream data to ensure at-least-once delivery semantics. In the direction of providing low latency in processing, we have also implemented exactly-once processing semantics. Extensive experiments have been performed with varying sizes of the window and data arrival rate. We have concluded that system latency can be reduced when the window size is equal to the data arrival rate.
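The NRMSE used above for scale-free model comparison is easy to compute. Normalizing the RMSE by the observed range, as below, is one common convention; dividing by the mean is another, so reported values depend on which normalization a paper uses.

```python
# NRMSE: RMSE divided by the observed range, so series on different
# scales with the same relative errors score the same.
import math

def nrmse(actual, predicted):
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    return math.sqrt(mse) / (max(actual) - min(actual))

# Two series on very different scales, same relative error profile:
print(round(nrmse([100, 110, 120, 130], [102, 108, 121, 128]), 3))
print(round(nrmse([1.00, 1.10, 1.20, 1.30], [1.02, 1.08, 1.21, 1.28]), 3))
```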
12

Yousefi, Bardia, and Chu Kiong Loo. "Comparative Study on Interaction of Form and Motion Processing Streams by Applying Two Different Classifiers in Mechanism for Recognition of Biological Movement." Scientific World Journal 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/723213.

Abstract:
Research on psychophysics, neurophysiology, and functional imaging shows a particular representation of biological movements that involves two pathways. The visual perception of biological movements is formed through two visual-system pathways called the dorsal and ventral processing streams. The ventral processing stream is associated with extracting form information; the dorsal processing stream provides motion information. The active basic model (ABM), a hierarchical representation of the human form, introduced novelty in the form pathway by applying a Gabor-based supervised object recognition method, improving biological plausibility while remaining similar to the original model. A fuzzy inference system is used for motion-pattern information in the motion pathway, making the recognition process more robust. The interaction of these pathways is intriguing and has been considered in many fields; here, it is investigated to obtain more appropriate results. An extreme learning machine (ELM) is employed as the classification unit of the model, since it retains the main properties of artificial neural networks while substantially reducing training time. Two configurations, pathway interaction using a synergetic neural network and using an ELM, are compared in terms of accuracy and compatibility.
13

Pasala, Raveendra Reddy, Mohan Raja Pulicharla, and Varsha Premani. "Optimizing Real-Time Data Pipelines for Machine Learning: A Comparative Study of Stream Processing Architectures." World Journal of Advanced Research and Reviews 23, no. 3 (2024): 1653–60. https://doi.org/10.5281/zenodo.14948785.

Abstract:
In the era of big data and real-time analytics, optimizing data pipelines for machine learning is critical for timely and accurate insights. This study analyzes the performance and scalability of Apache Kafka Streams, Apache Flink, and Apache Pulsar in real-time machine-learning applications. Despite the wide use of these technologies, there is a need for a comprehensive comparative analysis of their efficiency in practical scenarios. This research addresses that gap by providing a detailed comparison of these systems, focusing on latency, throughput, and resource utilization. We conducted benchmarks and experiments to assess each framework's performance in handling high-throughput data, delivering real-time predictions, and managing resource usage. Our results reveal that Apache Flink achieves 25% lower end-to-end latency than Kafka Streams in high-throughput scenarios. Apache Pulsar excels in scalability, handling up to 1.5 million messages per second, whereas Kafka Streams shows 15% higher memory utilization. These findings highlight the strengths and limitations of each system. Kafka Streams integrates well with Kafka's messaging infrastructure but may have higher latency under heavy loads. Flink offers superior low-latency and high-throughput performance, making it suitable for complex tasks. Pulsar's advanced messaging features and scalability are promising for large-scale applications, though it requires careful tuning. This comparative study provides practical insights for choosing the optimal stream processing framework for machine-learning pipelines.
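The latency figures such benchmarks report are usually gathered the same way: timestamp each record at ingest, subtract at the sink, and report percentiles. The framework-agnostic sketch below shows only that measurement skeleton; real Kafka Streams/Flink/Pulsar harnesses add network transport, serialization, and warm-up handling on top of it.

```python
# End-to-end latency measurement skeleton: per-record ingest timestamp,
# sink-side subtraction, percentile reporting over the sorted sample.
import time

def run_pipeline(records, stages):
    """Push each record through every stage; return sorted latencies."""
    latencies = []
    for rec in records:
        start = time.perf_counter()
        for stage in stages:
            rec = stage(rec)
        latencies.append(time.perf_counter() - start)
    return sorted(latencies)

def percentile(sorted_vals, p):
    return sorted_vals[min(len(sorted_vals) - 1, int(p * len(sorted_vals)))]

lats = run_pipeline(range(10_000), [lambda x: x + 1, lambda x: x * 2])
print(f"p50={percentile(lats, 0.5):.2e}s  p99={percentile(lats, 0.99):.2e}s")
```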
14

Hudson, Thomas Samuel, Alex M. Brisbourne, Sofia-Katerina Kufner, J. Michael Kendall, and Andy M. Smith. "Array processing in cryoseismology: a comparison to network-based approaches at an Antarctic ice stream." Cryosphere 17, no. 11 (2023): 4979–93. http://dx.doi.org/10.5194/tc-17-4979-2023.

Abstract:
Seismicity at glaciers, ice sheets, and ice shelves provides observational constraint on a number of glaciological processes. Detecting and locating this seismicity, specifically icequakes, is a necessary first step in studying processes such as basal slip, crevassing, imaging ice fabric, and iceberg calving, for example. Most glacier deployments to date use conventional seismic networks, comprised of seismometers distributed over the entire area of interest. However, smaller-aperture seismic arrays can also be used, which are typically sensitive to seismicity distal from the array footprint and require a smaller number of instruments. Here, we investigate the potential of arrays and array-processing methods to detect and locate subsurface microseismicity at glaciers, benchmarking performance against conventional seismic-network-based methods for an example at an Antarctic ice stream. We also provide an array-processing recipe for body-wave cryoseismology applications. Results from an array and a network deployed at Rutford Ice Stream, Antarctica, show that arrays and networks both have strengths and weaknesses. Arrays can detect icequakes from further distances, whereas networks outperform arrays in more comprehensive studies of a particular process due to greater hypocentral constraint within the network extent. We also gain new insights into seismic behaviour at the Rutford Ice Stream. The array detects basal icequakes in what was previously interpreted to be an aseismic region of the bed, as well as new icequake observations downstream and at the ice stream shear margins, where it would be challenging to deploy instruments. Finally, we make some practical recommendations for future array deployments at glaciers.
15

Wang, Haibo, Chaoyi Ma, Olufemi O. Odegbile, Shigang Chen, and Jih-Kwon Peir. "Randomized error removal for online spread estimation in data streaming." Proceedings of the VLDB Endowment 14, no. 6 (2021): 1040–52. http://dx.doi.org/10.14778/3447689.3447707.

Abstract:
Measuring flow spread in real time from large, high-rate data streams has numerous practical applications, where a data stream is modeled as a sequence of data items from different flows and the spread of a flow is the number of distinct items in the flow. Past decades have witnessed tremendous performance improvement for single-flow spread estimation. However, when dealing with numerous flows in a data stream, it remains a significant challenge to measure per-flow spread accurately while reducing memory footprint. The goal of this paper is to introduce new multi-flow spread estimation designs that incur much smaller processing overhead and query overhead than the state of the art, yet achieves significant accuracy improvement in spread estimation. We formally analyze the performance of these new designs. We implement them in both hardware and software, and use real-world data traces to evaluate their performance in comparison with the state of the art. The experimental results show that our best sketch significantly improves over the best existing work in terms of estimation accuracy, data item processing throughput, and online query throughput.
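The core primitive here, estimating a flow's spread (number of distinct items) in small memory, can be illustrated with a minimal linear-counting sketch. This is for intuition only; the paper's multi-flow designs are considerably more refined.

```python
# Minimal linear-counting sketch: hash each item to one bit of a bitmap;
# the count of remaining zero bits yields a distinct-count estimate that
# duplicates cannot inflate.
import hashlib
import math

class LinearCounter:
    def __init__(self, m=1024):
        self.m = m
        self.bits = [0] * m

    def add(self, item):
        digest = hashlib.md5(str(item).encode()).hexdigest()
        self.bits[int(digest, 16) % self.m] = 1

    def estimate(self):
        zeros = self.bits.count(0)
        return self.m * math.log(self.m / zeros) if zeros else float("inf")

lc = LinearCounter()
for item in range(500):   # 500 distinct items, each inserted twice
    lc.add(item)
    lc.add(item)
print(round(lc.estimate()))  # close to 500 despite the duplicates
```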
16

Parafe, Nikhil, M. Venkatesan, and Prabhavathy Panner. "Hymenopteran Colony Stream Clustering Algorithm and Comparison with Particle Swarm Optimization and Genetic Optimization Clustering." Journal of Computational and Theoretical Nanoscience 18, no. 4 (2021): 1336–41. http://dx.doi.org/10.1166/jctn.2021.9402.

Abstract:
A stream is an endless incoming sequence of data; streamed data is unbounded, and each item can typically be examined only once. Streamed data can be noisy, and the number of clusters in the data and their statistical properties can change over time; random access to the data is not possible, and storing all the arriving data is impractical. When applying data-processing techniques, and specifically stream clustering algorithms, to real-time data streams, limitations in execution time and memory have to be considered carefully. The proposed hymenopteran (ant) colony stream clustering algorithm forms clusters according to density variation, in which high-density feature regions are separated from low-density feature regions, with fixed movement of the ants. Results show that it produces denser clusters than previously proposed algorithms, and the fixed movement of the ants also decreases the loss of data points. A modified cluster-radius formula is proposed to increase the performance of the model and make it more dynamic under a continuous flow of data, and the pick-up and drop probability formulas are modified to reduce outliers. Results from the ant experiments also showed that sorting is carried out in two phases, a primary clustering episode followed by a spacing phase. In this paper, we also compare the proposed algorithm with particle swarm optimization and genetic optimization using DBSCAN and k-means clustering.
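For context on the pick-up and drop probabilities the abstract mentions, the classic Lumer-Faieta-style formulas are shown below; the constants k1 and k2 are conventional example values, and the paper's modified formulas differ from these.

```python
# Classic ant-clustering probabilities: isolated items (low local
# similarity density f) are likely to be picked up, while dense
# neighbourhoods attract drops.

def p_pick(f, k1=0.1):
    """Probability an ant picks up an item; f is local similarity density."""
    return (k1 / (k1 + f)) ** 2

def p_drop(f, k2=0.15):
    """Probability an ant drops a carried item in a neighbourhood."""
    return (f / (k2 + f)) ** 2

print(p_pick(0.05) > p_pick(0.9))   # True: isolated items get picked up
print(p_drop(0.9) > p_drop(0.05))   # True: dense regions attract drops
```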
17

Raveendra Reddy Pasala, Mohan Raja Pulicharla, and Varsha Premani. "Optimizing Real-Time Data Pipelines for Machine Learning: A Comparative Study of Stream Processing Architectures." World Journal of Advanced Research and Reviews 23, no. 3 (2024): 1653–60. http://dx.doi.org/10.30574/wjarr.2024.23.3.2818.

18

Surbeck, Werner, Jürgen Hänggi, Petra Viher, et al. "T169. SEMANTIC PROCESSING IN RELATION TO ANATOMICAL INTEGRITY OF THE VENTRAL LANGUAGE STREAM IN SCHIZOPHRENIA SPECTRUM DISORDERS." Schizophrenia Bulletin 46, Supplement_1 (2020): S295–S296. http://dx.doi.org/10.1093/schbul/sbaa029.729.

Abstract:
Background: Semantic processing anomalies, clinically reflected by disorganized speech, are a core symptom of schizophrenia. In the light of accumulating evidence on its prominent role in semantic processing, aberrant structural integrity of the ventral language stream may reflect impaired semantic processing in schizophrenia spectrum disorders (SSD). Methods: Comparison of white matter tract integrity in SSD patients and healthy controls using diffusion tensor imaging combined with probabilistic fiber tractography. For the ventral language stream, we assessed the inferior fronto-occipital fasciculus [IFOF], inferior longitudinal fasciculus, and uncinate fasciculus. The arcuate fasciculus and corticospinal tract were used as control tracts. In SSD patients, the relationship between semantic processing impairments and tract integrity was analyzed separately. Three-dimensional tract reconstructions were performed in 45/44 SSD patients/controls ("Bern sample") and replicated in an independent sample of 24/24 SSD patients/controls ("Basel sample"). Results: Multivariate analyses of fractional anisotropy, mean, axial, and radial diffusivity of the left IFOF showed significant differences between SSD patients and controls (p<0.001, ηp2=0.23) in the Bern sample. In SSD, axial diffusivity of the left IFOF was inversely correlated with semantic processing impairments (r=-0.579, p<0.0001). In the Basel sample, significant group differences for the left IFOF were replicated (p<0.01, ηp2=0.29), while the correlation between axial diffusivity of the left IFOF and semantic processing decline (r=-0.376, p=0.09) showed a statistical trend. No significant effects were found for the dorsal language stream. Discussion: This work provides direct evidence for the importance of the integrity of the ventral language stream, in particular the left IFOF, in semantic processing deficits in SSD.
APA, Harvard, Vancouver, ISO, and other styles
19

Konopka, Piotr, and Barthélémy von Haller. "Exploring data merging methods for a distributed processing system." Journal of Physics: Conference Series 2438, no. 1 (2023): 012038. http://dx.doi.org/10.1088/1742-6596/2438/1/012038.

Full text
Abstract:
Abstract The ALICE experiment at the CERN LHC (Large Hadron Collider) is undertaking a major upgrade during the LHC Long Shutdown 2 in 2019-2021, which includes a new computing system called O2 (Online-Offline). The raw data input from the ALICE detectors will increase a hundredfold, up to 3.5 TB/s. By reconstructing the data online, it will be possible to compress the data stream down to 100 GB/s before storing it permanently. The O2 software is a message-passing system. It will run on approximately 500 computing nodes performing reconstruction, compression, calibration and quality control of the received data stream. As a direct consequence of having a distributed computing system, locally generated data might be incomplete and could require merging to obtain complete results. This paper presents the O2 Mergers, the software designed to match and combine partial data into complete objects synchronously to data taking. Based on a detailed study and results of extensive benchmarks, a qualitative and quantitative comparison of different merging strategies considered to reach the final design and implementation of the software is discussed.
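The merging problem described in this abstract (combining partial, locally generated results from many nodes into complete objects) can be sketched with a toy example. The dict-based histogram representation and the function name below are illustrative assumptions for the sketch, not taken from the O2 software:

```python
from collections import defaultdict

def merge_partial_histograms(partials):
    """Combine per-node partial histograms (bin -> count) into one
    complete object, as a distributed merger stage might do."""
    merged = defaultdict(int)
    for node_histogram in partials:
        for bin_id, count in node_histogram.items():
            merged[bin_id] += count
    return dict(merged)

# Three nodes each saw a different part of the data stream.
parts = [{"b0": 4, "b1": 1}, {"b0": 2}, {"b1": 3, "b2": 5}]
print(merge_partial_histograms(parts))  # {'b0': 6, 'b1': 4, 'b2': 5}
```

The real design space the paper benchmarks (when to merge, how to buffer partial objects) is much richer than this additive case.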
APA, Harvard, Vancouver, ISO, and other styles
20

Pąchalska, Maria, Jolanta Góral-Pólrola, Andreas Mueller, and Juri D. Kropotov. "NEUROPSYCHOLOGY AND THE NEUROPHYSIOLOGY OF PERCEPTUAL MICROGENESIS." Acta Neuropsychologica 15, no. 4 (2017): 365–89. http://dx.doi.org/10.5604/01.3001.0010.7243.

Full text
Abstract:
Perception is one of the psychological operations that can be analyzed from the point of view of microgenetic theory. Our study tests the basic premise of microgenetic theory – the existence of recurrent stages of visual information processing. Event-related potentials in two variants of a cued GO/NOGO task (contrasting images of Animals and Plants in the first variant, and contrasting images of Angry and Happy faces in the second variant) were studied during the first 300 ms following stimulus presentation. Independent component analysis was applied to a large collection of ERPs. The functional independent components associated with visual category discrimination, comparison to working memory, action initiation and conflict detection were separated. Information processing in the ventral visual stream (the temporal independent components) occurs at two sequential stages, with positive/negative fluctuations of the cortical potential as indexes of the stages. The first stage represents the comparison of the pure physical features of the visual input with the memory trace. The second stage represents the comparison of more sophisticated semantic/emotional features with working memory. The two stages are the results of interplay between bottom-up and top-down projections in the ventral visual stream.
APA, Harvard, Vancouver, ISO, and other styles
21

Zhao, Hui, Xiaoping Zhou, Yunchuan Yang, and Wenbo Wang. "Hierarchical Modulation with Vector Rotation for E-MBMS Transmission in LTE Systems." Journal of Electrical and Computer Engineering 2010 (2010): 1–9. http://dx.doi.org/10.1155/2010/316749.

Full text
Abstract:
Enhanced Multimedia Broadcast and Multicast Service (E-MBMS) is considered of key importance for the proliferation of Long-Term Evolution (LTE) networks in the mobile market. Hierarchical modulation (HM), which involves “base-layer” (BL) and “enhancement-layer” (EL) bit streams, is a simple technique for achieving a tradeoff between service quality and radio coverage. Therefore, it is appealing for MBMS. Generally, HM suffers from severe performance degradation of the less protected EL stream. In this paper, HM with a vector rotation operation applied to the EL stream is proposed in order to improve the EL's performance. With proper interleaving in the frequency domain, this operation can exploit the inherent diversity gain from the multipath channel. In this way, HM with vector rotation can effectively enhance multimedia broadcasting in terms of video quality and coverage. The simulation results, with scalable video coding (SVC) as the source, show significant benefits in comparison with conventional HM and alternative schemes.
APA, Harvard, Vancouver, ISO, and other styles
22

Kuchuk, Heorhii, Yevhen Kalinin, Nataliia Dotsenko, Igor Chumachenko, and Yuriy Pakhomov. "DECOMPOSITION OF INTEGRATED HIGH-DENSITY IoT DATA FLOW." Advanced Information Systems 8, no. 3 (2024): 77–84. http://dx.doi.org/10.20998/2522-9052.2024.3.09.

Full text
Abstract:
Topicality. The concept of fog computing made it possible to transfer part of the data processing and storage tasks from the cloud to fog nodes to reduce latency. But in batch processing of integrated data streams from IoT sensors, it is sometimes necessary to distribute the tasks of the batch between the fog and cloud layers. For this, it is necessary to decompose the formed package, but the existing methods of decomposition do not meet the efficiency requirements of high-density IoT systems. The subject of study in the article is methods of decomposition of integrated data streams. The purpose of the article is to develop a method of decomposition of an integrated data stream in a high-density Internet of Things fog environment. This will reduce the processing time of operational transactions. The following results were obtained. The concept of decomposition of integrated information flows in the fog layer was implemented to transition from the batch mode to the flow mode of task processing. Within the framework of the concept, a method of selecting elementary task flows from an integrated flow is proposed, along with an algorithm for decomposition of the integrated flow of tasks. Conclusion. A comparison of the proposed method of processing information flows in the fog environment of high-density IoT with the existing approach is carried out. The results of the comparison showed that the proposed method is more suitable for deployment in conditions of limited network and computing resources. It is advisable to use it on nodes of fog computing systems with a high density of IoT sensors.
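As a loose illustration of the decomposition idea (splitting an integrated batch into elementary task flows that can then be scheduled on fog or cloud nodes separately), one might write something like the following. The task representation and the grouping key are assumptions for the sketch, not the authors' method:

```python
from collections import defaultdict

def decompose_batch(integrated_batch, key=lambda task: task["type"]):
    """Split an integrated batch of IoT tasks into elementary flows,
    grouping by a task attribute so each flow can be handed to a
    different layer (fog or cloud) in stream mode."""
    flows = defaultdict(list)
    for task in integrated_batch:
        flows[key(task)].append(task)
    return dict(flows)

batch = [
    {"type": "telemetry", "id": 1},
    {"type": "alarm", "id": 2},
    {"type": "telemetry", "id": 3},
]
flows = decompose_batch(batch)
print(sorted(flows))  # ['alarm', 'telemetry']
```

The paper's actual contribution lies in how the elementary flows are selected and scheduled under resource constraints, which this grouping sketch does not capture.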
APA, Harvard, Vancouver, ISO, and other styles
23

Singh, Jay, Rajendra Prasad Gupta, and Ram Prasad Sonakar. "Processing speed and scholastic performance of tribal students: A comparison across gender and subject stream." International Journal of Novel Research and Development 10, no. 2 (2025): a737–a742. https://doi.org/10.5281/zenodo.14891209.

Full text
Abstract:
The scholastic performance of students depends on several psychological and environmental factors, among which cognitive ability such as processing speed is an important one. Indian culture is full of diversity, and tribal students from Chhattisgarh face a distinct situation during their study and development. The major intention behind this study was to investigate the views of college faculty members, who reported that tribal students from Chhattisgarh had a poor understanding of subject matter and responded very little to questions in a basic language like Hindi in daily classroom teaching. These factors can strongly affect the academic record of any student, because the general competency level of a student is considered raw material for scholastic performance. The main objective of this study was to measure the level of processing speed of information as well as its association with academic performance among tribal students. Comparison between students of the arts and science groups on both variables (processing speed of information and academic performance) was the second objective. The study was based on the performance of the students on a simple block design test, along with three years of classroom observations and the views of faculties during a district-level meeting regarding the New Education Policy. A total of 345 students from tribal areas, including both genders from arts and science backgrounds, were selected purposively. Simple block design tests of the WAIS-III (designs no. 1, 2, and 3) were administered to check processing speed, and marks in the 12th class were taken as the measure of scholastic performance of tribal students. Results revealed that all the students took more time than the limit of 90 seconds. Significant differences in timing on the block design tests and academic marks across gender and subject stream were also present. The association between marks obtained in the 12th grade and the time taken to complete the block design test was found to be significantly negative. On the basis of the results of the present study, it can be concluded that, to ensure better academic performance of tribal students, tasks should be designed to increase their processing speed of information.
APA, Harvard, Vancouver, ISO, and other styles
24

Alshamrani, Sultan, Hesham Alhumyani, Quadri Waseem, and Isbudeen Noor Mohamed. "High availability of data using Automatic Selection Algorithm (ASA) in distributed stream processing systems." Bulletin of Electrical Engineering and Informatics 8, no. 2 (2019): 690–98. http://dx.doi.org/10.11591/eei.v8i2.1414.

Full text
Abstract:
High availability of data is one of the most critical requirements of a distributed stream processing system (DSPS). We can achieve high availability using available recovery techniques, which include active backup, passive backup, and upstream backup. Each recovery technique has its own advantages and disadvantages; they are used for different types of failures based on the type and nature of the failure. This paper presents an Automatic Selection Algorithm (ASA) which helps in selecting the best recovery technique based on the type of failure. We intend to use all the different recovery approaches available (i.e., active standby, passive standby, and upstream standby) together at nodes in a distributed stream-processing system, based upon the system requirements and the failure type. By doing this, we achieve all the benefits of fastest recovery, precise recovery, and lower runtime overhead in a single solution. We evaluate our Automatic Selection Algorithm (ASA) approach as an algorithm selector during the runtime of stream processing. Moreover, we also evaluated its efficiency with respect to the time factor. The experimental results show that our approach is 95% efficient and faster than other conventional manual failure recovery approaches, and is hence fully automatic in nature.
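To illustrate the selection idea in this abstract (choosing among active, passive, and upstream standby based on the observed failure type), here is a minimal sketch. The failure-type names and the mapping are hypothetical, not the paper's actual algorithm:

```python
# Hypothetical mapping from failure type to recovery technique; the
# categories below are illustrative, not taken from the cited paper.
RECOVERY_BY_FAILURE = {
    "node_crash": "active_standby",           # fastest takeover
    "network_partition": "upstream_standby",  # lowest runtime overhead
    "software_fault": "passive_standby",      # precise recovery from checkpoints
}

def select_recovery(failure_type, default="passive_standby"):
    """Pick a recovery technique for a DSPS node based on the
    observed failure type, falling back to a safe default."""
    return RECOVERY_BY_FAILURE.get(failure_type, default)

print(select_recovery("node_crash"))  # active_standby
```

A production selector would of course also weigh system requirements (latency budget, state size) rather than keying on failure type alone.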
APA, Harvard, Vancouver, ISO, and other styles
26

Bolgov, Sergei. "COMPARING REST API AND KAFKA IN THE CONTEXT OF FINANCIAL APPLICATIONS." Deutsche internationale Zeitschrift für zeitgenössische Wissenschaft 104 (May 21, 2025): 84–87. https://doi.org/10.5281/zenodo.15480672.

Full text
Abstract:
This paper explores the comparison between two data transmission technologies, REST API and Kafka, as used in financial applications. The study examines the architectural differences, performance, reliability, and applicability of each technology in various scenarios such as banking transactions, external system integration, and stream analytics. Particular attention is given to the capabilities and limitations of the REST API in synchronous data processing and the strengths of Kafka in stream analytics and handling large volumes of real-time data. The paper also discusses the potential for hybrid solutions combining both technologies to create more universal and scalable financial platforms.
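The architectural contrast can be made concrete with a toy sketch: a REST-style call is a synchronous request that returns one response, while a Kafka-style topic is an append-only log that any consumer can replay from its own offset. The classes and names below are illustrative and not tied to any real Kafka client API:

```python
class TopicLog:
    """Toy append-only log illustrating Kafka-style decoupling:
    producers append, consumers read independently at their own offsets."""
    def __init__(self):
        self.records = []

    def produce(self, record):
        self.records.append(record)
        return len(self.records) - 1  # offset of the new record

    def consume(self, offset):
        # Any consumer can (re)read from any offset, at any time.
        return self.records[offset:]

def rest_get_balance(accounts, account_id):
    """REST-style interaction: one synchronous request, one response."""
    return accounts[account_id]

log = TopicLog()
log.produce({"tx": "debit", "amount": 10})
log.produce({"tx": "credit", "amount": 25})
print(log.consume(0))  # both records, replayable by any consumer
```

The replayability of the log is what makes the Kafka style attractive for stream analytics, while the request/response shape suits interactive queries such as balance lookups.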
APA, Harvard, Vancouver, ISO, and other styles
27

Brino, Alex, Alessandro Di Girolamo, Wen Guan, et al. "Towards an Event Streaming Service for ATLAS data processing." EPJ Web of Conferences 214 (2019): 04034. http://dx.doi.org/10.1051/epjconf/201921404034.

Full text
Abstract:
The ATLAS experiment at the LHC is gradually transitioning from the traditional file-based processing model to dynamic workflow management at the event level with the ATLAS Event Service (AES). The AES assigns fine-grained processing jobs to workers and streams out the data in quasi-real time, ensuring fully efficient utilization of all resources, including the most volatile. The next major step in this evolution is the possibility to intelligently stream the input data itself to workers. The Event Streaming Service (ESS) is now in development to asynchronously deliver only the input data required for processing when it is needed, protecting the application payload from WAN latency without creating expensive long-term replicas. In the current prototype implementation, ESS processes run on compute nodes in parallel to the payload, reading the input event ranges remotely over the network, and replicating them in small input files that are passed to the application. In this contribution, we present the performance of the ESS prototype for different types of workflows in comparison to tasks accessing remote data directly. Based on the experience gained with the current prototype, we are now moving to the development of a server-side component of the ESS. The service can evolve progressively into a powerful Content Delivery Network-like capability for data streaming, ultimately enabling the delivery of ‘virtual data’ generated on demand.
APA, Harvard, Vancouver, ISO, and other styles
28

R, Abhishek. "Stream Processing for Association Rule Using Apriori Algorithm on Student Dataset." International Journal for Research in Applied Science and Engineering Technology 10, no. 7 (2022): 3728–35. http://dx.doi.org/10.22214/ijraset.2022.45658.

Full text
Abstract:
Abstract: In general, the most suitable choices in improvement of reinforcement concrete frame against lateral loading is use of bracing system. This paper deals with the, “SEISMIC ANALYSIS OF G+4 STOREY BUILDING WITH X-BRACING UNDER ZONE V”. In this paper, the seismic analysis of reinforced concrete (RC) buildings with X type of bracing with rectangular and reinforced concrete (RC) buildings without bracing with rectangular are compared. For this analysis of work a four-storey (G+4) building is considered which is situated in seismic zone V. The building models are analysed by equivalent static analysis as per recommendation given by IS 1893:2002 using ETABS software. This paper includes the comparison of seismic analysis of building with rectangular columns by using X-bracing system mentioned above and also the comparison between the response of the structure with bracings and structure without bracings.
APA, Harvard, Vancouver, ISO, and other styles
29

Hao, Peng, Shengbing Zhang, Xinbing Zhou, Yi Man, and Dake Liu. "PaCHNOC: Packet and Circuit Hybrid Switching NoC for Real-Time Parallel Stream Signal Processing." Micromachines 15, no. 3 (2024): 304. http://dx.doi.org/10.3390/mi15030304.

Full text
Abstract:
Real-time heterogeneous parallel embedded digital signal processor (DSP) systems process multiple data streams in parallel within a stringent time interval. This type of system on chip (SoC) requires the network on chip (NoC) to establish multiple symbiotic parallel data transmission paths with ultra-low transmission latency in real time. Our early NoC research, PCCNOC, meets this need. The PCCNOC uses packet routing to establish and lock a transmission circuit, so that PCCNOC is perfectly suitable for ultra-low-latency and high-bandwidth transmission of long data packets. However, a parallel multi-data-stream DSP system also needs to transmit roughly the same number of short data packets for job configuration and job execution status reports. When transferring short data packets, the link establishment routing delay becomes relatively significant. Our further research thus introduced PaCHNOC, a hybrid NoC in which long data packets are transmitted through a circuit established and locked by routing, while short data packets are attached to the routing packet and their transmission is completed during the routing process, thus avoiding the PCCNOC setup delay. Simulation shows that PaCHNOC performs well in supporting real-time heterogeneous parallel embedded DSP systems and achieves an overall latency reduction of 65% compared with related works. Finally, we used PaCHNOC in the baseband subsystem of a real 5G base station, which proved that our design is well suited to the baseband subsystems of 5G base stations, reducing comprehensive latency by 31% in comparison to related works.
APA, Harvard, Vancouver, ISO, and other styles
30

Roessler, Markus Philipp, Eberhard Abele, and Joachim Metternich. "Simulation Based Multi-Criteria Assessment of Lean Material Flow Design Alternatives." Applied Mechanics and Materials 598 (July 2014): 661–66. http://dx.doi.org/10.4028/www.scientific.net/amm.598.661.

Full text
Abstract:
In this article a procedure is introduced to improve the transparency and reliability of results for the selection of material flow design alternatives including machine tools and other capital-intensive goods. In the design phase of material flow planning projects, key performance indicators (KPIs) for design alternatives including processing as well as intralogistics elements can be derived using simulation. Using state-of-the-art methods in value stream design and simulation, volatile input data is often taken into account only in the simulation itself, but not in the downstream comparison of alternative designs, which can lead to imprecise conclusions and therefore to wrong investment decisions. To overcome this issue and to consider variability in the whole simulation phase and a subsequent decision making process, a multi-criteria decision analysis (MCDA) with two fuzzy representations is proposed and discussed here, with the aim of helping practitioners to obtain more competitive value streams. A further goal of the article is the comparison between the two forms used for fuzzy representation. Obtained results are discussed using the design example of machine tool-intralogistics systems.
APA, Harvard, Vancouver, ISO, and other styles
31

Arinelli, Lara de Oliveira, Alexandre Mendonça Teixeira, José Luiz de Medeiros, and Ofélia de Queiroz Fernandes Araújo. "CO2 Rich Natural Gas Processing: Technical, Power Consumption and Emission Comparisons of Conventional and Supersonic Technologies." Materials Science Forum 965 (July 2019): 79–86. http://dx.doi.org/10.4028/www.scientific.net/msf.965.79.

Full text
Abstract:
The supersonic separator is investigated via process simulation for treating CO2 rich (>40%) natural gas in terms of dew-points adjustment and CO2 removal for enhanced oil recovery. These applications are compared in terms of technical and energetic performance with conventional technologies, also comparing CO2 emissions from power generation. The context is that of an offshore platform to treat raw gas with 45%mol of CO2, producing a lean gas stream with a maximum CO2 composition of ≈20%mol, suitable for use as fuel gas, and a CO2 rich stream that is compressed and injected into the oil and gas fields. The conventional process comprises dehydration by chemical absorption in TEG, Joule-Thomson expansion for C3+ removal, and membrane permeation for CO2 capture. The other alternatives use supersonic separation for dew-points adjustment, and membranes or another supersonic separation unit for CO2 capture. Simulations are carried out in HYSYS 8.8, where membranes and supersonic separation are modeled via unit operation extensions developed in a previous work: MP-UOE and SS-UOE. A full technical and power consumption analysis is performed for comparison of the three cases. The results show that the replacement of conventional dehydration technology by supersonic separators decreases power demand by 8.5%, consequently reducing CO2 emitted to the atmosphere by 69.66 t/d. The use of supersonic separation for CO2 capture is also superior to membranes, mainly due to the production of a high-pressure CO2 stream, which requires much less power for injection compression than the low-pressure permeate stream from membranes. Therefore, the case with two supersonic separator units in series presents the best results: the lowest power demand (-23.9% versus the conventional case), directly impacting CO2 emissions, which are reduced by 2598 t/d (-27.82%).
APA, Harvard, Vancouver, ISO, and other styles
32

Martínez-Carreras, N., C. E. Wetzel, J. Frentress, et al. "Hydrological connectivity inferred from diatom transport through the riparian-stream system." Hydrology and Earth System Sciences 19, no. 7 (2015): 3133–51. http://dx.doi.org/10.5194/hess-19-3133-2015.

Full text
Abstract:
Abstract. Diatoms (Bacillariophyta) are one of the most common and diverse algal groups (ca. 200 000 species, ≈ 10–200 μm, unicellular, eukaryotic). Here we investigate the potential of aerial diatoms (i.e. diatoms nearly exclusively occurring outside water bodies, in wet, moist or temporarily dry places) to infer surface hydrological connectivity between hillslope-riparian-stream (HRS) landscape units during storm runoff events. We present data from the Weierbach catchment (0.45 km2, northwestern Luxembourg) that quantify the relative abundance of aerial diatom species on hillslopes and in riparian zones (i.e. surface soils, litter, bryophytes and vegetation) and within streams (i.e. stream water, epilithon and epipelon). We tested the hypothesis that different diatom species assemblages inhabit specific moisture domains of the catchment (i.e. HRS units) and, consequently, the presence of certain species assemblages in the stream during runoff events offers the potential for recording whether there was hydrological connectivity between these domains or not. We found that a higher percentage of aerial diatom species was present in samples collected from the riparian and hillslope zones than inside the stream. However, diatoms were absent on hillslopes covered by dry litter and the quantities of diatoms (in absolute numbers) were small in the rest of hillslope samples. This limits their use for inferring hillslope-riparian zone connectivity. Our results also showed that aerial diatom abundance in the stream increased systematically during all sampled events (n = 11, 2011–2012) in response to incident precipitation and increasing discharge. This transport of aerial diatoms during events suggested a rapid connectivity between the soil surface and the stream. Diatom transport data were compared to two-component hydrograph separation, and end-member mixing analysis (EMMA) using stream water chemistry and stable isotope data. 
Hillslope overland flow was insignificant during most sampled events. This research suggests that diatoms were likely sourced exclusively from the riparian zone, since it was not only the largest aerial diatom reservoir, but also since soil water from the riparian zone was a major streamflow source during rainfall events under both wet and dry antecedent conditions. In comparison to other tracer methods, diatoms require taxonomy knowledge and a rather large processing time. However, they can provide unequivocal evidence of hydrological connectivity and potentially be used at larger catchment scales.
APA, Harvard, Vancouver, ISO, and other styles
33

Dziubich, Tomasz, Julian Szymański, Adam Brzeski, Jan Cychnerski, and Waldemar Korłub. "Depth Images Filtering In Distributed Streaming." Polish Maritime Research 23, no. 2 (2016): 91–98. http://dx.doi.org/10.1515/pomr-2016-0025.

Full text
Abstract:
Abstract In this paper, we propose a distributed system for processing point clouds and transferring them via a computer network with regard to effectiveness-related requirements. We discuss a comparison of point cloud filters, focusing on their usage for streaming optimization. For the filtering step of the stream pipeline processing we evaluate four filters: Voxel Grid, Radius Outlier Removal, Statistical Outlier Removal and Pass Through. For each of the filters we perform a series of tests evaluating the impact on the point cloud size and transmitting frequency (analysed for various fps ratios). We present results of the optimization process used for point cloud consolidation in a distributed environment. We describe the processing of the point clouds before and after the transmission. Pre- and post-processing allow the user to send the cloud via the network without any delays. The proposed pre-processing compression of the cloud and the post-processing reconstruction of it are focused on assuring that the end-user application obtains the cloud with a given precision.
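Of the four filters evaluated, the Voxel Grid filter is the easiest to sketch: points falling into the same cubic cell are replaced by their centroid, shrinking the cloud before transmission. The pure-Python sketch below is illustrative only and is not the PCL implementation used in the paper:

```python
from collections import defaultdict

def voxel_grid_filter(points, leaf_size):
    """Downsample a point cloud by averaging all points that fall into
    the same cubic voxel of side `leaf_size` (Voxel Grid filtering)."""
    voxels = defaultdict(list)
    for x, y, z in points:
        key = (int(x // leaf_size), int(y // leaf_size), int(z // leaf_size))
        voxels[key].append((x, y, z))
    # Replace each voxel's points by their centroid.
    return [tuple(sum(c) / len(c) for c in zip(*pts)) for pts in voxels.values()]

cloud = [(0.1, 0.1, 0.0), (0.2, 0.3, 0.1), (5.0, 5.0, 5.0)]
print(voxel_grid_filter(cloud, leaf_size=1.0))  # two points remain
```

The leaf size directly trades point-cloud size (and hence bandwidth) against precision, which is exactly the tradeoff the paper's streaming tests measure.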
APA, Harvard, Vancouver, ISO, and other styles
34

Philip, Milu Mary, Amrutha Seshadri, and B. Vijayakumar. "Microservices Centric Architectural Model for Handling Data Stream Oriented Applications." Cybernetics and Information Technologies 20, no. 3 (2020): 32–44. http://dx.doi.org/10.2478/cait-2020-0026.

Full text
Abstract:
Abstract The present-day software application systems are highly complex, with many requirements and variations, which can only be handled by more than one architectural pattern. This paper focuses on a combinational architectural design, with microservices at the center, supported by the model-view-controller and the pipes-and-filters architectural patterns, to realize any data stream-oriented application. The proposed model is very generic and, for validation, a prototype GIS application has been considered. The application is designed to extract GIS data from internet sources and process the data using third-party processing tools. The overall design follows the microservices architecture and the processing segment is designed using the pipes-and-filters architectural pattern. User interaction is made possible with the use of the model-view-controller pattern. The versatility of the application is expressed in its ability to organize any number of given filters in a connected structure that agrees with inter-component dependencies. The model includes different services which make the application more user-friendly and secure by prompting the client for authentication and providing unique storage for every client. This approach is very useful for building applications with a high degree of flexibility, maintainability and adaptability. A qualitative comparison is made using a set of criteria and their implementation using the different architectural styles.
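The pipes-and-filters portion of such a design can be sketched with Python generators, where each filter consumes an upstream iterator and yields downstream. The GIS-flavored filters here are invented for illustration and are not the paper's components:

```python
def source(records):
    # Pipe source: emit raw records one at a time.
    yield from records

def parse_coordinates(stream):
    # Filter 1: split raw "lat,lon" strings into float pairs.
    for line in stream:
        lat, lon = line.split(",")
        yield float(lat), float(lon)

def in_bounding_box(stream, lat_range, lon_range):
    # Filter 2: keep only points inside a bounding box.
    for lat, lon in stream:
        if lat_range[0] <= lat <= lat_range[1] and lon_range[0] <= lon <= lon_range[1]:
            yield lat, lon

# Filters compose into a pipeline; any number can be chained,
# subject only to inter-component dependencies.
pipeline = in_bounding_box(
    parse_coordinates(source(["10.0,20.0", "55.5,60.1"])),
    lat_range=(0, 50), lon_range=(0, 50),
)
print(list(pipeline))  # [(10.0, 20.0)]
```

In a microservices setting each filter would typically run as its own service with the pipe realized as a message queue, but the composition principle is the same.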
APA, Harvard, Vancouver, ISO, and other styles
35

Miranda, Denis da Silva, Luise Prado Martins, Beatriz Arioli de Sá Teles, et al. "Alternative Integrated Ethanol, Urea, and Acetic Acid Processing Routes Employing CCU: A Prospective Study through a Life Cycle Perspective." Sustainability 15, no. 22 (2023): 15937. http://dx.doi.org/10.3390/su152215937.

Full text
Abstract:
Despite the importance of inputs such as urea, ethanol, and acetic acid for the global production of food, energy, and chemical bases, manufacturing these substances depends on non-renewable resources, generating significant environmental impacts. One alternative to reducing these effects is to integrate production processes. This study compares the cumulative environmental performance of individual production routes for ethanol, urea, and acetic acid with that of an integrated complex designed based on Industrial Ecology precepts. Life Cycle Assessment was used as a metric for the impact categories of Global Warming Potential (GWP) and Primary Energy Demand (PED). The comparison occurred between the reference scenario, which considers individual processes, and six alternative integrated arrangements that vary in the treatment given to a stream concentrated in fuels generated in the Carbon Capture and Usage system that serves the processing of acetic acid. The study showed that process integration is recommended in terms of PED, whose contributions were reduced by 46–63% compared to stand-alone processes. The impacts of GWP are associated with treating the fuel stream. If it is treated as a co-product and environmental loads are allocated in terms of energy content, gains of up to 44% can be expected. On the other hand, if the stream is a waste, the complex’s GWP becomes more aggressive than the baseline scenario by 66%.
APA, Harvard, Vancouver, ISO, and other styles
36

Kim, Taehyun, and M. H. Ammar. "A comparison of heterogeneous video multicast schemes: Layered encoding or stream replication." IEEE Transactions on Multimedia 7, no. 6 (2005): 1123–30. http://dx.doi.org/10.1109/tmm.2005.858376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Van Wieringen, Wessel N., Mark A. Van De Wiel, and Bauke Ylstra. "Normalized, Segmented or Called aCGH Data?" Cancer Informatics 3 (January 2007): 117693510700300. http://dx.doi.org/10.1177/117693510700300030.

Full text
Abstract:
Array comparative genomic hybridization (aCGH) is a high-throughput lab technique to measure genome-wide chromosomal copy numbers. Data from aCGH experiments require extensive pre-processing, which consists of three steps: normalization, segmentation and calling. Each of these pre-processing steps yields a different data set: normalized data, segmented data, and called data. Publications using aCGH base their findings on data from all stages of the pre-processing. Hence, there is no consensus on which should be used for further downstream analysis. This consensus is however important for correct reporting of findings, and comparison of results from different studies. We discuss several issues that should be taken into account when deciding on which data are to be used. We express the belief that called data are best used, but would welcome opposing views.
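For readers unfamiliar with the pipeline, the final calling step can be caricatured as thresholding segmented log2-ratios into discrete copy-number states. The cutoffs below are arbitrary illustrative values, not those of any published calling algorithm:

```python
def call_segments(segment_means, loss_cut=-0.2, gain_cut=0.2):
    """Toy 'calling' step of an aCGH pipeline: map segmented log2-ratio
    means to discrete states (-1 = loss, 0 = normal, +1 = gain)."""
    return [-1 if m < loss_cut else (1 if m > gain_cut else 0)
            for m in segment_means]

print(call_segments([-0.5, 0.01, 0.4]))  # [-1, 0, 1]
```

The discretization is exactly why the choice of stage matters: called data are easy to interpret but discard the magnitude information present in normalized and segmented data.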
APA, Harvard, Vancouver, ISO, and other styles
38

Abuqabita, Flasteen, Razan Al-Omoush, and Jaber Alwidian. "A Comparative Study on Big Data Analytics Frameworks, Data Resources and Challenges." Modern Applied Science 13, no. 7 (2019): 1. http://dx.doi.org/10.5539/mas.v13n7p1.

Full text
Abstract:
Recently, a huge amount of data has been generated all over the world; these data are very large, extremely fast, and varied in type. In order to extract value from these data and make sense of them, a lot of frameworks and tools need to be developed for analyzing them. Until now, many tools and frameworks have been created to capture, store, analyze and visualize such data. In this study we categorized the existing frameworks used for processing big data into three groups, namely Batch processing, Stream analytics and Interactive analytics; we discussed each of them in detail and made comparisons between them.
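The batch/stream distinction drawn in this survey can be made concrete with a toy aggregate: a batch job sees the whole data set at once, while a streaming job folds records in one at a time without holding the full set in memory. The names below are illustrative:

```python
def batch_average(values):
    """Batch processing: the whole data set is available at once."""
    return sum(values) / len(values)

class StreamingAverage:
    """Stream analytics: update an aggregate one record at a time,
    keeping only a constant amount of state."""
    def __init__(self):
        self.count = 0
        self.total = 0.0

    def update(self, value):
        self.count += 1
        self.total += value
        return self.total / self.count

avg = StreamingAverage()
for v in [2.0, 4.0, 6.0]:
    current = avg.update(v)
print(current, batch_average([2.0, 4.0, 6.0]))  # 4.0 4.0
```

Interactive analytics, the survey's third group, layers ad hoc querying on top of either model rather than changing the aggregation itself.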
APA, Harvard, Vancouver, ISO, and other styles
39

Evdokimov, Sergei Ivanovich, Nikolay S. Golikov, Alexey F. Pryalukhin, et al. "Studying Flotation of Gold Microdispersions with Carrier Minerals and Pulp Aeration with a Steam–Air Mixture." Minerals 14, no. 1 (2024): 108. http://dx.doi.org/10.3390/min14010108.

Full text
Abstract:
This work is aimed at obtaining new knowledge in the field of interactions of polydisperse hydrophobic surfaces in order to increase the extraction of mineral microdispersions via flotation. The high velocity and probability of aggregation of fine particles with large ones are exploited to increase the extraction of finely dispersed gold in this work. Large particles act as carrier minerals, which are intentionally introduced into a pulp. The novelty of this work lies in the fact that a rougher concentrate is used as the carrier mineral. For this purpose, it is isolated from three parallel pulp streams by mixing the rougher concentrate, isolated from the first stream of raw materials, with the initial feed of the second stream; accordingly, the rougher concentrate of the second stream is mixed with the initial feed of the third stream, and the finished rougher concentrate is obtained. In this mode of extracting the rougher concentrate, the content of the extracted metal increases from stream to stream, which contributes to the growth of its content in the end product. Moreover, in order to supplement the forces involved in the separation of minerals with surface forces of structural origin in the third flotation stream, the pulp is aerated for a short time (about 15%–25% of the total) with air bubbles filled with a heat carrier, i.e., hot water vapor. Within this flotation method, the influence of the surface currents occurring in the wetting film on its thinning and breakthrough kinetics is expressed as a correction to the liquid slip length in the hydrophobic gap. The value of the correction is expressed as a fraction of the limiting thickness of the wetting film, determined by the condition that the film thickness remains constant when the streams in the interphase gap are equal: the outflowing stream (due to the downforce) and the inflowing streams (Marangoni flows and a thermo-osmotic stream). Gold flotation experiments are performed on samples of gold-bearing ore obtained from two deposits, under conditions that simulate a continuous process. The technological advantages of the developed scheme and flotation mode for gold microdispersions are shown in comparison with the basic technology. The purpose of this work is to conduct comparative tests of the basic and developed technologies using samples of gold-bearing ore obtained from the Natalka and Olimpiada deposits. Through the use of the developed technology, an increase in gold extraction of 7.99% and in concentrate quality (from 5.09 to 100.3 g/t) is achieved when the yield of the concentrate decreases from 1.86 to 1.30%, which reduces the costs associated with its expensive metallurgical processing.
APA, Harvard, Vancouver, ISO, and other styles
40

Xu, Jianfeng, Chenglei Wu, Jilin Xu, Lan Liu, and Yuanjian Zhang. "Stream Convolution for Attribute Reduction of Concept Lattices." Mathematics 11, no. 17 (2023): 3739. http://dx.doi.org/10.3390/math11173739.

Full text
Abstract:
Attribute reduction is a crucial research area within concept lattices. However, existing works are mostly limited to either incremental or decremental algorithms, rather than considering both, and may therefore handle large-scale streaming attributes inefficiently. Convolution calculation in deep learning involves a dynamic data processing method in the form of sliding windows. Inspired by this, we adopt slide-in and slide-out windows, as in convolution calculation, to update the attribute reduction. Specifically, we study the attribute changing mechanism in the sliding-window mode of convolution and investigate five attribute variation cases. These cases consider the respective intersections of the slide-in and slide-out attributes, i.e., equal to, disjoint with, partially joint with, containing, and contained by. We then propose an update procedure for the reduction set when attributes slide in and out simultaneously. Meanwhile, we propose the CLARA-DC algorithm, which aims to solve the problem of inefficient attribute reduction for large-scale streaming data. Finally, through an experimental comparison on four UCI datasets, CLARA-DC achieves higher efficiency and scalability in dealing with large-scale datasets. It adapts to varying types and sizes of datasets, boosting efficiency by an average of 25%.
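The five intersection cases the abstract enumerates can be sketched with plain set operations. The function below is a hypothetical illustration (names and labels are ours, not the paper's CLARA-DC implementation):

```python
def overlap_case(slide_in: set, slide_out: set) -> str:
    """Classify how the slide-in attribute set intersects the slide-out set:
    equal, disjoint, partially joint, containing, or contained by."""
    if slide_in == slide_out:
        return "equal"
    if not (slide_in & slide_out):
        return "disjoint"
    if slide_in < slide_out:   # proper subset: slide-in contained by slide-out
        return "contained by"
    if slide_in > slide_out:   # proper superset: slide-in contains slide-out
        return "containing"
    return "partially joint"
```

An update algorithm in this style would dispatch on this label to decide how the reduction set changes as the window slides.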
APA, Harvard, Vancouver, ISO, and other styles
41

Rönnholm, P., M. T. Vaaja, H. Kauhanen, and T. Klockars. "ON SELECTING IMAGES FROM AN UNAIMED VIDEO STREAM FOR PHOTOGRAMMETRIC MODELLING." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020 (August 3, 2020): 389–94. http://dx.doi.org/10.5194/isprs-annals-v-2-2020-389-2020.

Full text
Abstract:
Abstract. In this paper, we illustrate how convolutional neural networks and voxel-based processing, together with voxel visualizations, can be utilized for the selection of unaimed images for a photogrammetric image block. Our research included the detection of an ear in images with a convolutional neural network, computation of image orientations with a structure-from-motion algorithm, visualization of camera locations in a voxel representation to assess the quality of the imaging geometry, rejection of unnecessary images with an XYZ buffer, the creation of 3D models in two example cases, and the comparison of the resulting 3D models. Two test data sets of an ear were taken with the video recorder of a mobile phone. In the first test case, special emphasis was placed on ensuring good imaging geometry. In the second test case, by contrast, the trajectory was limited to approximately horizontal movement, leading to poor imaging geometry. A convolutional neural network together with an XYZ buffer managed to select a useful set of images for the photogrammetric 3D measuring phase. The voxel representation illustrated the imaging geometry well and has potential for early detection of whether data are suitable for photogrammetric modelling. The comparison of 3D models revealed that the model from poor imaging geometry was noisy and flattened. The results emphasize the importance of good imaging geometry.
APA, Harvard, Vancouver, ISO, and other styles
42

Savich, Antony, and Shawki Areibi. "A Low-Power Scalable Stream Compute Accelerator for General Matrix Multiply (GEMM)." VLSI Design 2014 (February 24, 2014): 1–11. http://dx.doi.org/10.1155/2014/712085.

Full text
Abstract:
Many applications, ranging from machine learning, image processing, and machine vision to optimization, utilize matrix multiplication as a fundamental block. Matrix operations play an important role in determining the performance of such applications. This paper proposes a novel, efficient, highly scalable hardware accelerator that is of equivalent performance to a 2 GHz quad-core PC but can be used in low-power applications targeting embedded systems requiring high-performance computation. Power, performance, and resource consumption are demonstrated on a fully functional prototype. The proposed hardware accelerator is 36× more energy efficient per unit of computation compared to a state-of-the-art Xeon processor of equal vintage, and is 14× more efficient as a stand-alone platform with equivalent performance. An important comparison between simulated system estimates and real system performance is carried out.
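As a point of reference for the kernel such accelerators implement, a plain triple-loop GEMM can be sketched in a few lines (a software illustration only, not the paper's hardware stream architecture):

```python
def gemm(a, b):
    """Naive general matrix multiply C = A * B over nested Python lists.
    A is n x k, B is k x m; returns the n x m product."""
    n, k, m = len(a), len(b), len(b[0])
    c = [[0] * m for _ in range(n)]
    for i in range(n):
        for p in range(k):
            aip = a[i][p]          # hoist A[i][p] out of the inner loop
            for j in range(m):
                c[i][j] += aip * b[p][j]
    return c
```

Hardware designs reorganize exactly this loop nest into streams of operands so that multiply-accumulate units stay busy every cycle.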
APA, Harvard, Vancouver, ISO, and other styles
43

Klomp, Sven, Marco Munderloh, and Jörn Ostermann. "Decoder-Side Motion Estimation Assuming Temporally or Spatially Constant Motion." ISRN Signal Processing 2011 (June 20, 2011): 1–10. http://dx.doi.org/10.5402/2011/956372.

Full text
Abstract:
In current video coding standards, the encoder exploits temporal redundancies within the video sequence by performing block-based motion compensated prediction. However, the motion estimation is only performed at the encoder, and the motion vectors have to be coded explicitly into the bit stream. Recent research has shown that the compression efficiency can be improved by also estimating the motion at the decoder. This paper gives a detailed description of a decoder-side motion estimation architecture which assumes temporal constant motion and compares the proposed motion compensation algorithm with an alternative interpolation method. The overall rate reduction for this approach is almost 8% compared to H.264/MPEG-4 Part 10 (AVC). Furthermore, an extensive comparison with the assumption of spatial constant motion, as used in decoder-side motion vector derivation, is given. A new combined approach of both algorithms is proposed that leads to 13% bit rate reduction on average.
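The decoder-side idea can be illustrated with a toy exhaustive block-matching search: the decoder estimates a motion vector between two frames it has already decoded and, under the temporally-constant-motion assumption, reuses that vector to predict the next frame instead of reading it from the bit stream. All names and the tiny search range are illustrative, not the paper's implementation:

```python
def best_motion_vector(ref, cur, bx, by, bsize=2, search=1):
    """Find the (dx, dy) minimizing the sum of absolute differences (SAD)
    between the block at (bx, by) in `cur` and a displaced block in `ref`.
    Frames are plain 2-D lists of pixel values."""
    def sad(dx, dy):
        total = 0
        for y in range(by, by + bsize):
            for x in range(bx, bx + bsize):
                total += abs(cur[y][x] - ref[y + dy][x + dx])
        return total

    candidates = [(sad(dx, dy), (dx, dy))
                  for dy in range(-search, search + 1)
                  for dx in range(-search, search + 1)]
    return min(candidates)[1]
```

Under temporally constant motion, the vector estimated between frames t-2 and t-1 is simply reused to predict frame t, so no motion vector needs to be transmitted for that block.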
APA, Harvard, Vancouver, ISO, and other styles
44

Shwe, Thanda, and Masayoshi Aritsugi. "Optimizing Data Processing: A Comparative Study of Big Data Platforms in Edge, Fog, and Cloud Layers." Applied Sciences 14, no. 1 (2024): 452. http://dx.doi.org/10.3390/app14010452.

Full text
Abstract:
Intelligent applications in several areas increasingly rely on big data solutions to improve their efficiency, but the processing and management of big data incur high costs. Although cloud-computing-based big data management and processing offer a promising solution to provide scalable and abundant resources, the current cloud-based big data management platforms do not properly address the high latency, privacy, and bandwidth consumption challenges that arise when sending large volumes of user data to the cloud. Computing in the edge and fog layers is quickly emerging as an extension of cloud computing used to reduce latency and bandwidth consumption, resulting in some of the processing tasks being performed in edge/fog-layer devices. Although these devices are resource-constrained, recent increases in resource capacity provide the potential for collaborative big data processing. We investigated the deployment of data processing platforms based on three different computing paradigms, namely batch processing, stream processing, and function processing, by aggregating the processing power from a diverse set of nodes in the local area. Herein, we demonstrate the efficacy and viability of edge-/fog-layer big data processing across a variety of real-world applications and in comparison to the cloud-native approach in terms of performance.
APA, Harvard, Vancouver, ISO, and other styles
45

Rahman, Muhammad Habibur, Bonghee Hong, Hari Setiawan, Sanghyun Lee, Dongjun Lim, and Woochan Kim. "An Indexing Method of Continuous Spatiotemporal Queries for Stream Data Processing Rules of Detected Target Objects." Sensors 21, no. 23 (2021): 8013. http://dx.doi.org/10.3390/s21238013.

Full text
Abstract:
Real-time performance is important in rule-based continuous spatiotemporal query processing for risk analysis and decision making on target objects collected by the sensors of combat vessels. The existing Rete algorithm, which creates a compiled node link structure for executing rules, is known to be the most efficient. However, when a large number of rules must be processed over large volumes of stream data, the Rete technique incurs an overhead of searching for the rules to be bound. This paper proposes a hash indexing technique for Rete nodes to reduce the overhead of searching for the spatiotemporal condition rules that must be bound when rules are expressed in a node link structure. A performance comparison was conducted between Drools, which implements the Rete method, and an implementation of the hash index method presented in this paper. For performance measurement, processing time was measured against changes in the number of rules, the number of objects, and the distribution of objects. The hash index method presented in this paper improved performance by at least 18% compared to Drools.
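The gain from hash indexing can be sketched generically: instead of scanning every rule node linearly for each incoming stream event, rules are keyed in a hash table so candidate rules are found in O(1) on average. The class below is a hypothetical illustration, not the paper's Rete-node index:

```python
from collections import defaultdict

class RuleIndex:
    """Hash rules by a (region, predicate) key so each incoming stream
    event looks up its candidate rules directly, instead of testing
    every rule in turn."""
    def __init__(self):
        self._index = defaultdict(list)

    def add_rule(self, region, predicate, rule_id):
        self._index[(region, predicate)].append(rule_id)

    def candidates(self, region, predicate):
        # Average O(1) dictionary lookup, versus O(n) over all rules.
        return self._index.get((region, predicate), [])
```

A linear scan costs time proportional to the total number of rules per event; the index makes the per-event cost proportional only to the number of rules actually matching the event's key.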
APA, Harvard, Vancouver, ISO, and other styles
46

Lobo, Jesus L., Igor Ballesteros, Izaskun Oregi, Javier Del Ser, and Sancho Salcedo-Sanz. "Stream Learning in Energy IoT Systems: A Case Study in Combined Cycle Power Plants." Energies 13, no. 3 (2020): 740. http://dx.doi.org/10.3390/en13030740.

Full text
Abstract:
The prediction of electrical power produced in combined cycle power plants is a key challenge in the electrical power and energy systems field. This power production can vary depending on environmental variables, such as temperature, pressure, and humidity. Thus, the business problem is how to predict the power production as a function of these environmental conditions in order to maximize profit. The research community has solved this problem by applying Machine Learning techniques, and has managed to reduce the computational and time costs in comparison with the traditional thermodynamical analysis. Until now, this challenge has been tackled from a batch learning perspective, in which data are assumed to be at rest and models do not continuously integrate new information into already constructed models. We present an approach closer to the Big Data and Internet of Things paradigms, in which data arrive continuously and models learn incrementally, achieving significant enhancements in terms of data processing (time, memory, and computational costs) while obtaining competitive performance. This work compares and examines the hourly electrical power predictions of several streaming regressors, and discusses the best technique, in terms of processing time and predictive performance, to apply in this streaming scenario.
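The incremental-learning setting the abstract contrasts with batch learning can be sketched with a minimal online linear regressor that performs one gradient update per arriving sample (an illustrative toy, not one of the streaming regressors evaluated in the paper):

```python
class OnlineLinearRegressor:
    """Streaming regressor: the model is updated one sample at a time,
    so data never need to be stored at rest for retraining."""
    def __init__(self, n_features, lr=0.01):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def learn_one(self, x, y):
        # One stochastic gradient step on the squared error for this sample.
        err = self.predict(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err
```

Each hourly reading (temperature, pressure, humidity, observed power) would be fed to `learn_one` as it arrives, keeping memory and compute cost constant per sample.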
APA, Harvard, Vancouver, ISO, and other styles
47

McDonald, James, and Ian M. Whillans. "Comparison of Results From Transit Satellite Tracking." Annals of Glaciology 11 (1988): 83–88. http://dx.doi.org/10.3189/s0260305500006376.

Full text
Abstract:
Large-scale motions and strain-rates over great distances on polar ice sheets are often obtained from the tracking of Transit (or doppler) satellites. The results of different processing techniques for these tracking data are compared, using some of the data collected on and near Ice Stream C. Reduction is made by using the software packages CALIPER, GEODOP V, MAGNET, and the micro-processor on the Magnavox MX 1502 satellite receiver. The orbital data broadcast by the satellites are used, as well as more precise orbits obtained afterward. In addition, calculations are made for single sites individually (point positioning) and for many sites with simultaneous tracking data (network adjustment). The results agree within the range of known errors associated with the orbits. Earth-based positions (latitude, longitude, ellipsoidal height), based on the broadcast orbits, agree to within 41.1 m. Positions with more precise orbits are within 0.7 m of one another. Relative positions are best obtained by using network techniques, and these agree with terrestrial survey results within 0.2 m in horizontal separation for sites 19 km apart, and are within 4.8 m in elevation difference. The calculated azimuth differs by 1.5 m/19 km or 10⁻⁴ rad.
APA, Harvard, Vancouver, ISO, and other styles
48

McDonald, James, and Ian M. Whillans. "Comparison of Results From Transit Satellite Tracking." Annals of Glaciology 11 (1988): 83–88. http://dx.doi.org/10.1017/s0260305500006376.

Full text
Abstract:
Large-scale motions and strain-rates over great distances on polar ice sheets are often obtained from the tracking of Transit (or doppler) satellites. The results of different processing techniques for these tracking data are compared, using some of the data collected on and near Ice Stream C. Reduction is made by using the software packages CALIPER, GEODOP V, MAGNET, and the micro-processor on the Magnavox MX 1502 satellite receiver. The orbital data broadcast by the satellites are used, as well as more precise orbits obtained afterward. In addition, calculations are made for single sites individually (point positioning) and for many sites with simultaneous tracking data (network adjustment). The results agree within the range of known errors associated with the orbits. Earth-based positions (latitude, longitude, ellipsoidal height), based on the broadcast orbits, agree to within 41.1 m. Positions with more precise orbits are within 0.7 m of one another. Relative positions are best obtained by using network techniques, and these agree with terrestrial survey results within 0.2 m in horizontal separation for sites 19 km apart, and are within 4.8 m in elevation difference. The calculated azimuth differs by 1.5 m/19 km or 10⁻⁴ rad.
APA, Harvard, Vancouver, ISO, and other styles
49

Pavlopoulou, Niki. "Dynamic Diverse Summarisation in Heterogeneous Graph Streams: a Comparison between Thesaurus/Ontology-based and Embeddings-based Approaches." International Journal of Graph Computing 1, no. 1 (2020): 70–94. http://dx.doi.org/10.35708/gc1868-126724.

Full text
Abstract:
Nowadays, a lot of attention is drawn to smart environments, like Smart Cities and the Internet of Things. These environments generate data streams that can be represented as graphs and analysed in real time to satisfy user or application needs. The dynamism, heterogeneity, continuity, and high volume of these real-world graph streams create new requirements for graph processing algorithms. We propose a dynamic graph stream summarisation system, based on embeddings, that provides expressive graphs while ensuring high usability and limited resource usage. In this paper, we examine the performance of our embeddings-based approach against an existing thesaurus/ontology-based approach (FACES) that we adapted to a dynamic environment with the use of windows and data fusion. Both approaches use conceptual clustering and top-k scoring, which can produce expressive, dynamic graph summaries with limited resources. Evaluations show that sending top-k fused diverse summaries results in a 34% to 92% reduction of forwarded messages and in redundancy awareness, with an F-score ranging from 0.80 to 0.95 depending on k, compared to sending all available information without top-k scoring. The quality of the summaries also follows the agreement of ideal summaries determined by human judges. The summarisation approaches come at the expense of reduced system performance: the thesaurus/ontology-based approach proved 6 times more latency-heavy and 3 times more memory-heavy than the most expensive embeddings-based approach, with lower throughput, but provided slightly better-quality summaries.
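The top-k scoring step both approaches share can be sketched with a standard heap selection (illustrative only; the paper's actual scoring functions are more involved):

```python
import heapq

def top_k_summaries(scored, k):
    """Keep only the k highest-scoring (score, summary) pairs, returned in
    descending score order, so only those k summaries are forwarded."""
    return heapq.nlargest(k, scored, key=lambda pair: pair[0])
```

Forwarding only these k pairs, instead of every candidate summary, is what yields the reduction in forwarded messages reported above.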
APA, Harvard, Vancouver, ISO, and other styles
50

Nakagawa, Fumiko, Urumu Tsunogai, Yusuke Obata, et al. "Export flux of unprocessed atmospheric nitrate from temperate forested catchments: a possible new index for nitrogen saturation." Biogeosciences 15, no. 22 (2018): 7025–42. http://dx.doi.org/10.5194/bg-15-7025-2018.

Full text
Abstract:
Abstract. To clarify the biological processing of nitrate within temperate forested catchments using unprocessed atmospheric nitrate exported from each catchment as a tracer, we continuously monitored stream nitrate concentrations and stable isotopic compositions, including 17O excess (Δ17O), in three forested catchments in Japan (KJ, IJ1, and IJ2) for more than 2 years. The catchments showed varying flux-weighted average nitrate concentrations of 58.4, 24.4, and 17.1 µmol L−1 in KJ, IJ1, and IJ2, respectively, which correspond to varying export fluxes of nitrate: 76.4, 50.1, and 35.1 mmol m−2 in KJ, IJ1, and IJ2, respectively. In addition to stream nitrate, nitrate concentrations and stable isotopic compositions in soil water were determined for comparison in the most nitrate-enriched catchment (site KJ). While the 17O excess of nitrate in soil water showed significant seasonal variation, ranging from +0.1 ‰ to +5.7 ‰ in KJ, stream nitrate showed small variation, from +0.8 ‰ to +2.0 ‰ in KJ, +0.7 ‰ to +2.8 ‰ in IJ1, and +0.4 ‰ to +2.2 ‰ in IJ2. We conclude that the major source of stream nitrate in each forested catchment is groundwater nitrate. Additionally, the significant seasonal variation found in soil nitrate is buffered by the groundwater nitrate. The estimated annual export flux of unprocessed atmospheric nitrate accounted for 9.4 %±2.6 %, 6.5 %±1.8 %, and 2.6 %±0.6 % of the annual deposition flux of atmospheric nitrate in KJ, IJ1, and IJ2, respectively. The export flux of unprocessed atmospheric nitrate relative to the deposition flux showed a clear normal correlation with the flux-weighted average concentration of stream nitrate, indicating that reductions in the biological assimilation rates of nitrate in forested soils, rather than increased nitrification rates, are likely responsible for the elevated stream nitrate concentration, probably as a result of nitrogen saturation. 
The export flux of unprocessed atmospheric nitrate relative to the deposition flux in each forest ecosystem is applicable as an index for nitrogen saturation.
APA, Harvard, Vancouver, ISO, and other styles