To see the other types of publications on this topic, follow the link: Storm sewers – Data processing.

Journal articles on the topic 'Storm sewers – Data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Storm sewers – Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Russo, Beniamino, David Sunyer, Marc Velasco, and Slobodan Djordjević. "Analysis of extreme flooding events through a calibrated 1D/2D coupled model: the case of Barcelona (Spain)." Journal of Hydroinformatics 17, no. 3 (2014): 473–91. http://dx.doi.org/10.2166/hydro.2014.063.

Abstract:
This paper presents the results of a calibrated 1D/2D coupled model simulating surface and sewer flows in Barcelona. The model covers 44 km² of the city land involving 241 km of sewers. It was developed in order to assess the flood hazard in the Raval district, historically affected by flooding during heavy rainfalls. Special attention was paid to the hydraulic characterization of the inlet systems (representing the interface between surface and underground flows), through experimental expressions used to estimate the effective runoff flows into the sewers in case of storms. A 2D unstructured mesh with more than 400,000 cells was created on the basis of a detailed digital terrain model. The model was calibrated and validated using four sets of well-recorded flooding events that occurred in 2011. The aim of this paper is to show how a detailed 1D/2D coupled model can be adequately calibrated and validated using a wide set of sewer sensors and post-event collected data (videos, photos, emergency reports, etc.). Moreover, the created model presents significant computational time savings via parallel processing and hardware configuration. Considering the computational performances achieved, the model can be used for real-time strategies and as the core of early warning systems.
2

Métadier, M., and J. L. Bertrand-Krajewski. "From mess to mass: a methodology for calculating storm event pollutant loads with their uncertainties, from continuous raw data time series." Water Science and Technology 63, no. 3 (2011): 369–76. http://dx.doi.org/10.2166/wst.2011.230.

Abstract:
With the increasing implementation of continuous monitoring of both discharge and water quality in sewer systems, large databases are now available. In order to manage large amounts of data and calculate various variables and indicators of interest, it is necessary to apply automated methods for data processing. This paper deals with the processing of short time step turbidity time series to estimate TSS (Total Suspended Solids) and COD (Chemical Oxygen Demand) event loads in sewer systems during storm events and their associated uncertainties. The following steps are described: (i) sensor calibration, (ii) estimation of data uncertainties, (iii) correction of raw data, (iv) data pre-validation tests, (v) final validation, and (vi) calculation of TSS and COD event loads and estimation of their uncertainties. These steps have been implemented in an integrated software tool. Examples of results are given for a set of 33 storm events monitored in a stormwater separate sewer system.
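As a hedged illustration of step (vi), the sketch below computes a TSS event load as the sum of concentration × flow × time step and attaches a Monte Carlo uncertainty estimate. The time step, the 5 %/10 % relative uncertainties and the synthetic series are assumptions for the example, not values from the paper.

```python
# Minimal sketch of step (vi): TSS event-load calculation with a Monte Carlo
# uncertainty estimate. The relative standard uncertainties and the two-minute
# time step are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(42)

def event_load(conc_mg_l, flow_m3_s, dt_s):
    """Pollutant load of one storm event in kg: sum(C_i * Q_i * dt)."""
    conc = np.asarray(conc_mg_l, dtype=float)   # mg/L == g/m3
    flow = np.asarray(flow_m3_s, dtype=float)   # m3/s
    return np.sum(conc * flow * dt_s) / 1000.0  # g -> kg

def load_uncertainty(conc_mg_l, flow_m3_s, dt_s, u_conc=0.10, u_flow=0.05, n=5000):
    """Mean and standard uncertainty of the event load by Monte Carlo perturbation."""
    conc = np.asarray(conc_mg_l, dtype=float)
    flow = np.asarray(flow_m3_s, dtype=float)
    draws = [
        event_load(conc * (1 + rng.normal(0, u_conc, conc.size)),
                   flow * (1 + rng.normal(0, u_flow, flow.size)), dt_s)
        for _ in range(n)
    ]
    return float(np.mean(draws)), float(np.std(draws))

# Synthetic two-minute time series for a single storm event.
dt = 120.0
conc = np.array([180, 260, 340, 300, 220, 150, 90], dtype=float)   # mg/L TSS
flow = np.array([0.4, 0.9, 1.5, 1.2, 0.8, 0.5, 0.3], dtype=float)  # m3/s

mean_load, std_load = load_uncertainty(conc, flow, dt)
print(f"TSS event load: {mean_load:.1f} kg +/- {std_load:.1f} kg")
```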
3

Gong, N., X. Ding, T. Denoeux, J. L. Bertrand-Krajewski, and M. Clément. "Stormnet: a connectionist model for dynamic management of wastewater treatment plants during storm events." Water Science and Technology 33, no. 1 (1996): 247–56. http://dx.doi.org/10.2166/wst.1996.0024.

Abstract:
Models for solid transport in sewers during storm events are increasingly used. An important application of these models is the management of treatment plants during storm events so as to improve the quality of receiving waters. However, a major difficulty that prevents more general use of these tools is their calibration, which requires field data, accurate information about catchments and sewers, and a specific methodology. For that reason, a connectionist model called STORMNET has been designed to reproduce and replace usual conceptual and deterministic models. This model requires fewer data, can be automatically calibrated, and is comparatively simple. It is composed of two recurrent neural networks for the simulation of hydrographs and pollutographs of suspended solids, respectively. In this paper, we present an updated version of STORMNET designed for optimal management of wastewater treatment plants during storm events. This model has been validated using both model and real data. The results show the efficiency of STORMNET as a computational tool for simulating stormwater pollution.
4

Schilperoort, Rémy, Holger Hoppe, Cornelis de Haan, and Jeroen Langeveld. "Searching for storm water inflows in foul sewers using fibre-optic distributed temperature sensing." Water Science and Technology 68, no. 8 (2013): 1723–30. http://dx.doi.org/10.2166/wst.2013.419.

Abstract:
A major drawback of separate sewer systems is the occurrence of illicit connections: unintended sewer cross-connections that connect foul water outlets from residential or industrial premises to the storm water system and/or storm water outlets to the foul sewer system. The amount of unwanted storm water in foul sewer systems can be significant, resulting in a number of detrimental effects on the performance of the wastewater system. Efficient removal of storm water inflows into foul sewers requires knowledge of the exact locations of the inflows. This paper presents the use of distributed temperature sensing (DTS) monitoring data to localize illicit storm water inflows into foul sewer systems. Data results from two monitoring campaigns in foul sewer systems in the Netherlands and Germany are presented. For both areas a number of storm water inflow locations can be derived from the data. Storm water inflow can only be detected as long as the temperature of this inflow differs from the in-sewer temperatures prior to the event. Also, the in-sewer propagation of storm and wastewater can be monitored, enabling a detailed view on advection.
5

Gong, Ning, Thierry Denoeux, and Jean-Luc Bertrand-Krajewski. "Neural networks for solid transport modelling in sewer systems during storm events." Water Science and Technology 33, no. 9 (1996): 85–92. http://dx.doi.org/10.2166/wst.1996.0183.

Abstract:
Models for solid transport in sewers during storm events are increasingly used by engineers and operators to improve their systems and the quality of receiving waters. However, a major difficulty that prevents more general use of these models is their calibration, which requires field data, accurate information about catchments and sewers, and a specific methodology. Therefore, research has been carried out to assess the ability of connectionist models to reproduce and replace usual models for use by an operator. Such models require fewer data, are self-calibrated, and very easy to use. The first stage presented in this paper consists in a comparison between neural networks and the HYPOCRAS model, using simulations of real pollutographs for single storm events. Two specific recurrent neural networks based on the HYPOCRAS model and a general-purpose recurrent multilayer network are used to simulate hydrographs and pollutographs of TSS. The learning algorithm and the performance criterion used for optimization of these networks are described in detail. Experimental results with simulated and real data are then presented.
6

Pan, Gang, Bao Wang, Shuai Guo, Wenming Zhang, and Stephen Edwini-Bonsu. "Statistical analysis of sewer odour based on 10-year complaint data." Water Science and Technology 81, no. 6 (2020): 1221–30. http://dx.doi.org/10.2166/wst.2020.217.

Abstract:
The City of Edmonton has been suffering from sewer odour problems for many years. Ten years of odour complaint data from 2008 to 2017 were statistically analyzed to identify the major factors that relate to the odour problem. Spatial and temporal distributions of odour complaints in the city are first presented. Relationships between the complaints and physical attributes of the sewer systems were then analyzed by introducing a risk index parameter. It was found that snowmelt and storm events could possibly reduce odour complaints. Old sewer pipes and large drop structures are statistically more strongly linked to the complaints and thus contribute significantly to them. The risk index relationship for the three pipe materials is clay pipe > concrete pipe > PVC pipe. Combined sewers are more problematic in terms of odour complaints than sanitary sewers, and no clear correlation has been found between changes in sewer pipe slope or angle and the complaints.
7

Ruan, Mingchaun, and Jan B. M. Wiggers. "Application of time-series analysis to urban storm drainage." Water Science and Technology 36, no. 5 (1997): 125–31. http://dx.doi.org/10.2166/wst.1997.0180.

Abstract:
In urban storm drainage, deterministic models such as SWMM, HydroWorks and MOUSE are commonly used. However, comprehensive research programmes, including field surveys, have indicated that most processes related to urban storm drainage have stochastic characteristics, such as the occurrence of rainfall events and the processes of rainfall-runoff and flow routing in sewer networks. In particular, sediments found in sewers, either in suspension or in deposition, cannot be considered as having a unique entity. Inhomogeneity and randomness are the very nature of sewer sediment behaviour. Most data required for urban storm drainage are time-series data, such as rainfall intensity, water level measured in an outfall, CSO discharge and pollutant load. Consequently, time-series analysis should be an alternative for predicting some relationships in urban storm drainage, such as (net) rainfall-CSO discharge, rainfall-water level and CSO discharge-pollutant load.
8

Coghlan, Brian P., Richard M. Ashley, and George M. Smith. "Empirical equations for solids transport in combined sewers." Water Science and Technology 33, no. 9 (1996): 77–84. http://dx.doi.org/10.2166/wst.1996.0181.

Abstract:
An investigation of the transport of solids in combined sewers during both dry weather flow (DWF) periods and storms is described. The study was based on data obtained from a number of sites in the combined sewer system of Dundee, Scotland. The relationship between hydraulic conditions in a combined sewer and the transport of solids in suspension was examined. The aim was to arrive at a methodology by which an appropriate model could be selected or developed which would predict solids transport rates given information on hydraulic conditions. It was found that for individual sites, site-specific regression equations could be developed separately for dry weather and storm conditions. A non-site-specific regression equation was also developed, which was found to be preferable to the site-specific equations, in terms of accuracy and ease of use. More important, however, were the fundamental procedures (i.e. the methodology) developed by which the model type was in each case selected and subsequently developed.
9

Arthur, S., and R. M. Ashley. "The influence of near bed solids transport on first foul flush in combined sewers." Water Science and Technology 37, no. 1 (1998): 131–38. http://dx.doi.org/10.2166/wst.1998.0032.

Abstract:
The problems associated with deposited sediments in sewers, and their transport through sewer systems, have been the subject of detailed fieldwork programmes in the UK and elsewhere in Europe. Existing laboratory, and some field-based, research exercises have focused on the relatively small, discrete particles. It is clear, however, that combined sewer systems have inputs which comprise a significant proportion of large organic solids (faecal and food wastes), as well as the finer range of particle sizes. The increased concern regarding CSO spills into the environment has fuelled the recent development of sewer flow quality models, such as HYDROWORKS QM and MOUSETRAP, some of which make no attempt to represent the transport of these larger organic particles. Herein, the results of a collaborative research programme undertaken between three UK universities and a water authority are discussed. Transport at the bed in sewers, as “near bed solids”, is defined. Based on a comprehensive data collection program undertaken in the Dundee combined sewerage system, a method is presented which may be used to estimate the rate of sediment transport near the bed in sewers. The influence that solids in transport near the bed have on first foul flush in combined sewers is discussed. A methodology is proposed which may be used to estimate the extent to which sediment in transport near the bed in sewers contributes to first foul flush phenomena, by describing the movement of a storm wave along a conceptual sewer length.
10

Delleur, J. W., and Y. Gyasi-Agyei. "Prediction of Suspended Solids in Urban Sewers by Transfer Function Model." Water Science and Technology 29, no. 1-2 (1994): 171–79. http://dx.doi.org/10.2166/wst.1994.0663.

Abstract:
There is increasing concern about the sediments transported in urban storm sewers. Progress has been made on the measurement of suspended solids, and telemetry systems have been installed that permit remote access to flow, temperature and suspended solids concentration data. Using observations obtained in the main trunk sewer in Brussels, Belgium, a transfer function model for the prediction of suspended load concentration from temperature and discharge measurements was developed. This model is based on the transfer function methodology developed by Box and Jenkins. It is shown that the transfer function model correctly tracks the suspended solids observations and makes reasonable forecasts. It provides a valid alternative for the determination of suspended solids in urban sewers from discharge and water temperature observations which are more easily measurable on line than suspended solids.
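As a rough present-day analogue of such a transfer function model, the sketch below fits an ARMAX-type model (ARMA errors plus discharge and temperature as exogenous inputs) with statsmodels. The model order and the synthetic data are assumptions, and this is not the authors' exact Box–Jenkins formulation.

```python
# Rough analogue of a Box-Jenkins transfer-function model: an ARMAX fit of
# suspended-solids concentration on discharge and water temperature using
# statsmodels. Model order and synthetic data are illustrative assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
n = 300
discharge = 2.0 + 0.5 * np.sin(np.arange(n) / 20.0) + rng.normal(0, 0.1, n)
temperature = 15.0 + 3.0 * np.sin(np.arange(n) / 50.0) + rng.normal(0, 0.2, n)
# Synthetic "observed" suspended solids driven by both inputs plus noise.
tss = 40.0 * discharge - 1.5 * temperature + rng.normal(0, 5.0, n)

exog = np.column_stack([discharge, temperature])
model = ARIMA(tss, exog=exog, order=(1, 0, 1))   # ARMA(1,1) errors + regressors
result = model.fit()

# Forecast the next 5 samples, given the (here, recycled) future inputs.
future_exog = np.column_stack([discharge[-5:], temperature[-5:]])
print(result.forecast(steps=5, exog=future_exog))
```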
11

Murodov, P., O. Amirov, and P. Khuzhaev. "Cleaning the Kafirnigan River From Sewage Pollution." Bulletin of Science and Practice 6, no. 11 (2020): 126–31. http://dx.doi.org/10.33619/2414-2948/60/13.

Abstract:
The influence of discharged treated wastewater on the ecology of the Kafirnigan River is considered. Data on the current state of the sewage treatment facilities in the city of Dushanbe are presented, and an assessment of the environmental efficiency of these facilities is given. Preliminary cost calculations for the construction of a new sewage treatment plant in Dushanbe have been made. The article is devoted to the current problem of cleaning storm sewers. It should be noted that storm water drainage, like wastewater, has a negative impact on the environment. Before this water is discharged, it must be treated in a special way, to varying degrees and depths.
12

Yuan, Shao Guang, Yong Li Zhu, Guo Liang Zhou, and Ming Kun Wang. "Research on Dynamic Scheduling of Grid Monitoring Data Processing Tasks Based Storm." Applied Mechanics and Materials 651-653 (September 2014): 1051–55. http://dx.doi.org/10.4028/www.scientific.net/amm.651-653.1051.

Abstract:
The development of the smart grid has spawned big data in the electric power industry. Cloud computing platforms provide a solution for this big data and are effective for batch jobs, but their real-time performance is not guaranteed. To address the real-time limitations of cloud computing platforms, the Storm platform is introduced to monitor grid power. This paper studies a fair-share scheduling algorithm for the Storm platform. It briefly introduces the Storm framework, then proposes the fair-share scheduling algorithm to address the shortcomings of the current Storm scheduler. Finally, experiments show that the fair-share scheduling algorithm improves the resource utilization of the Storm cluster and reduces the processing delay of the data.
13

Daliakopoulos, Ioannis N., and Ioannis K. Tsanis. "A weather radar data processing module for storm analysis." Journal of Hydroinformatics 14, no. 2 (2011): 332–44. http://dx.doi.org/10.2166/hydro.2011.118.

Abstract:
A pre- and post-processing weather radar data module was developed in the Matlab suite of software with GIS data exchange abilities for storm event analysis. During pre-processing, each radar sweep is converted from spherical to Cartesian coordinates in the desired temporal and spatial resolution. The module's functionality in post processing includes radar data display, geo-referencing over GIS maps, data filtering with the Wiener filter and single or multiple sweep processing. The user can perform individual storm cell detection and tracking, resulting in the storm's average velocity and track length. The tested methods are modifications of the LoG (Laplacian of the Gaussian) blob detection method and a Brownian particle trajectory linking algorithm. Radar reflectivity factor (Z) data can be referenced over predefined rainfall (R) gauges in order to determine the radar Z–R equation parameters. The user can also produce spatially distributed precipitation estimates by using standard Z–R equations from the literature. The module's functionality is demonstrated using data from a rainfall event captured by the NSA Souda Bay C-Band radar during a storm in October 2006. Results show that the Rosenfeld Tropical Z–R equation is the one that gives a satisfactory description of the spatial and temporal precipitation distribution of the investigated event.
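The Z–R step can be illustrated with the standard power law Z = a·R^b. The sketch below inverts it for the Marshall–Palmer (a = 200, b = 1.6) and Rosenfeld tropical (a = 250, b = 1.2) coefficient sets referred to in this line of work; the sample reflectivity values are made up.

```python
# Converting radar reflectivity to rain rate with the standard power law
# Z = a * R**b. The Marshall-Palmer and Rosenfeld "tropical" coefficients are
# the commonly quoted textbook values; the sample dBZ values are made up.
def rain_rate_mm_per_h(dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b, where Z (mm^6/m^3) is obtained from dBZ."""
    z_linear = 10.0 ** (dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)

for dbz in (20.0, 35.0, 50.0):
    r_mp = rain_rate_mm_per_h(dbz)                    # Marshall-Palmer: Z = 200 R^1.6
    r_trop = rain_rate_mm_per_h(dbz, a=250.0, b=1.2)  # Rosenfeld tropical: Z = 250 R^1.2
    print(f"{dbz:4.0f} dBZ -> {r_mp:6.1f} mm/h (M-P), {r_trop:6.1f} mm/h (tropical)")
```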
14

Hansen, R., T. Thøgersen, and F. Rogalla. "Comparing cost and process performance of activated sludge (AS) and biological aerated filters (BAF) over ten years of full scale operation." Water Science and Technology 55, no. 8-9 (2007): 99–106. http://dx.doi.org/10.2166/wst.2007.247.

Abstract:
In the early 1990s, the Wastewater Treatment Plant (WWTP) of Frederikshavn, Denmark, was extended to meet new requirements for nutrient removal (8 mg/L TN, 1.5 mg TP/L) as well as to increase its average daily flow to 16,500 m³/d (4.5 MGD). As the most economical upgrade of the existing activated sludge (AS) plant, a parallel biological aerated filter (BAF) was selected, and started up in 1995. Running two full scale processes in parallel for over ten years on the same wastewater and treatment objectives enabled a direct comparison in relation to operating performance, costs and experience. Common pretreatment consists of screening, an aerated grit and grease removal and three primary settlers with chemical addition. The effluent is then pumped to the two parallel biological treatment stages, AS with recirculation and an upflow BAF with floating media. The wastewater is a mixture of industrial and domestic wastewater, with a dominant discharge of fish processing effluent which can amount to 50% of the flow. The maximum hydraulic load on the pretreatment section as a whole is 1,530 m³/h. Approximately 60% of the sewer system is combined, with a total of 32 overflow structures. To avoid the direct discharge of combined sewer overflows into the receiving waters, the total hydraulic wet weather capacity of the plant is increased to 4,330 m³/h, or 6 times average flow. During rain, some of the raw sewage can be directed through a stormwater bypass to the BAF, which can be modified in its operation to accommodate various treatment needs: either using simultaneous nitrification/denitrification in all filters with recirculation, introducing bottom aeration with full nitrification in some filters for storm treatment, and/or post-denitrification in one filter. After treatment, the wastewater is discharged to the Baltic Sea through a 500 m outfall. The BAF backwash sludge, approximately 1,900 m³ per 24 h in dry weather, is redirected to the AS plant. Primary settler sludge and the combined biosolids from the AS plant are anaerobically digested, with methane gas being used for generation of heat and power. On-line measurements for the parameters NO3, NO2, NH4, temperature as well as dissolved oxygen (DO) are used for control of aeration and external carbon source (methanol). Dosing of flocculants for P-removal is carried out based on laboratory analysis and jar tests. This paper discusses the experience gained from the plant operation during the last ten years, compiling comparative performance and cost data of the two processes, as well as their optimisation.
15

Kim, Youngkuk, Siwoon Son, and Yang-Sae Moon. "SPMgr: Dynamic workflow manager for sampling and filtering data streams over Apache Storm." International Journal of Distributed Sensor Networks 15, no. 7 (2019): 155014771986220. http://dx.doi.org/10.1177/1550147719862206.

Abstract:
In this article, we address dynamic workflow management for sampling and filtering data streams in Apache Storm. As many sensors generate data streams continuously, we often use sampling to choose some representative data or filtering to remove unnecessary data. Apache Storm is a real-time distributed processing platform suitable for handling large data streams. Storm, however, must stop the entire work when it changes the input data structure or processing algorithm as it needs to modify, redistribute, and restart the programs. In addition, for effective data processing, we often use Storm with Kafka and databases, but it is difficult to use these platforms in an integrated manner. In this article, we derive the problems when applying sampling and filtering algorithms to Storm and propose a dynamic workflow management model that solves these problems. First, we present the concept of a plan consisting of input, processing, and output modules of a data stream. Second, we propose Storm Plan Manager, which can operate Storm, Kafka, and database as a single integrated system. Storm Plan Manager is an integrated workflow manager that dynamically controls sampling and filtering of data streams through plans. Third, as a key feature, Storm Plan Manager provides a Web client interface to visually create, execute, and monitor plans. In this article, we show the usefulness of the proposed Storm Plan Manager by presenting its design, implementation, and experimental results in order.
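A plain-Python sketch of the "plan" idea (input, processing and output modules that can be recombined) is given below, with a filter stage and a reservoir-sampling stage. The function names and parameters are illustrative assumptions, not the SPMgr API.

```python
# Plain-Python sketch of the "plan" idea from the abstract: an input stage, a
# chain of processing stages (here a filter and a reservoir sampler) and an
# output stage that can be recombined without rebuilding the whole topology.
# Function names and parameters are illustrative, not the SPMgr API.
import random

def filter_stage(predicate, stream):
    """Drop tuples that do not satisfy the predicate."""
    return (t for t in stream if predicate(t))

def reservoir_sample(stream, k, seed=7):
    """Keep a uniform random sample of k tuples from a stream of unknown length."""
    rng = random.Random(seed)
    reservoir = []
    for i, t in enumerate(stream):
        if i < k:
            reservoir.append(t)
        else:
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = t
    return reservoir

def run_plan(source, predicate, k):
    """A 'plan': input -> filter -> sample -> output (here, simply returned)."""
    return reservoir_sample(filter_stage(predicate, source), k)

# Usage: sample 5 of the readings above 10.0 from a synthetic sensor stream.
source = ({"sensor": "s1", "value": random.uniform(0, 20)} for _ in range(10_000))
print(run_plan(source, lambda t: t["value"] > 10.0, k=5))
```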
16

Zakšek, K., K. Čotar, T. Veljanovski, P. Pehani, and K. Oštir. "Topographic Correction Module at Storm (TC@Storm)." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-7/W3 (April 29, 2015): 721–28. http://dx.doi.org/10.5194/isprsarchives-xl-7-w3-721-2015.

Abstract:
Different solar position in combination with terrain slope and aspect result in different illumination of inclined surfaces. Therefore, the retrieved satellite data cannot be accurately transformed to the spectral reflectance, which depends only on the land cover. The topographic correction should remove this effect and enable further automatic processing of higher level products. The topographic correction TC@STORM was developed as a module within the SPACE-SI automatic near-real-time image processing chain STORM. It combines physical approach with the standard Minnaert method. The total irradiance is modelled as a three-component irradiance: direct (dependent on incidence angle, sun zenith angle and slope), diffuse from the sky (dependent mainly on sky-view factor), and diffuse reflected from the terrain (dependent on sky-view factor and albedo). For computation of diffuse irradiation from the sky we assume an anisotropic brightness of the sky. We iteratively estimate a linear combination from 10 different models, to provide the best results. Dependent on the data resolution, we mask shades based on radiometric (image) or geometric properties. The method was tested on RapidEye, Landsat 8, and PROBA-V data. Final results of the correction were evaluated and statistically validated based on various topography settings and land cover classes. Images show great improvements in shaded areas.
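Two standard ingredients of such corrections can be sketched as follows: the local solar incidence angle computed from sun position and terrain slope/aspect, and one simple variant of the Minnaert correction. The Minnaert constant and the sample geometry are illustrative assumptions, not TC@STORM parameters.

```python
# Standard ingredients of a topographic correction: the local solar incidence
# angle from sun position and terrain slope/aspect, and one simple variant of
# the Minnaert correction. The Minnaert constant k = 0.5 and the sample
# geometry are illustrative assumptions, not TC@STORM parameters.
import numpy as np

def cos_incidence(sun_zenith, sun_azimuth, slope, aspect):
    """cos(i) = cos(sz)cos(s) + sin(sz)sin(s)cos(saz - aspect); angles in degrees."""
    sz, saz, s, a = np.radians([sun_zenith, sun_azimuth, slope, aspect])
    return np.cos(sz) * np.cos(s) + np.sin(sz) * np.sin(s) * np.cos(saz - a)

def minnaert_correct(reflectance, sun_zenith, cos_i, k=0.5):
    """Simple Minnaert correction: rho_flat = rho * (cos(sz) / cos(i))**k."""
    cos_sz = np.cos(np.radians(sun_zenith))
    cos_i = np.clip(cos_i, 1e-3, None)  # avoid blow-up on barely lit slopes
    return reflectance * (cos_sz / cos_i) ** k

# A pixel on a 25-degree slope facing away from a low morning sun.
ci = cos_incidence(sun_zenith=60.0, sun_azimuth=135.0, slope=25.0, aspect=315.0)
print(f"cos(i) = {ci:.3f}")
print(f"corrected reflectance = {minnaert_correct(0.08, 60.0, ci):.3f}")
```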
17

Mcllhatton, T. D., R. Sakrabani, R. M. Ashley, and R. Burrows. "Erosion mechanisms in combined sewers and the potential for pollutant release to receiving waters and water treatment plants." Water Science and Technology 45, no. 3 (2002): 61–69. http://dx.doi.org/10.2166/wst.2002.0055.

Abstract:
The problems associated with solids in sewerage systems result in common difficulties such as blockages and flooding and the subsequent maintenance requirements have been well documented. Concerns regarding pollutant release have also been demonstrated, with the contribution from in-sewer solids to the quality of the flow during a storm event being especially significant. These events known as “foul flushes” in combined sewers typically occur in the initial period of storm flows, when the concentration of suspended sediments and other pollutants are significantly higher than at other times. Traditionally impacts from these events have been related to the suspended solids phase of the flow passing through a CSO structure. It is now apparent that much of the suspended load originates from solids eroded from the bed. The “near bed solids” which are re-entrained into the flow, together with solids eroded from the bulk bed, account for large changes in the suspended sediment concentration under time varying flow conditions. The influence of these eroded solids and their potential impact on receiving waters and treatment plants will be reviewed using data obtained from field studies carried out in the main Dundee interceptor sewer in Scotland. This paper describes some of the methods employed to investigate the characteristics of the pollutants associated with solids erosion in combined sewers.
18

Lv, Jia-Ke, Yang Li, and Xuan Wang. "Log Data Real Time Analysis Using Big Data Analytic Framework with Storm and Hadoop." MATEC Web of Conferences 246 (2018): 03009. http://dx.doi.org/10.1051/matecconf/201824603009.

Abstract:
The log data real-time processing platform is built using Storm on YARN, integrating MapReduce and Storm: MapReduce is used for global knowledge extraction from large-scale off-line data, Storm is used for sudden knowledge extraction from small-scale data in Kafka buffers, and streaming data are continuously computed in real time in combination with the global knowledge. We tested our technique with the well-known KDD99 CUP data set. The experimental results prove the system to be effective and efficient.
19

Zhang, Ziyu, Zitan Liu, Qingcai Jiang, Junshi Chen, and Hong An. "RDMA-Based Apache Storm for High-Performance Stream Data Processing." International Journal of Parallel Programming 49, no. 5 (2021): 671–84. http://dx.doi.org/10.1007/s10766-021-00696-0.

20

Frehmann, T., T. Mietzel, R. Kutzner, B. Spengler, and W. F. Geiger. "Monitoring in inline storage sewers for stormwater treatment to determine efficiencies." Water Science and Technology 50, no. 11 (2004): 89–96. http://dx.doi.org/10.2166/wst.2004.0675.

Abstract:
A special structure of combined sewer overflow tanks is the inline storage sewer with downstream discharge (SKU). This layout has the advantage that besides the sewer system, no other structures are required for storm water treatment. Consequently only very little space is required and compared to combined sewer overflow tanks, there is an enormous potential in reducing costs during construction. To investigate the efficiency of an inline storage sewer, a monitoring station was established in Dortmund-Scharnhorst, Germany. The monitoring station was in operation for a period of 2.5 years. Within this period water samples were taken during a total of 20 discharge events. Besides the complete hydraulic data collection, seven water samplers took more than 5,000 water samples during dry and wet weather. This adds up to a total of more than 20,000 individual lab analyses. The average of the total efficiency for the SKU-West is 86%. 29% of this efficiency can be attributed to the throttle flow. The remaining 57% can be divided into a part of 48% that can be attributed to the process storage and 9% that can be attributed to sedimentation and erosion process.
21

Cao, Huiyan, Chase Q. Wu, Liang Bao, Aiqin Hou, and Wei Shen. "Throughput optimization for Storm-based processing of stream data on clouds." Future Generation Computer Systems 112 (November 2020): 567–79. http://dx.doi.org/10.1016/j.future.2020.06.009.

22

García, Juan T., and Joseph R. Harrington. "Fine Sediment Modeling During Storm-Based Events in the River Bandon, Ireland." Water 11, no. 7 (2019): 1523. http://dx.doi.org/10.3390/w11071523.

Abstract:
The River Bandon, located in County Cork (Ireland), has been continuously monitored by turbidity probes, as well as by automatic and manual suspended sediment sampling. The current work evaluates three different models used to estimate the fine sediment concentration during storm-based events over a period of one year. The modeled suspended sediment concentration is compared with that measured at an event scale. Uncertainty indices are calculated and compared with those presented in the literature. An empirically-based model was used as a reference, as this model has been previously applied to evaluate sediment behavior over the same time period in the River Bandon. Three other models have been applied to the gathered data. The first is an empirically-based storm events model, based on an exponential function for calculating the sediment output from the bed. A statistically-based approach first developed for sewers was also evaluated. The third model evaluated was a shear stress erosion-based model based on one parameter. The importance of considering the fine sediment volume stored in the bed and its consolidation to predict the suspended sediment concentration during storm events is clearly evident. Taking into account dry weather periods and the bed erosion in previous events, knowledge of the eroded volume for each storm event is necessary to adjust the parameters of each model.
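For orientation, the sketch below implements a generic exponential wash-off formulation of the kind used for storm-event sediment modelling, where the mass released each step is proportional to the remaining bed store and the runoff intensity. The rate constant, initial store and hydrograph are assumptions, and this is not a reimplementation of any of the three models compared in the paper.

```python
# Generic exponential wash-off sketch for storm-event sediment modelling:
# dM/dt = -k * q(t) * M(t), integrated stepwise with q constant over a step.
# The rate constant, initial bed store and runoff hydrograph are illustrative.
import numpy as np

def exponential_washoff(initial_store_kg, runoff_mm_h, dt_h, k=0.15):
    """Return (eroded mass per step [kg], remaining bed store [kg])."""
    store = initial_store_kg
    eroded = []
    for q in runoff_mm_h:
        released = store * (1.0 - np.exp(-k * q * dt_h))
        store -= released
        eroded.append(released)
    return np.array(eroded), store

runoff = np.array([0.5, 2.0, 6.0, 9.0, 5.0, 2.0, 0.5])  # mm/h, 10-minute steps
eroded, remaining = exponential_washoff(120.0, runoff, dt_h=1 / 6)
print("eroded per step (kg):", np.round(eroded, 1))
print(f"remaining bed store:  {remaining:.1f} kg")
```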
23

Hu, Xiling. "An Analysis on Task Migration Strategy of Big Data Streaming Storm Computing Framework for Distributed Processing." International Journal of Information System Modeling and Design 11, no. 4 (2020): 18–35. http://dx.doi.org/10.4018/ijismd.2020100102.

Abstract:
In this modern era, a large volume of data is generated regularly, and it needs to be processed to gain the profits of its latent information. The processing of big data faces a challenge termed communication overhead. In order to minimize communication overhead under various resource constraints, a task migration strategy for a heterogeneous Storm environment is proposed. The proposed strategy is based on the establishment and demonstration of a Storm resource constraint model, an optimal communication overhead model, and a task migration model. The source node selection algorithm adds the nodes beyond the threshold to the source node set according to the load of CPU, memory, and network bandwidth of each working node in the cluster and the priority order of the various resources. The experiments show that, compared with the existing research, the proposed strategy can effectively reduce the delay and communication overhead between nodes, while its own execution overhead is small.
24

Zhao, Jindong, Shouke Wei, Xuebin Wen, and Xiuqin Qiu. "Analysis and prediction of big stream data in real-time water quality monitoring system." Journal of Ambient Intelligence and Smart Environments 12, no. 5 (2020): 393–406. http://dx.doi.org/10.3233/ais-200571.

Abstract:
Large scale real-time water quality monitoring systems usually produce vast amounts of high frequency data, and it is difficult for traditional water quality monitoring systems to process such large and high frequency data generated by wireless sensor networks. A real-time processing and early warning system framework is proposed to solve this problem: Apache Storm is used as the big data processing platform, and a Kafka message queue is applied to classify the sample data into several data streams so as to preserve the time series property of each sensor's data. On the Storm platform, the Daubechies wavelet is used to decompose the data series to obtain the trend of the series, and then a Long Short Term Memory network (LSTM) model is used to model and predict the trend of the data. This paper provides a detailed description of the distribution mechanism of aggregated data in Storm, the data storage format in HBase, the process of wavelet decomposition, model training and the application of the model for prediction. The application results for the Xin’an River in Yantai City reveal that the proposed system framework has a very good ability to model big data, with high prediction accuracy and robust processing capability.
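The wavelet trend-extraction step can be sketched with PyWavelets as below: decompose the series with a Daubechies wavelet and reconstruct it from the approximation coefficients only. The wavelet choice, decomposition level and synthetic series are assumptions; the LSTM trained on the trend is only indicated by a comment.

```python
# Trend extraction with PyWavelets: decompose with a Daubechies wavelet and
# reconstruct from the approximation coefficients only (details zeroed).
# The wavelet ('db4'), level and synthetic series are assumptions; the LSTM
# that would be trained on the trend is not shown here.
import numpy as np
import pywt

rng = np.random.default_rng(1)
t = np.arange(512)
# Synthetic water-quality signal: slow seasonal variation plus sensor noise.
signal = 7.5 + 0.8 * np.sin(2 * np.pi * t / 128) + rng.normal(0, 0.25, t.size)

coeffs = pywt.wavedec(signal, "db4", level=3)
coeffs[1:] = [np.zeros_like(c) for c in coeffs[1:]]  # drop detail coefficients
trend = pywt.waverec(coeffs, "db4")[: signal.size]   # trim possible padding

print("raw   first 5:", np.round(signal[:5], 2))
print("trend first 5:", np.round(trend[:5], 2))
# A forecasting model (e.g. an LSTM, as in the paper) would then be trained
# on sliding windows of `trend` rather than on the noisy raw series.
```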
25

Xiao, F., G. Y. K. Shea, M. S. Wong, and J. Campbell. "An automated and integrated framework for dust storm detection based on ogc web processing services." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-2 (November 11, 2014): 151–56. http://dx.doi.org/10.5194/isprsarchives-xl-2-151-2014.

Abstract:
Dust storms are known to have adverse effects on public health. Atmospheric dust loading is also one of the major uncertainties in global climatic modelling as it is known to have a significant impact on the radiation budget and atmospheric stability. The complexity of building scientific dust storm models is coupled with the scientific computation advancement, ongoing computing platform development, and the development of heterogeneous Earth Observation (EO) networks. It is a challenging task to develop an integrated and automated scheme for dust storm detection that combines Geo-Processing frameworks, scientific models and EO data together to enable the dust storm detection and tracking processes in a dynamic and timely manner. This study develops an automated and integrated framework for dust storm detection and tracking based on the Web Processing Services (WPS) initiated by Open Geospatial Consortium (OGC). The presented WPS framework consists of EO data retrieval components, dust storm detecting and tracking component, and service chain orchestration engine. The EO data processing component is implemented based on OPeNDAP standard. The dust storm detecting and tracking component combines three earth scientific models, which are SBDART model (for computing aerosol optical depth (AOT) of dust particles), WRF model (for simulating meteorological parameters) and HYSPLIT model (for simulating the dust storm transport processes). The service chain orchestration engine is implemented based on Business Process Execution Language for Web Service (BPEL4WS) using open-source software. The output results, including horizontal and vertical AOT distribution of dust particles as well as their transport paths, were represented using KML/XML and displayed in Google Earth. A serious dust storm, which occurred over East Asia from 26 to 28 Apr 2012, is used to test the applicability of the proposed WPS framework. Our aim here is to solve a specific instance of a complex EO data and scientific model integration problem by using a framework and scientific workflow approach together. The experimental result shows that this newly automated and integrated framework can be used to give advance near real-time warning of dust storms, for both environmental authorities and public. The methods presented in this paper might be also generalized to other types of Earth system models, leading to improved ease of use and flexibility.
26

Choi, Dojin, Hyeonwook Jeon, Jongtae Lim, Kyoungsoo Bok, and Jaesoo Yoo. "Dynamic Task Scheduling Scheme for Processing Real-Time Stream Data in Storm Environments." Applied Sciences 11, no. 17 (2021): 7942. http://dx.doi.org/10.3390/app11177942.

Abstract:
Owing to the recent advancements in Internet of Things technology, social media, and mobile devices, real-time stream processing systems are commonly used to process the vast amounts of data generated in various media. In this paper, we propose a dynamic task scheduling scheme that considers task deadlines and node resources. The proposed scheme performs dynamic scheduling using a heterogeneous cluster consisting of various nodes with different performances. Additionally, the loads of the nodes are balanced, taking task deadlines into account, by applying different task scheduling to three defined load types. Diverse performance evaluations show that the proposed scheme outperforms the conventional schemes.
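A minimal sketch of deadline- and load-aware placement, under assumed node and task parameters, is given below: among the nodes whose estimated completion time meets the task deadline, the task goes to the one that would finish earliest. This illustrates the general idea only, not the scheduling algorithm proposed in the paper.

```python
# Greedy sketch of deadline- and load-aware task placement: among nodes whose
# estimated completion time meets the task deadline, pick the one that would
# finish earliest. Node speeds, queued work and task costs are illustrative.
def place_task(task, nodes):
    """task: dict with 'work' (cost units) and 'deadline' (s).
    nodes: list of dicts with 'name', 'speed' (units/s) and 'queued_work'."""
    feasible = []
    for node in nodes:
        finish_time = (node["queued_work"] + task["work"]) / node["speed"]
        if finish_time <= task["deadline"]:
            feasible.append((finish_time, node["name"], node))
    if not feasible:
        return None                           # would miss its deadline everywhere
    _, _, chosen = min(feasible)
    chosen["queued_work"] += task["work"]
    return chosen["name"]

nodes = [{"name": "fast", "speed": 10.0, "queued_work": 40.0},
         {"name": "slow", "speed": 4.0, "queued_work": 5.0}]
for task in [{"work": 20.0, "deadline": 8.0}, {"work": 20.0, "deadline": 5.0}]:
    print(task, "->", place_task(task, nodes))
```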
27

Lacour, C., C. Joannis, M. C. Gromaire, and G. Chebbo. "Potential of turbidity monitoring for real time control of pollutant discharge in sewers during rainfall events." Water Science and Technology 59, no. 8 (2009): 1471–78. http://dx.doi.org/10.2166/wst.2009.169.

Abstract:
Turbidity sensors can be used to continuously monitor the evolution of pollutant mass discharge. For two sites within the Paris combined sewer system, continuous turbidity, conductivity and flow data were recorded at one-minute time intervals over a one-year period. This paper is intended to highlight the variability in turbidity dynamics during wet weather. For each storm event, turbidity response aspects were analysed through different classifications. The correlation between classification and common parameters, such as the antecedent dry weather period, total event volume per impervious hectare and both the mean and maximum hydraulic flow for each event, was also studied. Moreover, the dynamics of flow and turbidity signals were compared at the event scale. No simple relation between turbidity responses, hydraulic flow dynamics and the chosen parameters was derived from this effort. Knowledge of turbidity dynamics could therefore potentially improve wet weather management, especially when using pollution-based real-time control (P-RTC) since turbidity contains information not included in hydraulic flow dynamics and not readily predictable from such dynamics.
28

Jafari, H., and A. A. Alesheikh. "DEVELOPING A SPATIAL PROCESSING SERVICE FOR AUTOMATIC CALCULATION OF STORM INUNDATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W4 (September 27, 2017): 389–94. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w4-389-2017.

Abstract:
With the increase in urbanization, the surface of the earth and its climate are changing. These changes have resulted in more frequent flooding and storm inundation in urban areas. The challenges of flooding can be addressed through several computational procedures. Due to their numerous advantages, accessible web services can be chosen as a proper format for determining storm inundation. Web services have facilitated the integration and interactivity of web applications and make the interaction between machines more feasible, enabling heterogeneous software systems to communicate with each other. A Web Processing Service (WPS) makes it possible to process spatial data with different formats. In this study, we developed a WPS to automatically calculate the amount of storm inundation caused by rainfall in urban areas. The method we used for calculating the storm inundation is based on a simplified hydrologic model which estimates the final state of inundation. The simulation process and the water transfer between subcatchments are carried out sequentially, without the user's interference. The implementation of processing functions in the form of processing web services makes it possible to reuse the services and apply them in other services, and as a result avoids creating duplicate resources.
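A toy version of such a simplified fill-and-spill calculation is sketched below: each subcatchment retains water up to an assumed storage capacity and passes the excess downstream, and whatever cannot be drained anywhere is reported as inundation. The topology, capacities and runoff volumes are illustrative assumptions, not the model wrapped by the authors' WPS.

```python
# Toy fill-and-spill inundation estimate: each subcatchment retains runoff up
# to its storage capacity and passes the excess to its downstream neighbour;
# excess reaching a subcatchment with no outlet is reported as inundation.
# Topology, capacities and runoff volumes are illustrative assumptions.
def storm_inundation(runoff_m3, capacity_m3, downstream):
    """runoff_m3/capacity_m3: dicts per subcatchment; downstream: id -> id or None."""
    stored = {c: 0.0 for c in capacity_m3}
    inundated = {c: 0.0 for c in capacity_m3}
    for catchment, inflow in runoff_m3.items():
        current = catchment
        while inflow > 0 and current is not None:
            room = capacity_m3[current] - stored[current]
            taken = min(room, inflow)
            stored[current] += taken
            inflow -= taken
            if inflow > 0 and downstream[current] is None:
                inundated[current] += inflow   # nowhere left to drain
                inflow = 0.0
            else:
                current = downstream[current]
    return stored, inundated

runoff = {"A": 900.0, "B": 400.0, "C": 1200.0}   # storm runoff volume per subcatchment
capacity = {"A": 500.0, "B": 800.0, "C": 600.0}  # surface plus sewer storage
downstream = {"A": "B", "B": "C", "C": None}     # simple cascade towards the outlet
print(storm_inundation(runoff, capacity, downstream))
```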
29

Li, Xiang, Beth Plale, Nithya Vijayakumar, Rahul Ramachandran, Sara Graves, and Helen Conover. "Real-time storm detection and weather forecast activation through data mining and events processing." Earth Science Informatics 1, no. 2 (2008): 49–57. http://dx.doi.org/10.1007/s12145-008-0010-7.

30

Kahar, Gertrudis V., Abdul Wahid, and Hadi Imam Sutaji. "ANALISIS KEJADIAN BADAI MAGNETIK BERDASARKAN DATA VARIASI HARIAN MAGNETIK DI KOTA KUPANG." Jurnal Fisika : Fisika Sains dan Aplikasinya 3, no. 1 (2018): 12–20. http://dx.doi.org/10.35508/fisa.v3i1.589.

Abstract:
An analysis of magnetic storm events in Kupang City from October 2014 to September 2016 has been carried out. The purpose of this study is to determine the characteristics of magnetic storm events and the period of their emergence in Kupang City. The data used are daily magnetic variation data obtained from the Meteorological, Climatological and Geophysical Agency in Kupang City. The data were processed with Microsoft Excel, to plot the earth's magnetic components against time, and with Matlab 2011, to determine the periodicity of magnetic storm events using the Fast Fourier Transform (FFT). The results show that, for the study area between October 2014 and September 2016, the maximum level of magnetic disturbance activity occurred on 22 June 2015 due to a CME burst, marked by K index = 8, A index = 54.875 and a decrease of Dst to -121 nT, and is therefore categorized as a moderate storm; the minimum level occurred on 10 February 2015 due to a CME burst and a flare, marked by K index = 3, A index = 11.5 and a decrease of Dst to -17 nT, and is therefore categorized as relatively quiet. The period of occurrence of magnetic activity from October 2014 to September 2016 ranges from one to ten days.
Keywords: daily magnetic variation, magnetic storm, periodicity, K index, A index.
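The FFT periodicity step can be sketched as below on a synthetic hourly series standing in for the observatory data: subtract the mean, take the FFT and report the dominant periods in days.

```python
# Sketch of the periodicity step: subtract the mean from an hourly geomagnetic
# component, take the FFT and report the dominant periods in days. The
# synthetic series (a daily variation plus a slower ~9-day disturbance) stands
# in for the observatory data used in the study.
import numpy as np

rng = np.random.default_rng(3)
hours = np.arange(24 * 365)                                  # one year, hourly
h_component = (20 * np.sin(2 * np.pi * hours / 24)           # daily variation
               + 8 * np.sin(2 * np.pi * hours / (24 * 9))    # ~9-day recurrence
               + rng.normal(0, 2, hours.size))

spectrum = np.fft.rfft(h_component - h_component.mean())
freqs = np.fft.rfftfreq(hours.size, d=1.0)                   # cycles per hour
power = np.abs(spectrum) ** 2

top = np.argsort(power[1:])[::-1][:2] + 1                    # skip the zero frequency
for idx in top:
    print(f"dominant period: {1.0 / freqs[idx] / 24.0:.1f} days")
```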
31

Fais, Alessandra, Giuseppe Lettieri, Gregorio Procissi, Stefano Giordano, and Francesco Oppedisano. "Data Stream Processing for Packet-Level Analytics." Sensors 21, no. 5 (2021): 1735. http://dx.doi.org/10.3390/s21051735.

Abstract:
One of the most challenging tasks for network operators is implementing accurate per-packet monitoring, looking for signs of performance degradation, security threats, and so on. Upon critical event detection, corrective actions must be taken to keep the network running smoothly. Implementing this mechanism requires the analysis of packet streams in a real-time (or near real-time) fashion. In a softwarized network context, Stream Processing Systems (SPSs) can be adopted for this purpose. Recent solutions based on traditional SPSs, such as Storm and Flink, can support the definition of general complex queries, but they show poor performance at scale. To handle input data rates on the order of gigabits per second, programmable switch platforms are typically used, although they offer limited expressiveness. With the proposed approach, we intend to offer high performance and expressive power in a unified framework by solely relying on SPSs for multicores. Captured packets are translated into a proper tuple format, and network monitoring queries are applied to tuple streams. Packet analysis tasks are expressed as streaming pipelines, running on general-purpose programmable network devices, and a second stage of elaboration can process aggregated statistics from different devices. Experiments carried out with an example monitoring application show that the system is able to handle realistic traffic at 10 Gb/s. The same application scales almost up to 20 Gb/s thanks to the simple optimizations of the underlying framework. Hence, the approach proves to be viable and calls for the investigation of more extensive optimizations to support more complex elaborations and higher data rates.
32

Scaratos, P. D. "Computer modeling of fecal coliform contamination of an urban estuarine system." Water Science and Technology 44, no. 7 (2001): 9–16. http://dx.doi.org/10.2166/wst.2001.0378.

Abstract:
This study is focused on the investigation of the sources, distribution and fate of fecal coliform populations in the North Fork of the New River that flows through the City of Fort Lauderdale, Florida, USA. The dynamics of this brackish river are driven by weak tides, regulated freshwater discharges, overland runoff, storm water drainage from sewers, and groundwater exchange. Extensive field studies failed to document any alleged source(s) of contamination, including birds, domesticated and undomesticated mammals, humans, septic tank leakage, urban runoff, non-point discharges from agricultural lands, waste disposal from live-aboard vessels and/or in situ re-growth of fecal coliform. In order to facilitate field sampling and support the data analysis efforts, computer simulations were applied to assess the likelihood of the various possible pollution scenarios. The physically based computer model used is the WASP (Water Quality Analysis Simulation Program Modeling System) of the US Environmental Protection Agency. In addition, the Neural Network MATLAB Toolbox was utilized for data analysis. WASP was able to accurately simulate the water hydrodynamics and coliform concentrations within the North Fork, while the neural network assisted in identifying correlations between fecal coliform and the various parameters involved. The numerical results supported the conclusion that fecal coliform were introduced by the animal populations along the riverbanks and by storm water washout of the adjacent drainage basins and the banks. The problem is exacerbated by the low flushing capacity of the river.
33

Curtis, David C. "Use of Weather Surveillance Radars—88 Doppler Data in Hydrologic Modeling." Transportation Research Record: Journal of the Transportation Research Board 1647, no. 1 (1998): 61–66. http://dx.doi.org/10.3141/1647-08.

Abstract:
Successful hydrologic modeling depends heavily on high-quality rainfall data sets. If hydrologists cannot determine what is coming into a watershed, there is little chance that any hydrologic model will accurately estimate what is coming out on a consistent basis. Hydrologists are frequently forced to use rainfall data sets derived from sparse rain gauge networks that poorly resolve critical rainfall features, leading to inadequate model results. Over the past several years, the modernizing National Weather Service, the Federal Aviation Administration, and the Department of Defense have installed a new nationwide network of weather radars, providing a rich suite of real-time meteorological observations. Radar rainfall estimates from the new radars cover vast areas at a spatial and temporal resolution that would be impossibly expensive to match with a conventional rain gauge network. Hydrologists can now literally see between the gauges and view truer representations of the spatial distribution of rainfall than ever before. Results from the analysis of the January 9-10, 1995, storms in Sacramento, California, show that gauge-adjusted radar rainfall estimates help resolve rainfall features that could not have been inferred from rain gauge analysis alone. Accurate estimates of the volume, timing, and distribution of rainfall helped create excellent modeling results. In Waco, Texas, radar rainfall estimates were used to improve the analysis of excess inflow and infiltration into city storm sewers. The radar rainfall analyses enabled modelers to account for inflow/infiltration variations down to the neighborhood level.
34

Liu, Di, Ying Wang, and Lian Guang Liu. "Discussion on Power Grid Magnetic Storm Disaster Monitoring System Based on Cloud Computing." Advanced Materials Research 341-342 (September 2011): 641–45. http://dx.doi.org/10.4028/www.scientific.net/amr.341-342.641.

Abstract:
The power grid storm disaster monitoring system involves power system data, geomagnetic data, satellite data and other earth space observation data. To solve such problems as the system's large quantity of data and its storage and processing difficulties, the use of cloud computing in the system is put forward. The basic concepts and services of cloud computing are introduced first. The GIC (geomagnetically-induced current) monitoring system is then described, from front-end data collection and communication methods to back-end software data processing. The issues raised by the continuous expansion of the system are analyzed in detail, and the architecture of the power grid storm disaster monitoring system based on cloud computing is given. Security problems in implementing the system with cloud computing technology are finally discussed.
35

Wu, Sheng Hang, Zhe Wang, Ming Yuan He, and Huai Lin Dong. "Large-Scale Text Clustering Based on Improved K-Means Algorithm in the Storm Platform." Applied Mechanics and Materials 543-547 (March 2014): 1913–16. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.1913.

Abstract:
As web information dramatically increases, the distributed processing of massive data through clusters has become a focus of the research field. An efficient distributed algorithm is the determinant of scalability and performance in data analysis. This dissertation first studies the operating mechanism of Storm, which is a simplified distributed and real-time computation platform. Based on the Storm platform, an improved K-Means algorithm which can be used for data-intensive computing is designed and implemented. Finally, the experimental results show that the K-Means clustering algorithm based on the Storm platform obtains higher performance and improves the effectiveness and accuracy of large-scale text clustering.
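The data-parallel structure that such an implementation exploits can be sketched as below: the assignment step runs independently on chunks of the data (as parallel Storm bolts would), and the update step merges the partial sums and counts. The chunking, data and k are illustrative; this is not the authors' Storm topology.

```python
# Sketch of the data-parallel structure behind a distributed K-Means: the
# assignment step runs independently on data chunks (as parallel bolts would),
# and the update step merges the partial sums and counts.
import numpy as np

def assign_chunk(chunk, centroids):
    """Per-chunk work: nearest centroid, partial sums and counts."""
    d = np.linalg.norm(chunk[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    k, dim = centroids.shape
    sums, counts = np.zeros((k, dim)), np.zeros(k)
    for j in range(k):
        members = chunk[labels == j]
        sums[j] = members.sum(axis=0)
        counts[j] = len(members)
    return sums, counts

def parallel_kmeans(data, k=3, n_chunks=4, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), k, replace=False)]
    chunks = np.array_split(data, n_chunks)
    for _ in range(iters):
        partials = [assign_chunk(c, centroids) for c in chunks]  # map step
        sums = sum(p[0] for p in partials)                       # reduce step
        counts = sum(p[1] for p in partials)
        nonempty = counts > 0
        centroids[nonempty] = sums[nonempty] / counts[nonempty, None]
    return centroids

data = np.vstack([np.random.default_rng(i).normal(loc=i * 5.0, size=(200, 2))
                  for i in range(3)])
print(np.round(parallel_kmeans(data), 2))
```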
36

Cho, Wonhyeong, Myeong-Seon Gil, Mi-Jung Choi, and Yang-Sae Moon. "Storm-based distributed sampling system for multi-source stream environment." International Journal of Distributed Sensor Networks 14, no. 11 (2018): 155014771881269. http://dx.doi.org/10.1177/1550147718812698.

Abstract:
As large amounts of data streams occur rapidly in many recent applications such as social network services, the Internet of Things, and smart factories, sampling techniques have attracted much attention for handling such data streams efficiently. In this article, we address the performance improvement of binary Bernoulli sampling in the multi-source stream environment. Binary Bernoulli sampling has the n:1 structure where n sites transmit data to 1 coordinator. However, as the number of sites increases or the input stream explosively increases, the binary Bernoulli sampling may cause a severe bottleneck in the coordinator. In addition, bidirectional communication over different networks between the coordinator and the sites may incur excessive communication overhead. In this article, we propose a novel distributed processing model of binary Bernoulli sampling to solve these coordinator bottleneck and communication overhead problems. We first present a multiple-coordinator structure to solve the coordinator bottleneck. We then present a new sampling model with an integrated framework and shared memory to alleviate the communication overhead. To verify the effectiveness and scalability of the proposed model, we perform its actual implementation in Apache Storm, a real-time distributed stream processing system. Experimental results show that our Storm-based binary Bernoulli sampling improves performance by up to 1.8 times compared with the legacy method and maintains high performance even when the input stream largely increases. These results indicate that the proposed distributed processing model is an excellent approach that solves the performance degradation problem of binary Bernoulli sampling and verifies its superiority through the actual implementation on Apache Storm.
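The core coin-flip step of Bernoulli sampling in a multi-source setting can be sketched as below, with several simulated sites feeding one merged sample. The site count, sampling probability and synthetic tuples are assumptions, and the coordinator/site protocol of the paper is not modelled.

```python
# Core of Bernoulli sampling in a multi-source setting: every site keeps each
# incoming tuple independently with probability p, and the coordinator merges
# whatever the sites forward. Site count, p and the synthetic tuples are
# illustrative; the coordinator/site protocol of the paper is not modelled.
import random

def bernoulli_sample(stream, p, seed):
    rng = random.Random(seed)
    return [item for item in stream if rng.random() < p]

def merge_from_sites(site_streams, p=0.01):
    """Coordinator view: union of the per-site Bernoulli samples."""
    merged = []
    for site_id, stream in enumerate(site_streams):
        merged.extend((site_id, item) for item in bernoulli_sample(stream, p, seed=site_id))
    return merged

sites = [range(100_000) for _ in range(4)]          # four sensor sites
sample = merge_from_sites(sites, p=0.01)
print(f"kept {len(sample)} of {4 * 100_000} tuples (~1% expected)")
```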
37

Oštir, K., K. Čotar, A. Marsetič, et al. "Automatic Near-Real-Time Image Processing Chain for Very High Resolution Optical Satellite Data." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-7/W3 (April 29, 2015): 669–76. http://dx.doi.org/10.5194/isprsarchives-xl-7-w3-669-2015.

Abstract:
In response to the increasing need for automatic and fast satellite image processing, SPACE-SI has developed and implemented a fully automatic image processing chain, STORM, that performs all processing steps from sensor-corrected optical images (level 1) to web-delivered map-ready images and products without operator intervention. Initial development was tailored to high resolution RapidEye images, and all crucial and most challenging parts of the planned full processing chain were developed: a module for automatic image orthorectification based on a physical sensor model and supported by an algorithm for automatic detection of ground control points (GCPs); an atmospheric correction module; a topographic corrections module that combines a physical approach with the Minnaert method and utilizes an anisotropic illumination model; and modules for the generation of high level products. Various parts of the chain were also implemented for WorldView-2, THEOS, Pleiades, SPOT 6, Landsat 5-8, and PROBA-V. Support for a full-frame sensor currently under development by SPACE-SI is planned. The proposed paper focuses on the adaptation of the STORM processing chain to very high resolution multispectral images. The development concentrated on the sub-module for automatic detection of GCPs. The initially implemented two-step algorithm, which worked only with rasterized vector roads and delivered GCPs with sub-pixel accuracy for the RapidEye images, was improved with the introduction of a third step: super-fine positioning of each GCP based on a reference raster chip. The added step exploits the high spatial resolution of the reference raster to improve the final matching results and to achieve pixel accuracy also on very high resolution optical satellite data.
APA, Harvard, Vancouver, ISO, and other styles
38

Gad, M. A., and I. K. Tsanis. "A GIS methodology for the analysis of weather radar precipitation data." Journal of Hydroinformatics 5, no. 2 (2003): 113–26. http://dx.doi.org/10.2166/hydro.2003.0009.

Full text
Abstract:
A GIS multi-component module was developed within the ArcView GIS environment for processing and analysing weather radar precipitation data. The module is capable of: (a) reading georeferenced radar data and comparing it with rain-gauge network data, (b) estimating the kinematics of rainfall patterns, such as storm speed and direction, and (c) accumulating radar-derived rainfall depths. By bringing the spatial capabilities of GIS to bear, this module can accurately locate rainfall on the ground and can overlay the animated storm on different geographical features of the study area, making the exploration of the storm's kinematic characteristics obtained from radar data relatively simple. A case study in the City of Hamilton in Ontario, Canada is used to demonstrate the functionality of the module. Comparison of radar with rain gauge data revealed that the classical Marshall–Palmer Z–R relation underestimates the rainfall rate.
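For readers unfamiliar with the Z–R conversion mentioned at the end of the abstract, the snippet below inverts the classical Marshall–Palmer relation Z = 200·R^1.6 to turn radar reflectivity (in dBZ) into a rain rate in mm/h. The coefficients are the textbook values, not ones derived from the Hamilton case study.

```python
import numpy as np

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b (classical Marshall-Palmer: a=200, b=1.6).
    dbz is radar reflectivity in dBZ; the result is rain rate in mm/h."""
    z = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)   # dBZ to linear reflectivity
    return (z / a) ** (1.0 / b)

print(rain_rate_from_dbz([20, 30, 40, 50]))   # roughly 0.6, 2.7, 11.5, 48.6 mm/h
```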
APA, Harvard, Vancouver, ISO, and other styles
39

Podladchikova, Tatiana, Anatoly Petrukovich, and Yuri Yermolaev. "Geomagnetic storm forecasting service StormFocus: 5 years online." Journal of Space Weather and Space Climate 8 (2018): A22. http://dx.doi.org/10.1051/swsc/2018017.

Full text
Abstract:
Forecasting geomagnetic storms is highly important for many space weather applications. In this study, we review the performance of the geomagnetic storm forecasting service StormFocus during 2011–2016. The service was implemented in 2011 at SpaceWeather.Ru and predicts the expected strength of geomagnetic storms, as measured by the Dst index, several hours ahead. The forecast is based on L1 solar wind and IMF measurements and is updated every hour. The solar maximum of cycle 24 was weak, so most of the statistics are based on rather moderate storms. We verify the quality of the selection criteria, as well as the reliability of real-time input data in comparison with the final values available in archives. In real-time operation 87% of storms were correctly predicted, while the reanalysis running on final OMNI data successfully predicts 97% of storms. Thus the main reasons for prediction errors are discrepancies between real-time and final data (Dst, solar wind and IMF) due to processing errors and the specifics of the datasets.
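A hedged sketch of the kind of verification the abstract reports: counting how often a forecast Dst minimum dropped below a storm threshold when the observed one did. The −50 nT threshold and the paired-list input format are assumptions for illustration, not StormFocus's actual selection criteria.

```python
def storm_hit_rate(observed_dst_minima, predicted_dst_minima, threshold=-50.0):
    """Fraction of observed storms (Dst minimum at or below threshold) for which
    the forecast also fell below the threshold: a simple hit-rate style score."""
    hits = misses = 0
    for obs, pred in zip(observed_dst_minima, predicted_dst_minima):
        if obs <= threshold:                 # an actual storm occurred
            if pred <= threshold:
                hits += 1
            else:
                misses += 1
    return hits / (hits + misses) if (hits + misses) else float("nan")

# e.g. two hits and one miss out of three observed storms -> about 0.67
print(storm_hit_rate([-80, -30, -120, -60], [-70, -40, -90, -45]))
```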
APA, Harvard, Vancouver, ISO, and other styles
40

BELOKI, ZUHAITZ, XABIER ARTOLA, and AITOR SOROA. "A scalable architecture for data-intensive natural language processing." Natural Language Engineering 23, no. 5 (2017): 709–31. http://dx.doi.org/10.1017/s1351324917000092.

Full text
Abstract:
Computational power needs have greatly increased in recent years, and this is also the case in the Natural Language Processing (NLP) area, where thousands of documents must be processed, i.e., linguistically analyzed, in a reasonable time frame. These computing needs have driven a radical change in the computing architectures and large-scale text processing techniques used in NLP. In this paper, we present a scalable architecture for distributed language processing. The architecture uses Storm to combine diverse NLP modules into a processing chain, which carries out the linguistic analysis of documents. Scalability requires designing solutions that are able to run distributed programs in parallel and across large machine clusters. Using the architecture presented here, it is possible to integrate a set of third-party NLP modules into a single processing chain that can be deployed onto a distributed environment, i.e., a cluster of machines, allowing the language-processing modules to run in parallel. No restrictions are placed a priori on the NLP modules apart from being able to consume and produce linguistic annotations following a given format. We show the feasibility of our approach by integrating two linguistic processing chains for English and Spanish. Moreover, we provide several scripts that allow building from scratch a whole distributed architecture that can then be easily installed and deployed onto a cluster of machines. The scripts and the NLP modules used in the paper are publicly available and distributed under free licenses. In the paper, we also describe a series of experiments carried out in the context of the NewsReader project with the goal of testing how the system behaves in different scenarios.
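To make the idea of chaining format-agnostic NLP modules concrete, here is a toy, platform-free sketch: each module takes a document with annotations and returns it enriched, and a runner pushes documents through the ordered chain. It is not the paper's architecture or Storm code; the module names and the annotation format are invented for illustration.

```python
from typing import Callable, Dict, List

# Each "module" consumes a document dict carrying annotations and returns it
# enriched, mirroring the idea of chaining independent NLP modules.
Module = Callable[[Dict], Dict]

def tokenize(doc: Dict) -> Dict:
    doc["tokens"] = doc["text"].split()
    return doc

def pos_tag(doc: Dict) -> Dict:
    # toy tagger: label capitalised tokens as proper nouns, the rest as words
    doc["pos"] = ["PROPN" if t[:1].isupper() else "WORD" for t in doc["tokens"]]
    return doc

def run_chain(docs: List[Dict], chain: List[Module]) -> List[Dict]:
    """Push every document through the ordered chain of modules. In a Storm-style
    deployment each module would live in its own bolt and run in parallel."""
    for doc in docs:
        for module in chain:
            doc = module(doc)
    return docs

print(run_chain([{"text": "NewsReader processes documents"}], [tokenize, pos_tag]))
```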
APA, Harvard, Vancouver, ISO, and other styles
41

Li, Guang Di, Guo Yin Wang, Xue Rui Zhang, Wei Hui Deng, and Fan Zhang. "Forest Cover Types Classification Based on Online Machine Learning on Distributed Cloud Computing Platforms of Storm and SAMOA." Advanced Materials Research 955-959 (June 2014): 3803–12. http://dx.doi.org/10.4028/www.scientific.net/amr.955-959.3803.

Full text
Abstract:
Storm is the most popular real-time stream processing platform and can be used for online machine learning. Similar to how Hadoop provides a set of general primitives for batch processing, Storm provides a set of general primitives for real-time computation. SAMOA includes distributed algorithms for the most common machine learning tasks, much as Mahout does for Hadoop; SAMOA is both a platform and a library. In this paper, Forest cover types, a large benchmarking dataset available at the UCI KDD Archive, is used as the data stream source. The Vertical Hoeffding Tree, a parallel streaming decision-tree induction algorithm for distributed environments included in the SAMOA API, is applied on the Storm platform. This study compared the stream processing technique for predicting forest cover types from cartographic variables with traditional machine learning algorithms applied to the same dataset. The test-then-train method used in this system differs fundamentally from the traditional train-then-test approach. The results indicate that the output of the stream processing technique is asymptotically nearly identical to that of a conventional learner, but the model derived from this system is fully scalable, real-time, capable of dealing with evolving streams and insensitive to stream ordering.
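The "test then train" scheme the abstract mentions is usually called prequential evaluation. Below is a minimal, generic sketch of that loop, written against a placeholder incremental-learner interface (predict_one / learn_one); it is not SAMOA's Vertical Hoeffding Tree nor its actual API.

```python
def prequential_accuracy(model, stream):
    """Test-then-train evaluation: every instance is first used to test the model,
    then immediately used to update it. `model` is any incremental learner with
    predict_one(x) and learn_one(x, y) methods (placeholder interface)."""
    correct = total = 0
    for x, y in stream:
        if total > 0:                       # skip scoring before the first update
            correct += int(model.predict_one(x) == y)
        model.learn_one(x, y)
        total += 1
    return correct / max(total - 1, 1)
```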
APA, Harvard, Vancouver, ISO, and other styles
42

Wan, Jin Chang, and Tao Cheng. "Research on Multi-Agent and Storm-Hadoop Based on Cooperative Sensing Framework for Multiple Intelligent Manufacturing Agent." Applied Mechanics and Materials 864 (April 2017): 192–201. http://dx.doi.org/10.4028/www.scientific.net/amm.864.192.

Full text
Abstract:
With the bottlenecks of traditional manufacturing technology becoming increasingly prominent, the manufacturing industry needs to adjust and upgrade its industrial structure, and intelligent manufacturing is the future direction of its development. Intelligent manufacturing is characterized by a complex, open system hierarchy and a distributed multi-agent structure. To solve the problem of information integration and collaborative processing among the elements involved in collaborative perception, the intelligent manufacturing entities must be given autonomy so that each forms a fully functional agent; these agents are then connected to the communication network as network nodes, thereby enabling data sharing and collaborative sensing among the manufacturing entities. Therefore, through research and analysis of distributed artificial intelligence and distributed cluster mechanisms, this article builds an intelligent manufacturing system based on a multi-agent and Storm-Hadoop distributed cluster framework, and provides a collaborative framework for information integration, data sharing and collaborative processing of the cooperative-sensing problem for multiple manufacturing agents. The simulation results show that the cooperative sensing framework proposed in this paper can provide an effective information, data and knowledge processing mechanism for realizing collaborative awareness in multi-agent intelligent manufacturing.
APA, Harvard, Vancouver, ISO, and other styles
43

Rinas, Martin, Jens Tränckner, and Thilo Koegst. "Sediment Transport in Sewage Pressure Pipes, Part I: Continuous Determination of Settling and Erosion Characteristics by In-Situ TSS Monitoring Inside a Pressure Pipe in Northern Germany." Water 11, no. 10 (2019): 2125. http://dx.doi.org/10.3390/w11102125.

Full text
Abstract:
Continuous measurement systems are widely used in sewers, especially in non-pressurized systems. Because of their relatively low cost, turbidity sensors are often used as a surrogate for other indicators (solids, heavy metals, organic compounds). However, little attention has been paid so far to turbidity sensors in pressurized systems. This work presents the results of one year of in-situ turbidity/total suspended solids (TSS) monitoring inside a pressure pipe (600 mm diameter) in an urban region in northern Germany. The high-resolution sensor data (5 s interval) are used to determine solids sedimentation (within pump pauses) and erosion behavior (within pump sequences). In-situ results from the sensor measurements are similar to laboratory results presented in previous studies. TSS decreases exponentially during pump pauses under dry weather inflow, with an average of 0.23 mg/(L s). During pump sequences, solids eroded completely at a bed shear stress of 0.5 N/m². Sedimentation and erosion behavior change with the inflow rate. Solids settle faster with increasing inflow: at storm water inflow with an average of 0.9 mg/(L s), and at diurnal inflow variation up to 0.6 mg/(L s) at 12:00 a.m. The results are used as calibration data for the sediment transport simulation in Part II.
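The exponential settling behaviour described above can be quantified by fitting a decay model to the TSS record of a single pump pause. The sketch below does this with SciPy on synthetic 5-second data; the decay form TSS(t) = TSS0·exp(−k·t) and all numbers are illustrative assumptions, not the study's calibration values.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, tss0, k):
    """TSS(t) = TSS0 * exp(-k * t): exponential settling during a pump pause."""
    return tss0 * np.exp(-k * t)

def fit_settling(t_seconds, tss_mg_per_l):
    """Fit the initial concentration and decay constant k (1/s) to one pump pause."""
    p0 = (tss_mg_per_l[0], 1e-3)                          # rough initial guess
    (tss0, k), _ = curve_fit(exp_decay, t_seconds, tss_mg_per_l, p0=p0)
    return tss0, k

# synthetic pause: 10 minutes of 5-s samples decaying from 300 mg/L
t = np.arange(0, 600, 5.0)
tss = exp_decay(t, 300.0, 2e-3) + np.random.normal(0, 2, t.size)
print(fit_settling(t, tss))     # recovers roughly (300, 0.002)
```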
APA, Harvard, Vancouver, ISO, and other styles
44

Tscheikner-Gratl, Franz, Peter Zeisl, Carolina Kinzel, et al. "Lost in calibration: why people still do not calibrate their models, and why they still should – a case study from urban drainage modelling." Water Science and Technology 74, no. 10 (2016): 2337–48. http://dx.doi.org/10.2166/wst.2016.395.

Full text
Abstract:
From a scientific point of view, it is unquestioned that numerical models of technical systems need to be calibrated. However, insufficiently calibrated models are still used in engineering practice. Case studies in the scientific literature that deal with urban water management mostly concern large cities, while little attention is paid to the differing boundary conditions of smaller municipalities. Consequently, the aim of this paper is to discuss the calibration of a hydrodynamic model of a small municipality (15,000 inhabitants). To represent the spatial distribution of precipitation, three distributed rain gauges were used for model calibration. To show the uncertainties inherent in the calibration process, 17 scenarios, differing in their calibration assumptions, were distinguished. To compare the impact of the different calibration scenarios on actual design values, design rainfall events were applied. The comparison of the model results using the different typical design storm events from all the surrounding data points showed substantial differences in the assessment of the sewers regarding urban flooding, emphasizing the necessity of uncertainty analysis for hydrodynamic models. Furthermore, model calibration is of the utmost importance, because uncalibrated models tend to overestimate flooding volume and therefore result in larger diameters and retention volumes.
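Calibration of a hydrodynamic model is typically judged with a goodness-of-fit measure between measured and simulated series. As a generic illustration (the abstract does not state which metric the authors used), here is the widely used Nash–Sutcliffe efficiency.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model is no
    better than the mean of the observations, negative values are worse."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# e.g. compare measured and simulated flows at a calibration gauge
print(nash_sutcliffe([0.8, 1.4, 2.1, 1.0], [0.7, 1.5, 1.9, 1.1]))
```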
APA, Harvard, Vancouver, ISO, and other styles
45

Moura, P., S. Barraud, and M. Baptista. "Multicriteria procedure for the design and the management of infiltration systems." Water Science and Technology 55, no. 4 (2007): 145–53. http://dx.doi.org/10.2166/wst.2007.104.

Full text
Abstract:
Infiltration systems are frequently used as an option to manage urban storm drainage. By reducing flows and volumes in downstream sewers or in surface waters, they decrease overflows and make it possible to recharge groundwater. They come in various forms with different uses; therefore, their performance is diverse and integrates multiple aspects. Consequently, a multicriteria approach was developed in order to quantify the performance of these systems and to support decision making. For that purpose, a list of performance indicators integrating technical, economic, environmental and social aspects was developed. The performance indicators were defined with the help of a working group composed of engineers from different technical and strategic departments of Greater Lyon and researchers from different fields. The paper presents the latest version of the performance indicators, tested against a set of quality requirements: availability of data, relevance, fidelity, precision, and sensitivity/robustness. This critical review of the set of indicators led us to redefine a number of indicators, identify numerous biases and put forward general guidelines for criterion and indicator construction. The last phase is to propose multicriteria decision aid methods; a procedure using ELECTRE methods should be used.
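As a small, generic illustration of the ELECTRE family mentioned in the last sentence, the sketch below computes a concordance matrix: for each pair of alternatives it sums the weights of the criteria on which the first performs at least as well as the second (benefit criteria assumed). The scores and weights are invented; this is not the Greater Lyon indicator set.

```python
import numpy as np

def concordance_matrix(scores, weights):
    """ELECTRE-style concordance: C[a, b] is the weight share of criteria on which
    alternative a performs at least as well as alternative b (benefit criteria)."""
    scores = np.asarray(scores, dtype=float)       # rows: alternatives, cols: criteria
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    n = scores.shape[0]
    c = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a != b:
                c[a, b] = weights[scores[a] >= scores[b]].sum()
    return c

# three infiltration-system options scored on four indicators (higher is better)
print(concordance_matrix([[3, 2, 4, 1], [2, 3, 3, 2], [4, 1, 2, 3]],
                         [0.4, 0.2, 0.2, 0.2]))
```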
APA, Harvard, Vancouver, ISO, and other styles
46

Wu, Kehe, Yayun Zhu, Quan Li, and Ziwei Wu. "A distributed real-time data prediction framework for large-scale time-series data using stream processing." International Journal of Intelligent Computing and Cybernetics 10, no. 2 (2017): 145–65. http://dx.doi.org/10.1108/ijicc-09-2016-0033.

Full text
Abstract:
Purpose: The purpose of this paper is to propose a data prediction framework for scenarios that require forecasting over large-scale data sources, e.g., sensor networks, securities exchanges, electric power secondary systems, etc. Concretely, the proposed framework should handle several difficult requirements, including the management of gigantic data sources, the need for a fast self-adaptive algorithm, relatively accurate prediction of multiple time series, and real-time operation. Design/methodology/approach: First, the autoregressive integrated moving average (ARIMA)-based prediction algorithm is introduced. Second, the processing framework is designed, which includes a time-series data storage model based on HBase and a real-time distributed prediction platform based on Storm. Then, the working principle of this platform is described. Finally, a proof-of-concept testbed is illustrated to verify the proposed framework. Findings: Several tests based on Power Grid monitoring data are provided for the proposed framework. The experimental results indicate that the predicted data are basically consistent with the actual data, processing efficiency is relatively high, and resource consumption is reasonable. Originality/value: This paper provides a distributed real-time data prediction framework for large-scale time-series data, which meets the requirements of effective management, prediction efficiency, accuracy, and high concurrency for massive data sources.
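On a single worker, the ARIMA step the abstract refers to can be sketched with statsmodels as below; the order (2, 1, 1), the synthetic series and the function name are assumptions, and the distributed HBase/Storm layers are omitted.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def forecast_next(series, order=(2, 1, 1), steps=5):
    """Fit an ARIMA(p, d, q) model to one monitoring time series and forecast
    the next `steps` points, i.e. one worker's share of a larger distributed job."""
    fitted = ARIMA(series, order=order).fit()
    return fitted.forecast(steps=steps)

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.5, 1.0, 200))    # synthetic drifting measurement
print(forecast_next(series))
```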
APA, Harvard, Vancouver, ISO, and other styles
47

Brown, Tanya M., William H. Pogorzelski, and Ian M. Giammanco. "Evaluating Hail Damage Using Property Insurance Claims Data." Weather, Climate, and Society 7, no. 3 (2015): 197–210. http://dx.doi.org/10.1175/wcas-d-15-0011.1.

Full text
Abstract:
A series of thunderstorms on 24 May 2011 produced significant hail in the Dallas–Fort Worth (DFW) metroplex, resulting in an estimated $876.8 million (U.S. dollars) in insured losses to property and automobiles, according to the Texas Department of Insurance. Insurance claims and policy-in-force data were obtained from five insurance companies for more than 67,000 residential properties located in 20 ZIP codes. The methodology for selecting the 20 ZIP codes is described. This study evaluates roofing material type with regard to resiliency to hailstone impacts and relative damage costs associated with roofing systems versus wall systems. A comparison of Weather Surveillance Radar-1988 Doppler (WSR-88D) radar-estimated hail sizes and damage levels seen in the claims data is made. Recommendations for improved data collection and quality of insurance claims data, as well as guidance for future property insurance claims studies, are summarized. Studies such as these allow insurance underwriters and claims adjusters to better evaluate the relative performance and vulnerability of various roofing systems and other building components as a function of hail size. They also highlight the abilities and limitations of utilizing radar horizontal reflectivity-based hail sizes, local storm reports, and Storm Data for claims processing. Large studies of this kind may be able to provide guidance to consumers, designers, and contractors concerning building product selections for improved resiliency to hailstorms, and give a glimpse into how product performance varies with storm exposure. Reducing hail losses would reduce the financial burden on property owners and insurers and reduce the amount of building materials being disposed of after storms.
APA, Harvard, Vancouver, ISO, and other styles
48

Bartolini, Ilaria, and Marco Patella. "Real-Time Stream Processing in Social Networks with RAM3S." Future Internet 11, no. 12 (2019): 249. http://dx.doi.org/10.3390/fi11120249.

Full text
Abstract:
The avalanche of (both user- and device-generated) multimedia data published in online social networks poses serious challenges to researchers seeking to analyze such data for many different tasks, like recommendation, event recognition, and so on. For some such tasks, the classical “batch” approach of big data analysis is not suitable, due to constraints of real-time or near-real-time processing. This led to the rise of stream processing big data platforms, like Storm and Flink, that are able to process data with very low latency. However, this complicates the task of data analysis since any implementation has to deal with the technicalities of such platforms, like distributed processing, synchronization, node faults, etc. In this paper, we show how the RAM3S framework could be profitably used to easily implement a variety of applications (such as clothing recommendations, job suggestions, and alert generation for dangerous events), while being independent of the particular stream processing big data platform used. Indeed, by using RAM3S, researchers can concentrate on the development of their data analysis application, completely ignoring the details of the underlying platform.
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Wenjuan, and Hongchun Yuan. "A Tidal Level Prediction Approach Based on BP Neural Network and Cubic B-Spline Curve with Knot Insertion Algorithm." Mathematical Problems in Engineering 2018 (July 11, 2018): 1–9. http://dx.doi.org/10.1155/2018/9835079.

Full text
Abstract:
Tide levels depend both on long-term astronomical effects, mainly driven by the moon and the sun, and on short-term meteorological effects generated by severe weather conditions such as storm surge. Storm surge caused by typhoons imposes serious risks to coastal residents' lives, property, and production. Owing to the challenges of non-periodic and discontinuous tidal level records and the influence of multiple meteorological factors, existing methods cannot precisely predict tide levels affected by typhoons. This paper aims to explore a more advanced method for forecasting the tide levels of storm surge caused by typhoons. First, on the basis of five consecutive years of tide level and typhoon data at Luchaogang, China, a BP neural network model is developed using six typhoon parameters as inputs and the corresponding tide level data as outputs. Then, to improve forecasting accuracy, a cubic B-spline curve with a knot insertion algorithm is combined with the BP model to smooth the predicted points, yielding a smoothed tidal level prediction curve. Using the data of the fifth year as the testing sample, the results predicted by the two methods are compared. The experimental results show that the latter approach has higher accuracy in forecasting the tidal level of storm surge caused by typhoons, and the combined prediction approach provides a powerful tool for defending against and mitigating storm surge disasters.
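The smoothing stage can be illustrated with SciPy's cubic B-spline routines, as below. This uses a smoothing spline rather than the paper's explicit knot-insertion algorithm, and the synthetic "predicted" series and parameter values are placeholders.

```python
import numpy as np
from scipy.interpolate import splev, splrep

def smooth_predictions(times, predicted_levels, smoothing=0.5, dense=300):
    """Fit a cubic B-spline (k=3) through the network's predicted tide levels and
    evaluate it on a dense time grid to obtain a smooth prediction curve."""
    tck = splrep(times, predicted_levels, k=3, s=smoothing)
    t_dense = np.linspace(times[0], times[-1], dense)
    return t_dense, splev(t_dense, tck)

# synthetic hourly "predictions" with a tidal-like oscillation plus noise
hours = np.arange(0, 24, 1.0)
raw = 1.5 * np.sin(2 * np.pi * hours / 12.4) + np.random.normal(0, 0.1, hours.size)
t_smooth, level_smooth = smooth_predictions(hours, raw)
print(level_smooth[:5])
```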
APA, Harvard, Vancouver, ISO, and other styles
50

Yovan Felix, A., G. S. S. Vinay, and G. Akhik. "K-Means Cluster Using Rainfall and Storm Prediction in Machine Learning Technique." Journal of Computational and Theoretical Nanoscience 16, no. 8 (2019): 3265–69. http://dx.doi.org/10.1166/jctn.2019.8174.

Full text
Abstract:
Data mining involves extracting meaningful information from available data in a user-understandable manner. Its role is to analyze the voluminous data that is constantly being assembled. Using data mining techniques, various business-related queries that were formerly extremely time-consuming to answer can be addressed. There exist uncontrollable natural disasters that critically harm human life, the environment, and material resources. Natural calamities like heavy rainfall and floods cannot be well predicted before they happen, and they are beyond one's power to control. The after-effects and destruction caused by these calamities can persist for many years. The term disaster refers to a vulnerable condition caused by heavy rainfall, flood or storm that can have intense effects at a smaller scale, such as a village, or at a larger scale, such as a city or state. Clustering models developed previously suffered from high time complexity and low processing speed and were inappropriate for huge datasets. The current research work proposes a K-means clustering approach, a machine learning (ML) technique capable of processing huge datasets and performing quick computation compared with other clustering models. The stages in the proposed system include dataset collection, pre-processing, feature selection, and K-means clustering. Among these, the K-means clustering tool, which belongs to data mining and ML approaches, is employed to cluster observations into groups. It is a form of unsupervised learning that addresses the clustering problem. The results reveal that the K-means clustering tool performs clustering faster than other existing techniques.
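A minimal sketch of the clustering stage with scikit-learn, run on a synthetic stand-in for a pre-processed rainfall dataset; the features, the cluster count and the scaling step are assumptions, since the abstract does not specify them.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# synthetic stand-in for a pre-processed rainfall dataset:
# columns = [rainfall_mm, wind_speed_kmh, humidity_pct]
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([20, 15, 70], [5, 3, 5], (100, 3)),     # light-rain days
               rng.normal([120, 60, 90], [20, 10, 4], (40, 3))])  # storm-like days

X_scaled = StandardScaler().fit_transform(X)          # pre-processing step
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_scaled)
print(np.bincount(kmeans.labels_))                    # cluster sizes
print(kmeans.cluster_centers_)                        # centroids in scaled space
```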
APA, Harvard, Vancouver, ISO, and other styles