Journal articles on the topic 'Databases and Grids'

Consult the top 50 journal articles for your research on the topic 'Databases and Grids.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Subramanyam, R. B. V., and A. Goswami. "Mining Frequent Fuzzy Grids in Dynamic Databases with Weighted Transactions and Weighted Items." Journal of Information & Knowledge Management 05, no. 03 (September 2006): 243–57. http://dx.doi.org/10.1142/s0219649206001487.

Full text
Abstract:
Incremental mining algorithms that derive the latest mining output by making use of previous mining results are attractive to business organisations. In this paper, a fuzzy data mining algorithm for incremental mining of frequent fuzzy grids from quantitative dynamic databases is proposed. It extends the traditional association rule problem by allowing a weight to be associated with each item in a transaction and with each transaction in a database, to reflect the interest/intensity of items and transactions. It uses the information about fuzzy grids already mined from the original database and avoids a start-from-scratch process. In addition, we deal with "weights-of-significance", which are automatically adjusted as incremental databases evolve and are merged into the original database. We maintain "hopeful fuzzy grids" and "frequent fuzzy grids", and our algorithm changes the status of grids discovered earlier so that they reflect the pattern drift in the updated quantitative databases. Our heuristic approach avoids maintaining many "hopeful fuzzy grids" at the initial level. The algorithm is illustrated with a numerical example, and experimental results are also presented.
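
The weighted support measure the abstract alludes to can be made concrete. Below is a minimal sketch of scoring one candidate fuzzy grid, assuming a product t-norm and weights applied as plain multipliers; the paper's exact definitions may differ.

```python
import numpy as np

def weighted_fuzzy_support(memberships, item_weights, txn_weights):
    """Weighted fuzzy support of one candidate fuzzy grid.

    memberships : (T, k) membership degree of each of T transactions in
                  each of the k fuzzy cells forming the grid.
    item_weights: (k,) interest weights of the items.
    txn_weights : (T,) interest weights of the transactions.
    """
    # Combine cell memberships with the product t-norm, scaled per item,
    # then average over transactions using the transaction weights.
    degree = (memberships * item_weights).prod(axis=1)            # (T,)
    return float((txn_weights * degree).sum() / txn_weights.sum())

# Three transactions, a two-cell fuzzy grid.
m = np.array([[0.8, 0.6], [0.2, 0.9], [1.0, 0.5]])
print(weighted_fuzzy_support(m, np.array([1.0, 0.7]), np.array([2.0, 1.0, 1.0])))
```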
2

Breton, V., I. E. Magnin, and J. Montagnat. "Partitioning Medical Image Databases for Content-based Queries on a Grid." Methods of Information in Medicine 44, no. 02 (2005): 154–60. http://dx.doi.org/10.1055/s-0038-1633937.

Full text
Abstract:
Summary Objectives: In this paper we study the impact of executing a medical image database query application on the grid. For lowering the total computation time, the image database is partitioned into subsets to be processed on different grid nodes. Methods: A theoretical model of the application complexity and estimates of the grid execution overhead are used to efficiently partition the database. Results: We show results demonstrating that smart partitioning of the database can lead to significant improvements in terms of total computation time. Conclusions: Grids are promising for content-based image retrieval in medical databases.
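
A toy illustration of the partitioning trade-off: with a per-job overhead, the optimal number of grid nodes is not simply "all of them". This sketch uses an assumed makespan model (sequential job submission plus parallel processing); the paper derives its own complexity model and overhead estimates.

```python
import math

def best_node_count(n_images, per_image_cost, submit_overhead, max_nodes):
    """Return the node count k minimising the assumed makespan
    T(k) = k * submit_overhead + per_image_cost * ceil(N / k)."""
    def makespan(k):
        return k * submit_overhead + per_image_cost * math.ceil(n_images / k)
    return min(range(1, max_nodes + 1), key=makespan)

# 10,000 images at 0.5 s each, 20 s to submit a job: ~16 nodes is best.
print(best_node_count(10_000, per_image_cost=0.5, submit_overhead=20.0,
                      max_nodes=64))
```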
3

Paventhan, A., Kenji Takeda, Simon J. Cox, and Denis A. Nicole. "Federated Database Services for Wind Tunnel Experiment Workflows." Scientific Programming 14, no. 3-4 (2006): 173–84. http://dx.doi.org/10.1155/2006/729069.

Full text
Abstract:
Enabling the full life cycle of scientific and engineering workflows requires robust middleware and services that support effective data management, near-realtime data movement and custom data processing. Many existing solutions exploit the database as a passive metadata catalog. In this paper, we present an approach that makes use of federation of databases to host data-centric wind tunnel application workflows. The user is able to compose customized application workflows based on database services. We provide a reference implementation that leverages typical business tools and technologies: Microsoft SQL Server for database services and Windows Workflow Foundation for workflow services. The application data and user's code are both hosted in federated databases. With the growing interest in XML Web Services in scientific Grids, and with databases beginning to support native XML types and XML Web services, we can expect the role of databases in scientific computation to grow in importance.
4

Minty, Brian R. S., Peter R. Milligan, Tony Luyendyk, and Timothy Mackey. "Merging airborne magnetic surveys into continental‐scale compilations." GEOPHYSICS 68, no. 3 (May 2003): 988–95. http://dx.doi.org/10.1190/1.1581070.

Full text
Abstract:
Regional compilations of airborne magnetic data are becoming more common as national databases grow. Grids of the magnetic survey data are joined together to form geological province‐scale or even continental‐scale compilations. The advantage of these compilations is that large tectonic features and geological provinces can be better mapped and interpreted. We take a holistic approach to the joining of survey grids. The leveling of the grids into a regional compilation is treated as a single inverse problem. We use the weighted least‐squares method to find the best adjustment for each survey grid such that the data value differences in the grid overlap areas are minimized. The method spreads any inconsistencies between grids among all of the grid overlap areas and minimizes the introduction of long‐wavelength errors into the composite grid. This is an improvement on the conventional approach of joining grids sequentially. A comparison of leveled data over Western Australia with diurnally‐corrected long aeromagnetic traverses shows long‐wavelength errors of about 200 nT over distances of more than 5000 km. This is an improvement on the sequential grid‐joining method, which gives errors of about 450 nT over the same distance. The application of the method to a smaller area covered by good quality surveys resulted in long‐wavelength errors of about 30 nT over a distance of 1200 km. This is within the estimated accuracy of the original survey measurements. The new method is also fast—what used to take many weeks of effort can now be achieved in a matter of hours.
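
The single-inverse-problem formulation can be sketched for the simplest case, one DC shift per survey grid, as a weighted least-squares solve over all overlap differences at once (the published method may fit richer per-survey corrections):

```python
import numpy as np

def level_grids(n_grids, overlaps):
    """Solve for one shift per grid so that weighted data differences in
    all overlap areas are minimised simultaneously.

    overlaps: iterable of (i, j, mean_diff_ij, weight), where mean_diff_ij
    is the mean of (grid_i - grid_j) over the overlap of grids i and j.
    Grid 0 is pinned at zero shift to remove the datum ambiguity.
    """
    rows, rhs, wts = [], [], []
    for i, j, diff, weight in overlaps:
        row = np.zeros(n_grids)
        row[i], row[j] = 1.0, -1.0        # want diff + d_i - d_j ~ 0
        rows.append(row); rhs.append(-diff); wts.append(weight)
    pin = np.zeros(n_grids); pin[0] = 1.0
    rows.append(pin); rhs.append(0.0); wts.append(1e6)
    sw = np.sqrt(np.array(wts))[:, None]
    shifts, *_ = np.linalg.lstsq(np.array(rows) * sw,
                                 np.array(rhs) * sw.ravel(), rcond=None)
    return shifts

print(level_grids(3, [(0, 1, 5.0, 1.0), (1, 2, -3.0, 1.0), (0, 2, 2.5, 0.5)]))
```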
5

Bai, Yi Ming, Xian Yao Meng, and Xin Jie Han. "Mining Fuzzy Association Rules in Quantitative Databases." Applied Mechanics and Materials 182-183 (June 2012): 2003–7. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.2003.

Full text
Abstract:
In this paper, we introduce a novel technique for mining fuzzy association rules in quantitative databases. Unlike other data mining techniques, which can only discover association rules over discrete values, the algorithm reveals the relationships among different quantitative values by traversing the partition grids and produces the corresponding Fuzzy Association Rules. Fuzzy Association Rules employ linguistic terms to represent the revealed regularities and exceptions in quantitative databases. After the fuzzy rule base is built, we utilize the definition of Support Degree in data mining to reduce the number of rules and retain the useful ones. Throughout this paper, we use a set of real data from a wine database to demonstrate the ideas and test the models.
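
The linguistic terms that partition a quantitative attribute are typically modelled with overlapping triangular fuzzy sets. A minimal sketch of one such membership function (the paper's partition grids may be defined differently):

```python
def triangular_membership(x, a, b, c):
    """Membership of x in the triangular fuzzy set (a, b, c), used to map
    a quantitative value onto a linguistic term such as 'low' or 'high'."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# A wine attribute of 6.0 belongs to 'medium' (0, 5, 10) to degree 0.8.
print(triangular_membership(6.0, 0.0, 5.0, 10.0))
```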
6

Estrella, F., C. del Frate, T. Hauer, M. Odeh, D. Rogulin, S. R. Amendolia, D. Schottlander, T. Solomonides, R. Warren, and R. McClatchey. "Resolving Clinicians’ Queries Across a Grid’s Infrastructure." Methods of Information in Medicine 44, no. 02 (2005): 149–53. http://dx.doi.org/10.1055/s-0038-1633936.

Full text
Abstract:
Summary Objectives: The past decade has witnessed order-of-magnitude increases in computing power, data storage capacity and network speed, giving birth to applications which may handle large data volumes of increased complexity, distributed over the internet. Methods: Medical image analysis is one of the areas for which this unique opportunity likely brings revolutionary advances, both for the scientist's research study and the clinician's everyday work. Grid computing [1] promises to resolve many of the difficulties in facilitating medical image analysis to allow radiologists to collaborate without having to co-locate. Results: The EU-funded MammoGrid project [2] aims to investigate the feasibility of developing a Grid-enabled European database of mammograms and to provide an information infrastructure which federates multiple mammogram databases. This will enable clinicians to develop new common, collaborative and co-operative approaches to the analysis of mammographic data. Conclusion: This paper focuses on one of the key requirements for large-scale distributed mammogram analysis: resolving queries across a grid-connected federation of images.
7

Paiva, L. M. S., G. C. R. Bodstein, and L. C. G. Pimentel. "Influence of high-resolution surface databases on the modeling of local atmospheric circulation systems." Geoscientific Model Development Discussions 6, no. 4 (December 16, 2013): 6659–715. http://dx.doi.org/10.5194/gmdd-6-6659-2013.

Full text
Abstract:
Abstract. Large-eddy simulations are performed using the Advanced Regional Prediction System (ARPS) code at horizontal grid resolutions as fine as 300 m to assess the influence of detailed and updated surface databases on the modeling of local atmospheric circulation systems of urban areas with complex terrain. Applications to air pollution and wind energy are sought. These databases are comprised of 3 arc-sec topographic data from the Shuttle Radar Topography Mission, 10 arc-sec vegetation-type data from the European Space Agency (ESA) GlobCover Project, and 30 arc-sec Leaf Area Index and Fraction of Absorbed Photosynthetically Active Radiation data from the ESA GlobCarbon Project. Simulations are carried out for the Metropolitan Area of Rio de Janeiro using six one-way nested-grid domains that allow the choice of distinct parametric models and vertical resolutions associated with each grid. ARPS is initialized using the Global Forecast System (GFS) with 0.5°-resolution data from the National Centers for Environmental Prediction, which is also used every 3 h as the lateral boundary condition. Topographic shading is turned on and two soil layers with depths of 0.01 and 1.0 m are used to compute the soil temperature and moisture budgets in all runs. Results for two simulated runs covering the period from 6 to 7 September 2007 are compared to surface and upper-air observational data to explore the dependence of the simulations on initial and boundary conditions, topographic and land-use databases and grid resolution. Our comparisons show overall good agreement between simulated and observed data, and also indicate that the low resolution of the 30 arc-sec soil database from the United States Geological Survey, the soil moisture and skin temperature initial conditions assimilated from the GFS analyses, and the synoptic forcing on the lateral boundaries of the finer grids may hinder an adequate spatial description of the meteorological variables.
8

Fox, Geoffrey, Shrideep Pallickara, Marlon Pierce, and Harshawardhan Gadgil. "Building messaging substrates for Web and Grid applications." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 363, no. 1833 (July 18, 2005): 1757–73. http://dx.doi.org/10.1098/rsta.2005.1605.

Full text
Abstract:
Grid application frameworks have increasingly aligned themselves with the developments in Web services. Web services are currently the most popular infrastructure based on the service-oriented architecture (SOA) paradigm. There are three core areas within the SOA framework: (i) a set of capabilities that are remotely accessible, (ii) communications using messages and (iii) metadata pertaining to the aforementioned capabilities. In this paper, we focus on issues related to the messaging substrate hosting these services; we base these discussions on the NaradaBrokering system. We outline strategies to leverage capabilities available within the substrate without the need to make any changes to the service implementations themselves. We also identify the set of services needed to build Grids of Grids. Finally, we discuss another technology, HPSearch, which facilitates the administration of the substrate and the deployment of applications via a scripting interface. These issues have direct relevance to scientific Grid applications, which need to go beyond remote procedure calls in client-server interactions to support integrated distributed applications that couple databases, high performance computing codes and visualization codes.
9

Dieulin, Claudine, Gil Mahé, Jean-Emmanuel Paturel, Soundouss Ejjiyar, Yves Tramblay, Nathalie Rouché, and Bouabid EL Mansouri. "A New 60-year 1940/1999 Monthly-Gridded Rainfall Data Set for Africa." Water 11, no. 2 (February 22, 2019): 387. http://dx.doi.org/10.3390/w11020387.

Full text
Abstract:
The African continent has a very low density of rain gauge stations, and long time series for recent years are often limited and poorly available. In the context of global change, it is very important to be able to characterize the spatio-temporal variability of past rainfall, on the basis of datasets issued from observations, to correctly validate simulations. The quality of the rainfall data is, for instance, of very high importance for improving the efficiency of hydrological modeling through calibration/validation experiments. The HydroSciences Montpellier Laboratory (HSM) has long experience in collecting and managing hydro-climatological data. Thus, HSM initiated a program to elaborate a reference dataset, in order to build monthly rainfall grids over the African continent over a period of 60 years (1940/1999). The large quantity of data collected (about 7,000 measurement points were used in this project) allowed for interpolation using only observed data, with no statistical use of a reference period. Compared to other databases that are used to build the grids of the Global Historical Climatology Network (GHCN) or the Climatic Research Unit of the University of East Anglia, UK (CRU), the number of available observational stations was significantly higher, including at the end of the century, when the number of measurement stations dropped dramatically everywhere. Inverse distance weighting (IDW) was the method chosen to build the 720 monthly grids and a mean annual grid from rain gauges. The mean annual grid was compared to the CRU grid. The grids were significantly different in many places, especially in North Africa, the Sahel, the Horn of Africa, and the south-western coast of Africa, with HSM_SIEREM data (database HydroSciences Montpellier_Système d'Information Environnementales pour les Ressources en Eau et leur Modélisation) being closer to the observed rain gauge values. The quality of the computed grids was checked following two approaches: cross-validation of the two interpolation methods, ordinary kriging and inverse distance weighting, which gave comparable reliability with regard to the observed data; and long time-series analysis, with analysis of long-term signals over the continent compared to previous studies. The statistical tests, computed on the observed and gridded data, detected a rupture in the rainfall regime around 1979/1980 at the scale of the whole continent; this is congruent with the results in the literature. At the monthly time scale, the most widely observed signal over the period 1940/1999 was a significant decrease of the austral rainy season between March and May, which has not previously been well documented. This would thus warrant a further detailed climatological study based on this HSM_SIEREM database.
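
The gridding step is a standard inverse-distance-weighted interpolation. A minimal sketch, assuming planar distances and an unbounded search radius (the study's implementation details may differ):

```python
import numpy as np

def idw_grid(stn_lon, stn_lat, stn_rain, grid_lon, grid_lat, power=2.0):
    """Interpolate gauge rainfall onto a regular grid by inverse distance
    weighting. A production version would use great-circle distances."""
    glon, glat = np.meshgrid(grid_lon, grid_lat)
    out = np.empty(glon.shape)
    for idx in np.ndindex(glon.shape):
        d2 = (stn_lon - glon[idx]) ** 2 + (stn_lat - glat[idx]) ** 2
        if d2.min() == 0.0:                    # gauge sits on the node
            out[idx] = stn_rain[d2.argmin()]
        else:
            w = d2 ** (-power / 2.0)           # 1 / distance**power
            out[idx] = (w * stn_rain).sum() / w.sum()
    return out

lon = np.array([10.0, 12.0, 15.0]); lat = np.array([-5.0, 0.0, 3.0])
rain = np.array([120.0, 80.0, 60.0])
grid = idw_grid(lon, lat, rain, np.arange(9.0, 16.0), np.arange(-6.0, 4.0))
```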
10

Bradley, J. "Short Notes: Join Dependencies in Relational Databases and the Geometry of Spatial Grids." Computer Journal 29, no. 4 (April 1, 1986): 378–79. http://dx.doi.org/10.1093/comjnl/29.4.378.

Full text
11

Gonzales, J., S. Pomel, B. Clot, J. L. Gutknecht, B. Irthum, Y. Legré, and V. Breton. "Empowering Humanitarian Medical Development Using Grid Technology." Methods of Information in Medicine 44, no. 02 (2005): 186–89. http://dx.doi.org/10.1055/s-0038-1633943.

Full text
Abstract:
Summary Background: The training of local clinicians is the best way to raise the standard of medical knowledge in developing countries. This requires transferring skills, techniques and resources. Objectives: Grid technology opens new perspectives for the preparation and follow-up of medical missions in developing countries, as well as support to local medical centers in terms of teleconsulting, telediagnosis and patient follow-up. Indeed, grids make it possible to hide the complexity of handling distributed data, so that physicians are able to access patient data without needing to know where these data are stored. Methods: To meet the requirements of a development project of the French NPO Chain of Hope in China, we propose to deploy a grid-based federation of databases. First Results and Conclusions: A first protocol was established for describing the patients' pathologies and their pre- and post-surgery states through a web interface in a language-independent way. This protocol was evaluated by French and Chinese clinicians during medical missions in the fall of 2003. The first sets of medical patients recorded in the databases will be used to evaluate the grid implementation of services.
12

Jaziri, Faouzi, Eric Peyretaillade, Mohieddine Missaoui, Nicolas Parisot, Sébastien Cipière, Jérémie Denonfoux, Antoine Mahul, Pierre Peyret, and David R. C. Hill. "Large Scale Explorative Oligonucleotide Probe Selection for Thousands of Genetic Groups on a Computing Grid: Application to Phylogenetic Probe Design Using a Curated Small Subunit Ribosomal RNA Gene Database." Scientific World Journal 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/350487.

Full text
Abstract:
Phylogenetic Oligonucleotide Arrays (POAs) were recently adapted for studying huge microbial communities in a flexible and easy-to-use way. POA, coupled with the use of explorative probes to detect the unknown part, is now one of the most powerful approaches for a better understanding of microbial community functioning. However, the selection of probes remains a very difficult task. The rapid growth of environmental databases has led to an exponential increase in the data to be managed for an efficient design. Consequently, the use of high performance computing facilities is mandatory. In this paper, we present an efficient parallelization method to select known and explorative oligonucleotide probes at large scale using computing grids. We implemented software that generates and monitors thousands of jobs over the European Grid Infrastructure (EGI). We also developed a new algorithm for the construction of a high-quality curated phylogenetic database to avoid erroneous design due to bad sequence affiliation. We present here the performance and statistics of our method on real biological datasets, based on a phylogenetic prokaryotic database at the genus level, and a complete design of about 20,000 probes for 2,069 genera of prokaryotes.
13

Xiu, Wanjing, and Yuan Liao. "Development of a Software Tool for Calculating Transmission Line Parameters and Updating Related Databases." International Journal of Emerging Electric Power Systems 15, no. 6 (December 1, 2014): 513–26. http://dx.doi.org/10.1515/ijeeps-2014-0093.

Full text
Abstract:
Abstract Transmission lines are essential components of electric power grids. Diverse power system applications and simulation-based studies require transmission line parameters, including series resistance, reactance, and shunt susceptance, and accurate parameters are pivotal in ensuring the accuracy of analyses and reliable system operation. Commercial software packages for performing power system studies usually have their own databases that store the power system model, including line parameters. When there is a physical system model change, the corresponding component in the database of each software package needs to be modified. Manually updating line parameters is tedious and error-prone. This paper proposes a solution for streamlining the calculation of line parameters and the updating of their values in the respective software databases. The algorithms used for calculating the values of line parameters are described. The software developed for implementing the solution is described, and typical results are presented. The proposed solution was developed for a utility and has the potential to be put into use by other utilities.
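
The parameters such a tool computes follow from conductor geometry. A textbook-style sketch for a transposed three-phase line (the paper's software also has to handle bundled conductors, specific tower geometries, and each vendor's database format):

```python
import math

def line_parameters(gmd_m, gmr_m, cond_radius_m, r_ohm_per_km,
                    length_km, f_hz=60.0):
    """Positive-sequence R, X and B of a transposed three-phase line
    from the standard GMD/GMR formulas."""
    eps0 = 8.854e-12
    x_per_m = 2 * math.pi * f_hz * 2e-7 * math.log(gmd_m / gmr_m)   # ohm/m
    c_per_m = 2 * math.pi * eps0 / math.log(gmd_m / cond_radius_m)  # F/m
    b_per_m = 2 * math.pi * f_hz * c_per_m                          # S/m
    return {"R_ohm": r_ohm_per_km * length_km,
            "X_ohm": x_per_m * 1e3 * length_km,
            "B_S":   b_per_m * 1e3 * length_km}

print(line_parameters(gmd_m=9.0, gmr_m=0.0123, cond_radius_m=0.015,
                      r_ohm_per_km=0.05, length_km=100.0))
```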
14

Viñán Robalino, Willyam Marcelo, and Alex Ricardo Guamán Andrade. "Optimization of the implementation design of distributed energy resources in micro-grids." INGENIERÍA Y COMPETITIVIDAD 23, no. 2 (May 18, 2021): e20610454. http://dx.doi.org/10.25100/iyc.v23i2.10454.

Full text
Abstract:
This article develops a study of the energy resources available in microgrids, extracting and analyzing data from satellite databases in order to apply them to the available technologies. In addition, a demand estimation is carried out based on measurements, and finally an optimization is performed to determine the most economically profitable option, with the corresponding graphs and analysis.
15

Reddy, Ayaluri Mallikarjuna, Vakulabharanam Venkata Krishna, Lingamgunta Sumalatha, and Avuku Obulesh. "Age Classification Using Motif and Statistical Features Derived On Gradient Facial Images." Recent Advances in Computer Science and Communications 13, no. 5 (November 5, 2020): 965–76. http://dx.doi.org/10.2174/2213275912666190417151247.

Full text
Abstract:
Background: Age estimation using face images has become increasingly significant in recent years, due to a diversity of potentially useful applications. The extraction of local features for age group classification has received a great deal of attention. Objective: This paper derives a new age estimation operator called the "Gradient Dual-Complete Motif Matrix (GD-CMM)" on the 3 x 3 neighborhood of the gradient image. The GD-CMM divides the 3 x 3 neighborhood into dual grids of size 2 x 2 each, and on each 2 x 2 grid complete motif matrices are derived. Methods: Local features are commonly extracted using the Motif Co-occurrence Matrix (MCM), which is derived on a 2 x 2 grid; the main disadvantage of these motifs, or Peano Scan Motifs (PSM), is that they are static, i.e. the initial position on a 2 x 2 grid is fixed when deriving motifs, resulting in six different motifs. The advantage of 3 x 3 neighborhood approaches over 2 x 2 grids is that the 3 x 3 grid identifies the spatial relations among the pixels more precisely. Gradient images represent facial features more efficiently, and human beings are more sensitive to gradient changes than to original grey-level intensities. Results: The proposed method is compared with other existing methods on the FGNET, Google and scanned facial image databases. The experimental outcomes exhibit the superiority of the proposed method over existing methods. Conclusion: From the GD-CMM, this paper derives co-occurrence features, and machine learning classifiers are used for age group classification.
16

Gopagoni, Praveen Kumar, and Mohan Rao S K. "Distributed elephant herding optimization for grid-based privacy association rule mining." Data Technologies and Applications 54, no. 3 (May 15, 2020): 365–82. http://dx.doi.org/10.1108/dta-07-2019-0104.

Full text
Abstract:
Purpose: Association rule mining generates patterns and correlations from the database, which requires a large scanning time, and the cost of computation associated with the generation of the rules is quite high. On the other hand, the candidate rules generated using traditional association rule mining face a huge challenge in terms of time and space, and the process is lengthy. In order to tackle the issues of the existing methods and to render the privacy rules, the paper proposes grid-based privacy association rule mining. Design/methodology/approach: The primary intention of the research is to design and develop a distributed elephant herding optimization (EHO) for grid-based privacy association rule mining from the database. The proposed method of rule generation is processed in two steps: in the first step, the rules are generated using the apriori algorithm, which is an effective association rule mining algorithm. In general, the extraction of the association rules from the input database is based on confidence and support, which are here replaced with new terms: probability-based confidence and holo-entropy. Thus, in the proposed model, the extraction of the association rules is based on probability-based confidence and holo-entropy. In the second step, the generated rules are given to the grid-based privacy rule mining, which produces privacy-dependent rules based on a novel optimization algorithm and grid-based fitness. The novel optimization algorithm is developed by integrating the distributed concept into the EHO algorithm. Findings: The experimentation of the method using databases taken from the Frequent Itemset Mining Dataset Repository (the retail, chess, T10I4D100K and T40I10D100K databases) proves the effectiveness of the distributed grid-based privacy association rule mining. The proposed method outperformed the existing methods by offering a higher degree of privacy and utility; moreover, it is noted that the distributed nature of the association rule mining facilitates parallel processing and generates the privacy rules without much computational burden. The rate of hiding capacity, the rate of information preservation and the rate of false rules generated for the proposed method are found to be 0.4468, 0.4488 and 0.0654, respectively, which is better compared with the existing rule mining methods. Originality/value: Data mining is performed in a distributed manner through grids that subdivide the input data, and the rules are framed using apriori-based association mining, which is a modification of the standard apriori with holo-entropy and probability-based confidence replacing the support and confidence in the standard apriori algorithm. The mined rules do not assure privacy, and hence grid-based privacy rules are employed that utilize the adaptive elephant herding optimization (AEHO) for generating the privacy rules. The AEHO inherits the adaptive nature of the standard EHO, which renders the global optimal solution.
17

Zsargó, J., C. R. Fierro-Santillán, J. Klapp, A. Arrieta, L. Arias, J. M. Valencia, L. Di G. Sigalotti, M. Hareter, and R. E. Puebla. "Creating and using large grids of precalculated model atmospheres for a rapid analysis of stellar spectra." Astronomy & Astrophysics 643 (November 2020): A88. http://dx.doi.org/10.1051/0004-6361/202038066.

Full text
Abstract:
Aims. We present a database of 43 340 atmospheric models (∼80 000 models at the conclusion of the project) for stars with stellar masses between 9 and 120 M⊙, covering the region of the OB main-sequence and Wolf-Rayet stars in the Hertzsprung-Russell diagram. Methods. The models were calculated using the ABACUS I supercomputer and the stellar atmosphere code CMFGEN. Results. The parameter space has six dimensions: the effective temperature Teff, the luminosity L, the metallicity Z, and three stellar wind parameters: the exponent β, the terminal velocity V∞, and the volume filling factor Fcl. For each model, we also calculate synthetic spectra in the UV (900−2000 Å), optical (3500−7000 Å), and near-IR (10 000−40 000 Å) regions. To facilitate comparison with observations, the synthetic spectra can be rotationally broadened using ROTIN3, by covering v sin i velocities between 10 and 350 km s−1 with steps of 10 km s−1. Conclusions. We also present the results of the reanalysis of ϵ Ori using our grid to demonstrate the benefits of databases of precalculated models. Our analysis succeeded in reproducing the best-fit parameter ranges of the original study, although our results favor the higher end of the mass-loss range and a lower level of clumping. Our results indirectly suggest that the resonance lines in the UV range are strongly affected by the velocity-space porosity, as has been suggested by recent theoretical calculations and numerical simulations.
18

Hassler, B., G. E. Bodeker, and M. Dameris. "Technical Note: A new global database of trace gases and aerosols from multiple sources of high vertical resolution measurements." Atmospheric Chemistry and Physics 8, no. 17 (September 10, 2008): 5403–21. http://dx.doi.org/10.5194/acp-8-5403-2008.

Full text
Abstract:
Abstract. A new database of trace gases and aerosols with global coverage, derived from high vertical resolution profile measurements, has been assembled as a collection of binary data files; hereafter referred to as the "Binary DataBase of Profiles" (BDBP). Version 1.0 of the BDBP, described here, includes measurements from different satellite- (HALOE, POAM II and III, SAGE I and II) and ground-based measurement systems (ozonesondes). In addition to the primary product of ozone, secondary measurements of other trace gases, aerosol extinction, and temperature are included. All data are subjected to very strict quality control and for every measurement a percentage error on the measurement is included. To facilitate analyses, each measurement is added to 3 different instances (3 different grids) of the database where measurements are indexed by: (1) geographic latitude, longitude, altitude (in 1 km steps) and time, (2) geographic latitude, longitude, pressure (at levels ~1 km apart) and time, (3) equivalent latitude, potential temperature (8 levels from 300 K to 650 K) and time. In contrast to existing zonal mean databases, by including a wider range of measurement sources (both satellite and ozonesondes), the BDBP is sufficiently dense to permit calculation of changes in ozone by latitude, longitude and altitude. In addition, by including other trace gases such as water vapour, this database can be used for comprehensive radiative transfer calculations. By providing the original measurements rather than derived monthly means, the BDBP is applicable to a wider range of applications than databases containing only monthly mean data. Monthly mean zonal mean ozone concentrations calculated from the BDBP are compared with the database of Randel and Wu, which has been used in many earlier analyses. As opposed to that database which is generated from regression model fits, the BDBP uses the original (quality controlled) measurements with no smoothing applied in any way and as a result displays higher natural variability.
19

Hassler, B., G. E. Bodeker, and M. Dameris. "Technical Note: A new global database of trace gases and aerosols from multiple sources of high vertical resolution measurements." Atmospheric Chemistry and Physics Discussions 8, no. 2 (April 18, 2008): 7657–702. http://dx.doi.org/10.5194/acpd-8-7657-2008.

Full text
Abstract:
Abstract. A new database of trace gases and aerosols with global coverage, derived from high vertical resolution profile measurements, has been assembled as a collection of binary data files; hereafter referred to as the "Binary DataBase of Profiles" (BDBP). Version 1.0 of the BDBP, described here, includes measurements from different satellite- (HALOE, POAM II and III, SAGE I and II) and ground-based measurement systems (ozonesondes). In addition to the primary product of ozone, secondary measurements of other trace gases, aerosol extinction, and temperature are included. All data are subjected to very strict quality control and for every measurement a percentage error on the measurement is included. To facilitate analyses, each measurement is added to 3 different instances (3 different grids) of the database where measurements are indexed by: (1) geographic latitude, longitude, altitude (in 1 km steps) and time, (2) geographic latitude, longitude, pressure (at levels ~1 km apart) and time, (3) equivalent latitude, potential temperature (8 levels from 300 K to 650 K) and time. In contrast to existing zonal mean databases, by including a wider range of measurement sources (both satellite and ozonesondes), the BDBP is sufficiently dense to permit calculation of changes in ozone by latitude, longitude and altitude. In addition, by including other trace gases such as water vapour, this database can be used for comprehensive radiative transfer calculations. By providing the original measurements rather than derived monthly means, the BDBP is applicable to a wider range of applications than databases containing only monthly mean data. Monthly mean zonal mean ozone concentrations calculated from the BDBP are compared with the database of Randel and Wu, which has been used in many earlier analyses. As opposed to that database which is generated from regression model fits, the BDBP uses the original (quality controlled) measurements with no smoothing applied in any way and as a result displays higher natural variability.
20

Toledo-Orozco, Marco, Carlos Arias-Marin, Carlos Álvarez-Bel, Diego Morales-Jadan, Javier Rodríguez-García, and Eddy Bravo-Padilla. "Innovative Methodology to Identify Errors in Electric Energy Measurement Systems in Power Utilities." Energies 14, no. 4 (February 11, 2021): 958. http://dx.doi.org/10.3390/en14040958.

Full text
Abstract:
Many electric utilities currently have a low level of smart meter implementation on traditional distribution grids. These utilities commonly have a problem associated with non-technical energy losses (NTLs): unidentified energy flows consumed, but not billed, in power distribution grids. They are usually due either to electricity theft carried out by their own customers or to failures in the utilities' energy measurement systems. Non-technical energy losses lead to significant economic losses for electric utilities around the world. For instance, in Latin America and the Caribbean countries, NTLs represented around 15% of total energy generated in 2018, varying between 5 and 30% depending on the country because of the strong correlation with social, economic, political, and technical variables. Accordingly, electric utilities have a strong interest in finding new techniques and methods to mitigate this problem as much as possible. This research presents the results of determining the precision of existing data-oriented methods for detecting NTLs, through a methodology based on data analytics, machine learning, and artificial intelligence (multivariate data analysis methods and classification and grouping algorithms, i.e., k-means and neural networks). The proposed methodology was implemented using the MATLAB computational tool, demonstrating improvements in the probability of identifying suspect customers' measurement systems with errors in their records that should be revised to reduce the NTLs in the distribution system, using information from utilities' databases associated with customer information (customer information system), the distribution grid (geographic information system), and socio-economic data. The proposed methodology was tested and validated in a real situation as part of a recent Ecuadorian electricity project.
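
A minimal sketch of the clustering stage on synthetic customer features; the paper's full pipeline also uses neural networks and the utilities' CIS/GIS attributes:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = rng.random((1000, 5))   # e.g. monthly kWh stats, contract data

# Cluster customers, then flag those far from their centroid as candidate
# measurement-system errors or theft cases to be inspected.
X = StandardScaler().fit_transform(features)
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
suspects = np.flatnonzero(dist > np.percentile(dist, 95))
```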
21

Oh, Seong-Gi, and TaeYong Kim. "Facial Expression Recognition by Regional Weighting with Approximated Q-Learning." Symmetry 12, no. 2 (February 23, 2020): 319. http://dx.doi.org/10.3390/sym12020319.

Full text
Abstract:
Several facial expression recognition methods cluster facial elements according to similarity and weight them considering the importance of each element in classification. However, these methods are limited by the pre-definitions of units restricting modification of the structure during optimization. This study proposes a modified support vector machine classifier called Grid Map, which is combined with reinforcement learning to improve the classification accuracy. To optimize training, the input image size is normalized according to the cascade rules of a pre-processing detector, and the regional weights are assigned by an adaptive cell size that divides each region of the image using bounding grids. Reducing the size of the bounding grid reduces the area used for feature extraction, allowing more detailed weighted features to be extracted. Error-correcting output codes with a histogram of gradient is selected as the classification method via an experiment to determine the optimal feature and classifier selection. The proposed method is formulated into a decision process and solved via Q-learning. To classify seven emotions, the proposed method exhibits accuracies of 96.36% and 98.47% for four databases and the Extended Cohn–Kanade Dataset (CK+), respectively. Compared to the basic method exhibiting a similar accuracy, the proposed method requires 68.81% fewer features and only 66.33% of the processing time.
22

Hu, Yue, Daniel Harabor, Long Qin, and Quanjun Yin. "Regarding Goal Bounding and Jump Point Search." Journal of Artificial Intelligence Research 70 (February 10, 2021): 631–81. http://dx.doi.org/10.1613/jair.1.12255.

Full text
Abstract:
Jump Point Search (JPS) is a well known symmetry-breaking algorithm that can substantially improve performance for grid-based optimal pathfinding. When the input grid is static further speedups can be obtained by combining JPS with goal bounding techniques such as Geometric Containers (instantiated as Bounding Boxes) and Compressed Path Databases. Two such methods, JPS+BB and Two-Oracle Path PlannING (Topping), are currently among the fastest known approaches for computing shortest paths on grids. The principal drawback for these algorithms is the overhead costs: each one requires an all-pairs precomputation step, the running time and subsequent storage costs of which can be prohibitive. In this work we consider an alternative approach where we precompute and store goal bounding data only for grid cells which are also jump points. Since the number of jump points is usually much smaller than the total number of grid cells, we can save up to orders of magnitude in preprocessing time and space. Considerable precomputation savings do not necessarily mean performance degradation. For a second contribution we show how canonical orderings, partial expansion strategies and enhanced intermediate pruning can be leveraged to improve online query performance despite a reduction in preprocessed data. The combination of faster preprocessing and stronger online reasoning leads to three new and highly performant algorithms: JPS+BB+ and Two-Oracle Pathfinding Search (TOPS) based on search, and Topping+ based on path extraction. We give a theoretical analysis showing that each method is complete and optimal. We also report convincing gains in a comprehensive empirical evaluation that includes almost all current and cutting-edge algorithms for grid-based pathfinding.
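
The bounding-box test at the heart of JPS+BB is tiny; the expense lies in precomputing the boxes. A minimal sketch of geometric-container pruning (the data layout is an assumption; the paper stores goal-bounding data only at jump points):

```python
def prune_move(bbox_by_direction, direction, goal):
    """Skip expanding a move if the goal lies outside the precomputed box
    containing every node optimally reached by starting with that move."""
    xmin, ymin, xmax, ymax = bbox_by_direction[direction]
    gx, gy = goal
    return not (xmin <= gx <= xmax and ymin <= gy <= ymax)

# Optimal paths leaving this jump point "north" stay inside (0,0)-(10,5),
# so a goal at (30, 2) lets the search prune the northward move entirely.
print(prune_move({"north": (0, 0, 10, 5)}, "north", (30, 2)))  # True
```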
23

Bosisio, Alessandro, Matteo Moncecchi, Andrea Morotti, and Marco Merlo. "Machine Learning and GIS Approach for Electrical Load Assessment to Increase Distribution Networks Resilience." Energies 14, no. 14 (July 8, 2021): 4133. http://dx.doi.org/10.3390/en14144133.

Full text
Abstract:
Currently, distribution system operators (DSOs) are asked to operate distribution grids while managing the rise of distributed generators (DGs), the load growth associated with heat pumps and e-mobility, etc. Nevertheless, they are asked to minimize investments in new sensors and telecommunication links and, consequently, several nodes of the grid are still not monitored or tele-controlled. At the same time, DSOs are asked to improve the network's resilience, seeking a reduction in the frequency and impact of power outages caused by extreme weather events. The paper presents a machine learning, GIS-based approach to estimate a secondary substation's load profiles, even in those cases where monitoring sensors are not deployed. For this purpose, a large amount of data from different sources has been collected and integrated to describe secondary substation load profiles adequately. Based on real measurements of some secondary substations (medium-voltage to low-voltage interface) provided by Unareti, the DSO of Milan, and georeferenced data gathered from open-source databases, unknown secondary substation load profiles are estimated. Three types of machine learning algorithms (regression tree, boosting, and random forest), as well as geographic information system (GIS) information, such as secondary substation locations, building area, types of occupants, etc., are considered to find the most effective approach.
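
A minimal sketch of the estimation step with synthetic stand-ins for the GIS features (the feature set and profile resolution are assumptions, not the paper's schema):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_monitored = rng.random((500, 6))   # e.g. area, occupant mix, coordinates
y_profiles = rng.random((500, 24))   # daily load profile per substation

# Learn GIS features -> load profile on monitored substations, then apply
# the model to the substations that have no sensors.
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_monitored, y_profiles)               # multi-output regression
estimated = model.predict(rng.random((50, 6)))   # (50, 24) profiles
```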
24

Rodríguez, Fermín, Fernando Martín, Luis Fontán, and Ainhoa Galarza. "Very Short-Term Load Forecaster Based on a Neural Network Technique for Smart Grid Control." Energies 13, no. 19 (October 6, 2020): 5210. http://dx.doi.org/10.3390/en13195210.

Full text
Abstract:
Electrical load forecasting plays a crucial role in the proper scheduling and operation of power systems. To ensure the stability of the electrical network, it is necessary to balance energy generation and demand. Hence, different very short-term load forecast technologies are being designed to improve the efficiency of current control strategies. This paper proposes a new forecaster based on artificial intelligence, specifically on a recurrent neural network topology, trained with a Levenberg–Marquardt learning algorithm. Moreover, a sensitivity analysis was performed to determine the optimal input vector, network structure, and database length. In this case, the developed tool provides information about the energy demand for the next 15 min. The accuracy of the forecaster was validated by analysing the typical error metrics of sample days from the training and validation databases. The deviation between actual and predicted demand was lower than 0.5% in 97% of the days analysed during the validation phase. Moreover, while the root mean square error was 0.07 MW, the mean absolute error was 0.05 MW. The results suggest that the forecaster's accuracy is considered sufficient for installation in smart grids or other power systems and for predicting future energy demand at the chosen sites.
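
The accuracy figures quoted are standard error metrics; a minimal sketch of computing them for a 15-minute-ahead forecast:

```python
import numpy as np

def forecast_errors(actual_mw, predicted_mw):
    """RMSE, MAE and mean percentage deviation of a demand forecast."""
    a = np.asarray(actual_mw)
    e = a - np.asarray(predicted_mw)
    return {"RMSE_MW": float(np.sqrt(np.mean(e ** 2))),
            "MAE_MW": float(np.mean(np.abs(e))),
            "MAPE_pct": float(np.mean(np.abs(e) / a) * 100)}

print(forecast_errors([10.0, 11.0, 9.5], [10.05, 10.9, 9.6]))
```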
25

Grilli, S. T., S. Dubosq, N. Pophet, Y. Pérignon, J. T. Kirby, and F. Shi. "Numerical simulation and first-order hazard analysis of large co-seismic tsunamis generated in the Puerto Rico trench: near-field impact on the North shore of Puerto Rico and far-field impact on the US East Coast." Natural Hazards and Earth System Sciences 10, no. 10 (October 8, 2010): 2109–25. http://dx.doi.org/10.5194/nhess-10-2109-2010.

Full text
Abstract:
Abstract. We perform numerical simulations of the coastal impact of large co-seismic tsunamis, initiated in the Puerto Rico trench, both in far-field areas along the upper US East Coast (and other Caribbean islands) and, in more detail, in the near field along the Puerto Rico North Shore (PRNS). We first model a magnitude 9.1 extreme co-seismic source and then a smaller 8.7 magnitude source, which approximately correspond to 600 and 200 year return periods, respectively. In both cases, tsunami generation and propagation (both near- and far-field) are first simulated in a coarse 2′ basin-scale grid, with ETOPO2 bathymetry, using a fully nonlinear and dispersive long wave tsunami model (FUNWAVE). Coastal runup and inundation are then simulated for two selected areas, using finer coastal nested grids. Thus, a 15″ (450 m) grid is used to calculate detailed far-field impact along the US East Coast, from New Jersey to Maine, and a 3″ (90 m) grid (for the finest resolution), encompassing the entire PRNS, is used to compute detailed near-field impact (runup and inundation) along the PRNS. To perform coastal simulations in nested grids, accurate bathymetry/topography databases are constructed by combining ETOPO2 2′ data (in deep water) with USGS or NOAA 15″ or 3″ data (in shallow water). In the far field, runup caused by the extreme 9.1 source would be severe (over 10 m) for some nearby Caribbean islands, but would only reach up to 3 m along the selected section of the East Coast. A sensitivity analysis of PRNS runup to the bathymetric resolution (for a constant 3″ model grid) confirms the convergence of runup results for a topographic resolution of 24″ or better, and thus stresses the importance of using sufficiently resolved bathymetric data in order to accurately predict extreme runup values, particularly when bathymetric focusing is significant. Runup (10–22 m) and inundation are found to be very large at most locations for the extreme 9.1 source. Both simulated spatial inundation snapshots and time series indicate that the inundation would be particularly severe near and around the low-lying city of San Juan. For the 8.7 source, runup along the PRNS would be much less severe (3–6 m) but still significant, while inundation would only be significant near and around San Juan. This first-order tsunami hazard analysis stresses the importance of conducting more detailed and comprehensive studies, particularly of tsunami hazard along the PRNS, for a more complete and realistic selection of sources; such work is ongoing as part of a US-funded (NTHMP) tsunami inundation mapping effort in Puerto Rico.
26

Huang, Hai Guang. "Design and Implementation of Fishery Rescue Data Mart System." Advanced Materials Research 850-851 (December 2013): 557–60. http://dx.doi.org/10.4028/www.scientific.net/amr.850-851.557.

Full text
Abstract:
A novel data-mart-based system for the fishery rescue field was designed and implemented. The system runs an ETL process to deal with original data from various databases and data warehouses, and then reorganizes the data into the fishery rescue data mart. Next, online analytical processing (OLAP) is carried out and statistical reports are generated automatically. In particular, quick configuration schemes are designed to configure query dimensions and OLAP data sets. The configuration file is transformed into statistic interfaces automatically through a wizard-style process. The system provides various forms of reporting files, including crystal reports, flash graphical reports, and two-dimensional data grids. In addition, a wizard-style interface was designed to guide users in customizing inquiry processes, making it possible for non-technical staff to access customized reports. Characterized by quick configuration, safety and flexibility, the system has been successfully applied in a city fishery rescue department.
27

Vakulenko, Ihor, Liudmyla Saher, Oleksii Lyulyov, and Tetyana Pimonenko. "A systematic literature review of smart grids." E3S Web of Conferences 250 (2021): 08006. http://dx.doi.org/10.1051/e3sconf/202125008006.

Full text
Abstract:
The development and implementation of smart grids involve developing new energy technologies and improving existing ones, introducing information systems to manage the smart grid, and monitoring and controlling energy consumption, and are closely related to alternative energy and the decarbonization of the economy. Scientific research on smart grids differs significantly in terms of topics, because studies aim to solve problems in each of these areas. Thus, this research aims to present a bibliometric overview defining the current state of scientific production regarding the "Smart Grid." A review of 1,359 publications from the Scopus database (2008–2020) was conducted, searching the "Title, abstract, keywords" field of the Scopus database. The results were visualized using the VOSviewer program to map the material graphically. The study used co-occurrence-of-keywords and co-authorship (country) analyses. As a result, the most productive authors and journals were defined, and the most cited studies were determined. Country clusters and keyword (co-occurrence) clusters are represented. The obtained results of the analysis and the graphical presentations are relevant, and they form the basis for a better understanding of the Smart Grid concept.
28

Jordan, L. "APPLYING THIESSEN POLYGON CATCHMENT AREAS AND GRIDDED POPULATION WEIGHTS TO ESTIMATE CONFLICT-DRIVEN POPULATION CHANGES IN SOUTH SUDAN." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-4/W2 (October 19, 2017): 23–30. http://dx.doi.org/10.5194/isprs-annals-iv-4-w2-23-2017.

Full text
Abstract:
Recent violence in South Sudan produced significant levels of conflict-driven migration undermining the accuracy and utility of both national and local level population forecasts commonly used in demographic estimates, public health metrics and food security proxies. This article explores the use of Thiessen Polygons and population grids (Gridded Population of the World, WorldPop and LandScan) as weights for estimating the catchment areas for settlement locations that serve large populations of internally displaced persons (IDP), in order to estimate the county-level in- and out-migration attributable to conflict-driven displacement between 2014-2015. Acknowledging IDP totals improves internal population estimates presented by global population databases. Unlike other forecasts, which produce spatially uniform increases in population, accounting for displaced population reveals that 15 percent of counties (n = 12) increased in population over 20 percent, and 30 percent of counties (n = 24) experienced zero or declining population growth, due to internal displacement and refugee out-migration. Adopting Thiessen Polygon catchment zones for internal migration estimation can be applied to other areas with United Nations IDP settlement data, such as Yemen, Somalia, and Nigeria.
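
Assigning each population-grid cell to its nearest settlement is the raster equivalent of building Thiessen (Voronoi) catchments. A minimal sketch (projected coordinates and the data layout are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def catchment_population(settlement_xy, cell_xy, cell_pop):
    """Sum gridded population over the Thiessen catchment of each
    settlement, i.e. over the cells nearest to it."""
    _, nearest = cKDTree(settlement_xy).query(cell_xy)
    totals = np.zeros(len(settlement_xy))
    np.add.at(totals, nearest, cell_pop)
    return totals

settlements = np.array([[0.0, 0.0], [10.0, 0.0]])
cells = np.array([[1.0, 1.0], [9.0, 1.0], [4.0, 0.0]])
print(catchment_population(settlements, cells, np.array([100.0, 200.0, 50.0])))
# -> [150. 200.]
```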
29

Boitani, Luigi, Luigi Maiorano, Daniele Baisero, Alessandra Falcucci, Piero Visconti, and Carlo Rondinini. "What spatial data do we need to develop global mammal conservation strategies?" Philosophical Transactions of the Royal Society B: Biological Sciences 366, no. 1578 (September 27, 2011): 2623–32. http://dx.doi.org/10.1098/rstb.2011.0117.

Full text
Abstract:
Spatial data on species distributions are available in two main forms, point locations and distribution maps (polygon ranges and grids). The first are often temporally and spatially biased, and too discontinuous, to be useful (untransformed) in spatial analyses. A variety of modelling approaches are used to transform point locations into maps. We discuss the attributes that point location data and distribution maps must satisfy in order to be useful in conservation planning. We recommend that before point location data are used to produce and/or evaluate distribution models, the dataset should be assessed under a set of criteria, including sample size, age of data, environmental/geographical coverage, independence, accuracy, time relevance and (often forgotten) representation of areas of permanent and natural presence of the species. Distribution maps must satisfy additional attributes if used for conservation analyses and strategies, including minimizing commission and omission errors, credibility of the source/assessors and availability for public screening. We review currently available databases for mammals globally and show that they are highly variable in complying with these attributes. The heterogeneity and weakness of spatial data seriously constrain their utility to global and also sub-global scale conservation analyses.
30

Goerlich, F. J. "Una aproximación volumétrica a la desagregación espacial de la población combinando cartografía temática y datos LIDAR." Revista de Teledetección, no. 46 (June 27, 2016): 147. http://dx.doi.org/10.4995/raet.2016.4710.

Full text
Abstract:
Availability of high-resolution population distribution data, independent of the administrative units in which demographic statistics are collected, is a real necessity in many fields: risk evaluation due to earthquakes, flooding or fires, to name just a few; integration between socio-demographic and environmental or geographical information collected in different formats; policy design for the provision of public services, such as health, education or public transport; and mobility studies in urban areas or metropolitan regions. Because of this, the literature has explored various methods of downscaling population, collected at the municipality or census-tract level, into smaller areas; typically urban polygons from high-resolution topographic maps or land use/land cover databases, or grid cells, allowing the elaboration of raster population layers. A common feature of all these methods is that they do not incorporate building height. In this way, downscaling methods don't distinguish between the urban-sprawl type of settlement, where most of the houses are detached or semi-detached, and compact cities with high buildings. This paper examines error reduction in downscaling census-tract population into 1×1 km and 1 ha grids when we add the third dimension: building height from LIDAR remote sensing data. The algorithms used are simple, and based on areal weighting with or without auxiliary land use/land cover information, since our focus is not on fine-tuning algorithms, but on measuring improvements due to the missing dimension, building height. Our results indicate that the improvements are noticeable. They are comparable to the ones obtained when we move from binary dasymetric methods to more general models combining densities for different land use/land cover types. Hence, adding the third dimension to population downscaling algorithms seems worth pursuing.
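
In its simplest form, the volumetric approach shares each tract's population in proportion to built volume (footprint area times LIDAR height) rather than area alone. A minimal sketch (the array layout is an assumption, and every tract is presumed to contain some built volume):

```python
import numpy as np

def volumetric_downscale(tract_pop, cell_area, cell_height, cell_tract):
    """Disaggregate tract populations to grid cells by built volume.

    tract_pop:   (T,) population of each census tract.
    cell_area:   (C,) built footprint area per cell.
    cell_height: (C,) mean building height per cell (from LIDAR).
    cell_tract:  (C,) index of the tract containing each cell.
    """
    volume = cell_area * cell_height
    tract_volume = np.bincount(cell_tract, weights=volume,
                               minlength=len(tract_pop))
    return tract_pop[cell_tract] * volume / tract_volume[cell_tract]

pop = volumetric_downscale(np.array([1000.0, 500.0]),
                           np.array([100.0, 50.0, 200.0]),   # areas
                           np.array([3.0, 30.0, 6.0]),       # heights
                           np.array([0, 0, 1]))
print(pop)   # the tall building in tract 0 attracts 5/6 of its population
```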
31

Pita-Díaz, Oscar, and David Ortega-Gaucin. "Analysis of Anomalies and Trends of Climate Change Indices in Zacatecas, Mexico." Climate 8, no. 4 (April 11, 2020): 55. http://dx.doi.org/10.3390/cli8040055.

Full text
Abstract:
Sufficient evidence is currently available to demonstrate the reality of the warming of our planet’s climate system. Global warming has different effects on climate at the regional and local levels. The detection of changes in extreme events using instrumental data provides further evidence of such warming and allows for the characterization of its local manifestations. The present study analyzes changes in temperature and precipitation extremes in the Mexican state of Zacatecas using climate change indices developed by the Expert Team on Climate Change Detection and Indices (ETCCDI). We studied a 40-year period (1976–2015) using annual and seasonal time series. Maximum and minimum temperature data were used, as well as precipitation statistics from the Mexican climatology database (CLICOM) provided by the Mexican Meteorological Service. Weather stations with at least 80% of data availability for the selected study period were selected; these databases were subjected to quality control, homogenization, and data filling using Climatol, which runs in the R programming language. These homogenized series were used to obtain daily grids of the three variables at a resolution of 1.3 km. Results reveal important changes in temperature-related indices, such as the increase in maximum temperature and the decrease in minimum temperature. Irregular variability was observed in the case of precipitation, which could be associated with low-frequency oscillations such as the Pacific Decadal Oscillation and the El Niño–Southern Oscillation. The possible impact of these changes in temperature and the increased irregularity of precipitation could have a negative impact on the agricultural sector, especially given that the state of Zacatecas is the largest national bean producer. The most important problems in the short term will be related to the difficulty of adapting to these rapid changes and the new climate scenario, which will pose new challenges in the future.
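
The ETCCDI indices are simple functionals of the daily series. As an example, the annual maximum of daily maximum temperature (the TXx index) and its linear trend can be computed as follows (a sketch, not the study's Climatol-based pipeline):

```python
import numpy as np

def txx_trend(years, daily_tmax_by_year):
    """TXx per year and its linear trend in degrees C per decade."""
    txx = np.array([np.nanmax(t) for t in daily_tmax_by_year])
    slope, _ = np.polyfit(years, txx, 1)
    return txx, slope * 10.0

rng = np.random.default_rng(1)
data = [30 + 0.03 * i + rng.standard_normal(365) for i in range(40)]
_, trend = txx_trend(np.arange(1976, 2016), data)
print(round(trend, 2))   # roughly 0.3 C per decade for this synthetic series
```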
APA, Harvard, Vancouver, ISO, and other styles
32

VANIACHINE, ALEXANDRE, DAVID MALON, and MATTHEW VRANICAR. "ADVANCED TECHNOLOGIES FOR DISTRIBUTED DATABASE SERVICES HYPERINFRASTRUCTURE." International Journal of Modern Physics A 20, no. 16 (June 30, 2005): 3877–79. http://dx.doi.org/10.1142/s0217751x05027862.

Full text
Abstract:
HEP collaborations are deploying grid technologies to address petabyte-scale data processing challenges. In addition to file-based event data, HEP data processing requires access to terabytes of non-event data (detector conditions, calibrations, etc.) stored in relational databases. Existing database access control technologies for grid computing, limited to encrypted message transfers, are inadequate for delivering non-event data in these amounts. To overcome these database access limitations one must go beyond the existing grid infrastructure. A proposed hyperinfrastructure of distributed database services implements efficient secure data access methods. We introduce several technologies laying the foundation of this new hyperinfrastructure. We present efficient secure data transfer methods and secure grid query engine technologies for federating heterogeneous databases. Lessons learned in the production environment of the ATLAS Data Challenges are presented.
APA, Harvard, Vancouver, ISO, and other styles
33

Salami, Mesbaholdin, Farzad Movahedi Sobhani, and Mohammad Ghazizadeh. "Short-Term Forecasting of Electricity Supply and Demand by Using the Wavelet-PSO-NNs-SO Technique for Searching in Big Data of Iran’s Electricity Market." Data 3, no. 4 (October 23, 2018): 43. http://dx.doi.org/10.3390/data3040043.

Full text
Abstract:
The databases of Iran’s electricity market store large volumes of data. Retail buyers and retailers will operate in Iran’s electricity market in the foreseeable future, when smart grids are implemented thoroughly across Iran. As a result, the electricity market will hold far larger volumes of data in the future than ever before. If methods are devised to search such large stores of data quickly, it will be possible to improve the forecasting accuracy of important variables in Iran’s electricity market. In this paper, available methods were employed to develop a new Wavelet-Neural Networks-Particle Swarm Optimization-Simulation-Optimization (WT-NNPSO-SO) technique for searching the big data stored in the electricity market and improving the accuracy of short-term forecasting of electricity supply and demand. The data exploration approach was based on simulation-optimization algorithms and was combined with the Wavelet-Neural Networks-Particle Swarm Optimization (Wavelet-NNPSO) method to improve forecasting accuracy under the assumption that the Length of Training Data (LOTD) increases. In comparison with previous techniques, the runtime of the proposed technique was improved for larger data sizes owing to the use of metaheuristic algorithms. The findings are presented in the Results section.
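The wavelet preprocessing stage that hybrid WT-NN-PSO forecasters typically rely on can be sketched with the PyWavelets library as below; the neural network and PSO stages of the paper's technique are omitted, and the wavelet choice and decomposition level are our assumptions:

```python
import numpy as np
import pywt

# A synthetic hourly load series standing in for market data.
load = np.sin(np.linspace(0.0, 20.0, 512)) + 0.1 * np.random.randn(512)

# Multilevel discrete wavelet transform: one approximation array (cA3)
# and three detail coefficient arrays (cD3, cD2, cD1).
coeffs = pywt.wavedec(load, "db4", level=3)

# Each sub-series would be forecast separately (e.g. by a PSO-trained
# neural network) and the forecasts recombined with the inverse DWT.
reconstructed = pywt.waverec(coeffs, "db4")
print(np.allclose(load, reconstructed[: len(load)]))  # True
```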
APA, Harvard, Vancouver, ISO, and other styles
34

Kukla, Tamas, Tamas Kiss, Peter Kacsuk, and Gabor Terstyanszky. "Integrating Open Grid Services Architecture Data Access and Integration with computational Grid workflows." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 367, no. 1897 (June 28, 2009): 2521–32. http://dx.doi.org/10.1098/rsta.2009.0040.

Full text
Abstract:
Although many scientific applications rely on data stored in databases, most workflow management systems are not capable of establishing database connections during workflow execution. For this reason, e-Scientists have to use different tools before workflow submission to access their datasets and gather the required data on which they want to carry out computational experiments. Open Grid Services Architecture Data Access and Integration (OGSA-DAI) is a good candidate to use as middleware providing access to several structured and semi-structured database products through Web/Grid services. The integration technique and its reference implementation described in this paper enable e-Scientists to reach databases via OGSA-DAI within their scientific workflows at run-time and give a general solution that can be adopted by any workflow management system.
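The pattern the paper enables, a workflow node that opens a database connection at run-time instead of staging data before submission, can be sketched as follows. This is not OGSA-DAI code: sqlite3 stands in for the OGSA-DAI Web/Grid service call, and the table and query are invented for illustration.

```python
import sqlite3

# A tiny in-memory dataset standing in for a remote database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (id INTEGER, value REAL)")
conn.executemany("INSERT INTO measurements VALUES (?, ?)",
                 [(1, 0.2), (2, 3.4), (3, 7.9)])

def workflow_task(connection, threshold):
    """A workflow node that queries its input dataset at run-time."""
    return connection.execute(
        "SELECT id, value FROM measurements WHERE value > ?",
        (threshold,)).fetchall()

# The result is handed to the next workflow node; with OGSA-DAI, the
# connection would be a service endpoint and the SQL would be wrapped
# in a data access request document.
print(workflow_task(conn, 1.0))  # [(2, 3.4), (3, 7.9)]
```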
APA, Harvard, Vancouver, ISO, and other styles
35

Cao, Jian, and Cong Yan. "A Database Grid Service Based on CSGPA." Applied Mechanics and Materials 543-547 (March 2014): 3419–22. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.3419.

Full text
Abstract:
A database grid service provides users with a unified interface for accessing distributed, heterogeneous database resources. To overcome the weak collaborative-service ability of existing grid portals, a new grid portal architecture based on CSGPA (Collaborative Services Grid Portal Architecture) is proposed. It integrates databases into the grid environment as grid services. In comparison with the current mainstream grid portal architectures, the results show that CSGPA has clear advantages in efficiency, deployment cost, scalability, and reusability.
APA, Harvard, Vancouver, ISO, and other styles
36

Grythe, Henrik, Susana Lopez-Aparicio, Matthias Vogt, Dam Vo Thanh, Claudia Hak, Anne Karine Halse, Paul Hamer, and Gabriela Sousa Santos. "The MetVed model: development and evaluation of emissions from residential wood combustion at high spatio-temporal resolution in Norway." Atmospheric Chemistry and Physics 19, no. 15 (August 13, 2019): 10217–37. http://dx.doi.org/10.5194/acp-19-10217-2019.

Full text
Abstract:
Abstract. We present here emissions estimated from a newly developed emission model for residential wood combustion (RWC) at high spatial and temporal resolution, which we name the MetVed model. The model estimates hourly emissions on a 250 m grid for several compounds, including particulate matter (PM), black carbon (BC) and polycyclic aromatic hydrocarbons (PAHs), in Norway over a 12-year period. The model uses novel input data and calculation methods that combine databases built with an unprecedented level of detail and near-national coverage. The model establishes wood-burning potential at the grid cell based on the dependencies between variables that influence emissions: i.e. outdoor temperature; the number, type and size of dwellings; the type of available heating technologies; and the distribution of wood-based heating installations and their associated emission factors. RWC activity with a 1 h temporal profile was produced by combining heating degree days with the hourly and weekday activity profiles reported by wood consumers in official statistics. This approach results in an improved characterisation of the spatio-temporal distribution of wood use, and subsequently of emissions, required for urban air quality assessments. Whereas most variables are calculated bottom-up on the 250 m spatial grid, the MetVed model is set up to use official wood consumption at the county level and then distribute consumption to individual grid cells in proportion to the physical traits of the residences within them. MetVed combines consumption with official emission factors, which also makes the emissions scalable upward from the 250 m grid to the national level. The MetVed spatial distribution was compared at the urban scale to other existing emission estimates at the same scale. The annual urban emissions, developed according to different spatial proxies, were found to differ by up to an order of magnitude. The MetVed total annual PM2.5 emissions in the urban domains compare well to emissions adjusted based on concentration measurements. In addition, hourly PM2.5 concentrations estimated by an Eulerian dispersion model using MetVed emissions were compared to measurements at air quality stations. Both the hourly daily profiles and the seasonality of PM2.5 show a slight overestimation of PM2.5 levels. However, a comparison with black carbon from biomass burning and benzo(a)pyrene measurements indicates higher wintertime emissions than those obtained by MetVed. The accuracy of urban emissions from RWC relies on the accuracy of the wood consumption (activity data), the emission factors and the spatio-temporal distribution. While knowledge gaps regarding emissions remain, MetVed represents a vast improvement in the spatial and temporal distribution of RWC.
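The county-to-grid allocation step can be illustrated with a minimal sketch (our own simplification, not MetVed code): official county consumption is split over cells in proportion to a per-cell wood-burning potential derived from dwelling traits.

```python
import numpy as np

def allocate_consumption(county_total, cell_potential):
    """Distribute a county's wood consumption over its grid cells
    in proportion to each cell's wood-burning potential."""
    weights = cell_potential / cell_potential.sum()
    return county_total * weights

# Hypothetical potentials for four 250 m cells in one county.
potential = np.array([0.0, 2.5, 7.0, 0.5])
cell_wood = allocate_consumption(1000.0, potential)  # tonnes of wood
print(cell_wood, cell_wood.sum())  # cell values preserve the county total
```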
APA, Harvard, Vancouver, ISO, and other styles
37

Swarna Bharathi D and Boopathy Raja A. "In Silico Studies On Colon Cancer Against Hexadecane, Hexadecanoic Acid Methyl Ester And Quinoline, 1,2-Dihydro-2,2,4-Trimethyl Compounds From Brown Seaweed." International Journal of Research in Pharmaceutical Sciences 11, no. 2 (April 16, 2020): 1927–35. http://dx.doi.org/10.26452/ijrps.v11i2.2110.

Full text
Abstract:
The main objective of this study was to evaluate potential drugs for treating colon cancer, with the requirement that the compounds be extracted from a natural source, especially seaweed. The above-mentioned compounds were taken from GC-MS studies of Dictyota bartayresiana. Many compounds were present in the GC-MS analysis, but hexadecane, hexadecanoic acid methyl ester and quinoline, 1,2-dihydro-2,2,4-trimethyl showed the best activities compared to the others, as proven in previous research. The NIST and SDBS databases helped to establish each compound's structure, name, fragments, molecular weight, chemical formula, and hydrogen bond donor and acceptor counts. The proteins were obtained from the PDB (Research Collaboratory for Structural Bioinformatics Protein Data Bank), a repository for 3D structural data of large biological macromolecules. AutoDock tools were utilized to generate grids, calculate dock scores and evaluate the conformers of activators bound in the active sites of the target proteins. The compound hexadecanoic acid methyl ester (C17H34O2) showed a binding affinity of −6.60 kcal/mol; similar results were obtained for quinoline, 1,2-dihydro-2,2,4-trimethyl- (C12H15N), and hexadecane (C16H34) showed a binding affinity of −6.20 kcal/mol, greater than that of the standard drug 5-fluorouracil (C4H3FN2O2). The present study thus provides technical support for the anticancer property of Dictyota bartayresiana. In conclusion, these compounds might be an alternative to the synthetic anticancer drugs available in the market.
APA, Harvard, Vancouver, ISO, and other styles
38

Mothi Kumar, K. E., S. Singh, P. Attri, R. Kumar, A. Kumar, Sarika, R. S. Hooda, et al. "GIS based Cadastral level Forest Information System using World View-II data in Bir Hisar (Haryana)." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-8 (November 28, 2014): 605–12. http://dx.doi.org/10.5194/isprsarchives-xl-8-605-2014.

Full text
Abstract:
Identification and demarcation of forest lands on the ground remains a major challenge in forest administration and management. Cadastral forest mapping deals with forest-land boundary delineation and the associated characterization (forest/non-forest). The present study is an application of high-resolution World View-II data for digitization of Protected Forest boundaries at cadastral level with integration of Records of Right (RoR) data. Cadastral vector data were generated by digitization of spatial data using scanned mussavies in the ArcGIS environment. Ortho-images were created from World View-II digital stereo data in the Universal Transverse Mercator coordinate system with the WGS 84 datum. Cadastral vector data of Bir Hisar (Hisar district, Haryana) and adjacent villages were spatially adjusted over the ortho-image using ArcGIS software. Edge matching of village boundaries was done with respect to the khasra boundaries of individual villages. The notified forest grids were identified on the ortho-image and grid vector data were extracted from the georeferenced cadastral data. Cadastral forest boundary vectors were digitized from the ortho-images. The accuracy of the cadastral data was checked by comparing randomly selected geo-coordinate points, tie lines and boundary measurements of randomly selected parcels generated from the image dataset with actual field measurements. Area comparison was done between the cadastral map area, the image map area and the RoR area. The area covered under Protected Forest was compared with the RoR data, and parcels within 1% of the RoR area were accepted. The methodology presented in this paper is useful for updating cadastral forest maps. The produced GIS databases and large-scale forest maps may serve as a data foundation for a land register of forests. The study introduces the use of very-high-resolution satellite data to develop a method for cadastral surveying through on-screen digitization in less time than the old-fashioned method of surveying cadastral parcel boundaries.
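The 1% acceptance criterion for parcel areas reduces to a one-line check; the function name and sample areas below are illustrative only.

```python
def accept_parcel(digitized_area_ha, ror_area_ha, tol_pct=1.0):
    """Accept a digitized parcel if its area deviates from the
    Records of Right (RoR) area by less than tol_pct percent."""
    deviation = 100.0 * abs(digitized_area_ha - ror_area_ha) / ror_area_ha
    return deviation < tol_pct

print(accept_parcel(102.3, 101.9))  # True: within 1% of the RoR area
print(accept_parcel(105.0, 101.9))  # False: deviation exceeds 1%
```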
APA, Harvard, Vancouver, ISO, and other styles
39

Ma, Chao-Tsung. "System Planning of Grid-Connected Electric Vehicle Charging Stations and Key Technologies: A Review." Energies 12, no. 21 (November 4, 2019): 4201. http://dx.doi.org/10.3390/en12214201.

Full text
Abstract:
The optimal planning of electric vehicle (EV) charging stations (ECSs) with advanced control algorithms is very important for accelerating the development of EVs, which are a promising solution for reducing the carbon emissions of conventional internal combustion engine vehicles (ICEVs). The large and fluctuating load currents of ECSs can negatively affect both EV-related power converters and power distribution systems if the energy flow is not regulated properly. Recent review papers related to EVs in the open literature have mainly focused on the design of power converter-based chargers and power interfaces, analyses of power quality (PQ) issues, the development of wireless charging techniques, etc. There is currently no review paper that focuses on the key technologies in various system configurations, optimal energy management, and advanced control issues in practical applications. To fill this gap and provide timely research directions, this paper reviews 143 previously published papers related to the aforementioned topics in the recent literature, including 17 EV-related review papers found in the Institute of Electrical and Electronics Engineers (IEEE)/Institution of Engineering and Technology (IET) (IEEE/IET) Electronic Library (IEL) and ScienceDirect OnSite (SDOS) databases. In this paper, existing system configurations, related design methods, algorithms and key technologies for ECSs are systematically reviewed. Based on the discussions given in the reviewed papers, the most popular ECS configuration is a hybrid system design that integrates renewable energy (RE)-based power generation (REBPG), various energy storage systems (ESSs), and utility grids. It is noteworthy that the addition of an ESS with properly designed control algorithms can simultaneously buffer the fast, fluctuating power demand during charging, smooth the intermittent power generation of the REBPG, and increase the overall efficiency and operating flexibility of ECSs. In addition, verifying the flexibility and possible profits that portable ESSs can provide in ECS networks is a potential research theme in the ECS field, as the potential applications of portable ESSs in grid-tied ECSs are numerous and could cover a full technical spectrum.
APA, Harvard, Vancouver, ISO, and other styles
40

Matvieieva, Y., I. Myroshnychenko, S. Kolosok, and R. Kotyuk. "GEOSPATIAL, FINANCIAL, HUMAN, AND TEMPORAL FACTORS IN THE STUDY OF THE DEVELOPMENT OF RENEWABLE ENERGY AND SMART GRIDS." Vìsnik Sumsʹkogo deržavnogo unìversitetu, no. 3 (2020): 84–96. http://dx.doi.org/10.21272/1817-9215.2020.3-9.

Full text
Abstract:
Balanced development of smart grids is becoming an increasingly important issue for the successful operation of the energy sector. This article provides a bibliographic review of publications studying the deployment parameters of renewable energy and smart grids. A sample of works from 2009-2020 was selected for analysis from the Scopus® database, which contains bibliographic information about scientific publications in peer-reviewed journals, books, and conference proceedings. Using VOSviewer (version 1.6.15), the authors identified three clusters of research areas concerning the impact of geospatial parameters on smart grid development. The first cluster consists of the financial, human, and temporal components of the geospatial factor in smart grid deployment; the largest number of links in this cluster was found for the term "costs" (a total of 29 links with an average impact of 9). The second cluster comprises concepts related to geospatial information systems (GIS), digital storage, information systems, and the use of cartographic information; research on renewable energy also belongs to this cluster. The third cluster covers the concepts of smart grids by technical type and in the context of optimization, and it contains the ideas with the strongest link power. The analysis of the Scopus® database made it possible to determine the level and dynamics of scientific interest in the geospatial factors of smart grid development over the past 10 years. Research in this field is carried out in many countries, but the analysis of the impact of geospatial parameters on smart grid development is most active in the USA, Canada, and China. Based on the Scopus® database, the article also identifies the institutions and organizations that fund studies of geospatial factors and smart grids and that have made a significant contribution to the development of this topic.
APA, Harvard, Vancouver, ISO, and other styles
41

Pavlovic, Tomislav, Dragana Milosavljevic, and Danica Pirsl. "Simulation of photovoltaic systems electricity generation using homer software in specific locations in Serbia." Thermal Science 17, no. 2 (2013): 333–47. http://dx.doi.org/10.2298/tsci120727004p.

Full text
Abstract:
This paper gives basic information on the HOMER software for simulating PV system electricity generation and on the NASA Surface Meteorology and Solar Energy, RETScreen, PVGIS and HMIRS (Hydrometeorological Institute of the Republic of Serbia) solar databases. A comparison is given of the monthly average daily solar radiation per square meter received by a horizontal surface, taken from the NASA, RETScreen, PVGIS and HMIRS solar databases for three locations in Serbia (Belgrade, Negotin and Zlatibor). It was found that the annual average values of daily solar radiation taken from the RETScreen solar database are the closest to those taken from the HMIRS solar database for Belgrade, Negotin and Zlatibor. Monthly and annual electricity production of a fixed on-grid 1 kW PV system with optimally inclined, south-oriented solar modules in Belgrade, Negotin and Zlatibor was calculated with HOMER software simulations based on the daily solar radiation data taken from the NASA, RETScreen, PVGIS and HMIRS databases. The relative deviations of the electricity production obtained with the NASA, RETScreen and PVGIS data from that obtained with the HMIRS data are given for Belgrade, Negotin and Zlatibor.
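The relative-deviation comparison reported in the paper can be sketched as follows; the production figures are placeholders, not the paper's results.

```python
def relative_deviation(value, reference):
    """Relative deviation of a simulated value from the reference, in %."""
    return 100.0 * (value - reference) / reference

# Hypothetical annual production (kWh) of a 1 kW PV system simulated
# with HOMER from different solar databases; HMIRS is the reference.
production_kwh = {"NASA": 1510.0, "RETScreen": 1465.0, "PVGIS": 1440.0}
hmirs_kwh = 1480.0

for database, kwh in production_kwh.items():
    print(f"{database}: {relative_deviation(kwh, hmirs_kwh):+.1f}% vs HMIRS")
```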
APA, Harvard, Vancouver, ISO, and other styles
42

Zheng, B., H. Huo, Q. Zhang, Z. L. Yao, X. T. Wang, X. F. Yang, H. Liu, and K. B. He. "A new vehicle emission inventory for China with high spatial and temporal resolution." Atmospheric Chemistry and Physics Discussions 13, no. 12 (December 9, 2013): 32005–52. http://dx.doi.org/10.5194/acpd-13-32005-2013.

Full text
Abstract:
Abstract. This study is the first in a series of papers that aim to develop high-resolution emission databases for different anthropogenic sources in China. Here we focus on on-road transportation. Because of the increasing impact of on-road transportation on regional air quality, developing an accurate and high-resolution vehicle emission inventory is important for both the research community and air quality management. This work proposes a new inventory methodology to improve the spatial and temporal accuracy and resolution of vehicle emissions in China. We calculate, for the first time, the monthly vehicle emissions (CO, NMHC, NOx, and PM2.5) for 2008 in 2364 counties (an administrative unit one level lower than city) by developing a set of approaches to estimate vehicle stock and monthly emission factors at county-level, and technology distribution at provincial level. We then introduce allocation weights for the vehicle kilometers traveled to assign the county-level emissions onto 0.05° × 0.05° grids based on the China Digital Road-network Map (CDRM). The new methodology overcomes the common shortcomings of previous inventory methods, including neglecting the geographical differences between key parameters and using surrogates that are weakly related to vehicle activities to allocate vehicle emissions. The new method has great advantages over previous methods in depicting the spatial distribution characteristics of vehicle activities and emissions. This work provides a better understanding of the spatial representation of vehicle emissions in China and can benefit both air quality modeling and management with improved spatial accuracy.
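The allocation step can be illustrated with a toy sketch (not the authors' code): county emission totals are assigned to grid cells with weights proportional to a vehicle-kilometre proxy, here simply road length per cell, whereas the paper derives its weights from the China Digital Road-network Map (CDRM).

```python
import numpy as np

county_co = 320.0  # tonnes of CO emitted in one county per month
road_km = np.array([12.0, 45.0, 3.0, 0.0, 20.0])  # road length per grid cell
weights = road_km / road_km.sum()                 # VKT-proxy allocation weights

gridded_co = county_co * weights
print(gridded_co, gridded_co.sum())  # gridded values preserve the county total
```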
APA, Harvard, Vancouver, ISO, and other styles
43

Zheng, B., H. Huo, Q. Zhang, Z. L. Yao, X. T. Wang, X. F. Yang, H. Liu, and K. B. He. "High-resolution mapping of vehicle emissions in China in 2008." Atmospheric Chemistry and Physics 14, no. 18 (September 17, 2014): 9787–805. http://dx.doi.org/10.5194/acp-14-9787-2014.

Full text
Abstract:
Abstract. This study is the first in a series of papers that aim to develop high-resolution emission databases for different anthropogenic sources in China. Here we focus on on-road transportation. Because of the increasing impact of on-road transportation on regional air quality, developing an accurate and high-resolution vehicle emission inventory is important for both the research community and air quality management. This work proposes a new inventory methodology to improve the spatial and temporal accuracy and resolution of vehicle emissions in China. We calculate, for the first time, the monthly vehicle emissions for 2008 in 2364 counties (an administrative unit one level lower than city) by developing a set of approaches to estimate vehicle stock and monthly emission factors at county-level, and technology distribution at provincial level. We then introduce allocation weights for the vehicle kilometers traveled to assign the county-level emissions onto 0.05° × 0.05° grids based on the China Digital Road-network Map (CDRM). The new methodology overcomes the common shortcomings of previous inventory methods, including neglecting the geographical differences between key parameters and using surrogates that are weakly related to vehicle activities to allocate vehicle emissions. The new method has great advantages over previous methods in depicting the spatial distribution characteristics of vehicle activities and emissions. This work provides a better understanding of the spatial representation of vehicle emissions in China and can benefit both air quality modeling and management with improved spatial accuracy.
APA, Harvard, Vancouver, ISO, and other styles
44

SUBRAMANYAM, R. B. V., and A. GOSWAMI. "A FUZZY DATA MINING ALGORITHM FOR INCREMENTAL MINING OF QUANTITATIVE SEQUENTIAL PATTERNS." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 13, no. 06 (December 2005): 633–52. http://dx.doi.org/10.1142/s0218488505003722.

Full text
Abstract:
In real-world applications, databases are constantly appended with large numbers of transactions, and hence maintaining the latest sequential patterns valid on the updated database is crucial. Existing data mining algorithms can incrementally mine sequential patterns from databases with binary values, while temporal transactions with quantitative values are common in real-world applications. In addition, several methods have been proposed for representing uncertain data in a database. In this paper, a fuzzy data mining algorithm for incremental mining of sequential patterns from quantitative databases is proposed. The proposed algorithm, called IQSP, uses the fuzzy grid notion to generate fuzzy sequential patterns valid on the updated database, which contains the transactions of both the original database and the incremental database. It uses the information about sequential patterns already mined from the original database and avoids a start-from-scratch process. It also minimizes the number of candidates to check, as well as the number of scans of the original database, by identifying the potential sequences in the incremental database.
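The fuzzy-grid idea underlying the algorithm can be sketched as follows: a quantitative attribute value is mapped to membership degrees in overlapping regions, which the miner then treats as fuzzy items. The triangular membership shapes below are our assumptions, not the paper's exact definitions.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b over support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(quantity):
    """Map a quantitative value onto three overlapping fuzzy grids."""
    return {"low": triangular(quantity, -1.0, 0.0, 5.0),
            "medium": triangular(quantity, 2.0, 5.0, 8.0),
            "high": triangular(quantity, 5.0, 10.0, 11.0)}

print(fuzzify(6))  # partly "medium" (~0.67), partly "high" (0.2)
```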
APA, Harvard, Vancouver, ISO, and other styles
45

Yu, Xiaopeng, Changqing Xu, Dan Lu, Zeyu Zhu, Zhipeng Zhou, Nan Ye, and Chuanmin Mi. "Design and Application of a Case Analysis System for Handling Power Grid Operational Accidents Based on Case-Based Reasoning." Information 11, no. 2 (February 8, 2020): 91. http://dx.doi.org/10.3390/info11020091.

Full text
Abstract:
In recent years, power grid accidents have occurred frequently, and higher requirements have been placed on safe grid operation. In safety management worldwide, an effective practice is to use a unified standard for structuring an accident case database and, based on that database, to conduct quantitative analysis to cope with accident risks; however, this is not yet the case in power safety management. Case-based reasoning (CBR) solves new problems based on the solutions to similar past problems: it matches a current problem with historical cases and solutions in a database in order to obtain similar case solutions or inspiration, and, where necessary, past solutions are modified to better fit the current problem. Based on the CBR method, this paper proposes how to construct a case database of power grid operational accidents, provide data support for the management of power grid risks, and provide knowledge services for accurately grasping the dynamics of grid accidents and making quick decisions for rapid emergency response. First, it designs an operational accident case database considering three aspects grounded in safety management theory: case features, power grid features and accident features. Secondly, regarding the use of the case database, it proposes a two-level search strategy, together with similarity calculation methods for the different feature attributes of a case. Finally, it verifies the model with a demonstration on four typical grid accidents. The proposed grid database and CBR strategy could help China’s power grids practice intelligent analysis of grid operational accidents and improve the digitalization of safety management.
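The case-matching step of CBR can be illustrated with a weighted per-feature similarity; the features, weights and similarity functions below are illustrative assumptions, not the paper's two-level strategy.

```python
def feature_similarity(a, b):
    """Exact match for categorical features; scaled distance otherwise."""
    if isinstance(a, str):
        return 1.0 if a == b else 0.0
    return 1.0 - min(abs(a - b) / max(abs(a), abs(b), 1e-9), 1.0)

def case_similarity(new_case, stored_case, weights):
    """Weighted sum of per-feature similarities between two cases."""
    return sum(w * feature_similarity(new_case[k], stored_case[k])
               for k, w in weights.items())

weights = {"voltage_kv": 0.3, "fault_type": 0.5, "load_mw": 0.2}
new = {"voltage_kv": 220, "fault_type": "line trip", "load_mw": 850}
old = {"voltage_kv": 110, "fault_type": "line trip", "load_mw": 900}
print(case_similarity(new, old, weights))  # rank stored cases by this score
```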
APA, Harvard, Vancouver, ISO, and other styles
46

Hutchison, Keith, and Barbara Iisager. "Creating Truth Data to Quantify the Accuracy of Cloud Forecasts from Numerical Weather Prediction and Climate Models." Atmosphere 10, no. 4 (April 3, 2019): 177. http://dx.doi.org/10.3390/atmos10040177.

Full text
Abstract:
Clouds are critical in mechanisms that impact climate sensitivity studies, air quality and solar energy forecasts, and a host of aerodrome flight and safety operations. However, cloud forecast accuracies are seldom described in performance statistics provided with most numerical weather prediction (NWP) and climate models. A possible explanation for this apparent omission involves the difficulty in developing cloud ground truth databases for the verification of large-scale numerical simulations. Therefore, the process of developing highly accurate cloud cover fraction truth data from manually generated cloud/no-cloud analyses of multispectral satellite imagery is the focus of this article. The procedures exploit the phenomenology to maximize cloud signatures in a variety of remotely sensed satellite spectral bands in order to create accurate binary cloud/no-cloud analyses. These manual analyses become cloud cover fraction truth after being mapped to the grids of the target datasets. The process is demonstrated by examining all clouds in a NAM dataset along with a 24 h WRF cloud forecast field generated from them. Quantitative comparisons with the cloud truth data for the case study show that clouds in the NAM data are under-specified while the WRF model greatly over-predicts them. It is concluded that highly accurate cloud cover truth data are valuable for assessing cloud model input and output datasets and their creation requires the collection of satellite imagery in a minimum set of spectral bands. It is advocated that these remote sensing requirements be considered for inclusion into the designs of future environmental satellite systems.
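Mapping a fine-resolution binary cloud/no-cloud analysis onto a coarser model grid reduces to a block mean over each coarse cell; the mask, block size and grid shape below are illustrative.

```python
import numpy as np

# A synthetic 400x400 binary cloud mask (1 = cloud, 0 = clear).
mask = (np.random.default_rng(1).random((400, 400)) > 0.6).astype(float)

block = 100  # fine pixels per coarse-cell edge
h, w = mask.shape
fraction = mask.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
print(fraction)  # 4x4 grid of cloud cover fractions in [0, 1]
```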
APA, Harvard, Vancouver, ISO, and other styles
47

Cao, Jian, Yan Bin Li, and Cong Yan. "A Database Grid Service with a Novel Mutation Operator." Advanced Materials Research 989-994 (July 2014): 4869–72. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.4869.

Full text
Abstract:
A database grid service provides users with a unified interface for accessing distributed, heterogeneous database resources. To overcome the weak collaborative-service ability of existing grid portals, a new grid portal architecture based on CSGPA (Collaborative Services Grid Portal Architecture) is proposed. This paper aims to enhance the performance of particle swarm optimization (PSO) in complex optimization problems and proposes an improved PSO variant that incorporates a novel mutation operator. Experimental studies on several well-known benchmark problems show that the approach achieves promising results.
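As a toy sketch of the idea named in the abstract (standard PSO updates plus a mutation operator that randomly perturbs particles to escape local optima), the code below optimizes the sphere benchmark; all parameters are illustrative, and the paper's exact operator is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):  # benchmark objective: minimum 0 at the origin
    return np.sum(x ** 2, axis=1)

n, dim, w, c1, c2, p_mut = 30, 5, 0.7, 1.5, 1.5, 0.1
x = rng.uniform(-5.0, 5.0, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), sphere(x)

for _ in range(200):
    gbest = pbest[pbest_f.argmin()]
    v = (w * v + c1 * rng.random((n, dim)) * (pbest - x)
               + c2 * rng.random((n, dim)) * (gbest - x))
    x = x + v
    # Mutation operator: each coordinate is perturbed with probability p_mut.
    mutate = rng.random((n, dim)) < p_mut
    x[mutate] += rng.normal(0.0, 0.5, mutate.sum())
    f = sphere(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]

print(pbest_f.min())  # approaches 0 on the sphere function
```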
APA, Harvard, Vancouver, ISO, and other styles
48

Furano, Fabrizio. "Large databases on the GRID." Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 623, no. 2 (November 2010): 855–58. http://dx.doi.org/10.1016/j.nima.2010.03.094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Goel, Sushant, Hema Sharda, and David Taniar. "Replica synchronisation in grid databases." International Journal of Web and Grid Services 1, no. 1 (2005): 87. http://dx.doi.org/10.1504/ijwgs.2005.007551.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Arias-Londoño, Andrés, Oscar Danilo Montoya, and Luis Fernando Grisales-Noreña. "A Chronological Literature Review of Electric Vehicle Interactions with Power Distribution Systems." Energies 13, no. 11 (June 11, 2020): 3016. http://dx.doi.org/10.3390/en13113016.

Full text
Abstract:
In the last decade, the deployment of electric vehicles (EVs) has been widely promoted. This development has increased planning and operation challenges in power systems due to the massive charging demand of EVs. At the same time, EVs may offer new opportunities and can be used to support the grid by providing auxiliary services. Considering the research around EVs and power grids, this paper presents a chronological review of EVs and their interactions with power systems, particularly electric distribution networks, covering publications from the IEEE Xplore database. The review spans 1973 to 2019 and is developed via systematic classification using key categories that describe the types of interactions between EVs and power grids. These interactions fall within the framework of power quality, the study of scenarios, electricity markets, demand response, demand management, power system stability, the Vehicle-to-Grid (V2G) concept, and the optimal location of battery swap and charging stations.
APA, Harvard, Vancouver, ISO, and other styles