To see the other types of publications on this topic, follow the link: Statistical decomposition measures.

Journal articles on the topic 'Statistical decomposition measures'

Consult the top 50 journal articles for your research on the topic 'Statistical decomposition measures.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

ANTONIOU, IOANNIS, COSTAS KARANIKAS, and STANISLAV SHKARIN. "DECOMPOSITIONS OF SPACES OF MEASURES." Infinite Dimensional Analysis, Quantum Probability and Related Topics 11, no. 01 (March 2008): 119–26. http://dx.doi.org/10.1142/s0219025708003014.

Abstract:
Let 𝔐 be the Banach space of σ-additive complex-valued measures on an abstract measurable space. We prove that any linear subspace L of 𝔐 that is norm-closed and closed with respect to absolute continuity is complemented, and we describe the unique complement such that the projection onto L along it has norm 1. Using this fact we prove a decomposition theorem, which includes the Jordan decomposition theorem, the generalized Radon–Nikodým theorem and the decomposition of measures into decaying and non-decaying components as particular cases. We also prove an analog of the Jessen–Wintner purity theorem for our decompositions.
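For orientation, the classical special case subsumed here can be stated in one line: the Jordan decomposition splits a signed measure into two mutually singular non-negative parts,

```latex
\mu \;=\; \mu^{+} - \mu^{-}, \qquad \mu^{+} \perp \mu^{-}, \qquad
|\mu|(\Omega) \;=\; \mu^{+}(\Omega) + \mu^{-}(\Omega),
```

with |μ| the total variation; the theorem above recovers this as one instance of a general complementation result.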
2

Alberts, Tom, and Hugo Duminil-Copin. "Bridge Decomposition of Restriction Measures." Journal of Statistical Physics 140, no. 3 (June 5, 2010): 467–93. http://dx.doi.org/10.1007/s10955-010-9999-3.

3

Dębowski, Łukasz. "Approximating Information Measures for Fields." Entropy 22, no. 1 (January 9, 2020): 79. http://dx.doi.org/10.3390/e22010079.

Abstract:
We supply corrected proofs of the invariance of completion and the chain rule for the Shannon information measures of arbitrary fields, as stated by Dębowski in 2009. Our corrected proofs rest on a number of auxiliary approximation results for Shannon information measures, which may be of independent interest. As also discussed briefly in this article, the generalized calculus of Shannon information measures for fields, including the invariance of completion and the chain rule, is useful in particular for studying the ergodic decomposition of stationary processes and its links with statistical modeling of natural language.
4

Carrington, M. E., R. Kobes, G. Kunstatter, D. Ostapchuk, and G. Passante. "Geometric measures of entanglement and the Schmidt decomposition." Journal of Physics A: Mathematical and Theoretical 43, no. 31 (July 6, 2010): 315302. http://dx.doi.org/10.1088/1751-8113/43/31/315302.

5

Xie, Yongjian, Aili Yang, and Fang Ren. "Super quantum measures on effect algebras with the Riesz decomposition properties." Journal of Mathematical Physics 56, no. 10 (October 2015): 103509. http://dx.doi.org/10.1063/1.4933324.

6

Capobianco, Enrico. "Statistical Embedding in Complex Biosystems." Journal of Integrative Bioinformatics 3, no. 2 (December 1, 2006): 90–108. http://dx.doi.org/10.1515/jib-2006-30.

Abstract:
Complex high-dimensional systems represent an important area of interdisciplinary research in systems biology. Gene expression values obtained from microarray data are a good example, owing to their various features that depend on biological network dynamics. This work emphasizes the role of blind source separation in dimensionality reduction and feature selection, and its useful combination with fuzzy rules, embedding principles and entropic measures. In particular, entropy and embedding are useful tools for controlling the robustness and stability of the decomposition of a system with larger than intrinsic dimensionality. As a result, convergence to a small intrinsic dimensionality occurs by means of least dependent components, seen as a minimal number of salient features.
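As a concrete illustration of the blind-source-separation step described above, here is a minimal sketch assuming scikit-learn's FastICA as the ICA implementation; the data, component count, and preprocessing are placeholders, not the author's pipeline:

```python
# Least dependent components of a samples-by-genes matrix via ICA (sketch).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))      # placeholder: 200 samples x 50 genes
X = X - X.mean(axis=0)              # center before ICA

ica = FastICA(n_components=5, random_state=0)
S = ica.fit_transform(X)            # least dependent components, shape (200, 5)
A = ica.mixing_                     # estimated mixing matrix, shape (50, 5)
```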
7

SMOLYANOV, O. G., and H. v. WEIZSÄCKER. "SMOOTH PROBABILITY MEASURES AND ASSOCIATED DIFFERENTIAL OPERATORS." Infinite Dimensional Analysis, Quantum Probability and Related Topics 02, no. 01 (March 1999): 51–78. http://dx.doi.org/10.1142/s0219025799000047.

Abstract:
We compare different notions of differentiability of a measure along a vector field on a locally convex space. We consider in the L2-space of a differentiable measure the analog of the classical concepts of gradient, divergence and Laplacian (which coincides with the Ornstein–Uhlenbeck operator in the Gaussian case). We use these operators to extend the basic results of Malliavin and Stroock on the smoothness of finite-dimensional image measures under certain nonsmooth mappings to the case of non-Gaussian measures. The proof of this extension is quite straightforward and does not use any chaos decomposition. Finally, the role of this Laplacian in the procedure of quantization of anharmonic oscillators is discussed.
8

ACCARDI, LUIGI, ROMUALD LENCZEWSKI, and RAFAŁ SAŁAPATA. "DECOMPOSITIONS OF THE FREE PRODUCT OF GRAPHS." Infinite Dimensional Analysis, Quantum Probability and Related Topics 10, no. 03 (September 2007): 303–34. http://dx.doi.org/10.1142/s0219025707002750.

Abstract:
We study the free product of rooted graphs and its various decompositions using quantum probabilistic methods. We show that the free product of rooted graphs is canonically associated with free independence, which completes the proof of the conjecture that there exists a product of rooted graphs canonically associated with each notion of noncommutative independence which arises in the axiomatic theory. Using the orthogonal product of rooted graphs, we decompose the branches of the free product of rooted graphs as "alternating orthogonal products". This leads to alternating decompositions of the free product itself, with the star product or the comb product followed by orthogonal products. These decompositions correspond to the recently studied decompositions of the free additive convolution of probability measures in terms of boolean and orthogonal convolutions, or monotone and orthogonal convolutions. We also introduce a new type of quantum decomposition of the free product of graphs, where the distance partition of the set of vertices is taken with respect to a set of vertices instead of a single vertex. We show that even in the case of widely studied graphs this yields new and more complete information on their spectral properties, like spectral measures of a (usually infinite) set of cyclic vectors under the action of the adjacency matrix.
9

Osborne, T. J. "Convex hulls of varieties and entanglement measures based on the roof construction." Quantum Information and Computation 7, no. 3 (March 2007): 209–27. http://dx.doi.org/10.26421/qic7.3-3.

Abstract:
In this paper we study the problem of calculating the convex hull of certain affine algebraic varieties. As we explain, the motivation for considering this problem is that certain pure-state measures of quantum entanglement, which we call polynomial entanglement measures, can be represented as affine algebraic varieties. We consider the evaluation of certain mixed-state extensions of these polynomial entanglement measures, namely convex and concave roofs. We show that the evaluation of a roof-based mixed-state extension is equivalent to calculating a hyperplane which is multiply tangent to the variety in a number of places equal to the number of terms in an optimal decomposition for the measure. In this way we provide an implicit representation of optimal decompositions for mixed-state entanglement measures based on the roof construction.
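For reference, the roof construction named above extends a pure-state measure E to mixed states by optimizing over all pure-state ensembles for ρ:

```latex
E_{\mathrm{roof}}(\rho) \;=\; \min_{\{p_i,\,|\psi_i\rangle\}} \sum_i p_i\, E(|\psi_i\rangle)
\quad\text{subject to}\quad \rho = \sum_i p_i\, |\psi_i\rangle\langle\psi_i|,
```

with the concave roof obtained by replacing the minimum with a maximum; the optimal ensemble is exactly the "optimal decomposition" the abstract refers to.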
10

LEE, YUH-JIA, and HSIN-HUNG SHIH. "AN APPLICATION OF THE SEGAL–BARGMANN TRANSFORM TO THE CHARACTERIZATION OF LÉVY WHITE NOISE MEASURES." Infinite Dimensional Analysis, Quantum Probability and Related Topics 13, no. 02 (June 2010): 191–221. http://dx.doi.org/10.1142/s0219025710004012.

Abstract:
Inspired by the observation that Stein's identity is closely connected to the quantum decomposition of probability measures and the Segal–Bargmann transform, we characterize the Lévy white noise measures on the space of tempered distributions associated with a Lévy spectrum having finite second moment. The results not only extend the Stein–Chen lemma for Gaussian and Poisson distributions to infinite dimensions but also cover many other infinitely divisible distributions, such as the Gamma and Pascal distributions, and the corresponding Lévy white noise measures on that space.
11

Winter, Andreas. "'Extrinsic' and 'Intrinsic' Data in Quantum Measurements: Asymptotic Convex Decomposition of Positive Operator Valued Measures." Communications in Mathematical Physics 244, no. 1 (January 1, 2004): 157–85. http://dx.doi.org/10.1007/s00220-003-0989-z.

12

Mecke, Klaus R. "Integral Geometry in Statistical Physics." International Journal of Modern Physics B 12, no. 09 (April 10, 1998): 861–99. http://dx.doi.org/10.1142/s0217979298000491.

Abstract:
The aim of this paper is to point out the importance of a morphological characterization of patterns in Statistical Physics. Integral geometry furnishes a suitable family of morphological descriptors, known as Minkowski functionals. They characterize not only the connectivity (topology) but also the content and shape (geometry) of spatial patterns. Integral geometry also provides powerful theorems and formulae, which make the calculus convenient for many models of stochastic geometries, for instance, for the Boolean grain model. This model generates random structures in space by overlapping bodies or "grains" (balls, sticks), each with arbitrary location and orientation. We illustrate the integral geometric approach to stochastic geometries by applying morphological measures to such diverse topics as percolation, complex fluids, and the large-scale structure of the universe: (A) Porous media may be generated by overlapping holes of arbitrary shape distributed uniformly in space. The percolation threshold of such porous media can be estimated accurately in terms of the morphology of the distributed pores. (B) Under rather natural assumptions, a general expression for the Hamiltonian of complex fluids can be derived that includes energy contributions related to the morphology of the spatial domains of homogeneous mesophases. We find that the Euler characteristic in the Hamiltonian stabilizes a highly connected bicontinuous structure resembling the middle phase in oil-water microemulsions, for instance. (C) Morphological measures are a novel method for the description of complex spatial structures, aiming for relevant order parameters and structure information complementary to correlation functions. Typical applications address Turing patterns in chemical reaction-diffusion systems, homogeneous phases evolving during spinodal decomposition, and the distribution of galaxies and clusters of galaxies in the Universe as a prominent example of a point process in nature.
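A hedged sketch of the three 2D Minkowski functionals for a Boolean grain model, assuming scikit-image is available; the disc count, radius, and connectivity below are illustrative choices:

```python
# Area, boundary length, and Euler characteristic of overlapping random discs.
import numpy as np
from skimage.draw import disk
from skimage.measure import euler_number, perimeter

rng = np.random.default_rng(1)
img = np.zeros((256, 256), dtype=bool)
for _ in range(80):                          # Boolean model: drop 80 discs at random
    rr, cc = disk(tuple(rng.integers(0, 256, size=2)), 12, shape=img.shape)
    img[rr, cc] = True

area = int(img.sum())                        # M0: covered area (pixel count)
boundary = perimeter(img)                    # M1: boundary length estimate
chi = euler_number(img, connectivity=2)      # M2: Euler characteristic
print(area, boundary, chi)
```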
13

Vasić, Slavko, Charles A. Lin, Isztar Zawadzki, Olivier Bousquet, and Diane Chaumont. "Evaluation of Precipitation from Numerical Weather Prediction Models and Satellites Using Values Retrieved from Radars." Monthly Weather Review 135, no. 11 (November 1, 2007): 3750–66. http://dx.doi.org/10.1175/2007mwr1955.1.

Abstract:
Precipitation is evaluated from two weather prediction models and satellites, taking radar-retrieved values as a reference. The domain is over the central and eastern United States, with hourly accumulated precipitation over 21 days for the models and radar, and 13 days for satellite. Conventional statistical measures and scale decomposition methods are used. The models generally underestimate strong precipitation and show nearly constant modest skill over a 24-h forecast period. The scale decomposition results show that the effective model resolution for precipitation is many times the grid size. The model predictability extends beyond a few hours for only the largest scales.
14

Fan, Yun Tan, Kai Pei Liu, Xiao Qiang Chen, Xiao Ning Tong, and Jian Le. "Optimization of Distribution Network Voltage Deviation Control Measures Based on LCC Theory." Advanced Materials Research 694-697 (May 2013): 807–16. http://dx.doi.org/10.4028/www.scientific.net/amr.694-697.807.

Abstract:
This paper presents an optimization method for distribution network voltage deviation control measures based on life cycle cost (LCC) theory. The overall process of the method is introduced, and the LCC decomposition model of distribution network reconstruction is analyzed in detail. Existing statistical data are used to build LCC models for changing the cross section of the conductor, for reactive power compensation, and for adjusting the transformer tap. Finally, a 10 kV feeder is taken as an example to illustrate the application of the proposed method, including voltage deviation estimation, development of a voltage improvement plan, and plan optimization based on LCC theory. The method proposed in this paper can be used by power supply departments to plan distribution network voltage deviation control measures more rigorously and economically.
15

MENDES, R. VILELA. "TOOLS FOR NETWORK DYNAMICS." International Journal of Bifurcation and Chaos 15, no. 04 (April 2005): 1185–213. http://dx.doi.org/10.1142/s0218127405012715.

Abstract:
Networks have been studied mainly by statistical methods which emphasize their topological structure. Here, one collects some mathematical tools and results which might be useful for studying both the dynamics of agents living on the network and the networks themselves as evolving dynamical systems. They include decomposition of differential dynamics, ergodic techniques, estimates of invariant measures, construction of nondeterministic automata, logical approaches, etc. A few network examples are discussed as an application of the dynamical tools.
16

Nemchenko, G. I. "Legal Interpretation of Bribery (Statistical Analysis)." Lex Russica, no. 5 (May 31, 2019): 105–16. http://dx.doi.org/10.17803/1729-5920.2019.150.5.105-116.

Abstract:
Anti-corruption is one of the priorities of state policy and the most important activity of law enforcement agencies, which are given a central place in the implementation and enforcement of anti-corruption legislation. In order to combat corruption, and in accordance with paragraph 1, part 1 of Federal Law No. 273-FZ of 25.12.2008 "On Combating Corruption", Decree No. 378 of the President of the Russian Federation of 29.06.2018 approved the national anti-corruption plan for 2018–2020. Bribery, which is the core of economic and legal violations, occupies a special place in the structure of corruption crime. The number of registered bribery crimes is constantly growing, and deviation from the norms is being transformed into an acceptable norm that contradicts the interests of the civil service and state power. Considering bribery as a type of economic criminal offense, the author proposes an analytical decomposition of crimes committed in the period 2001–2015 into natural trends and random deviations from the trend. The new coverage of crimes consists in building statistical relationships, dependencies and interdependencies between contractors, governed by the ideology of Law and Economics. An interdisciplinary approach aimed at improving anti-corruption measures expands the scientific and methodological potential of bribery research. As measures of dependence, correlation and regression coefficients are used in the microsystem of a transaction organized without intermediaries. An axiom of the "bribe" is offered: the bribe increases the cost of the bribe-taker's service and is subsequently passed on by the bribe-giver to consumers of the benefits created. The tested procedures can be used to create mechanisms for promoting anti-corruption values and for the scientific support of anti-corruption efforts.
17

FAUST, OLIVER, WENWEI YU, and NAHRIZUL ADIB KADRI. "COMPUTER-BASED IDENTIFICATION OF NORMAL AND ALCOHOLIC EEG SIGNALS USING WAVELET PACKETS AND ENERGY MEASURES." Journal of Mechanics in Medicine and Biology 13, no. 03 (May 14, 2013): 1350033. http://dx.doi.org/10.1142/s0219519413500334.

Abstract:
This paper describes a computer-based identification system for normal and alcoholic electroencephalography (EEG) signals. The identification system was constructed from feature extraction and classification algorithms. The feature extraction was based on wavelet packet decomposition (WPD) and energy measures. Feature fitness was established through the statistical t-test method. The extracted features were used as training and test data for a competitive 10-fold cross-validated analysis of six classification algorithms. This analysis showed that, with an accuracy of 95.8%, the k-nearest neighbor (k-NN) algorithm outperforms naïve Bayes classification (NBC), the fuzzy Sugeno classifier (FSC), the probabilistic neural network (PNN), the Gaussian mixture model (GMM), and the decision tree (DT). The 10-fold stratified cross-validation instilled reliability in the result; therefore, we are confident in stating that EEG signals can be used to automate both diagnosis and treatment monitoring of alcoholic patients. Such automation can lead to cost reduction by relieving medical experts of routine and administrative tasks.
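The WPD-plus-energy feature extraction reads roughly as follows; this is a minimal sketch assuming PyWavelets, with the wavelet, depth, and synthetic epoch standing in for the paper's actual settings:

```python
# Relative subband energies from a level-4 wavelet packet decomposition.
import numpy as np
import pywt

x = np.random.default_rng(2).normal(size=1024)   # stand-in for one EEG epoch
wp = pywt.WaveletPacket(data=x, wavelet='db4', mode='symmetric', maxlevel=4)

# One energy feature per terminal subband at level 4
energies = np.array([np.sum(node.data ** 2)
                     for node in wp.get_level(4, order='natural')])
features = energies / energies.sum()             # normalized energy feature vector
```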
18

Gyamfi, Emmanuel N., Frederick A. A. Sarpong, and Anokye M. Adam. "Drivers of Stock Prices in Ghana: An Empirical Mode Decomposition Approach." Mathematical Problems in Engineering 2021 (September 25, 2021): 1–7. http://dx.doi.org/10.1155/2021/2321042.

Abstract:
This study utilized the empirical mode decomposition (EMD) technique and examined which group of investors, based on their trading frequencies, influences stock prices in Ghana. We applied this technique to a dataset of daily closing prices of the GSE Financial Stock Index for the period 04/01/2011 to 28/08/2015. The daily closing prices were decomposed into six intrinsic mode functions (IMFs) and a residue. We used the hierarchical clustering method to reconstruct the IMFs into high-frequency, low-frequency, and trend components. Using statistical measures such as the Pearson product-moment correlation coefficient and the Kendall rank correlation, we found that the low-frequency and trend components of stock prices are the main drivers of the GSE stock index. These low-frequency traders are the institutional investors. Therefore, stock prices on the GSE are affected by real economic growth but not by short-lived market fluctuations.
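A minimal sketch of the decompose-then-correlate workflow, assuming the PyEMD package; the toy series and the choice of which IMFs form the slow part are illustrative, not the study's clustering-based reconstruction:

```python
# EMD of a price series, then correlation of a slow component with the raw series.
import numpy as np
from PyEMD import EMD
from scipy.stats import kendalltau, pearsonr

t = np.linspace(0.0, 1.0, 1000)
price = np.cumsum(np.random.default_rng(3).normal(size=1000)) + 50 * t  # toy prices

imfs = EMD().emd(price)            # rows: IMF_1 ... IMF_k, slowest component last
slow = imfs[-1] + imfs[-2]         # crude low-frequency-plus-trend reconstruction
print(pearsonr(price, slow)[0], kendalltau(price, slow)[0])
```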
19

Gao, Jian-Qiang, Li-Ya Fan, Li Li, and Li-Zhong Xu. "A practical application of kernel-based fuzzy discriminant analysis." International Journal of Applied Mathematics and Computer Science 23, no. 4 (December 1, 2013): 887–903. http://dx.doi.org/10.2478/amcs-2013-0066.

Abstract:
A novel method for feature extraction and recognition called Kernel Fuzzy Discriminant Analysis (KFDA) is proposed in this paper to deal with recognition problems, e.g., for images. The KFDA method is obtained by combining the advantages of fuzzy methods and a kernel trick. Based on the orthogonal-triangular decomposition of a matrix and Singular Value Decomposition (SVD), two different variants of KFDA, KFDA/QR and KFDA/SVD, are obtained. In the proposed method, the membership degree is incorporated into the definition of between-class and within-class scatter matrices to get fuzzy between-class and within-class scatter matrices. The membership degree is obtained by combining the measures of features of the sample data. In addition, the effects of employing different measures are investigated from a purely mathematical point of view, and the t-test statistical method is used to compare the robustness of the learning algorithm. Experimental results on the ORL and FERET face databases show that KFDA/QR and KFDA/SVD are more effective and feasible than Fuzzy Discriminant Analysis (FDA) and Kernel Discriminant Analysis (KDA) in terms of the mean correct recognition rate.
20

Yarrow, Stuart, Edward Challis, and Peggy Seriès. "Fisher and Shannon Information in Finite Neural Populations." Neural Computation 24, no. 7 (July 2012): 1740–80. http://dx.doi.org/10.1162/neco_a_00292.

Abstract:
The precision of the neural code is commonly investigated using two families of statistical measures: Shannon mutual information and derived quantities when investigating very small populations of neurons and Fisher information when studying large populations. These statistical tools are no longer the preserve of theorists and are being applied by experimental research groups in the analysis of empirical data. Although the relationship between information-theoretic and Fisher-based measures in the limit of infinite populations is relatively well understood, how these measures compare in finite-size populations has not yet been systematically explored. We aim to close this gap. We are particularly interested in understanding which stimuli are best encoded by a given neuron within a population and how this depends on the chosen measure. We use a novel Monte Carlo approach to compute a stimulus-specific decomposition of the mutual information (the SSI) for populations of up to 256 neurons and show that Fisher information can be used to accurately estimate both mutual information and SSI for populations of the order of 100 neurons, even in the presence of biologically realistic variability, noise correlations, and experimentally relevant integration times. According to both measures, the stimuli that are best encoded are those falling at the flanks of the neuron's tuning curve. In populations of fewer than around 50 neurons, however, Fisher information can be misleading.
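For a population of N neurons with independent Poisson spiking, tuning curves f_i(s), and integration window T, the Fisher information discussed above has the standard closed form

```latex
I_F(s) \;=\; T \sum_{i=1}^{N} \frac{f_i'(s)^{2}}{f_i(s)},
```

which peaks where the tuning curves are steep, consistent with the flank-coding result quoted in the abstract.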
21

Bernatz, Richard. "A statistical, spatial, and hydrologic comparison of gauge-based and MPE-based rainfall measurements." Journal of the Iowa Academy of Science 124, no. 1-4 (January 1, 2017): 11–23. http://dx.doi.org/10.17833/124-02.1.

Abstract:
Gauge-based and multi-sensor precipitation estimation (MPE) data are compared on hourly, daily, monthly and event time scales at site locations over a 12-year period. Gauge data are collected at 16 sites within a 950 km² portion of the Upper Iowa River in northeast Iowa. Average relative MPE bias is positive for all but the event time scale, and has a magnitude of less than 0.10 for all scales. Gauge and MPE average correlation coefficients range from 0.73 on the hourly scale to 0.92 on the event and monthly scales. The MPE relative bias standard deviation decreases from 1.70 mm on the hourly scale to 0.27 mm on the monthly scale. Decomposition of hourly bias reveals that the false positive portion is the most significant component. Seventy percent of MPE accumulations have a relative bias of 0.5 or less when hourly accumulations are 7 mm or greater. Pearson product-moment coefficient analysis reveals strong similarities in spatial correlations as a function of site separation. Rainfall time series for the basin are constructed from the two data sources and used as input to a Blocked Topmodel rainfall-runoff scheme to provide another means of comparison on a basin-wide spatial scale. Five goodness-of-fit measures are used for quantifying the viability of simulated flows. No statistically significant difference in annual means using the different sources is found for any of the measures.
22

Netrdová, Pavlína, and Vojtěch Nosek. "Development regularities and specific features of geographic differentiation of population and its structure on the level of Czechia’s municipalities in transformation period." Geografie 123, no. 2 (2018): 225–51. http://dx.doi.org/10.37040/geografie2018123020225.

Abstract:
The paper is focused on the geographical differentiation of the population in Czechia between the years 1980 and 2011. Data from population censuses were adjusted in all years to the municipal structure of 2011, so an analysis of evolution at the municipal level could be undertaken. Besides analyzing the geographical differentiation of the population for different types of phenomena (demographic, social, and economic) and its evolution, we study underlying processes (such as concentration/deconcentration, convergence/divergence) and conditional factors and mechanisms. When studying geographical differentiation, we distinguish between simple regional differentiation measured by standard statistical measures, relative regional differentiation measured by Theil index decomposition, and spatial differentiation quantified by Moran's I. The empirical results show that the geographical differentiation of the population in the transformation period and beyond has been steadily decreasing for a majority of the studied variables. Variables with increasing geographical differentiation of the population are always connected with specific conditional factors and mechanisms. Moreover, the geographical differentiation of the population has shifted to lower geographical levels.
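A short sketch of the Theil index decomposition used above, in plain NumPy; the data and group labels are placeholders:

```python
# Theil T index split into between-region and within-region parts.
import numpy as np

def theil(x):
    x = np.asarray(x, dtype=float)
    return np.mean((x / x.mean()) * np.log(x / x.mean()))

def theil_decomposition(x, groups):
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    between = within = 0.0
    for g in np.unique(groups):
        xg = x[groups == g]
        share = xg.sum() / x.sum()                 # group's share of the total
        between += share * np.log(xg.mean() / mu)  # inequality between group means
        within += share * theil(xg)                # inequality inside the group
    return theil(x), between, within               # total == between + within

x = np.random.default_rng(4).lognormal(size=300)   # placeholder municipal values
groups = np.repeat(np.arange(3), 100)              # placeholder regional labels
print(theil_decomposition(x, groups))
```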
23

Kitsios, V., L. Cordier, J. P. Bonnet, A. Ooi, and J. Soria. "On the coherent structures and stability properties of a leading-edge separated aerofoil with turbulent recirculation." Journal of Fluid Mechanics 683 (August 19, 2011): 395–416. http://dx.doi.org/10.1017/jfm.2011.285.

Abstract:
The present study is motivated by a need to produce stability modes to assist in the understanding and control of unsteady separated flows. The flow configuration is a NACA 0015 aerofoil with laminar leading-edge separation and turbulent recirculation. In previous water tunnel experiments, this flow configuration was measured in an unperturbed (uncontrolled) separated state, and a harmonically perturbed (controlled) reattached state. This study presents numerical data of the unperturbed case, and recovers stability modes to describe the evolution of perturbations in this environment. The unperturbed flow is numerically generated using large eddy simulation. Its temporal properties are quantified via a Fourier analysis of the velocity time history at selected points in space. The leading-edge shear layer instability is characterized by instantaneous vortex structures, and the bluff body shedding is illustrated by proper orthogonal decomposition modes. Statistical measures of the velocity field agree well with the water tunnel measurements. Finally, a stability analysis is undertaken using a triple decomposition to distinguish between the time-averaged field, the unsteady scales of motion, and a coherent wave (perturbation). This analysis identifies that perturbations in the region immediately downstream of the separated shear layer have the highest spatial growth rates. The associated frequency is of the order of the sub-harmonic of the shear layer instability.
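The proper orthogonal decomposition step mentioned above reduces, in its snapshot form, to an SVD of the mean-subtracted data; a bare-bones sketch on synthetic data:

```python
# Snapshot POD: spatial modes and their energy ranking via the SVD.
import numpy as np

rng = np.random.default_rng(5)
snapshots = rng.normal(size=(2000, 64))       # rows: time snapshots of a field
fluct = snapshots - snapshots.mean(axis=0)    # subtract the time-averaged field

U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
energy = s**2 / np.sum(s**2)                  # fraction of fluctuation energy per mode
modes = Vt                                    # row k is the k-th spatial POD mode
```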
24

Barr, Gordon, Christopher J. Gilmore, and Jonathan Paisley. "SNAP-1D: a computer program for qualitative and quantitative powder diffraction pattern analysis using the full pattern profile." Journal of Applied Crystallography 37, no. 4 (July 17, 2004): 665–68. http://dx.doi.org/10.1107/s0021889804011847.

Abstract:
SNAP-1D is a computer program for the qualitative and quantitative analysis of powder diffraction data using the full measured data set. As measures of similarity between patterns, non-parametric statistical tests based on Spearman's correlation coefficient and the Kolmogorov–Smirnov test are used. Traditional correlation coefficients based on the Pearson formalism are also employed. This combination, suitably weighted, gives a reliable measure of qualitative pattern similarity. The method can be extended to the quantitative analysis of mixtures by using the above methods in conjunction with singular value decomposition techniques. A full description of the theory with suitable examples has been published elsewhere [Gilmore et al. (2004). J. Appl. Cryst. 37, 231–242]; here the focus is on the computer software itself. The program is commercially available, and runs on PCs under the Windows 2000 and XP operating systems with modest hardware requirements. An easy-to-use graphical interface is supplied.
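The similarity measures named above are easy to state; here is a sketch with SciPy on two toy diffraction profiles (the program's weighting of the three statistics is not reproduced here):

```python
# Rank, linear, and distributional similarity between two powder patterns.
import numpy as np
from scipy.stats import ks_2samp, pearsonr, spearmanr

two_theta = np.linspace(5, 60, 2000)
rng = np.random.default_rng(6)
p1 = np.exp(-((two_theta - 25.0) ** 2) / 0.05) + 0.01 * rng.random(2000)
p2 = np.exp(-((two_theta - 25.1) ** 2) / 0.05) + 0.01 * rng.random(2000)

rho, _ = spearmanr(p1, p2)   # non-parametric (Spearman) rank correlation
r, _ = pearsonr(p1, p2)      # traditional Pearson correlation
ks, _ = ks_2samp(p1, p2)     # Kolmogorov-Smirnov statistic on the intensities
```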
25

KITSIOS, V., L. CORDIER, J. P. BONNET, A. OOI, and J. SORIA. "Development of a nonlinear eddy-viscosity closure for the triple-decomposition stability analysis of a turbulent channel." Journal of Fluid Mechanics 664 (October 8, 2010): 74–107. http://dx.doi.org/10.1017/s0022112010003617.

Abstract:
The analysis of the instabilities in an unsteady turbulent flow is undertaken using a triple decomposition to distinguish between the time-averaged field, a coherent wave and the remaining turbulent scales of motion. The stability properties of the coherent scale are of interest. Previous studies have relied on prescribed constants to close the equations governing the evolution of the coherent wave. Here we propose an approach where the model constants are determined only from the statistical measures of the unperturbed velocity field. Specifically, a nonlinear eddy-viscosity model is used to close the equations, and is a generalisation of earlier linear eddy-viscosity closures. Unlike previous models, the proposed approach does not assume the same dissipation rate for the time- and phase-averaged fields. The proposed approach is applied to a previously published turbulent channel flow, which was harmonically perturbed by two vibrating ribbons located near the channel walls. The response of the flow was recorded at several downstream stations by phase averaging the probe measurements at the same frequency as the forcing. The experimentally measured growth rates and velocity profiles are compared to the eigenvalues and eigenvectors resulting from the stability analysis undertaken herein. The modes recovered from the solution of the eigenvalue problem, using the nonlinear eddy-viscosity model, are shown to capture the experimentally measured spatial decay rates and mode shapes of the coherent scale.
26

Vanraj, SS Dhami, and BS Pabla. "Hybrid data fusion approach for fault diagnosis of fixed-axis gearbox." Structural Health Monitoring 17, no. 4 (August 28, 2017): 936–45. http://dx.doi.org/10.1177/1475921717727700.

Abstract:
Intelligent fault diagnosis systems based on condition monitoring are expected to assist in the prevention of machine failures and to enhance reliability at lower maintenance cost. Most machine breakdowns related to gears result from improper operating conditions and loading, and hence lead to failure of the whole mechanism. With advancements in technology, various gear fault diagnosis techniques have been reported which primarily focus on vibration analysis with statistical measures. However, acoustic signals possess a huge potential to monitor the status of the machine, yet only a few studies have been carried out so far. This article describes the implementation of the Teager–Kaiser energy operator and empirical mode decomposition methods for fault diagnosis of gears using acoustic and vibration signals by extracting statistical features. A cross-correlation-based fault index that assists the automatic selection of the sensitive Intrinsic Mode Function (IMF) containing fault information is also described. The features extracted by all combinations of signal processing techniques are sorted in order of relevance using the floating forward selection method. The effectiveness is demonstrated using results obtained from the experiments. Fault diagnosis is performed with a k-nearest neighbor classifier. The results show that the hybrid empirical mode decomposition–Teager–Kaiser energy operator technique combines the advantageous traits of both methods to generate an overall improvement in diagnosing the severity of local faults.
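The discrete Teager–Kaiser energy operator used above is short enough to state exactly; everything else in this sketch (sampling rate, test signal) is illustrative:

```python
# psi[x](n) = x(n)^2 - x(n-1) * x(n+1), evaluated on the interior samples.
import numpy as np

def tkeo(x):
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

fs = 10_000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 350 * t) * (1 + 0.3 * np.sin(2 * np.pi * 25 * t))
energy = tkeo(sig)    # tracks the signal's instantaneous amplitude/frequency energy
```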
27

Chu, Xiaoquan, Yue Li, Dong Tian, Jianying Feng, and Weisong Mu. "An optimized hybrid model based on artificial intelligence for grape price forecasting." British Food Journal 121, no. 12 (November 21, 2019): 3247–65. http://dx.doi.org/10.1108/bfj-06-2019-0390.

Abstract:
Purpose The purpose of this paper is to propose an optimized hybrid model based on artificial intelligence methods, using time series forecasting, to deal with the price prediction issue for China's table grape. Design/methodology/approach The approach follows the framework of "decomposition and ensemble," using ensemble empirical mode decomposition (EEMD) to optimize the conventional price forecasting methods, and integrating multiple linear regression and support vector machine to build a hybrid model which can be applied to price series prediction problems. Findings The proposed EEMD-ADD optimized hybrid model is validated as satisfactory in a case study of China's grape price forecasting in terms of its statistical measures and prediction performance. Practical implications This study resolves the difficulties in grape price forecasting and provides an adaptive strategy for other agricultural economic prediction problems as well. Originality/value The paper fills a vacancy in the related research, proposing an optimized hybrid model integrating both classical econometric and artificial intelligence models to forecast prices using the time series method.
28

Lv and Zhou. "Study on Sea Clutter Suppression Methods based on a Realistic Radar Dataset." Remote Sensing 11, no. 23 (November 20, 2019): 2721. http://dx.doi.org/10.3390/rs11232721.

Abstract:
To improve the ability of radar to detect targets such as ships against a background of strong sea clutter, different sea-clutter suppression algorithms are developed based on the realistic Intelligent PIxel processing X-band (IPIX) radar datasets, and quantitative research is carried out. Four algorithms, namely root cycle cancellation, singular-value decomposition (SVD) suppression, wavelet weighted reconstruction, and empirical mode decomposition (EMD) weighted reconstruction, and their corresponding suppression methods are introduced. Then, the differences between the four algorithms before and after sea-clutter suppression are compared and analyzed. The average clutter-suppression and target-suppression amplitudes are selected as measures to verify the suppression effect. Sea-clutter data collected in the high-sea state, low-sea state, near-sea area, and far-sea area are selected for statistical analysis after suppression. All four methods have certain suppression effects, among which EMD reconstruction is best, reaching an average clutter-suppression amplitude of 15.507 dB and a target-suppression amplitude of about 1 dB, which can improve the ability of radar to detect targets such as ships against a background of strong sea clutter.
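A schematic of the SVD suppression idea: embed the slow-time series in a Hankel (trajectory) matrix, null the leading clutter-dominated singular values, and reconstruct. The embedding depth and rank threshold below are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np
from scipy.linalg import hankel

def svd_suppress(x, depth=64, n_clutter=3):
    H = hankel(x[:depth], x[depth - 1:])           # H[i, j] = x[i + j]
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s[:n_clutter] = 0.0                            # remove dominant clutter subspace
    Hs = (U * s) @ Vt
    # Average anti-diagonals to map the matrix back to a 1-D series
    return np.array([Hs[::-1, :].diagonal(k).mean()
                     for k in range(-depth + 1, Hs.shape[1])])

x = np.random.default_rng(7).normal(size=1024)     # stand-in for one range cell
clean = svd_suppress(x)
```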
29

Uma Maheswari, R., and R. Umamaheswari. "Adaptive data-driven nonlinear synchro squeezed transform with single class radial basis function kernel support vector machine applied to wind turbine planetary gearbox fault diagnostics." Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy 234, no. 7 (November 9, 2019): 1015–25. http://dx.doi.org/10.1177/0957650919886227.

Abstract:
Planetary stage gears operated at low rotational speed under varying wind speed experience variations in load. Variable speed and variable load induce nonstationary operating conditions. Vibration signals measured from wind power gear transmission systems contain multiple sources of vibration and are attenuated considerably as they travel from the source of vibration to the measuring point. Efficacious multi-component decomposition without mode mixing ensures accurate fault signature recognition. The synchrosqueezing transform is a promising tool that represents ridges with high resolution along both the time and frequency axes. An efficient vibration analysis technique, the short windowed Fourier synchrosqueezing transform with a nonlinear radial basis function kernel support vector machine, is proposed to detect mechanical faults in the low-speed planetary stage of wind turbines. The raw vibration is modeled in the time–frequency plane to extract fault pattern signatures effectively, with high resolution, by adapting the empirical nonlinear synchrosqueezing transform. Amplitude modulation and frequency modulation parameters are sculpted from the instantaneous amplitude, phase, and frequency. A hybrid feature space with signal attributes, statistical moments, and randomness measures is extracted from the amplitude modulation–frequency modulation components. A single-class radial basis function support vector machine is trained with the hybrid features. The fault detection accuracy of the proposed method is compared with standard variants of empirical mode decomposition. The proposed short windowed Fourier synchrosqueezing transform–radial basis function kernel support vector machine shows 98.2% accuracy, 98% sensitivity, and 98% specificity.
30

Landes, T., M. Heissler, M. Koehl, T. Benazzi, and T. Nivola. "UNCERTAINTY VISUALIZATION APPROACHES FOR 3D MODELS OF CASTLES RESTITUTED FROM ARCHEOLOGICAL KNOWLEDGE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W9 (January 31, 2019): 409–16. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w9-409-2019.

Abstract:
In the cultural heritage field, several specialists such as archaeologists, architects, geomaticians and historians are used to working together. With the upcoming technologies allowing data to be captured efficiently in the field, historical documents to be digitized, and worldwide information related to the monuments under study to be collected, the wish to summarize all the sources of data (including the knowledge of the specialists) into one 3D model is a big challenge. In order to guarantee the reliability of the proposed reconstructed 3D model, it is of crucial importance to integrate the level of uncertainty assigned to it. From a geometric point of view, uncertainty is often defined, quantified and expressed with the help of statistical measures. However, for objects reconstructed based on archaeological assumptions, statistical measures are not appropriate. This paper focuses on the decomposition of 3D models into levels of uncertainty (LoUs) and on the best way to visualize them through two case studies: the castle of Kagenfels and the Horbourg-Wihr Castellum, both located in Alsace, France. The first is well documented through still ongoing excavations around its remains, whereas the second disappeared under the urbanization of the city. An approach is addressed that enables uncertainties coming from archaeological assumptions not only to be quantified but also visualized on the 3D models. Finally, the efficiency of the approach for qualifying the reliability of the proposed 3D model of the reconstructed castle is demonstrated.
31

LERNER, VLADIMIR S. "MACRODYNAMIC COOPERATIVE COMPLEXITY OF BIOSYSTEMS." Journal of Biological Systems 14, no. 01 (March 2006): 131–68. http://dx.doi.org/10.1142/s0218339006001714.

Abstract:
The introduced dynamic macrocomplexity (MC) is an indicator of the phenomena and parameters of the cooperative dynamic process in a complex system with macrolevel dynamics and random microlevel stochastics. The MC, arising from the unification (or decomposition) of the system's elements, is defined by the information measure, allowing for both analytical formulation and computer evaluation. The MC cooperative mechanism is analyzed using the minimax variation principle of Information Macrodynamics (IMD). When applied, this principle creates a transition from a local, unstable process movement to a local, stable process; the former is associated with the current influx of information that precedes cooperation, and the latter continues after the cooperation with incoming information and its accumulation. This transition enables the production of the cooperative phenomena, in particular, the contributions from different superimposing processes, measured by the MC. The IMD optimal consolidation process forms the information hierarchical network (IN), consisting of the sequential cooperation of the model eigenvalues in triples. The MC of such an optimal cooperative triplet structure is measured by the IN's triplet code as an algorithm of the minimal program. The MC covers Kolmogorov complexity, which measures a deterministic order over a stochastic disorder by a minimal program, as well as statistical complexity. The specificity of the MC consists in providing a precise complexity measure of a dynamic irreversible process, which evaluates the aforementioned forms of complexity in bits of information. The geometrical space curvature conceals information of the cooperative structures in the form of cells, whose code, in particular, measures the cooperative complexity. The process trajectory, located in this geometrical space, acquires a sequence of the code cells.
32

Hu, Shaohua, Jie Yang, Zhigang Jiang, Minda Ma, and Wei Cai. "CO2 Emission and Energy Consumption from Automobile Industry in China: Decomposition and Analyses of Driving Forces." Processes 9, no. 5 (May 6, 2021): 810. http://dx.doi.org/10.3390/pr9050810.

Abstract:
Despite the increasing contribution of the automotive industry to China's national economy, CO2 emissions have become a challenge. However, research on the industry's energy consumption and carbon emissions is lacking. The significance of this study is to fill that research gap and provide suggestions for China's automotive industry to reduce its carbon emissions. In this paper, the extended logarithmic mean Divisia index (LMDI) method is adopted to decompose the factors affecting carbon emissions and determine the key driving forces. According to provincial statistical data for China in 2017, the annual emissions of six provinces exceeded five million tons, accounting for 55.44% of the total emissions in China. The largest source of emissions in China is Jilin Province, followed by Jiangsu, Shandong, Shanghai, Hubei and Henan. The decomposition results show that the investment intensity effect is the greatest factor for CO2 emissions, while R&D intensity and energy intensity are the two principal factors for emission reduction. After the identification of driving factors, mitigation measures are proposed considering the current state of affairs and the real situation, including improving the energy structure, accelerating product structure transformation, stimulating sound R&D investment activities, promoting energy conservation and the development of the new energy automobile industry, and boosting industrial cluster development.
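The core of the LMDI method is the logarithmic-mean weight; a minimal one-factor sketch under the toy identity emissions = output x intensity (the paper's factor set is richer):

```python
# Additive LMDI: split C_T - C_0 exactly across multiplicative factors.
import numpy as np

def logmean(a, b):
    return a if a == b else (a - b) / (np.log(a) - np.log(b))

def lmdi_additive(C0, CT, f0, fT):
    L = logmean(CT, C0)
    return {k: L * np.log(fT[k] / f0[k]) for k in f0}

f0 = {"output": 100.0, "intensity": 0.50}    # base year factors (toy numbers)
fT = {"output": 140.0, "intensity": 0.45}    # target year factors
C0 = f0["output"] * f0["intensity"]          # 50.0
CT = fT["output"] * fT["intensity"]          # 63.0
effects = lmdi_additive(C0, CT, f0, fT)      # contributions sum to CT - C0 = 13.0
print(effects, sum(effects.values()))
```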
33

Tao, Lizhi, Xinguang He, and Rui Wang. "A Hybrid LSSVM Model with Empirical Mode Decomposition and Differential Evolution for Forecasting Monthly Precipitation." Journal of Hydrometeorology 18, no. 1 (January 1, 2017): 159–76. http://dx.doi.org/10.1175/jhm-d-16-0109.1.

Abstract:
In this study, a hybrid least squares support vector machine (HLSSVM) model is presented for effectively forecasting monthly precipitation. The hybrid method is designed by incorporating empirical mode decomposition (EMD) for data preprocessing, the partial information (PI) algorithm for input identification, and differential evolution (DE) for model parameter optimization into the least squares support vector machine (LSSVM). The HLSSVM model is examined by forecasting monthly precipitation at 138 rain gauge stations in the Yangtze River basin and compared with the LSSVM and LSSVM–DE. The LSSVM–DE is built by combining the LSSVM and DE. Two statistical measures, Nash–Sutcliffe efficiency (NSE) and relative absolute error (RAE), are employed to evaluate the performance of the models. The comparison of results shows that the LSSVM–DE performs better than the LSSVM, and the HLSSVM provides the best performance among the three models for monthly precipitation forecasts. Meanwhile, it is also observed that all the models exhibit significant spatial variability in forecast performance. The prediction is most skillful in the western and northwestern regions of the basin. In contrast, the prediction skill in the eastern and southeastern regions is generally low, which shows a strong relationship with the randomness of precipitation. Compared to the LSSVM and LSSVM–DE, the proposed HLSSVM model gives a more significant improvement for most of the stations in the eastern and southeastern regions with higher randomness.
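For reference, the two skill scores named above in their standard definitions:

```python
# Nash-Sutcliffe efficiency and relative absolute error.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rae(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sum(np.abs(obs - sim)) / np.sum(np.abs(obs - obs.mean()))
```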
34

Lerner, Vladimir S. "Macrodynamic Cooperative Complexity of Information Dynamics." Open Systems & Information Dynamics 15, no. 03 (September 2008): 231–79. http://dx.doi.org/10.1142/s1230161208000195.

Abstract:
The introduced concept and measure of macrocomplexity (MC) arise in an irreversible macrodynamic cooperative process and determine the ability of the process components to assemble into an integrated system. MC serves as a common indicator of the origin of the cooperative complexity, measured by the specific entropy speed per assembled volume rather than by the entropy, as had been accepted before. The MC cooperative mechanism is studied using the variation principle (VP) of information macrodynamics (IMD), applied to a complex system with macrolevel dynamics and random microlevel stochastics and different forms of physical and virtual transitions of information. This principle describes a transition from a local unstable process movement to a local stable process; the former is associated with the current influx of information that precedes cooperation, and the latter continues after the cooperation with incoming information and its accumulation. This transition enables the production of the cooperative phenomena and, in particular, the contributions from different superimposing processes, which are measured by the MC. MC, arising as an indicator of these phenomena at the unification (or decomposition) of the system processes, is defined by the invariant information measure, allowing for both analytical formulation and computer evaluation. An optimal multi-dimensional consolidation process, satisfying the VP, forms the information hierarchical network (IN), consisting of the sequential cooperation of the model eigenvalues in triples. The MC of such an optimal cooperative triplet structure is measured by the IN triplet code (as an algorithm of the minimal program, which evaluates the IN hierarchical structure by the triplet information contributions in bits of information). The distributed IN allows for the automatic arrangement and measurement of the MC local complexities in a multi-dimensional cooperative process, taking into account the MC time-space locations and mutual dependencies, and providing the MC hierarchical invariant information measure by quantity and quality in the triplet code. The MC is connected to Kolmogorov (K) complexity, which measures a deterministic order over a stochastic disorder by a minimal program. The specificity of the MC consists in providing a computable complexity measure of a cooperative dynamic irreversible process, in contrast to the incomputability of K.
35

Priya, Ebenezer, Subramanian Srinivasan, and Swaminathan Ramakrishnan. "Retrospective Non-Uniform Illumination Correction Techniques in Images of Tuberculosis." Microscopy and Microanalysis 20, no. 5 (August 13, 2014): 1382–91. http://dx.doi.org/10.1017/s1431927614012896.

Abstract:
Image pre-processing is highly significant in automated analysis of microscopy images. In this work, non-uniform illumination correction has been attempted using the surface fitting method (SFM), the multiple regression method (MRM), and bidirectional empirical mode decomposition (BEMD) in digital microscopy images of tuberculosis (TB). The sputum smear positive and negative images, recorded under a standard image acquisition protocol, were subjected to illumination correction techniques and evaluated by error and statistical measures. Results show that SFM performs more efficiently than MRM or BEMD. The SFM produced sharp images of TB bacilli with better contrast. To further validate the results, multifractal analysis was performed, which showed distinct variation before and after implementation of illumination correction by SFM. Results demonstrate that after illumination correction, there is a 26% increase in the number of bacilli, which aids in classification of the TB images into positive and negative, as TB positivity depends on the count of bacilli.
36

Bokde, Neeraj, Andrés Feijóo, Daniel Villanueva, and Kishore Kulat. "A Review on Hybrid Empirical Mode Decomposition Models for Wind Speed and Wind Power Prediction." Energies 12, no. 2 (January 15, 2019): 254. http://dx.doi.org/10.3390/en12020254.

Abstract:
Reliable and accurate planning and scheduling of wind farms and power grids, to ensure sustainable use of wind energy, can be better achieved with the use of precise and accurate prediction models. However, due to the highly chaotic, intermittent and stochastic behavior of wind, which implies a high level of difficulty when predicting wind speed and, consequently, wind power, the evolution of models capable of describing data of such complexity is an emerging area of research. A thorough review of the literature, overviews of present research, and information about possible expansions and extensions of models play a significant role in the enhancement of the potential of accurate prediction models. The last few decades have seen a remarkable breakthrough in the development of accurate prediction models. Among the various physical, statistical and artificial intelligence models developed over this period, the models hybridized with pre-processing and/or post-processing methods have shown promising prediction results in wind applications. The present review is focused on hybrid empirical mode decomposition (EMD) and ensemble empirical mode decomposition (EEMD) models, with their advantages, timely growth and possible future in wind speed and power forecasting. Over the years, the practice of EEMD-based hybrid models in wind data prediction has risen steadily and has become popular because of the robust and accurate nature of this approach. In addition, this review covers distinct attributes including the evolution of EMD-based methods, novel techniques for treating the Intrinsic Mode Functions (IMFs) generated with EMD/EEMD, and an overview of suitable error measures for such studies.
37

Oscroft, Sarah, Adam M. Sykulski, and Jeffrey J. Early. "Separating Mesoscale and Submesoscale Flows from Clustered Drifter Trajectories." Fluids 6, no. 1 (December 31, 2020): 14. http://dx.doi.org/10.3390/fluids6010014.

Abstract:
Drifters deployed in close proximity collectively provide a unique observational data set with which to separate mesoscale and submesoscale flows. In this paper we provide a principled approach for doing so by fitting observed velocities to a local Taylor expansion of the velocity flow field. We demonstrate how to estimate mesoscale and submesoscale quantities that evolve slowly over time, as well as their associated statistical uncertainty. We show that in practice the mesoscale component of our model can explain much of the first- and second-moment variability in drifter velocities, especially at low frequencies. This results in much lower and more meaningful measures of submesoscale diffusivity, which would otherwise be contaminated by unresolved mesoscale flow. We quantify these effects theoretically by computing Lagrangian frequency spectra, and demonstrate the usefulness of our methodology through simulations as well as with real observations from the LatMix deployment of drifters. The outcome of this method is a full Lagrangian decomposition of each drifter trajectory into three components that represent the background, mesoscale, and submesoscale flow.
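The local Taylor-expansion fit reduces, at its simplest, to regressing clustered drifter velocities on position offsets; a least-squares sketch on synthetic data (the paper's time-evolving, uncertainty-aware estimation is richer):

```python
# Estimate mean flow and velocity gradients from a drifter cluster.
import numpy as np

rng = np.random.default_rng(8)
dx, dy = rng.normal(size=(2, 50))             # drifter offsets from cluster center
u = 0.3 + 0.8 * dx - 0.2 * dy + 0.05 * rng.normal(size=50)  # observed u-velocities

A = np.column_stack([np.ones_like(dx), dx, dy])   # design matrix [1, dx, dy]
coef, *_ = np.linalg.lstsq(A, u, rcond=None)
u0, du_dx, du_dy = coef                           # mesoscale mean flow and gradients
residual = u - A @ coef                           # submesoscale-plus-noise component
```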
38

Loureiro, Paulo R. A., Mario J. C. Mendonça, Adolfo Sachsida, Antônio Nascimento Junior, Roberto Ellery, and Tito Belchior Silva Moreira. "Crime and Discrimination in the Labor Market: An Empirical Approach." International Journal of Economics and Finance 10, no. 3 (February 20, 2018): 196. http://dx.doi.org/10.5539/ijef.v10n3p196.

Abstract:
This paper investigates the existence of wage discrimination against ex-convicts. Based on data collected from the Coordination Center for the Execution of Penalties and Alternative Measures (CEPEMA) for people serving in an open prison in Brasília (DF), a comparative approach was conducted with data collected from PNAD. Using the Oaxaca–Ransom decomposition, it was then possible to verify that there is statistical discrimination against ex-convicts in the job market. Furthermore, it was noticed that the full labor market participation of ex-prisoners seems to be compromised, to the extent that the empirical results support the assumption of Nagin and Waldfogel (1993): access of individuals who have been in prison to the job market is limited to the so-called spot market, or temporary labor market. This segment of the labor market should not be confused with so-called part-time work. Thus, one of the negative effects that follows is a reduction in the present value of the individual's discounted income, since long-term jobs are those that offer higher income prospects.
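Schematically, the Oaxaca–Ransom variant used here splits the mean log-wage gap around a pooled coefficient vector β*:

```latex
\overline{\ln w}_A - \overline{\ln w}_B
  \;=\; (\bar{X}_A - \bar{X}_B)'\beta^{*}
  \;+\; \bar{X}_A'(\hat{\beta}_A - \beta^{*})
  \;+\; \bar{X}_B'(\beta^{*} - \hat{\beta}_B),
```

where the first term is the part explained by observed characteristics and the last two terms form the unexplained (discrimination) component.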
39

Casmir Chinaemerem, Osuji, Nzotta Samuel Mbadike, Ebiringa Oforegbunam Thaddeus, and Chigbu Emmanuel Ezeji. "Money control indicators and investment in Nigeria." International Journal Of Innovation And Economic Development 1, no. 2 (2015): 12–19. http://dx.doi.org/10.18775/ijied.1849-7551-7020.2015.12.2002.

Abstract:
This study empirically investigates the impact of money control indicators on investment in Nigeria. It was necessitated by the apparent failure of the increasing volume of money supply in the Nigerian economy to induce a corresponding increase in investment, output and employment. The study made use of secondary data covering a period of 34 years, from 1980 to 2013. Data were sourced from World Bank indicators and the CBN Statistical Bulletin. Econometric techniques such as the unit root test, cointegration, a vector autoregressive model, impulse response and variance decomposition analysis, and the Granger causality test were used in the data analysis. The findings show that deposit rates impacted positively on investment in Nigeria and that private sector borrowing from banks (PSBFB) was significant. The study shows that deposit rates and private sector borrowing from banks were determinants of investment on account of money control indicators. The study therefore recommends that the government should devise monetary measures capable of ensuring stability in deposit rates and private sector borrowing from banks, with strict regulations to ensure sustainable money control for investment in Nigeria.
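A hedged sketch of the VAR-plus-variance-decomposition step, assuming statsmodels; the series here are synthetic placeholders for the study's indicators:

```python
# Fit a small VAR and report the forecast-error variance decomposition.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(9)
levels = pd.DataFrame(rng.normal(size=(120, 3)).cumsum(axis=0),
                      columns=["investment", "deposit_rate", "psbfb"])
data = levels.diff().dropna()        # difference to (roughly) stationary series

res = VAR(data).fit(maxlags=2, ic="aic")
print(res.fevd(10).summary())        # 10-step-ahead variance decomposition
```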
APA, Harvard, Vancouver, ISO, and other styles
40

Rempel, Martin, Fabian Senf, and Hartwig Deneke. "Object-Based Metrics for Forecast Verification of Convective Development with Geostationary Satellite Data." Monthly Weather Review 145, no. 8 (August 2017): 3161–78. http://dx.doi.org/10.1175/mwr-d-16-0480.1.

Full text
Abstract:
Object-based metrics are adapted and applied to geostationary satellite observations with the goal of evaluating cloud forecasts in convective situations. Forecasts of the convection-permitting German-focused Consortium for Small-Scale Modeling (COSMO-DE) numerical model are transformed into synthetic observations using the RTTOV radiative transfer model and contrasted with the corresponding real observations. Threshold-based segmentation techniques are applied to the fields for object identification. The statistical properties of the traditional measures, cold cloud cover and average brightness temperature amplitude, are contrasted with object-based metrics of spatial aggregation and object structure. Based on 59 case days from the summer half-years between 2012 and 2014, a variance decomposition technique is applied to the time series of the metrics to identify deficits in day-to-day, diurnal, and weather-regime-related variability of cold cloud characteristics in the forecasts. Furthermore, sensitivities of the considered metrics are discussed, which result from uncertainties in the satellite forward operator and from the choice of parameters in the object identification techniques.
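A minimal sketch of threshold-based object identification of the kind described, on a synthetic brightness-temperature field; the 260 K threshold and the field's statistics are illustrative assumptions.

```python
# Threshold segmentation and simple cloud-object metrics via
# scipy.ndimage on a synthetic brightness-temperature field.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
field = ndimage.gaussian_filter(rng.standard_normal((128, 128)), sigma=6)
bt = 270.0 + 12.0 * field / field.std()       # brightness temperatures (K)

threshold = 260.0                             # cold-cloud threshold
mask = bt < threshold
labels, n_objects = ndimage.label(mask)       # connected cloud objects

cold_cloud_cover = mask.mean()                # traditional scalar measure
sizes = ndimage.sum(mask, labels, index=range(1, n_objects + 1))
print(f"cover={cold_cloud_cover:.3f}, objects={n_objects}, "
      f"mean object size={sizes.mean():.1f} px")
```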
APA, Harvard, Vancouver, ISO, and other styles
41

Zhou, Jianguo, Xuechao Yu, and Xiaolei Yuan. "Predicting the Carbon Price Sequence in the Shenzhen Emissions Exchange Using a Multiscale Ensemble Forecasting Model Based on Ensemble Empirical Mode Decomposition." Energies 11, no. 7 (July 21, 2018): 1907. http://dx.doi.org/10.3390/en11071907.

Full text
Abstract:
Accurately predicting the carbon price sequence is important and necessary for promoting the development of China’s national carbon trading market. In this paper, a multiscale ensemble forecasting model based on ensemble empirical mode decomposition (EEMD-ADD) is proposed to predict the carbon price sequence. First, ensemble empirical mode decomposition (EEMD) is applied to decompose a carbon price sequence, SZA2013, into several intrinsic mode functions (IMFs) and one residual. Second, the IMFs and the residual are restructured via a fine-to-coarse reconstruction algorithm to generate three stationary and regular frequency components: a high-frequency component, a low-frequency component, and a trend component. The fluctuation of each component can effectively reveal the factors that influence market operation. Third, an extreme learning machine (ELM) is applied to forecast the trend component, a support vector machine (SVM) is applied to forecast the low-frequency component, and the high-frequency component is predicted via PSO-ELM, an extreme learning machine whose input weights and bias thresholds are optimized by particle swarm optimization. The predicted values are then combined to form a final predicted value. Finally, using the relevant error-type and trend-type performance indexes, the proposed multiscale ensemble forecasting model is shown to be more robust and accurate than the single-format models. Three additional emission allowances from the Shenzhen Emissions Exchange are used to validate the model. The empirical results indicate that the established model is effective, efficient, and practical in terms of its statistical measures and prediction performance.
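A hedged sketch of the decomposition-and-reconstruction front end: EEMD (here via the PyEMD package, an assumption, since the abstract does not name an implementation) followed by a fine-to-coarse reconstruction. The one-sample t-test used to locate the high/low-frequency split is one common variant of the rule, not necessarily the authors' exact criterion; the series is synthetic.

```python
# EEMD decomposition plus fine-to-coarse reconstruction into
# high-frequency, low-frequency, and trend components.
# Assumes the PyEMD package (pip install EMD-signal).
import numpy as np
from scipy import stats
from PyEMD import EEMD

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 512)
price = 30 + 5 * t + np.sin(40 * np.pi * t) + rng.normal(0, 0.3, 512)

imfs = EEMD().eemd(price)            # rows: IMFs; last row ~ residual/trend

# Fine-to-coarse: cumulatively add IMFs from the finest scale and stop
# when the partial sum's mean departs significantly from zero.
split = len(imfs) - 1
for k in range(1, len(imfs)):
    if stats.ttest_1samp(imfs[:k].sum(axis=0), 0.0).pvalue < 0.05:
        split = k
        break

high = imfs[:split].sum(axis=0)      # high-frequency component
low = imfs[split:-1].sum(axis=0)     # low-frequency component
trend = imfs[-1]                     # trend component
print(split, round(high.std(), 3), round(low.std(), 3), round(trend[-1], 3))
```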
APA, Harvard, Vancouver, ISO, and other styles
42

Bear, Glenn W., Haydar J. Al‐Shukri, and Albert J. Rudman. "Linear inversion of gravity data for 3-D density distributions." GEOPHYSICS 60, no. 5 (September 1995): 1354–64. http://dx.doi.org/10.1190/1.1443871.

Full text
Abstract:
We have developed an improved Levenberg–Marquardt technique to rapidly invert Bouguer gravity data for a 3-D density distribution as a source of the observed field. This technique is designed to replace tedious forward modeling with an automatic solver that determines density models constrained by geologic information supplied by the user. Where such information is not available, objective models are generated. The technique estimates the density distribution within the source volume using a least-squares inverse solution that is obtained iteratively by singular value decomposition, using orthogonal decomposition of matrices with sequential Householder transformations. The source volume is subdivided into a series of right rectangular prisms of specified size but of unknown density. This discretization allows the construction of a system of linear equations relating the observed gravity field to the unknown density distribution. Convergence of the solution to the system is tightly controlled by a damping parameter, which may be varied at each iteration. The associated algorithm generates statistical measures of solution quality not available with most forward methods. Along with the ability to handle large data sets within reasonable time constraints, the advantages of this approach are: (1) the ease with which pre-existing geological information can be included to constrain the solution, (2) its minimization of subjective user input, (3) the avoidance of difficulties encountered during wavenumber domain transformations, and (4) the objective nature of the solution. Application to a gravity data set from Hamilton County, Indiana, has yielded a geologically reasonable result that agrees with published models derived from interpretation of gravity, magnetic, seismic, and drilling data.
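The central linear step, a damped least-squares solution via singular value decomposition, can be sketched compactly. The kernel matrix below is a generic ill-conditioned stand-in, not a real gravity Green's function, and the damping value is illustrative.

```python
# Damped least squares via SVD: solve A x = b with a Marquardt-style
# damping that suppresses small singular values.
import numpy as np

rng = np.random.default_rng(5)
m, n = 60, 40
A = rng.normal(size=(m, n)) @ np.diag(np.logspace(0, -6, n))   # ill-posed
x_true = rng.normal(size=n)                    # "density" model
b = A @ x_true + rng.normal(0, 1e-4, m)        # observed field + noise

U, s, Vt = np.linalg.svd(A, full_matrices=False)
lam = 1e-3                                     # damping parameter
x_hat = Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ b))

print("data misfit:", np.linalg.norm(A @ x_hat - b))
print("effective rank:", float((s**2 / (s**2 + lam**2)).sum()))
```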
APA, Harvard, Vancouver, ISO, and other styles
43

Zak, D., H. Reuter, J. Augustin, T. Shatwell, M. Barth, J. Gelbrecht, and R. J. McInnes. "Changes of the CO2 and CH4 production potential of rewetted fens in the perspective of temporal vegetation shifts." Biogeosciences 12, no. 8 (April 24, 2015): 2455–68. http://dx.doi.org/10.5194/bg-12-2455-2015.

Full text
Abstract:
Abstract. Rewetting of long-term drained fens often results in the formation of eutrophic shallow lakes with an average water depth of less than 1 m. This is accompanied by a fast vegetation shift from cultivated grasses via submerged hydrophytes to helophytes. As a result of rapid plant dying and decomposition, these systems are highly dynamic wetlands characterised by a high mobilisation of nutrients and elevated emissions of CO2 and CH4. However, the impact of specific plant species on these phenomena is not clear. Therefore we investigated the CO2 and CH4 production due to the subaqueous decomposition of shoot biomass of five selected plant species which represent different rewetting stages (Phalaris arundinacea, Ceratophyllum demersum, Typha latifolia, Phragmites australis and Carex riparia) during a 154-day mesocosm study. Besides continuous gas flux measurements, we performed bulk chemical analysis of plant tissue, including carbon, nitrogen, phosphorus and plant polymer dynamics. Plant-specific mass losses after 154 days ranged from 25% (P. australis) to 64% (C. demersum). Substantial differences were found for CH4 production, with the highest values from decomposing C. demersum (0.4 g CH4 kg−1 dry mass day−1) being about 70 times higher than CH4 production from C. riparia. Thus, we found a strong divergence between mass loss of the litter and methane production during decomposition. If C. demersum, a hydrophyte, is included in the statistical analysis, nutrient contents alone (nitrogen and phosphorus) explain the varying greenhouse gas production of the different plant species, while lignin and polyphenols demonstrate no significant impact at all. Taking data on annual biomass production as an important carbon source for methanogens into account, high CH4 emissions can be expected to last several decades as long as inundated and nutrient-rich conditions prevail. Different restoration measures like water level control, biomass extraction and top soil removal are discussed in the context of mitigation of CH4 emissions from rewetted fens.
APA, Harvard, Vancouver, ISO, and other styles
44

Zak, D., H. Reuter, J. Augustin, T. Shatwell, M. Barth, J. Gelbrecht, and R. J. McInnes. "Changes of the CO2 and CH4 production potential of rewetted fens in the perspective of temporal vegetation shifts." Biogeosciences Discussions 11, no. 10 (October 10, 2014): 14453–88. http://dx.doi.org/10.5194/bgd-11-14453-2014.

Full text
Abstract:
Abstract. Rewetting of long-term drained fens often results in the formation of eutrophic shallow lakes with an average water depth of less than 1 m. This is accompanied by a fast vegetation shift from cultivated grasses via submerged hydrophytes to helophytes. As a result of rapid plant dying and decomposition, these systems are highly dynamic wetlands characterised by a high mobilisation of nutrients and elevated emissions of CO2 and CH4. However, the impact of specific plant species on these phenomena is not clear. Therefore we investigated the CO2 and CH4 production due to the subaqueous decomposition of shoot biomass of five selected plant species which represent different rewetting stages (Phalaris arundinacea, Ceratophyllum demersum, Typha latifolia, Phragmites australis, and Carex riparia) during a 154-day mesocosm study. Besides continuous gas flux measurements, we performed bulk chemical analysis of plant tissue, including carbon, nitrogen, phosphorus, and plant polymer dynamics. Plant-specific mass losses after 154 days ranged from 25% (P. australis) to 64% (C. demersum). Substantial differences were found for CH4 production, with the highest values from decomposing C. demersum (0.4 g CH4 kg−1 dry mass day−1) being about 70 times higher than CH4 production from C. riparia. Thus, we found a strong divergence between mass loss of the litter and methane production during decomposition. If C. demersum, a hydrophyte, is included in the statistical analysis, nutrient contents alone (nitrogen and phosphorus) explain the varying GHG production of the different plant species, while lignin and polyphenols demonstrate no significant impact at all. Taking data on annual biomass production as an important carbon source for methanogens into account, high CH4 emissions can be expected to last several decades as long as inundated and nutrient-rich conditions prevail. Different restoration measures like water level control, biomass extraction and top soil removal are discussed in the context of mitigation of CH4 emissions from rewetted fens.
APA, Harvard, Vancouver, ISO, and other styles
45

Mirzaei, Mohammad, Ali R. Mohebalhojeh, Christoph Zülicke, and Riwal Plougonven. "On the Quantification of Imbalance and Inertia–Gravity Waves Generated in Numerical Simulations of Moist Baroclinic Waves Using the WRF Model." Journal of the Atmospheric Sciences 74, no. 12 (December 1, 2017): 4241–63. http://dx.doi.org/10.1175/jas-d-16-0366.1.

Full text
Abstract:
Abstract Quantification of inertia–gravity waves (IGWs) generated by upper-level jet–surface front systems and their parameterization in global models of the atmosphere rely on suitable methods to estimate the strength of IGWs. A harmonic divergence analysis (HDA) that has previously been employed for quantification of IGWs combines wave properties from linear dynamics with a sophisticated statistical analysis to provide such estimates. A question of fundamental importance that arises is how the measures of IGW activity provided by the HDA are related to the measures coming from wave–vortex decomposition (WVD) methods. The question is addressed by employing the nonlinear balance relations of the first-order δ–γ, the Bolin–Charney, and the first- to third-order Rossby number expansions to carry out WVD. The global kinetic energy of IGWs given by the HDA and WVD is compared in numerical simulations of moist baroclinic waves by the Weather Research and Forecasting (WRF) Model in a channel on the f plane. The estimates of the HDA are found to be 2–3 times smaller than those of the optimal WVD. This is in part due to the absence of a well-defined scale separation between the waves and vortical flows, with the IGW estimates by the HDA capturing only the dominant wave packets and with limited scales. It is also shown that the difference between the HDA and WVD estimates is related to the width of the IGW spectrum.
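As background for what a wave-vortex decomposition does at its simplest, the sketch below performs a spectral Helmholtz split of a doubly periodic velocity field into divergent and rotational parts; the paper's nonlinear balance relations are considerably more elaborate, so this illustrates only the underlying projection.

```python
# Spectral Helmholtz decomposition of a doubly periodic 2-D velocity
# field: project onto the divergent part via k (k . u_hat) / |k|^2.
import numpy as np

rng = np.random.default_rng(10)
n = 64
u = rng.standard_normal((n, n))
v = rng.standard_normal((n, n))

kx = np.fft.fftfreq(n)[None, :]
ky = np.fft.fftfreq(n)[:, None]
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                                # avoid divide-by-zero at k = 0

uh, vh = np.fft.fft2(u), np.fft.fft2(v)
proj = (kx * uh + ky * vh) / k2               # k . u_hat / |k|^2
u_div = np.real(np.fft.ifft2(kx * proj))      # divergent component
v_div = np.real(np.fft.ifft2(ky * proj))
u_rot, v_rot = u - u_div, v - v_div           # rotational remainder

print("KE divergent:", np.mean(u_div**2 + v_div**2))
print("KE rotational:", np.mean(u_rot**2 + v_rot**2))
```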
APA, Harvard, Vancouver, ISO, and other styles
46

Gaata, Methaq Talib. "Robust Watermarking Scheme for GIS Vector Maps." Ibn AL- Haitham Journal For Pure and Applied Science 31, no. 1 (May 10, 2018): 277. http://dx.doi.org/10.30526/31.1.1835.

Full text
Abstract:
With the fast progress of information technology and computer networks, it has become very easy to reproduce and share geospatial data due to its digital form. The usage of geospatial data therefore suffers from various problems such as data authentication, proof of ownership, and illegal copying. These problems can represent a big challenge to future uses of geospatial data. This paper introduces a new watermarking scheme to ensure copyright protection of digital vector maps. The main idea of the proposed scheme is to transform the digital map to the frequency domain using the Singular Value Decomposition (SVD) in order to determine suitable areas to insert the watermark data. The digital map is separated into isolated parts. Watermark data are embedded within the nominated magnitudes in each part when they satisfy definite criteria. The efficiency of the proposed watermarking scheme is assessed with statistical measures based on two factors: fidelity and robustness. Experimental results demonstrate that the proposed watermarking scheme represents an ideal trade-off between the amount of distortion and robustness. The proposed scheme also shows robust resistance to many kinds of attacks.
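A heavily hedged sketch of SVD-domain embedding: the largest singular value of a data block is quantized to carry one watermark bit (quantization index modulation). The block contents, step size, and embedding rule are illustrative; the paper's criteria for selecting vector-map parts and magnitudes differ.

```python
# SVD-domain watermark embedding via quantization index modulation
# of the largest singular value of a block.
import numpy as np

rng = np.random.default_rng(6)
data = rng.normal(100, 10, (8, 8))         # stand-in for one map part
bit, step = 1, 2.0                         # watermark bit and QIM step

U, s, Vt = np.linalg.svd(data)
q = np.floor(s[0] / step)
s[0] = (q + (0.25 if bit == 0 else 0.75)) * step    # embed the bit
marked = U @ np.diag(s) @ Vt

# Extraction: read the fractional position of the singular value.
s_m = np.linalg.svd(marked, compute_uv=False)
recovered = int((s_m[0] / step) % 1.0 > 0.5)
print("embedded:", bit, "recovered:", recovered)
```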
APA, Harvard, Vancouver, ISO, and other styles
47

Svabova, Lucia, Eva Nahalkova Tesarova, Marek Durica, and Lenka Strakova. "Evaluation of the impacts of the COVID-19 pandemic on the development of the unemployment rate in Slovakia: counterfactual before-after comparison." Equilibrium 16, no. 2 (June 30, 2021): 261–84. http://dx.doi.org/10.24136/eq.2021.010.

Full text
Abstract:
Research background: The COVID-19 pandemic, which hit the world in the first quarter of 2020, has impacted almost every area of people's lives. Many states have introduced varying degrees of measures to prevent its spread. Most of these measures were, or still are, aimed at reducing or completely stopping the operation of shops and services, or in some cases, also the large manufacturing companies. However, as many companies have failed to cope with these restrictions, unemployment has risen in almost all EU countries. A similar situation was also observed in Slovakia, where the mentioned measures also had a significant impact on unemployment. Purpose of the article: In this study, we deal with the quantification of the impact of a pandemic, or more precisely, anti-pandemic measures, on the development of the registered unemployment rate in Slovakia. Methods: This quantification is based on the counterfactual method of before-after comparison, which is one of the most widely used methods in the field of impact assessments and brings very accurate results, based on real data. In the analysis, we use officially published data on the unemployment rate in Slovakia during the years 2013–2020 on a monthly basis. Such a long time series, using statistical methods of its decomposition and modelling of its trend, allows predicting the development of the unemployment rate in Slovakia, assuming a counterfactual situation of no pandemic, and comparing this development with the actual situation that occurred during 2020. Findings & Value added: The study results indicate an increase in the unemployment rate in Slovakia during 2020 by 2–3% compared to the trend of its development, which would have occurred without a pandemic. Given the counterfactual method used, this difference can be described as the impact of the COVID-19 pandemic. The results of the study can be used in practice in the design and implementation of measures introduced to mitigate the impacts of the pandemic on unemployment and, in the long-term perspective, also to eliminate these effects as much as possible. It can also be used as a theoretical tool in conducting impact assessments, which have so far been carried out very rarely in Slovakia.
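The before-after counterfactual logic can be sketched in a few lines: fit a trend-plus-seasonality model on the pre-pandemic window, project it through 2020, and read the gap to observed values as the estimated impact. The series below is synthetic, and Holt-Winters smoothing is an assumption standing in for whatever trend/decomposition model the authors used.

```python
# Counterfactual before-after comparison on synthetic monthly
# unemployment rates: estimate on 2013-2019, project 2020, and take
# the difference to actual 2020 values as the estimated impact.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(7)
idx = pd.date_range("2013-01-01", periods=96, freq="MS")   # 2013-2020
season = np.tile([0.4, 0.3, 0.1, -0.1, -0.3, -0.4,
                  -0.3, -0.2, 0.0, 0.1, 0.2, 0.2], 8)
rate = 13 - 0.08 * np.arange(96) + season + rng.normal(0, 0.1, 96)
rate[-10:] += np.linspace(0.5, 2.5, 10)     # pandemic shock from March 2020
y = pd.Series(rate, index=idx)

pre = y.loc[:"2019-12"]                     # pre-pandemic estimation window
model = ExponentialSmoothing(pre, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
counterfactual = model.forecast(12)         # no-pandemic projection
impact = y.loc["2020"] - counterfactual     # estimated pandemic effect
print(impact.round(2))
```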
APA, Harvard, Vancouver, ISO, and other styles
48

Hu, Meng, and Hualou Liang. "Noise-Assisted Instantaneous Coherence Analysis of Brain Connectivity." Computational Intelligence and Neuroscience 2012 (2012): 1–12. http://dx.doi.org/10.1155/2012/275073.

Full text
Abstract:
Characterizing brain connectivity between neural signals is key to understanding brain function. Current measures such as coherence rely heavily on the Fourier or wavelet transform, which inevitably assumes signal stationarity and places severe limits on time-frequency resolution. Here we addressed these issues by introducing a noise-assisted instantaneous coherence (NAIC) measure based on multivariate empirical mode decomposition (MEMD) coupled with the Hilbert transform to achieve a high-resolution time-frequency representation of neural coherence. In our method, the fully data-driven MEMD, together with the Hilbert transform, is first employed to provide time-frequency power spectra for neural data. Such power spectra are typically sparse and of high resolution, that is, there usually exist many zero values, which result in numerical problems for directly computing coherence. Hence, we propose to add random noise onto the spectra, making coherence calculation feasible. Furthermore, a statistical randomization procedure is designed to cancel out the effect of the added noise. Computer simulations are first performed to verify the effectiveness of NAIC. Local field potentials collected from the visual cortex of a macaque monkey performing a generalized flash suppression task are then used to demonstrate the usefulness of our NAIC method in providing a high-resolution time-frequency coherence measure for connectivity analysis of neural data.
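A simplified, heavily hedged sketch of the noise-assisted idea: analytic signals from the Hilbert transform, a small complex noise added to stabilize sparse spectra, and a sliding-window instantaneous coherence. The full method rests on MEMD, which requires a dedicated package and is omitted here; the window length and noise level are illustrative.

```python
# Noise-assisted, sliding-window instantaneous coherence between two
# synthetic signals, using Hilbert analytic signals (scipy).
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(8)
t = np.arange(2048) / 1000.0                       # 1 kHz sampling
x = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 20 * t + 0.8) + 0.5 * rng.standard_normal(t.size)

zx, zy = hilbert(x), hilbert(y)                    # analytic signals
eps = 1e-3 * x.std()                               # noise-assistance level
zx = zx + eps * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))
zy = zy + eps * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

win = 100                                          # sliding window (samples)
coh = np.empty(t.size - win)
for i in range(coh.size):
    a, b = zx[i:i + win], zy[i:i + win]
    coh[i] = abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))

print("mean instantaneous coherence:", round(float(coh.mean()), 3))
```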
APA, Harvard, Vancouver, ISO, and other styles
49

Ma, Shu-Jiao, Zhuo-Ming Ren, Chun-Ming Ye, Qiang Guo, and Jian-Guo Liu. "Node influence identification via resource allocation dynamics." International Journal of Modern Physics C 25, no. 11 (October 15, 2014): 1450065. http://dx.doi.org/10.1142/s012918311450065x.

Full text
Abstract:
Identifying node influence in complex networks is an important task for optimally using the network structure and ensuring more efficient information spreading. In this paper, by taking into account resource allocation dynamics (RAD) and the k-shell decomposition method, we present an improved method, namely RAD, to generate a ranking list that evaluates node influence. First, comparing with the epidemic process results for four real networks, the RAD method identifies node influence more accurately than rankings generated by topology-based measures including the degree, k-shell, closeness and betweenness. Then, a growing scale-free network model with tunable assortative coefficient is introduced to analyze the effect of the assortative coefficient on the accuracy of the RAD method. Finally, a positive correlation is found between the RAD method and the k-shell values, which displays an exponential form. This work would be helpful for deeply understanding the node influence of a network.
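A hedged sketch combining k-shell indices with a resource-allocation iteration to rank nodes; the update rule below is one plausible reading of such dynamics, not necessarily the paper's exact scheme, and the test graph is synthetic.

```python
# Rank nodes by iterated resource allocation weighted by k-shell
# indices (networkx), as one plausible reading of RAD-style dynamics.
import networkx as nx
import numpy as np

G = nx.barabasi_albert_graph(200, 3, seed=9)
ks = nx.core_number(G)                       # k-shell index per node

resource = {v: 1.0 for v in G}
for _ in range(20):                          # iterate towards a steady state
    new = {v: 0.0 for v in G}
    for v in G:
        nbrs = list(G[v])
        weights = np.array([ks[u] + 1.0 for u in nbrs], dtype=float)
        weights /= weights.sum()             # allocate by k-shell weight
        for u, w in zip(nbrs, weights):
            new[u] += resource[v] * w
    resource = new

top = sorted(resource, key=resource.get, reverse=True)[:5]
print("top-5 influential nodes:", top)
```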
APA, Harvard, Vancouver, ISO, and other styles
50

Zhao, Jie, Cancan Zhao, Songze Wan, Xiaoli Wang, Lixia Zhou, and Shenglei Fu. "Soil nematode assemblages in an acid soil as affected by lime application." Nematology 17, no. 2 (2015): 179–91. http://dx.doi.org/10.1163/15685411-00002860.

Full text
Abstract:
Liming can affect soil biota through alterations in soil pH and soil structure. Many earlier studies monitored the responses of soil nematode communities to lime application but they did not come to a consensus and did not use indices of soil nematode community and multivariate statistical approaches developed over the past two decades. The present research explored the short-term effects of lime application on soil nematode communities in an acrisol in three Eucalyptus plantations in southern China. Nematodes were sampled from control and lime-treated plots at three periods from October 2011 to February 2012 at 0-10 cm and 10-20 cm soil depths. Repeated measures ANOVA showed that lime application significantly reduced the abundance of herbivores at 10-20 cm depth during the study. Lime application tended to increase the bacterivore index at 0-10 cm depth over time. Principal response curves of soil nematode community structure, in terms of nematode trophic group composition, revealed that the differences between control and lime application treatments increased over time, primarily because of the decline of fungivores in plots treated with lime. The decline in fungivores resulted mainly from declines of Filenchus and Ditylenchus. The results suggest that the fungal-mediated decomposition channel in the soil food web was suppressed by lime application. Our study also demonstrated that the sensitivity of different nematode genera to lime application varied widely, even for genera within the same trophic group. In particular, the abundance of several bacterivorous genera (Prismatolaimus, Plectus, Wilsonema, Protorhabditis, Diploscapter and Heterocephalobus) gradually declined and that of Rhabditonema at 0-10 cm depth gradually increased following lime application during the study; two herbivorous genera, Trophotylenchulus and Helicotylenchus, had opposite responses to lime application at 0-10 cm depth. Integrating univariate statistical approaches with multivariate approaches facilitated the analysis of soil nematode responses to lime application.
APA, Harvard, Vancouver, ISO, and other styles