Journal articles on the topic 'Metric estimation'

Consult the top 50 journal articles for your research on the topic 'Metric estimation.'

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1. Díaz, Álvaro, Javier González-Bayon, and Pablo Sánchez. "Security Estimation in Wireless Sensor Network Simulator." Journal of Circuits, Systems and Computers 25, no. 07 (April 22, 2016): 1650067. http://dx.doi.org/10.1142/s0218126616500675.

Abstract:
Sensor nodes are low-power and low-cost devices with the requirement of a long autonomous lifetime. Therefore, the nodes have to use the available power carefully and avoid expensive computations or radio transmissions. In addition, as some wireless sensor networks (WSNs) process sensitive data, selecting a security protocol is vital. Cryptographic methods used in WSNs should fulfill the constraints of sensor nodes and should be evaluated for their security and power consumption. WSN engineers use several metrics to obtain estimations prior to network deployment. These metrics are usually related to power and execution time estimation. However, security is a feature that cannot be estimated and it is either “active” or “inactive”, with no possibility of introducing intermediate security levels. This lack of flexibility is a disadvantage in real deployments where different operation modes with different security and power specifications are often needed. This paper proposes including a new security estimation metric in a previously proposed framework for WSN simulation and embedded software (SW) performance analysis. This metric is called Security Estimation Metric (SEM) and it provides information about the security encryption used in WSN transmissions. Results show that the metric improves flexibility, granularity and execution time compared to other cryptographic tests.

2. COSTAGLIOLA, G., F. FERRUCCI, G. TORTORA, and G. VITIELLO. "A METRIC FOR THE SIZE ESTIMATION OF OBJECT-ORIENTED GRAPHICAL USER INTERFACES." International Journal of Software Engineering and Knowledge Engineering 10, no. 05 (October 2000): 581–603. http://dx.doi.org/10.1142/s0218194000000304.

Abstract:
In order to achieve quality products with reliable cost and effort estimations, one of the main tasks in planning software project development is size estimation. This is especially true when dealing with interactive applications, which represent critical components in a software project. In this paper, we address the problem of estimating the size of interactive graphical applications developed using the object-oriented methodology. In particular, we define and validate a metric, the Class Point metric, for estimating the size of object-oriented GUIs. The method is based on the idea of quantifying the classes in a program, analogous to the function counting performed by the function point metric. Theoretical validation has proven the consistency of the Class Point metric as a size measure. Empirical validation provides evidence that the Class Point metric is a useful measure of OO software size.

3. Lee, Jae Young, Martin Röösli, and Martina S. Ragettli. "Estimation of Heat-Attributable Mortality Using the Cross-Validated Best Temperature Metric in Switzerland and South Korea." International Journal of Environmental Research and Public Health 18, no. 12 (June 13, 2021): 6413. http://dx.doi.org/10.3390/ijerph18126413.

Abstract:
This study presents a novel method for estimating heat-attributable fractions (HAF) based on the cross-validated best temperature metric. We analyzed the association of eight temperature metrics (mean, maximum, and minimum temperature; maximum temperature during daytime; minimum temperature during nighttime; and mean, maximum, and minimum apparent temperature) with mortality, and performed cross-validation to select the best model in selected cities of Switzerland and South Korea from May to September of 1995–2015. We observed that HAF estimated using different metrics varied by 2.69–4.09% in eight cities of Switzerland and by 0.61–0.90% in six cities of South Korea. Based on the cross-validation method, mean temperature was estimated to be the best metric, and it revealed that the HAF of Switzerland and South Korea were 3.29% and 0.72%, respectively. Furthermore, estimates of HAF were improved by selecting the best city-specific model for each city, that is, 3.34% for Switzerland and 0.78% for South Korea. To the best of our knowledge, this study is the first to quantify the uncertainty in HAF estimation originating from the choice of temperature metric and to present HAF estimation based on the cross-validation method.
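The cross-validated selection of a "best" metric described above can be sketched in a few lines. The synthetic data, the linear exposure-response form, and the fold count below are all illustrative assumptions, not the study's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # synthetic daily observations

# Hypothetical daily temperature metrics; in this toy example the outcome
# is driven by mean temperature, and the other metrics add independent noise.
t_mean = rng.normal(22, 4, n)
metrics = {
    "mean": t_mean,
    "max": t_mean + rng.normal(5, 2, n),
    "min": t_mean - rng.normal(5, 2, n),
}
mortality = 50 + 0.8 * t_mean + rng.normal(0, 2, n)

def cv_mse(x, y, k=5):
    """Mean squared error of a 1-D linear fit under k-fold cross-validation."""
    idx = np.arange(len(x))
    folds = np.array_split(idx, k)
    errs = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        slope, intercept = np.polyfit(x[train], y[train], 1)
        errs.append(np.mean((y[fold] - (slope * x[fold] + intercept)) ** 2))
    return float(np.mean(errs))

# Pick the metric with the lowest out-of-sample error.
scores = {name: cv_mse(x, mortality) for name, x in metrics.items()}
best = min(scores, key=scores.get)
print(best)
```

With the noise structure assumed here, the cross-validation correctly recovers mean temperature as the best predictor, mirroring the study's finding.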

4. Schober, R., and W. H. Gerstacker. "Metric for noncoherent sequence estimation." Electronics Letters 35, no. 25 (1999): 2178. http://dx.doi.org/10.1049/el:19991489.

5. Corl, Vickie. "Developing Classroom Science Skills: Estimating Metric Units: Upper Elementary through High School." Science Activities: Classroom Projects and Curriculum Ideas 27, no. 2 (July 1990): 23–24. http://dx.doi.org/10.1080/00368121.1990.9956720.

6. Kim, Jungho, Sungwon Kang, Jongsun Ahn, and Seonah Lee. "EMSA: Extensibility Metric for Software Architecture." International Journal of Software Engineering and Knowledge Engineering 28, no. 03 (March 2018): 371–405. http://dx.doi.org/10.1142/s0218194018500134.

Abstract:
Software extensibility, the capability of adding new functions to a software system, is established based on software architecture. Therefore, developers need to evaluate the capability when designing software architecture. To support the evaluation, researchers have proposed metrics based on quality models or scenarios. However, those metrics are vague or subjective, depending on specific systems and evaluators. We propose the extensibility metric for software architecture (EMSA), which represents the degree of extensibility of a software system based on its architecture. To reduce the subjectivity of the metric, we first identify a typical task of adding new functions to a software system. Second, we define the metrics based on the characteristics of software architecture and its changes and finally combine them into a single metric. The originality of EMSA comes from defining metrics based on software architecture and extensibility tasks and integrating them into one. Furthermore, we made an effort to translate the degree into effort estimation expressed as person-hours. To evaluate EMSA, we conducted two types of user studies, obtaining measurements in both a laboratory and a real-world project. The results show that the EMSA estimation is reasonably accurate [6.6% MMRE and 100% PRED(25%)], even in a real-world project (93.2% accuracy and 8.5% standard deviation).
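The accuracy figures quoted above, MMRE and PRED(25%), are standard effort-estimation measures: MMRE is the mean magnitude of relative error, and PRED(t) is the fraction of estimates whose relative error is within t. A minimal sketch; the numbers are made up for illustration, not taken from the EMSA study:

```python
import numpy as np

def mmre(actual, estimated):
    """Mean magnitude of relative error: mean of |actual - estimated| / actual."""
    return float(np.mean(np.abs(actual - estimated) / actual))

def pred(actual, estimated, threshold=0.25):
    """Fraction of estimates whose relative error is within `threshold`."""
    mre = np.abs(actual - estimated) / actual
    return float(np.mean(mre <= threshold))

# Hypothetical actual vs estimated effort in person-hours.
actual = np.array([10.0, 20.0, 40.0])
estimated = np.array([11.0, 18.0, 44.0])
print(round(mmre(actual, estimated), 3), pred(actual, estimated))  # → 0.1 1.0
```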

7. BHARKAD, SANGITA D., and MANESH KOKARE. "PERFORMANCE EVALUATION OF DISTANCE METRICS: APPLICATION TO FINGERPRINT RECOGNITION." International Journal of Pattern Recognition and Artificial Intelligence 25, no. 06 (September 2011): 777–806. http://dx.doi.org/10.1142/s0218001411009007.

Abstract:
The distance metric is widely used in similarity estimation, which plays a key role in fingerprint recognition. In this work we present a detailed comparison of 29 distinct distance metrics. Features of fingerprint images are extracted using the Fast Fourier Transform (FFT). Recognition rate, the receiver operating characteristic (ROC) curve, and time and space complexity are used to evaluate each distance metric. To consolidate our conclusions, we used the standard fingerprint database available at Bologna University and the FVC2000 databases. After evaluating the 29 distance metrics, we found that the Sorgel distance metric performs best. The genuine acceptance rate (GAR) of the Sorgel distance metric is observed to be ~5% higher than that of the traditional Euclidean distance metric at low false acceptance rates (FAR). The Sorgel distance gives good GAR at low FAR with moderate computational complexity.
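The Sorgel (Soergel) distance favoured above has a simple closed form for nonnegative feature vectors: the sum of coordinate-wise absolute differences divided by the sum of coordinate-wise maxima. A sketch comparing it with the Euclidean distance on invented feature vectors (FFT features and real fingerprint data are not reproduced here):

```python
import numpy as np

def soergel(x, y):
    """Soergel distance for nonnegative vectors:
    sum(|x_i - y_i|) / sum(max(x_i, y_i))."""
    return float(np.abs(x - y).sum() / np.maximum(x, y).sum())

def euclidean(x, y):
    return float(np.sqrt(((x - y) ** 2).sum()))

# Toy matching: find the gallery vector closest to a probe under each metric.
gallery = np.array([[1.0, 2.0, 3.0], [4.0, 0.5, 1.0], [1.2, 1.8, 3.1]])
probe = np.array([1.1, 1.9, 3.0])

for metric in (soergel, euclidean):
    distances = [metric(probe, g) for g in gallery]
    print(metric.__name__, int(np.argmin(distances)))
```

Note that the Soergel denominator normalises by the magnitude of the vectors, which is one reason it can behave differently from the Euclidean distance on features of varying scale.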

8. Najadat, Hassan, Izzat Alsmadi, and Yazan Shboul. "Predicting Software Projects Cost Estimation Based on Mining Historical Data." ISRN Software Engineering 2012 (April 10, 2012): 1–8. http://dx.doi.org/10.5402/2012/823437.

Abstract:
In this research, a hybrid cost estimation model is proposed to produce a realistic prediction model that takes into consideration software project, product, process, and environmental elements. A cost estimation dataset is built from a large number of open source projects. Those projects are divided into three domains: communication, finance, and game projects. Several data mining techniques are used to classify software projects in terms of their development complexity. Data mining techniques are also used to study the association between different software attributes and their relation to cost estimation. Results showed that finance projects are usually the most complex in terms of code size and some other complexity metrics. Results also showed that game applications have higher values of the SLOCmath, coupling, cyclomatic complexity, and MCDC metrics. Information gain is used to evaluate the ability of object-oriented metrics to predict software complexity. The MCDC metric is shown to be the first metric in deciding a software project's complexity. A software project effort equation is created based on clustering and on all software projects' attributes. According to the software metric weights developed in this project, MCDC, LOC, and cyclomatic complexity are still the dominant traditional metrics affecting our classification process, while number of children and depth of inheritance are the dominant object-oriented metrics at a second level.

9. Wurfel, J. D., M. Padilla, and N. M. Grzywacz. "Metric estimation of visual-deformation motions." Journal of Vision 5, no. 8 (March 16, 2010): 328. http://dx.doi.org/10.1167/5.8.328.

10. Pons-Moll, Gerard, Jonathan Taylor, Jamie Shotton, Aaron Hertzmann, and Andrew Fitzgibbon. "Metric Regression Forests for Correspondence Estimation." International Journal of Computer Vision 113, no. 3 (April 11, 2015): 163–75. http://dx.doi.org/10.1007/s11263-015-0818-9.

11. Mizukami, Naoki, Oldrich Rakovec, Andrew J. Newman, Martyn P. Clark, Andrew W. Wood, Hoshin V. Gupta, and Rohini Kumar. "On the choice of calibration metrics for “high-flow” estimation using hydrologic models." Hydrology and Earth System Sciences 23, no. 6 (June 17, 2019): 2601–14. http://dx.doi.org/10.5194/hess-23-2601-2019.

Abstract:
Calibration is an essential step for improving the accuracy of simulations generated using hydrologic models. A key modeling decision is selecting the performance metric to be optimized. It has been common to use squared error performance metrics, or normalized variants such as Nash–Sutcliffe efficiency (NSE), based on the idea that their squared-error nature will emphasize the estimates of high flows. However, we conclude that NSE-based model calibrations actually result in poor reproduction of high-flow events, such as the annual peak flows that are used for flood frequency estimation. Using three different types of performance metrics, we calibrate two hydrological models at a daily step, the Variable Infiltration Capacity (VIC) model and the mesoscale Hydrologic Model (mHM), and evaluate their ability to simulate high-flow events for 492 basins throughout the contiguous United States. The metrics investigated are (1) NSE, (2) Kling–Gupta efficiency (KGE) and its variants, and (3) annual peak flow bias (APFB), where the latter is an application-specific metric that focuses on annual peak flows. As expected, the APFB metric produces the best annual peak flow estimates; however, performance on other high-flow-related metrics is poor. In contrast, the use of NSE results in annual peak flow estimates that are more than 20 % worse, primarily due to the tendency of NSE to underestimate observed flow variability. On the other hand, the use of KGE results in annual peak flow estimates that are better than from NSE, owing to improved flow time series metrics (mean and variance), with only a slight degradation in performance with respect to other related metrics, particularly when a non-standard weighting of the components of KGE is used. Stochastically generated ensemble simulations based on model residuals show the ability to improve the high-flow metrics, regardless of the deterministic performances. However, we emphasize that improving the fidelity of streamflow dynamics from deterministically calibrated models is still important, as it may improve high-flow metrics (for the right reasons). Overall, this work highlights the need for a deeper understanding of performance metric behavior and design in relation to the desired goals of model calibration.
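Both metric families above have standard closed forms: NSE is one minus the ratio of squared simulation error to observed variance, and KGE (in its 2009 form) is one minus the Euclidean distance of (correlation, variability ratio, bias ratio) from the ideal point (1, 1, 1). The toy flow series below is invented to illustrate the abstract's point that damping flow variability hurts KGE more than NSE:

```python
import numpy as np

def nse(sim, obs):
    """Nash–Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    """Kling–Gupta efficiency: 1 - distance of (r, alpha, beta) from (1, 1, 1)."""
    r = np.corrcoef(sim, obs)[0, 1]   # linear correlation
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# A simulation that damps the peaks: correlation and bias are perfect,
# but variability is reduced to 70 % of the observed.
obs = np.array([1.0, 2.0, 8.0, 3.0, 1.0, 9.0, 2.0])
sim = obs.mean() + 0.7 * (obs - obs.mean())
print(round(nse(sim, obs), 3), round(kge(sim, obs), 3))  # → 0.91 0.7
```

NSE still scores the damped simulation highly (0.91), while KGE's variability term drops the score to 0.7, consistent with the tendency described above.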

12. Koutlis, Christos, Manos Schinas, Symeon Papadopoulos, and Ioannis Kompatsiaris. "GAP: Geometric Aggregation of Popularity Metrics." Information 11, no. 6 (June 15, 2020): 323. http://dx.doi.org/10.3390/info11060323.

Abstract:
Estimating and analyzing the popularity of an entity is an important task for professionals in several areas, e.g., music, social media, and cinema. Furthermore, the ample availability of online data should enhance our insights into the collective consumer behavior. However, effectively modeling popularity and integrating diverse data sources are very challenging problems with no consensus on the optimal approach to tackle them. To this end, we propose a non-linear method for popularity metric aggregation based on geometrical shapes derived from the individual metrics’ values, termed Geometric Aggregation of Popularity metrics (GAP). In this work, we particularly focus on the estimation of artist popularity by aggregating web-based artist popularity metrics. Finally, even though the most natural choice for metric aggregation would be a linear model, our approach leads to stronger rank correlation and non-linear correlation scores compared to linear aggregation schemes. More precisely, our approach outperforms the simple average method in five out of seven evaluation measures.

13. Jahromi, Hamed Z., Declan Delaney, and Andrew Hines. "A Sign of Things to Come: Predicting the Perception of Above-the-Fold Time in Web Browsing." Future Internet 13, no. 2 (February 17, 2021): 50. http://dx.doi.org/10.3390/fi13020050.

Abstract:
Content is a key influencing factor in Web Quality of Experience (QoE) estimation. A web user's satisfaction can be influenced by how long it takes to render and visualize the visible parts of the web page in the browser. This is referred to as the Above-the-Fold (ATF) time. SpeedIndex (SI) has been widely used to estimate the perceived loading speed of ATF content and as a proxy metric for Web QoE estimation. Web application developers have been actively introducing innovative interactive features, such as animated and multimedia content, aiming to capture users' attention and improve the functionality and utility of web applications. However, the literature shows that, for websites with animated content, the ATF time estimated using state-of-the-art metrics may not accurately match the completed ATF time as perceived by users. This study introduces a new metric, Plausibly Complete Time (PCT), that estimates ATF time for a user's perception of websites with and without animations. PCT can be integrated with SI and web QoE models. The accuracy of the proposed metric is evaluated on two publicly available datasets. The proposed metric holds a high positive Spearman's correlation (rs = 0.89) with the perceived ATF reported by users for websites with and without animated content. This study demonstrates that using PCT as a KPI in QoE estimation models can improve the robustness of QoE estimation in comparison to using the state-of-the-art ATF time metric. Furthermore, experimental results showed that estimating SI using PCT improves the robustness of SI for websites with animated content. The PCT estimation allows web application designers to identify where poor design has significantly increased ATF time and to refactor their implementation before it impacts the end-user experience.
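SpeedIndex, which PCT builds on, has a compact definition: the integral over time of (1 - visual completeness). A small sketch under a step-wise approximation of the completeness curve; the timings below are invented:

```python
def speed_index(times_ms, completeness):
    """SpeedIndex: integrate (1 - visual completeness) over time.

    `times_ms` are increasing sample timestamps in milliseconds;
    `completeness` is the fraction of above-the-fold content rendered at
    each timestamp (0..1). Completeness is held constant between samples.
    """
    si = 0.0
    for i in range(len(times_ms) - 1):
        si += (1.0 - completeness[i]) * (times_ms[i + 1] - times_ms[i])
    return si

# Page A paints most content early; page B paints late. Both finish at
# 1000 ms, but the perceived loading speed differs sharply.
print(speed_index([0, 500, 1000], [0.0, 0.8, 1.0]))  # → 600.0
print(speed_index([0, 500, 1000], [0.0, 0.2, 1.0]))  # → 900.0
```

Lower is better: a page that becomes visually complete earlier accumulates less "incomplete" area, which is the intuition PCT refines for animated content.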

14. Spivey, Alvin, and Anthony Vodacek. "Multiscale Fourier Landscape Pattern Indices For Landscape Ecology." Journal of Landscape Ecology 11, no. 2 (November 1, 2018): 5–30. http://dx.doi.org/10.2478/jlecol-2018-0004.

Abstract:
A factor analysis of 67 landscape pattern metrics was performed to quantify the ability of landscape pattern metrics to explain land cover pattern, and to report individual landscape pattern metric values that are statistically independent. This land cover pattern is measured from 7.68 x 7.68 [km] GeoTiff image tiles of the conterminous United States Geological Survey (USGS) 1992 National Land Cover Dataset (NLCD). Using factor analysis to rank independent landscape pattern information, each landscape pattern metric produces the explanatory power of that landscape pattern metric amongst the other 66 landscape pattern metrics; any landscape pattern metrics that report similar values contribute redundant information. The metrics that contribute the most information are Jackson's Contagion statistic (P005), typically contributing to 97 % of the explained variability; the Fourier Metric of Fragmentation (FMF), typically contributing to 65 % of the explained variability; and average LCLU class lacunarity (TLAC), typically contributing to 62 % of the explained variability. Two other Fourier-based landscape pattern metrics we tested, the Least Squares Fourier Transform Fractal Dimension Estimation (LsFT) and the Fourier Metric of Proportion (FMP), contributed 50 % and 12 % to the explained variability, respectively. In addition, the values reported by each of the Fourier metrics are revealed to be relatively independent amongst commonly used landscape pattern metrics and are thus demonstrated to be appropriate for explaining general landscape pattern variability.

15. He, Yujie, Yi Mao, Wenlin Chen, and Yixin Chen. "Nonlinear Metric Learning with Kernel Density Estimation." IEEE Transactions on Knowledge and Data Engineering 27, no. 6 (June 1, 2015): 1602–14. http://dx.doi.org/10.1109/tkde.2014.2384522.

16. Kopec, G. E. "Least-squares font metric estimation from images." IEEE Transactions on Image Processing 2, no. 4 (1993): 510–19. http://dx.doi.org/10.1109/83.242359.

17. Agha, S., V. M. Dwyer, and V. Chouliaras. "Motion estimation with low resolution distortion metric." Electronics Letters 41, no. 12 (2005): 693. http://dx.doi.org/10.1049/el:20050481.

18. Kondo, Masanari, Osamu Mizuno, and Eun-Hye Choi. "Causal-Effect Analysis using Bayesian LiNGAM Comparing with Correlation Analysis in Function Point Metrics and Effort." International Journal of Mathematical, Engineering and Management Sciences 3, no. 2 (June 1, 2018): 90–112. http://dx.doi.org/10.33889/ijmems.2018.3.2-008.

Abstract:
Software effort estimation is a critical task for successful software development, which is necessary for appropriately managing software task assignment and schedule and consequently producing high quality software. Function Point (FP) metrics are commonly used for software effort estimation. To build a good effort estimation model, independent explanatory variables corresponding to FP metrics are required to avoid a multicollinearity problem. For this reason, previous studies have tackled analyzing correlation relationships between FP metrics. However, previous results on the relationships have some inconsistencies. To obtain evidences for such inconsistent results and achieve more effective effort estimation, we propose a novel analysis, which investigates causal-effect relationships between FP metrics and effort. We use an advanced linear non-Gaussian acyclic model called BayesLiNGAM for our causal-effect analysis, and compare the correlation relationships with the causal-effect relationships between FP metrics. In this paper, we report several new findings including the most effective FP metric for effort estimation investigated by our analysis using two datasets.

19. ANISHCHENKO, VADIM S., and SERGEY ASTAKHOV. "RELATIVE KOLMOGOROV ENTROPY OF A CHAOTIC SYSTEM IN THE PRESENCE OF NOISE." International Journal of Bifurcation and Chaos 18, no. 09 (September 2008): 2851–55. http://dx.doi.org/10.1142/s021812740802210x.

Abstract:
The mixing property is characterized by the metric entropy introduced by Kolmogorov for dynamical systems. The Kolmogorov entropy is infinite for a stochastic system. In this work, a relative metric entropy is considered. The relative metric entropy makes it possible to estimate the level of mixing in noisy dynamical systems. An algorithm for calculating the relative metric entropy is described, and examples of metric entropy estimation are provided for certain chaotic systems with various noise intensities. The results are compared to the entropy estimation given by the positive Lyapunov exponents.

20. Hall, Allyson R., Keith S. Jones, Patricia R. DeLucia, and Brian R. Johnson. "Does Metric Feedback Hinder Actions Guided by Cognition?" Proceedings of the Human Factors and Ergonomics Society Annual Meeting 51, no. 25 (October 2007): 1588–92. http://dx.doi.org/10.1177/154193120705102505.

Abstract:
Providing trainees with metric feedback improves their metric distance estimations, but doing so also hinders certain actions. This paper describes a possible explanation for this hindrance. Based on that explanation, it was predicted that metric feedback should not hinder actions that are guided by cognitive processing, i.e., actions guided by the ventral visual system. To investigate this possibility, participants threw underhanded to specific metric distances during Pre and Post-Testing, e.g., throwing an object so that it came to rest 30 feet away. During the intervening Training, participants generated verbal distance estimates. Half received metric feedback. The results indicated that throws improved from Pre to Post-Test, but only when participants received metric feedback during Training. This outcome supports our hypothesis. Moreover, it suggests that trainees must know whether their distance estimation training should be applied to untrained tasks. Doing so may benefit certain tasks. Others, however, may suffer from it.

21. Xu, Min. "An Effective Block-matching Metric for Motion Estimation." Journal of Information and Computational Science 12, no. 14 (September 20, 2015): 5247–58. http://dx.doi.org/10.12733/jics20106596.

22. Mintz, Ofer, and Imran S. Currim. "When does metric use matter less?" European Journal of Marketing 49, no. 11/12 (November 9, 2015): 1809–56. http://dx.doi.org/10.1108/ejm-08-2014-0488.

Abstract:
Purpose – In an effort toward building a contingent theory of the drivers and consequences of managerial metric use in marketing mix decisions, this paper develops a conceptual framework to test whether the relationship between metric use and marketing mix performance is moderated by firm and managerial characteristics.
Design/methodology/approach – Based on reviews of the marketing, finance, management and accounting literatures; homophily, firm resource- and decision-maker-based theories; and 22 managerial interviews, a conceptual model is proposed. It is tested via generalized least squares – seemingly unrelated regression estimation of 1,287 managerial decisions.
Findings – Results suggest that the impact of metric use on marketing mix performance is lower in firms that are more market oriented, larger, and with worse recent business performance, and for marketing and higher-level managers, while organizational involvement has a lesser, more nuanced effect.
Research limitations/implications – While much is written on the importance of metric use for improving performance, this work is a first step toward understanding which settings make this more difficult than others.
Practical implications – The results allow identification of several conditional managerial strategies to improve marketing mix performance based on metric use.
Originality/value – This paper contributes to the metrics literature: prior research has generally focused on the development of metrics or on linking marketing efforts with performance metrics, but has paid little attention to the relationship between managerial metric use and the performance of marketing mix decisions, and has not considered how that relationship is moderated by firm and managerial characteristics.

23. Ducey, Mark J., and Rasmus Astrup. "Rapid, nondestructive estimation of forest understory biomass using a handheld laser rangefinder." Canadian Journal of Forest Research 48, no. 7 (July 2018): 803–8. http://dx.doi.org/10.1139/cjfr-2017-0441.

Abstract:
The forest understory is often associated with rapid rates of carbon and nutrient cycling, but cost-efficient quantification of its biomass remains challenging. We tested a new field technique for understory biomass assessment using an off-the-shelf handheld laser rangefinder. We conducted laser sampling in a pine forest with an understory dominated by invasive woody shrubs, especially Rhamnus frangula L. Laser sampling was conducted using a rangefinder, mounted on a monopod to provide a consistent reference height, and pointed vertically downward. Subsequently, the understory biomass was measured with destructive sampling. A series of metrics derived from the airborne LiDAR literature were evaluated alone and in combination for prediction of understory biomass using best-subsets regression. Resulting fits were good (r2 = 0.85 and 0.84 for the best single metric and best additive metric, respectively, and R2 = 0.93 for the best multivariate model). The results indicate that laser sampling could substantially reduce the need for costly destructive sampling within a double-sampling context.

24. Kong, Liang Liang, Lin Xiang Shi, and Lin Chen. "An Overview of Worst-Case Execution Time Estimation for Embedded Programs." Applied Mechanics and Materials 651-653 (September 2014): 624–29. http://dx.doi.org/10.4028/www.scientific.net/amm.651-653.624.

Abstract:
Most embedded systems are real-time systems, so real-time behavior is an important performance metric for them. Worst-case execution time (WCET) estimation for embedded programs can satisfy the requirements of hard real-time evaluation, so it is widely used in embedded systems evaluation. Based on an extensive survey of progress in WCET estimation worldwide, this paper proposes a new classification of WCET estimation techniques. After introducing the principle of WCET estimation, it demonstrates the various technologies used to estimate WCET and classifies them into two main streams, namely static and dynamic WCET estimation. Finally, it reviews the development of WCET analysis tools.

25. HONARI, SINA, BRIGITTE JAUMARD, and JAMAL BENTAHAR. "UNCERTAINTY-BASED TRUST ESTIMATION IN A MULTI-VALUED TRUST ENVIRONMENT." International Journal on Artificial Intelligence Tools 22, no. 05 (October 2013): 1360003. http://dx.doi.org/10.1142/s0218213013600038.

Abstract:
Despite clients' widespread use of evaluation mediums for online services, there is a need for a trust evaluation tool that provides clients with the degree of trustworthiness of service providers. Such a tool can provide increased familiarity with unknown third-party entities, e.g. service providers, especially when those entities exhibit neither completely trustworthy nor totally untrustworthy behaviour. Indeed, metrics for trust evaluation under uncertainty can come in handy, e.g., for customers interested in evaluating the trustworthiness of an unknown service provider through queries to other customers of unknown reliability. In this research, we propose an evaluation metric to estimate the degree of trustworthiness of an unknown agent, say aD, through information acquired from a group A of agents who have interacted with agent aD. This group of agents is assumed to have an unknown degree of reliability. In order to tackle the uncertainty associated with the trust of this set of unknown agents, we suggest using possibility distributions. Later, we introduce a new certainty metric to measure the degree of agreement in the information reported by the group of agents in A on agent aD. Fusion rules are then used to produce an estimate of agent aD's degree of trustworthiness. To the best of our knowledge, this is the first work that estimates trust, out of empirical data, subject to some uncertainty, in a discrete multi-valued trust domain. Finally, numerical experiments are presented to validate the proposed tools and metrics.

26. SATONAKA, TAKAMI, and KEIICHI UCHIMURA. "FEATURE FUSION AND MODEL SELECTION BASED ON INFORMATION CRITERION." International Journal of Information Acquisition 03, no. 02 (June 2006): 85–99. http://dx.doi.org/10.1142/s0219878906000915.

Abstract:
We describe a k-NN adaptive metric learning procedure combining asymptotic variance estimation and fine adjustment of the metric parameters for face recognition. Metric learning models based on the Mahalanobis distance suffer from degraded performance due to the limited number of available training samples. Feature fusion methods are proposed that assume local distributions of feature patterns for parameter estimation and learning. First, the MDL criterion is formulated to decide on the trade-off between the accuracy and complexity of an asymptotic statistical model. The variance within classes is minimized by using asymptotic variance estimation. Second, optimal metric parameters are derived from minimization of the negative log-likelihood function for the presentation of the synthesized feature patterns. The variance between classes is increased by using the simulated annealing method. We present simulation results using the ORL and UMIST databases.
APA, Harvard, Vancouver, ISO, and other styles
27

Schlueter-Kuck, Kristy L., and John O. Dabiri. "Model parameter estimation using coherent structure colouring." Journal of Fluid Mechanics 861 (December 28, 2018): 886–900. http://dx.doi.org/10.1017/jfm.2018.898.

Full text
Abstract:
Lagrangian data assimilation is a complex problem in oceanic and atmospheric modelling. Tracking drifters in large-scale geophysical flows can involve uncertainty in drifter location, complex inertial effects and other factors which make comparing them to simulated Lagrangian trajectories from numerical models extremely challenging. Temporal and spatial discretisation, factors necessary in modelling large scale flows, also contribute to separation between real and simulated drifter trajectories. The chaotic advection inherent in these turbulent flows tends to separate even closely spaced tracer particles, making error metrics based solely on drifter displacements unsuitable for estimating model parameters. We propose to instead use error in the coherent structure colouring (CSC) field to assess model skill. The CSC field provides a spatial representation of the underlying coherent patterns in the flow, and we show that it is a more robust metric for assessing model accuracy. Through the use of two test cases, one considering spatial uncertainty in particle initialisation, and one examining the influence of stochastic error along a trajectory and temporal discretisation, we show that error in the coherent structure colouring field can be used to accurately determine single or multiple simultaneously unknown model parameters, whereas a conventional error metric based on error in drifter displacement fails. Because the CSC field enhances the difference in error between correct and incorrect model parameters, error minima in model parameter sweeps become more distinct. The effectiveness and robustness of this method for single and multi-parameter estimation in analytical flows suggest that Lagrangian data assimilation for real oceanic and atmospheric models would benefit from a similar approach.
APA, Harvard, Vancouver, ISO, and other styles
28

Dawkins, Bryan A., Trang T. Le, and Brett A. McKinney. "Theoretical properties of distance distributions and novel metrics for nearest-neighbor feature selection." PLOS ONE 16, no. 2 (February 8, 2021): e0246761. http://dx.doi.org/10.1371/journal.pone.0246761.

Full text
Abstract:
The performance of nearest-neighbor feature selection and prediction methods depends on the metric for computing neighborhoods and the distribution properties of the underlying data. Recent work to improve nearest-neighbor feature selection algorithms has focused on new neighborhood estimation methods and distance metrics. However, little attention has been given to the distributional properties of pairwise distances as a function of the metric or data type. Thus, we derive general analytical expressions for the mean and variance of pairwise distances for Lq metrics for normal and uniform random data with p attributes and m instances. The distribution moment formulas and detailed derivations provide a resource for understanding the distance properties for metrics and data types commonly used with nearest-neighbor methods, and the derivations provide the starting point for the following novel results. We use extreme value theory to derive the mean and variance for metrics that are normalized by the range of each attribute (difference of max and min). We derive analytical formulas for a new metric for genetic variants, which are categorical variables that occur in genome-wide association studies (GWAS). The genetic distance distributions account for minor allele frequency and the transition/transversion ratio. We introduce a new metric for resting-state functional MRI data (rs-fMRI) and derive its distance distribution properties. This metric is applicable to correlation-based predictors derived from time-series data. The analytical means and variances are in strong agreement with simulation results. We also use simulations to explore the sensitivity of the expected means and variances in the presence of correlation and interactions in the data. These analytical results and new metrics can be used to inform the optimization of nearest neighbor methods for a broad range of studies, including gene expression, GWAS, and fMRI data.
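The simplest case of the paper's analytical moments, one uniform attribute under an L1 metric (mean 1/3, variance 1/18), can be checked with a small Monte Carlo sketch; the sample size and seed are arbitrary choices, not from the paper.

```python
import itertools
import random
import statistics

def pairwise_l1_moments(m, rng):
    """Empirical mean and variance of |x_i - x_j| over all pairs of
    m uniform(0, 1) samples (one attribute, L1 metric)."""
    xs = [rng.random() for _ in range(m)]
    dists = [abs(a - b) for a, b in itertools.combinations(xs, 2)]
    return statistics.mean(dists), statistics.pvariance(dists)

mean, var = pairwise_l1_moments(1000, random.Random(0))
# Analytical values for one uniform attribute: mean = 1/3, variance = 1/18
print(round(mean, 3), round(var, 3))
```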
APA, Harvard, Vancouver, ISO, and other styles
29

Baíllo, Amparo, and Antonio Cuevas. "On the estimation of a star-shaped set." Advances in Applied Probability 33, no. 4 (December 2001): 717–26. http://dx.doi.org/10.1239/aap/1011994024.

Full text
Abstract:
The estimation of a star-shaped set S from a random sample of points X1,…,Xn ∊ S is considered. We show that S can be consistently approximated (with respect to both the Hausdorff metric and the ‘distance in measure’ between sets) by an estimator ŝn defined as a union of balls centered at the sample points with a common radius which can be chosen in such a way that ŝn is also star-shaped. We also prove that, under some mild conditions, the topological boundary of the estimator ŝn converges, in the Hausdorff sense, to that of S; this has a particular interest when the proposed estimation problem is considered from the point of view of statistical image analysis.
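The union-of-balls estimator can be illustrated with a minimal membership test in the plane; the sample points and radius below are toy assumptions, not drawn from the paper.

```python
import math

def in_estimator(x, sample, radius):
    """x belongs to the union-of-balls set estimator iff it lies within
    `radius` of at least one sample point (here in the plane)."""
    return any(math.dist(x, s) <= radius for s in sample)

# Toy sample from a star-shaped set; the common radius is chosen by hand here,
# whereas the paper chooses it so the estimator is itself star-shaped.
sample = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(in_estimator((0.5, 0.0), sample, 0.6))  # near a sample point -> True
print(in_estimator((3.0, 3.0), sample, 0.6))  # far from every sample -> False
```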
APA, Harvard, Vancouver, ISO, and other styles
30

Jayathunga, Sadeepa, Toshiaki Owari, and Satoshi Tsuyuki. "Digital Aerial Photogrammetry for Uneven-Aged Forest Management: Assessing the Potential to Reconstruct Canopy Structure and Estimate Living Biomass." Remote Sensing 11, no. 3 (February 8, 2019): 338. http://dx.doi.org/10.3390/rs11030338.

Full text
Abstract:
Scientifically robust yet economical and efficient methods are required to gather information about larger areas of uneven-aged forest resources, particularly at the landscape level, to reduce deforestation and forest degradation and to support the sustainable management of forest resources. In this study, we examined the potential of digital aerial photogrammetry (DAP) for assessing uneven-aged forest resources. Specifically, we tested the performance of biomass estimation by varying the conditions of several factors, e.g., image downscaling, vegetation metric extraction (point cloud- and canopy height model (CHM)-derived), modeling method ((simple linear regression (SLR), multiple linear regression (MLR), and random forest (RF)), and season (leaf-on and leaf-off). We built dense point clouds and CHMs using high-resolution aerial imagery collected in leaf-on and leaf-off conditions of an uneven-aged mixed conifer–broadleaf forest. DAP-derived vegetation metrics were then used to predict the dominant height and living biomass (total, conifer, and broadleaf) at the plot level. Our results demonstrated that image downscaling had a negative impact on the accuracy of the dominant height and biomass estimation in leaf-on conditions. In comparison to CHM-derived vegetation metrics, point cloud-derived metrics performed better in dominant height and biomass (total and conifer) estimations. Although the SLR (%RMSE = 21.1) and MLR (%RMSE = 18.1) modeling methods produced acceptable results for total biomass estimations, RF modeling significantly improved the plot-level total biomass estimation accuracy (%RMSE of 12.0 for leaf-on data). Overall, leaf-on DAP performed better in total biomass estimation compared to leaf-off DAP (%RMSE of 15.0 using RF modeling). Nevertheless, conifer biomass estimation accuracy improved when leaf-off data were used (from a %RMSE of 32.1 leaf-on to 23.8 leaf-off using RF modeling). 
Leaf-off DAP had a negative impact on the broadleaf biomass estimation (%RMSE > 35% for SLR, MLR, and RF modeling). Our results demonstrated that the performance of forest biomass estimation for uneven-aged forests varied with statistical representations as well as data sources. Thus, it would be appropriate to explore different statistical approaches (e.g., parametric and nonparametric) and data sources (e.g., different image resolutions, vegetation metrics, and leaf-on and leaf-off data) to inform the interpretation of remotely sensed data for biomass estimation for uneven-aged forest resources.
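The %RMSE figures quoted above follow the usual definition, RMSE relative to the mean of the observed values; a minimal sketch with hypothetical plot-level biomass values (not data from the study):

```python
import math

def percent_rmse(observed, predicted):
    """Relative RMSE: root-mean-square error divided by the mean
    observed value, expressed as a percentage."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100.0 * rmse / (sum(observed) / n)

# Hypothetical plot-level total biomass (tonnes/ha), observed vs. predicted
obs = [100.0, 120.0, 80.0, 110.0]
pred = [95.0, 125.0, 85.0, 100.0]
print(round(percent_rmse(obs, pred), 1))
```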
APA, Harvard, Vancouver, ISO, and other styles
31

McMillan, Audra, and Adam Smith. "When is non-trivial estimation possible for graphons and stochastic block models?‡." Information and Inference: A Journal of the IMA 7, no. 2 (August 23, 2017): 169–81. http://dx.doi.org/10.1093/imaiai/iax010.

Full text
Abstract:
Block graphons (also called stochastic block models) are an important and widely studied class of models for random networks. We provide a lower bound on the accuracy of estimators for block graphons with a large number of blocks. We show that, given only the number $k$ of blocks and an upper bound $\rho$ on the values (connection probabilities) of the graphon, every estimator incurs error ${\it{\Omega}}\left(\min\left(\rho, \sqrt{\frac{\rho k^2}{n^2}}\right)\right)$ in the $\delta_2$ metric with constant probability for at least some graphons. In particular, our bound rules out any non-trivial estimation (that is, with $\delta_2$ error substantially less than $\rho$) when $k\geq n\sqrt{\rho}$. Combined with previous upper and lower bounds, our results characterize, up to logarithmic terms, the accuracy of graphon estimation in the $\delta_2$ metric. A similar lower bound to ours was obtained independently by Klopp et al.
APA, Harvard, Vancouver, ISO, and other styles
32

Wang, Jinfeng, Muye Pang, Peixuan Yu, Biwei Tang, Kui Xiang, and Zhaojie Ju. "Effect of Muscle Fatigue on Surface Electromyography-Based Hand Grasp Force Estimation." Applied Bionics and Biomechanics 2021 (February 15, 2021): 1–12. http://dx.doi.org/10.1155/2021/8817480.

Full text
Abstract:
Surface electromyography- (sEMG-) based hand grasp force estimation achieves promising accuracy in laboratory environments, yet is hardly clinically applicable because of physiological changes and other factors. One critical factor is the muscle fatigue that accompanies daily activities, which degrades the accuracy and reliability of force estimation from sEMG signals. Conventional qualitative measurements of muscle fatigue have contributed only limited progress toward improved force estimation models. This paper proposes an easy-to-implement method to evaluate muscle fatigue quantitatively and demonstrates that the proposed metric can substantially improve the performance of hand grasp force estimation. Specifically, the reduction in the maximal capacity to generate force is used as the metric of muscle fatigue, and a back-propagation neural network (BPNN) is adopted to build a sEMG-hand grasp force estimation model. Experiments are conducted in three cases: (1) pooling training data from all muscle fatigue states with time-domain features only, (2) adding frequency-domain features to express muscle fatigue information based on case 1, and (3) incorporating the quantitative muscle fatigue value as an additional input to the estimation model based on case 1. The results show that the degree of muscle fatigue and task intensity can be easily distinguished, and that the additional muscle fatigue input to the BPNN greatly improves the performance of hand grasp force estimation, reflected by a 6.3797% increase in the R2 (coefficient of determination) value.
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, Xiaotian, Wanchao Ma, Kai Zhang, Shaoyi Li, and Jie Yan. "Complexity Estimation of Infrared Image Sequence for Automatic Target Track." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 37, no. 4 (August 2019): 664–72. http://dx.doi.org/10.1051/jnwpu/20193740664.

Full text
Abstract:
Measuring infrared image complexity is an important task in assessing automatic target recognition and tracking performance. Traditional metrics, such as statistical variance and signal-to-noise ratio, target single-frame infrared images; few studies address the complexity of infrared image sequences. To address this problem, a method for measuring the complexity of infrared image sequences for automatic target recognition and tracking is proposed. Firstly, based on an analysis of the factors affecting target recognition and tracking, the specific ways in which background influences recognition and tracking are clarified, and the method introduces a feature space into the target confusion degree and target occultation degree, respectively. Secondly, feature selection is carried out using the grey relational method and the feature space is optimized, so that the confusion and occultation degrees become more reasonable, and the F1-score statistic is used to establish the relationship between single-frame image complexity and the two indexes. Finally, the complexity of an image sequence is not a linear sum of single-frame complexities: target recognition errors often occur in high-complexity images, while targets in low-complexity images can be correctly recognized. Therefore, the neural network sigmoid function is used to intensify high-complexity weights and weaken low-complexity weights when constructing the sequence complexity. The experimental results show that the proposed metric is more valid than alternatives such as sequence correlation and inter-frame change degree, correlates strongly with the automatic target tracking algorithm, and is an effective complexity evaluation metric for image sequences.
APA, Harvard, Vancouver, ISO, and other styles
34

Servedio, Vito, Paolo Buttà, Dario Mazzilli, Andrea Tacchella, and Luciano Pietronero. "A New and Stable Estimation Method of Country Economic Fitness and Product Complexity." Entropy 20, no. 10 (October 12, 2018): 783. http://dx.doi.org/10.3390/e20100783.

Full text
Abstract:
We present a new metric estimating the fitness of countries and the complexity of products by applying a non-linear, non-homogeneous map to publicly available information on the goods exported by a country. The non-homogeneous terms guarantee both convergence and stability. After a suitable rescaling of the relevant quantities, the non-homogeneous terms are eventually set to zero, so that this new metric is parameter free. The new map almost reproduces the results of the original homogeneous metrics already defined in the literature and allows for an approximate analytic solution in the case of actual binarized matrices based on the Revealed Comparative Advantage (RCA) indicator. This solution is connected with a new quantity describing the neighborhood of nodes in bipartite graphs, here representing the relations between countries and exported products. Moreover, we define a new indicator of country net-efficiency, quantifying how efficiently a country invests in capabilities able to generate innovative, complex, high-quality products. Finally, we demonstrate analytically the local convergence of the algorithm.
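For context, the original homogeneous fitness-complexity map that this paper stabilises with non-homogeneous terms can be sketched as follows; the toy export matrix is an illustrative assumption, and the paper's variant adds non-homogeneous terms to this iteration.

```python
def fitness_complexity(M, iters=50):
    """Homogeneous fitness-complexity iteration on a binary
    country x product matrix M, normalising both vectors to mean 1."""
    nc, npr = len(M), len(M[0])
    F = [1.0] * nc   # country fitness
    Q = [1.0] * npr  # product complexity
    for _ in range(iters):
        # fitness: sum of complexities of exported products
        F_new = [sum(M[c][p] * Q[p] for p in range(npr)) for c in range(nc)]
        # complexity: penalised when low-fitness countries export the product
        Q_new = [1.0 / sum(M[c][p] / F[c] for c in range(nc)) for p in range(npr)]
        sF, sQ = sum(F_new), sum(Q_new)
        F = [f * nc / sF for f in F_new]
        Q = [q * npr / sQ for q in Q_new]
    return F, Q

# Nested (triangular) matrix: country 0 exports everything, product 2 is rare
M = [[1, 1, 1],
     [1, 1, 0],
     [1, 0, 0]]
F, Q = fitness_complexity(M)
print([round(f, 3) for f in F], [round(q, 3) for q in Q])
```

On nested matrices like this one, the diversified country ends up fittest and the rarely exported product most complex, which is the qualitative behaviour the metric is designed to capture.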
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Hao, Jiwen Lu, Jianjiang Feng, and Jie Zhou. "Label-Sensitive Deep Metric Learning for Facial Age Estimation." IEEE Transactions on Information Forensics and Security 13, no. 2 (February 2018): 292–305. http://dx.doi.org/10.1109/tifs.2017.2746062.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Kalhan, Atul, and Alan Rees. "Estimation of cardiovascular risk using 10-year risk metric." Current Opinion in Lipidology 23, no. 4 (August 2012): 402–3. http://dx.doi.org/10.1097/mol.0b013e32835529b6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Fujiwara, Akio, and Hiroshi Nagaoka. "Quantum Fisher metric and estimation for pure state models." Physics Letters A 201, no. 2-3 (May 1995): 119–24. http://dx.doi.org/10.1016/0375-9601(95)00269-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Baíllo, Amparo, and Antonio Cuevas. "On the estimation of a star-shaped set." Advances in Applied Probability 33, no. 04 (December 2001): 717–26. http://dx.doi.org/10.1017/s0001867800011149.

Full text
Abstract:
The estimation of a star-shaped set S from a random sample of points X1,…,Xn ∊ S is considered. We show that S can be consistently approximated (with respect to both the Hausdorff metric and the ‘distance in measure’ between sets) by an estimator ŝn defined as a union of balls centered at the sample points with a common radius which can be chosen in such a way that ŝn is also star-shaped. We also prove that, under some mild conditions, the topological boundary of the estimator ŝn converges, in the Hausdorff sense, to that of S; this has a particular interest when the proposed estimation problem is considered from the point of view of statistical image analysis.
APA, Harvard, Vancouver, ISO, and other styles
39

Petrov, Evgeniy, and Ruslan Salimov. "Quasisymmetric mappings in b-metric spaces." Ukrainian Mathematical Bulletin 18, no. 1 (March 9, 2021): 60–70. http://dx.doi.org/10.37069/1810-3200-2021-18-1-4.

Full text
Abstract:
Considering quasisymmetric mappings between b-metric spaces, we obtain a new estimate for the ratio of the diameters of two subsets that are images of two bounded subsets. This result generalizes the well-known Tukia–Väisälä inequality. The condition under which the image of a b-metric space under a quasisymmetric mapping is also a b-metric space is established. Moreover, the latter question is investigated for additive metric spaces.
APA, Harvard, Vancouver, ISO, and other styles
40

Shim, Yun Taek, Ye Hwon Jeong, Yi-Suk Kim, Nahyun Aum, Seung Gyu Choi, Se-Min Oh, Ji Hwan Park, Dong Yeong Kim, and Hyung Nam Koo. "Estimation of Forensic Sex Based on Three-Dimensional Reconstruction of Skull in Korean: Non-metric Study." Korean Journal of Legal Medicine 45, no. 3 (August 31, 2021): 79–86. http://dx.doi.org/10.7580/kjlm.2021.45.3.79.

Full text
Abstract:
This study performed forensic anthropological sex estimation of Koreans in a non-metric way by reconstructing three-dimensional (3D) computed tomography (CT) images of skulls. A total of 100 skull CT images (51 males, 49 females) were used; all CT images were taken with a slice thickness of 0.75 mm and then reconstructed into 3D images using the MIMICS 23.0 program. Using the reconstructed 3D images, measurements were repeated twice. The sex determination was male if scores of 4 to 5 predominated across the five landmarks, and female if scores of 1 to 2 predominated. The results show that 88 of the 100 cases matched the actual sex. Among the 12 discrepant cases, ten were mismatched with the actual sex even though the first and repeated sex-estimation readouts were the same. Two cases were “unknown,” showing different sexes in the first and repeated estimations. In conclusion, this study indicated that forensic anthropological analysis from 3D images provided accurate point information on skull landmarks, showing as high an accuracy as the sex estimation method using real bones. The ten cases of sex mismatch, excluding the two “unknown” cases, are considered to be errors arising from not accounting for differences between population groups. Further studies should establish a non-metric, specifically Korean method to increase the accuracy and reliability of sex estimation.
APA, Harvard, Vancouver, ISO, and other styles
41

Egunsola, Oluwaseun, Jacques Raubenheimer, and Nicholas Buckley. "Variability in the burden of disease estimates with or without age weighting and discounting: a methodological study." BMJ Open 9, no. 8 (August 2019): e027825. http://dx.doi.org/10.1136/bmjopen-2018-027825.

Full text
Abstract:
Objectives: This study examines the impact of the estimation method on burden of disease estimates. Design: Comparison of methods of estimating disease burden. Setting: Four burden of disease metrics, namely years of potential life lost (YPLL), non-age-weighted years of life lost (YLL) without discounting, and YLL with uniform or non-uniform age weighting and discounting, were used to calculate the burden of selected diseases in three countries: Australia, the USA and South Africa. Participants: Mortality data for all individuals from birth were obtained from the WHO database. Outcomes: The burden of 10 common diseases under the four metrics, and the relative contribution of each disease to the overall national burden under each metric. Results: The burden of disease estimates varied across the four methods. The standardised YPLL estimates were higher than those from the other methods for diseases common among young adults and lower for diseases common among the elderly. In the three countries, discounting decreased the contributions of diseases common among younger adults to the total burden of disease, while the contributions of diseases of the elderly increased. After discounting with age weighting, there were no distinct patterns for diseases of the elderly and of young adults in the three countries. Conclusions: Given the variability in burden of disease estimates across approaches, there should be transparency regarding the type of metric used, and a generally acceptable method incorporating all the relevant social values should be developed.
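The discounting step that separates these metrics can be illustrated with the standard continuous-time formula (1 − e^{−rL})/r for discounted years of life lost; the 3% rate and life expectancies below are illustrative assumptions, and age weighting is omitted.

```python
import math

def yll(deaths, life_expectancy, discount_rate=0.0):
    """Years of life lost for a number of deaths with remaining life
    expectancy L; with discounting, YLL per death = (1 - e^{-rL}) / r."""
    if discount_rate == 0.0:
        return deaths * life_expectancy
    r = discount_rate
    return deaths * (1.0 - math.exp(-r * life_expectancy)) / r

# One young-adult death (40 remaining years) vs one elderly death (10 years)
print(yll(1, 40.0))                 # undiscounted
print(round(yll(1, 40.0, 0.03), 2))  # 3% discounting
print(round(yll(1, 10.0, 0.03), 2))
```

Note how discounting compresses the young-adult burden far more than the elderly one (the 4:1 undiscounted ratio shrinks), which is the pattern the study reports.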
APA, Harvard, Vancouver, ISO, and other styles
42

Robitzsch, Alexander. "About the Equivalence of the Latent D-Scoring Model and the Two-Parameter Logistic Item Response Model." Mathematics 9, no. 13 (June 22, 2021): 1465. http://dx.doi.org/10.3390/math9131465.

Full text
Abstract:
This article shows that the recently proposed latent D-scoring model of Dimitrov is statistically equivalent to the two-parameter logistic item response model. An analytical derivation and a numerical illustration are employed to demonstrate this finding. Hence, estimation techniques for the two-parameter logistic model can be used for estimating the latent D-scoring model. In an empirical example using PISA data, differences in country ranks are investigated when different metrics are used for the latent trait. In the example, the choice of the latent trait metric matters for the ranking of countries. Finally, it is argued that an item response model with bounded latent trait values, like the latent D-scoring model, might have advantages for reporting results in terms of interpretation.
APA, Harvard, Vancouver, ISO, and other styles
43

Aksu, Hakan, and Alparslan Arikan. "Satellite-based estimation of actual evapotranspiration in the Buyuk Menderes Basin, Turkey." Hydrology Research 48, no. 2 (June 3, 2016): 559–70. http://dx.doi.org/10.2166/nh.2016.226.

Full text
Abstract:
Evapotranspiration (ET) is one of the most important components of the hydrological cycle, but it is often the most difficult variable to determine at basin scale. Traditionally, ET is estimated using point-based measurements collected at meteorological stations, but the non-spatial nature of these measurements often leads to significant errors when utilized at watershed scale. In this study, the METRIC (mapping evapotranspiration at high resolution with internalized calibration) approach was evaluated using remotely sensed observations from the moderate resolution imaging spectroradiometer sensor and data from meteorological stations in the lower catchment of the Buyuk Menderes Basin in western Turkey in the form of actual ET maps at daily and monthly intervals between 1st April and 30th September 2010. The energy fluxes and daily ET maps resulting from METRIC were compared with ET data estimated with the help of meteorological parameters. These results were found to be compatible with the ground-based estimations which suggest considerable potential for the METRIC model for estimating spatially distributed actual ET values with little ground-based weather data.
APA, Harvard, Vancouver, ISO, and other styles
44

Islam, Maheen, M. Lutfar Rahman, and Mamun-Or Rashid. "An Efficient Traffic-Load and Link-Interference Aware Routing Metric for Multi Radio Multi Channel Wireless Mesh Networks Based on Link’s Effective Capacity Estimation." Computer and Information Science 7, no. 4 (October 30, 2014): 129. http://dx.doi.org/10.5539/cis.v7n4p129.

Full text
Abstract:
Routing metrics proposed for wireless mesh networks (WMNs) address various concerns such as hop count, packet transmission delay, power consumption, congestion control, load balance and message collision. The expected effective capacity (EEC) routing metric proposed in this paper guarantees the selection of a path providing maximum throughput and minimum delay. A forwarding link along a routing path is characterized by its quality, capacity, traffic demand and the degree of intervention experienced due to inter-flow and intra-flow interference; the bandwidth actually attainable on a link for a flow is therefore affected by these link properties. Our proposed metric computes the attainable bandwidth for a flow over a path, which reflects congestion, node delay and traffic pressure on the desired path. Experiments conducted in ns-2 simulations demonstrate that our proposed routing metric achieves significant improvements in overall network throughput, minimizes end-to-end delay and distributes network load.
APA, Harvard, Vancouver, ISO, and other styles
45

Kilpi, Jorma, Timo Kyntäjä, and Tomi Räty. "Online Percentile Estimation (OPE)." Journal of Signal Processing Systems 93, no. 9 (June 21, 2021): 1085–100. http://dx.doi.org/10.1007/s11265-021-01673-z.

Full text
Abstract:
We describe a control loop over a sequential online algorithm. The control loop either computes the temporal percentiles of any univariate input data sequence or uses the sequential algorithm to estimate them, transforming an input sequence of numbers into an output sequence of histograms. This is performed without storing or sorting all of the observations. The algorithm continuously tests whether the input data are stationary, and reacts to events that do not appear to be stationary. It also indicates how to interpret the histograms, since their information content varies according to whether a histogram was computed or estimated. It works with a parameter-defined, fixed-size small memory and limited CPU power. It can be used for statistical reduction of a numerical data stream and therefore applied to various Internet of Things applications, edge analytics or fog computing. The algorithm has a built-in feasibility metric called applicability, which indicates whether the use of the algorithm is justified for a data source: the algorithm works for arbitrary univariate numerical input, but it is statistically feasible only under some requirements, which are explicitly stated here. We also show the results of a performance study executed on the algorithm with synthetic data.
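A minimal fixed-memory percentile sketch (not the paper's OPE algorithm) illustrates the general idea of trading stored observations for histogram bins; the bounds, bin count and synthetic data below are assumptions for illustration.

```python
import random

class StreamingPercentile:
    """Fixed-memory percentile estimation over a bounded numeric stream:
    keeps counts per equal-width bin instead of storing every observation."""

    def __init__(self, lo, hi, n_bins=100):
        self.lo, self.hi, self.n = lo, hi, n_bins
        self.counts = [0] * n_bins
        self.total = 0

    def add(self, x):
        i = int((x - self.lo) / (self.hi - self.lo) * self.n)
        self.counts[min(self.n - 1, max(0, i))] += 1
        self.total += 1

    def percentile(self, q):
        """Upper edge of the bin containing the q-th percentile."""
        target = q / 100.0 * self.total
        cum = 0
        for i, c in enumerate(self.counts):
            cum += c
            if cum >= target:
                return self.lo + (i + 1) * (self.hi - self.lo) / self.n
        return self.hi

sp = StreamingPercentile(0.0, 1.0)
rng = random.Random(42)
for _ in range(10000):
    sp.add(rng.random())
print(round(sp.percentile(50), 2))  # close to the true median 0.5
```

The estimate is accurate to within one bin width, which is the kind of bounded-memory/bounded-accuracy trade-off the paper's applicability metric is meant to police.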
APA, Harvard, Vancouver, ISO, and other styles
46

Vriens, Marco, Michel Wedel, and Tom Wilms. "Metric Conjoint Segmentation Methods: A Monte Carlo Comparison." Journal of Marketing Research 33, no. 1 (February 1996): 73–85. http://dx.doi.org/10.1177/002224379603300107.

Full text
Abstract:
The authors compare nine metric conjoint segmentation methods. Four methods concern two-stage procedures in which the estimation of conjoint models and the partitioning of the sample are performed separately; in five, the estimation and segmentation stages are integrated. The methods are compared conceptually and empirically in a Monte Carlo study. The empirical comparison pertains to measures that assess parameter recovery, goodness-of-fit, and predictive accuracy. Most of the integrated conjoint segmentation methods outperform the two-stage clustering procedures under the conditions specified, in which a latent class procedure performs best. However, differences in predictive accuracy were small. The effects of degrees of freedom for error and the number of respondents were considerably smaller than those of number of segments, error variance, and within-segment heterogeneity.
APA, Harvard, Vancouver, ISO, and other styles
47

Yeager, Mike, Bill Gregory, Chris Key, and Michael Todd. "On using robust Mahalanobis distance estimations for feature discrimination in a damage detection scenario." Structural Health Monitoring 18, no. 1 (January 9, 2018): 245–53. http://dx.doi.org/10.1177/1475921717748878.

Full text
Abstract:
In this study, a damage detection and localization scenario is presented for a composite laminate with a network of embedded fiber Bragg gratings. Strain time histories from a pseudorandom simulated operational loading are mined for multivariate damage-sensitive feature vectors that are then mapped to the Mahalanobis distance, a covariance-weighted distance metric for discrimination. The experimental setup, data acquisition, and feature extraction are discussed briefly, and special attention is given to the statistical model used for a binary hypothesis test for damage diagnosis. This article focuses on the performance of different estimations of the Mahalanobis distance metric using robust estimates for location and scatter, and these alternative formulations are compared to traditional, less robust estimation methods.
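A one-dimensional analogue of the robust-versus-classical comparison (median/MAD in place of mean/standard deviation, rather than the multivariate estimators used in the study) shows why robust location and scatter keep an outlier from masking itself; the data values are illustrative.

```python
import statistics

def classical_distance(x, data):
    """Standardised distance using the (non-robust) mean and std."""
    mu = statistics.mean(data)
    sd = statistics.pstdev(data)
    return abs(x - mu) / sd

def robust_distance(x, data):
    """Standardised distance using the median and the MAD;
    1.4826 makes the MAD consistent for normal data."""
    med = statistics.median(data)
    mad = statistics.median(abs(v - med) for v in data)
    return abs(x - med) / (1.4826 * mad)

data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 25.0]  # one gross outlier
print(round(classical_distance(25.0, data), 1))
print(round(robust_distance(25.0, data), 1))  # far larger: the outlier no longer masks itself
```

The outlier inflates the classical mean and standard deviation, shrinking its own distance; the robust estimates are unaffected, which is the motivation for using robust Mahalanobis formulations in damage discrimination.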
APA, Harvard, Vancouver, ISO, and other styles
48

Hasnain, Sarah S., Michael D. Escobar, and Brian J. Shuter. "Estimating thermal response metrics for North American freshwater fish using Bayesian phylogenetic regression." Canadian Journal of Fisheries and Aquatic Sciences 75, no. 11 (November 2018): 1878–85. http://dx.doi.org/10.1139/cjfas-2017-0278.

Full text
Abstract:
Physiological performance in fish peaks within a well-defined range of temperatures, which is distinct for each species. Species-specific thermal responses for growth, survival, and reproduction are most commonly quantified directly through laboratory experiment or field observation, with a focus on six specific metrics: optimum growth temperature and final temperature preferendum (growth), upper incipient lethal temperature and critical thermal maximum (survival), and optimum spawning temperature and optimum egg development temperature (reproduction). These values remain unknown for many North American freshwater fish species. In this paper, we present a new statistical method (Bayesian phylogenetic regression) that uses relationships between these metrics and phenetic relatedness to estimate unknown metric values. The reliability of these estimates was compared with those derived from models incorporating taxonomic family and models without any taxonomic information. Overall, incorporating taxonomic family relatedness improved estimation accuracy across all metrics. For Salmonidae and Cyprinidae, estimates derived from Bayesian phylogenetic regression typically had the highest expected reliability. We used our methods to generate 274 estimates of unknown metric values for over 100 North American freshwater fish species.
APA, Harvard, Vancouver, ISO, and other styles
49

Matawale, Chhabi Ram, Saurav Datta, and Siba Sankar Mahapatra. "Leanness metric evaluation platform in fuzzy context." Journal of Modelling in Management 10, no. 2 (July 20, 2015): 238–67. http://dx.doi.org/10.1108/jm2-10-2013-0057.

Full text
Abstract:
Purpose – The purpose of this study is to provide an efficient index system for evaluating leanness extent of the organizational supply chain. In today’s competitive global marketplace, the concept of lean manufacturing has gained vital consciousness to all manufacturing sectors, their supply chains and, hence, a logical measurement index system is indeed required in implementing leanness in practice. Such leanness estimation can help the enterprises to assess their existing leanness level and can compare different industries that are adapting this lean concept. Design/methodology/approach – The present work exhibits an efficient fuzzy-based leanness assessment system using trapezoidal fuzzy numbers set. Findings – The methodology described here has been found fruitful while applying for a particular industry, in India, as a case study. Apart from estimating overall lean performance metric, the model presented here can identify ill-performing areas toward lean achievement. Originality/value – The major contributions of this work have been summarized as follows: development and implementation of an efficient decision-making procedural hierarchy to support leanness extent evaluation; an overall lean performance index evaluation platform has been introduced; concept of generalized trapezoidal fuzzy numbers has been efficiently explored to facilitate this decision-making; and the appraisement index system has been extended with the capability to search ill-performing areas which require future progress.
APA, Harvard, Vancouver, ISO, and other styles
50

Shen, Jianxiu, and Fiona H. Evans. "The Potential of Landsat NDVI Sequences to Explain Wheat Yield Variation in Fields in Western Australia." Remote Sensing 13, no. 11 (June 4, 2021): 2202. http://dx.doi.org/10.3390/rs13112202.

Full text
Abstract:
Long-term maps of within-field crop yield can help farmers understand how yield varies in time and space and optimise crop management. This study investigates the use of Landsat NDVI sequences for estimating wheat yields in fields in Western Australia (WA). By fitting statistical crop growth curves and identifying the timing and intensity of phenological events, the best single integrated NDVI metric in any year was used to estimate yield. The hypotheses were that: (1) yield estimation could be improved by incorporating additional information about sowing date or break of season in statistical curve fitting for phenology detection; and (2) the integrated NDVI metrics derived from phenology detection can estimate yield with greater accuracy than the observed NDVI values at only one or two time points. We tested the hypotheses using one field (~235 ha) in the WA grain belt for training and another field (~143 ha) for testing. Integrated NDVI metrics were obtained using: (1) traditional curve fitting (SPD); (2) curve fitting that incorporates sowing date information (+SD); and (3) curve fitting that incorporates rainfall-based break-of-season information (+BOS). Yield estimation accuracy using integrated NDVI metrics was further compared to the results of a scalable crop yield mapper (SCYM) model. We found that: (1) relationships between integrated NDVI metrics from the three curve fitting models and yield varied from year to year; (2) overall, +SD marginally improved yield estimation (r = 0.81, RMSE = 0.56 tonnes/ha compared to r = 0.80, RMSE = 0.61 tonnes/ha using SPD), but +BOS did not show obvious improvement (r = 0.80, RMSE = 0.60 tonnes/ha); and (3) use of integrated NDVI metrics was more accurate than SCYM (r = 0.70, RMSE = 0.62 tonnes/ha) on average and had higher spatial and yearly consistency with actual yield than the SCYM model. We conclude that sequences of Landsat NDVI have the potential to estimate wheat yield variation in fields in WA, but they need to be combined with additional data sources to distinguish the different relationships between integrated NDVI metrics and yield in different years and locations.
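An integrated NDVI metric of the kind the abstract describes can be sketched as the area under the NDVI time series above a bare-soil baseline, regressed linearly against observed yield. The day-of-year samples, NDVI values, baseline, and yields below are hypothetical, and the phenology-aware curve fitting step (SPD/+SD/+BOS) is omitted; this shows only the integrate-then-regress idea.

```python
import numpy as np

def integrated_ndvi(doy, ndvi, base=0.2):
    """Season-integrated NDVI: trapezoid-rule area of NDVI above a
    bare-soil baseline (hypothetical value), clipped at zero."""
    doy = np.asarray(doy, float)
    above = np.clip(np.asarray(ndvi, float) - base, 0.0, None)
    return np.sum((above[1:] + above[:-1]) / 2.0 * np.diff(doy))

# Hypothetical per-season pixel time series (day of year, NDVI)
# and matching observed wheat yields (tonnes/ha).
seasons = [
    ([120, 150, 180, 210, 240, 270], [0.25, 0.45, 0.70, 0.65, 0.40, 0.25]),
    ([120, 150, 180, 210, 240, 270], [0.22, 0.35, 0.50, 0.48, 0.32, 0.24]),
    ([120, 150, 180, 210, 240, 270], [0.28, 0.55, 0.80, 0.75, 0.50, 0.27]),
]
yields = [2.8, 1.9, 3.4]

x = np.array([integrated_ndvi(d, n) for d, n in seasons])
slope, intercept = np.polyfit(x, yields, 1)   # simple linear yield model
pred = slope * x + intercept
```

The abstract's finding that the NDVI-yield relationship shifts between years corresponds to the slope and intercept here changing when the model is refit on a different season, which is why additional data sources are needed to transfer the model across years and locations.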
APA, Harvard, Vancouver, ISO, and other styles
