Academic literature on the topic 'Standard Error of Indices'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Standard Error of Indices.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Standard Error of Indices"

1

Imran, Muhammad, and Fatima Seher Zaidi. "Errors of Indices in Household Surveys of Punjab Urban through Principal Components." Pakistan Journal of Humanities and Social Sciences (PJHSS) 9, no. 1 (2021): 51–58. https://doi.org/10.52131/pjhss.2021.0901.0112.

Full text
Abstract:
In this research article, household wealth indices are calculated to estimate sampling errors, which gives complete information on the quality and reliability of published data from household surveys. Estimates are calculated based on simple random sampling, which contains sampling errors. Principal components analysis (PCA) is used to estimate the standard errors of wealth indices, applying an orthogonal transformation to develop solid measures of individual economic status. These measures evaluate and explain the living status and economic dissimilarity of urban Punjab. Instrumental variables are used to illuminate the social status of the urban Punjab area of Pakistan through PCA of household surveys. Twenty-five variables are included in this study. Total variance analysis explains the variation of the components. PCA approaches to estimating the standard error of indices of the household survey are compared in this paper. We conclude that estimating errors of indices in household surveys through PCA, compared to direct measures of estimating household wealth indices, is an efficient and reliable method.
APA, Harvard, Vancouver, ISO, and other styles
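The PCA-with-bootstrap idea in this abstract can be sketched in a few lines. Everything below is hypothetical: two synthetic asset indicators stand in for the paper's twenty-five variables, so the leading principal component of the 2×2 covariance matrix has a closed form, and the bootstrap spread of its weights serves as a standard-error estimate.

```python
import math
import random
import statistics

def _cov(xs, ys):
    # Sample covariance (n - 1 denominator).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

def first_pc_weights(xs, ys):
    # Leading eigenvector of the 2x2 sample covariance matrix, closed form.
    a, c = statistics.variance(xs), statistics.variance(ys)
    b = _cov(xs, ys)
    lam = ((a + c) + math.sqrt((a - c) ** 2 + 4 * b * b)) / 2
    vx, vy = b, lam - a
    # Fix the sign so bootstrap replicates are comparable.
    if vx < 0:
        vx, vy = -vx, -vy
    norm = math.hypot(vx, vy) or 1.0
    return vx / norm, vy / norm

def bootstrap_se_of_weights(xs, ys, reps=300, seed=0):
    # Resample households with replacement and recompute the PCA weights;
    # the spread of the replicates estimates their standard errors.
    rng = random.Random(seed)
    n = len(xs)
    w1s, w2s = [], []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        r1, r2 = first_pc_weights([xs[i] for i in idx], [ys[i] for i in idx])
        w1s.append(r1)
        w2s.append(r2)
    return statistics.stdev(w1s), statistics.stdev(w2s)

# Two correlated asset indicators for 200 hypothetical urban households.
rng = random.Random(42)
assets = [rng.gauss(0, 1) for _ in range(200)]
rooms = [0.7 * a + rng.gauss(0, 0.5) for a in assets]
w1, w2 = first_pc_weights(assets, rooms)
se1, se2 = bootstrap_se_of_weights(assets, rooms)
```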
3

Inoue, Yusuke, Hiroyasu Itoh, Anri Waga, Ryosuke Sasa, and Kohei Mitsui. "Radiation Dose Management in Pediatric Brain CT According to Age and Weight as Continuous Variables." Tomography 8, no. 2 (2022): 985–98. http://dx.doi.org/10.3390/tomography8020079.

Full text
Abstract:
The diagnostic reference levels (DRLs) for pediatric brain computed tomography (CT) are provided for groups divided according to age. We investigated the relationships of radiation dose indices (volume CT dose index and dose length product) with age and weight, as continuous variables, in pediatric brain CT. In a retrospective analysis, 980 pediatric brain CT examinations were analyzed. Curve fitting was performed for plots of the CT dose indices versus age and weight, and equations to estimate age- and weight-dependent standard dose indices were derived. Standard dose indices were estimated using the equations, and the errors were calculated. The results showed a biphasic increase in dose indices with increasing age and weight, characterized by a rapid initial and subsequent slow increase. Logarithmic, power, and bilinear functions were well fitted to the plots, allowing estimation of standard dose indices at an arbitrary age or weight. Error analysis suggested that weight was mildly better than age and that the best results were obtained with the bilinear function. Curve fitting of the relationship between CT dose indices and age or weight facilitates the determination of standard dose indices in pediatric brain CT at each facility and is expected to aid the establishment and application of the DRLs.
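The curve-fitting step the abstract describes can be illustrated for its logarithmic candidate function, dose = a + b·ln(weight), which has a closed-form least-squares solution. The weight and dose values below are invented for illustration and are not taken from the study.

```python
import math

def fit_log_curve(weights, doses):
    # Least-squares fit of dose = a + b * ln(weight): ordinary linear
    # regression after substituting u = ln(weight).
    us = [math.log(w) for w in weights]
    n = len(us)
    mu, md = sum(us) / n, sum(doses) / n
    sxy = sum((u - mu) * (d - md) for u, d in zip(us, doses))
    sxx = sum((u - mu) ** 2 for u in us)
    b = sxy / sxx
    a = md - b * mu
    return a, b

def standard_dose(a, b, weight):
    # Age/weight-dependent "standard" dose index at an arbitrary weight.
    return a + b * math.log(weight)

# Hypothetical dose-index values (mGy) rising quickly then slowly with weight (kg),
# mimicking the biphasic shape the paper reports.
weights = [5, 10, 15, 20, 30, 40, 50, 60]
doses = [18, 24, 27, 30, 33, 35, 37, 38]
a, b = fit_log_curve(weights, doses)
```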
4

Lee, King Sun. "The Development and Design of a Laser Detection System for Tire Thickness Measurement." Applied Mechanics and Materials 418 (September 2013): 128–31. http://dx.doi.org/10.4028/www.scientific.net/amm.418.128.

Full text
Abstract:
This system is a self-developed real-time thickness inspection system, including high-precision laser sensors and a mobile platform, for on-line detection of tire rubber skin. The measurement data are used to calculate the standard deviation and process capability indices, and to evaluate measurement capacity. The system is a real-time measurement system in which the obtained measurement data are compared with the standard value and any errors are shown. A technician can adjust the process parameters precisely on-line to improve product quality. The standard deviation of repeatability of the system for height is within +/- 0.0081 mm. The repeatability error of the horizontal sliding rail is within 0.0145 mm, while the measurement error between this system and a coordinate measuring machine is within 0.028 mm.
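As a sketch of the capability calculation the abstract mentions, the snippet below computes the standard process capability indices Cp and Cpk from hypothetical thickness readings against an assumed 10.0 ± 0.1 mm specification; the formulas are the usual ones, but the data and spec limits are not from the paper.

```python
import statistics

def process_capability(samples, lsl, usl):
    # Cp compares the specification width to the process spread (6 sigma);
    # Cpk additionally penalizes a mean that is off-center.
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical rubber-skin thickness readings (mm) against a 10.0 +/- 0.1 mm spec.
readings = [10.01, 9.99, 10.02, 9.98, 10.00, 10.01, 9.99, 10.02, 10.00, 9.98]
cp, cpk = process_capability(readings, 9.9, 10.1)
```

Cpk is never larger than Cp; the two coincide only when the process mean sits exactly at the middle of the specification band.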
5

R. Manjunath, B., J. K. Raju, and M. Rehaman. "Testing of causality relationship between Indian and Australian mutual funds performance: standard vs customized benchmarks." Investment Management and Financial Innovations 17, no. 3 (2020): 231–45. http://dx.doi.org/10.21511/imfi.17(3).2020.18.

Full text
Abstract:
Most Australian domestic investors rely on fund managers; in India this is not the case, as investors engage primarily in direct rather than indirect investment. The study attempts to investigate the causal relationship between the returns of the standard indices, namely the BSE500 and ASX300, and the customized indices, MIMF and MAMF, for India and Australia. The study uses econometric tools and techniques such as the unit root test, vector error correction model, Wald test, Johansen co-integration, and model efficacy assumptions on the historical closing NAVs of the selected mutual fund schemes for the period from April 2008 to March 2018. The econometric investigation using Johansen's co-integration test confirmed the co-integration between the BSE500, ASX300 and the customized indices. Empirical evidence suggests that the Australian customized MAMF index is not Granger-caused by the Indian customized index MIMF, and therefore the MIMF index value cannot be used to predict the future rate of MAMF index returns, and vice versa.
6

Refaat, Eman, and Ali Hadi. "Constructing composite indices." International Journal of Sociology and Social Policy 38, no. 1/2 (2018): 58–86. http://dx.doi.org/10.1108/ijssp-12-2016-0135.

Full text
Abstract:
Purpose – The purpose of this paper is to construct, for the first time, a composite index for Egypt that measures economic and social rights fulfillment (ESRF) based on socioeconomic surveys at the household/individual level.
Design/methodology/approach – The paper highlights some of the statistically debatable issues in constructing composite indices and focuses mainly on six of them: indicator selection; handling missing data; identifying and dealing with outliers; scale of measurement and computing the margin of error; weights assigned to indicators and domains; and the aggregation method. Handling these problematic issues gave rise to a rigorous index.
Findings – The quality of the economic and social rights fulfillment index (ESRFI) is judged by its bootstrap standard error. Based on these margins of error, confidence intervals can be computed and rigorous comparisons can be made across all disaggregation levels of the ESRFI. The results show that the overall index is accurate and representative in measuring ESRF in Egypt. Comparisons between rural and urban indices show that rural areas fare consistently worse than urban areas across all dimensions, especially the Right to Education and Adequate Housing.
Research limitations/implications – The ESRFI is not very current because it is based on the 2010 Egyptian Household Conditions Observatory Survey (EHCOS), the latest published version of the survey with complete variables for the index. When the next EHCOS becomes available, an updated ESRFI can be constructed easily and quickly.
Practical/social implications – The proposed ESRFI could strengthen policy formulation that takes ESRF into account, especially by highlighting the situation in different regions and at different disaggregation levels.
Originality/value – The paper emphasizes the importance of recognizing and handling the six problematic issues that arise when constructing composite indices. It presents the first ESRFI for Egypt and demonstrates the rigor of its construction.
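The bootstrap standard error the paper uses to judge the ESRFI can be sketched generically. The indicator data, the weights, and the simple weighted-average aggregation below are illustrative assumptions, not the paper's actual construction.

```python
import random
import statistics

def composite_index(rows, weights):
    # Weighted average of (already normalized) indicator scores per household,
    # then averaged over households -- a deliberately simple aggregation rule.
    totals = [sum(w * v for w, v in zip(weights, row)) for row in rows]
    return statistics.mean(totals)

def bootstrap_se(rows, weights, reps=1000, seed=1):
    # Resample households with replacement; the standard deviation of the
    # replicated index values is the bootstrap standard error.
    rng = random.Random(seed)
    n = len(rows)
    replicates = []
    for _ in range(reps):
        sample = [rows[rng.randrange(n)] for _ in range(n)]
        replicates.append(composite_index(sample, weights))
    return statistics.stdev(replicates)

# Three normalized indicators (e.g. education, housing, health) for 150
# hypothetical households, with assumed domain weights.
rng = random.Random(7)
data = [[rng.random(), rng.random(), rng.random()] for _ in range(150)]
w = [0.4, 0.3, 0.3]
point = composite_index(data, w)
se = bootstrap_se(data, w)
# A 95% confidence interval then follows as point +/- 1.96 * se.
```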
7

Qu, Guangjun, Bob Slagter, Kevin Sylwester, and Kyle Doiron. "Explaining the standard errors of corruption perception indices." Journal of Comparative Economics 47, no. 4 (2019): 907–20. http://dx.doi.org/10.1016/j.jce.2019.07.003.

Full text
8

Kiwu, Lawrence C., Eleazar C. Nwogu, Chukwudi J. Ogbonna, Hycinth C. Iwu, and Iheanyi S. Iwueze. "On the Effects of Missing Values on Estimates of Trend Parameters and Seasonal Indices in Descriptive Time Series Analysis." Archives of Current Research International 25, no. 4 (2025): 120–31. https://doi.org/10.9734/acri/2025/v25i41142.

Full text
Abstract:
In the literature, when missing values are observed in a series, emphasis has always been on obtaining estimates of the missing values, while little or no attention has been given to assessing the effects of the missing values on parameter estimates. The aim of this study, therefore, is to assess the effect of missing values on the estimates of trend parameters and seasonal indices in descriptive time series analysis when the trend curve is linear and the decomposition model is additive. Estimates of trend parameters and seasonal indices were obtained using descriptive time series methods, while the performances of the estimates in the presence and absence of missing values were assessed using summary statistics (MSE, RMSE and MAE) based on the deviation of the estimates from the actual parameters used in simulation. The results show that estimates of the trend parameters, error mean and standard deviation appear not to be affected when the number of missing values is less than ten. Estimates of seasonal indices also seem to be affected only slightly. For ten or more missing values, the summary statistics are only slightly higher, especially when the missing values are consecutive. Specifically, the study found that the trend parameters, error mean and error standard deviation used in simulation are recovered better than the seasonal indices. It is therefore recommended that, when using descriptive time series methods to obtain estimates of trend parameters and seasonal indices in the presence of missing values, there is no need to obtain estimates of the missing values first.
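The simulation design the abstract describes can be miniaturized as follows. The trend parameters, seasonal indices and noise level are invented; the point is only to show the mechanism, where with fewer than ten randomly missing values the refitted trend parameters stay close to the truth.

```python
import random

def fit_trend(t_vals, y_vals):
    # OLS slope and intercept using only the observed (non-missing) points.
    n = len(t_vals)
    mt, my = sum(t_vals) / n, sum(y_vals) / n
    b = (sum((t - mt) * (y - my) for t, y in zip(t_vals, y_vals))
         / sum((t - mt) ** 2 for t in t_vals))
    a = my - b * mt
    return a, b

# Simulate an additive series y_t = a + b*t + s_t + e_t with 4 seasons.
rng = random.Random(3)
a_true, b_true = 10.0, 0.5
season = [2.0, -1.0, -2.0, 1.0]          # seasonal indices summing to zero
t = list(range(80))
y = [a_true + b_true * ti + season[ti % 4] + rng.gauss(0, 1) for ti in t]

# Drop 8 values at random (fewer than ten) and refit the trend line.
missing = set(rng.sample(t, 8))
t_obs = [ti for ti in t if ti not in missing]
y_obs = [y[ti] for ti in t_obs]
a_hat, b_hat = fit_trend(t_obs, y_obs)
```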
9

Koen, Liebner, Charles Herbst, and William Rae. "Computed Radiography Exposure Indices in Mammography." South African Journal of Radiology 12, no. 2 (2008): 28. http://dx.doi.org/10.4102/sajr.v12i2.565.

Full text
Abstract:
Studies indicate that computed radiography (CR) can lead to increased radiation dose to patients. It is therefore important to relate the exposure indicators provided by CR manufacturers to the radiation dose delivered, so that the dose delivered to patients can be assessed directly from the exposure indicators. The aim of this study was to investigate the performance of an Agfa CR system in order to characterize the dose indicators provided by the system. The imaging plate response was characterized in terms of entrance exposure to the plate and the digital signal indicators generated by the system (SAL – scanning average level, and lgM – logarithmic median) for different beam qualities. Several exposures were performed on a mammography unit, and the digital signal, expressed as SAL and lgM for each image, was correlated with the entrance exposure on a standard ACR phantom. From this, the relationship between the Agfa dose indices (SAL and lgM) and the average glandular dose (AGD) in mammography was established. An equation was derived to calculate the AGD delivered to the patient as a function of the exposure indicator, lgM, and the kV. The results indicated that the measured AGD at 28 kV for a standard breast thickness during routine calibration with the ACR phantom was 1.58 mGy (lgM = 1.99), which is within 1.5% of the value calculated using the derived equation for a standard Perspex thickness of 4.2 cm using the AEC (1.56 mGy). The standard error in using this equation was calculated to be 8.3%.
10

Cabral, Silvia, Rita Fernandes, William Scott Selbie, Vera Moniz-Pereira, and António P. Veloso. "Reliability of a Global Gait Symmetry Index Based on Linear Joint Displacements." Applied Sciences 12, no. 24 (2022): 12558. http://dx.doi.org/10.3390/app122412558.

Full text
Abstract:
Gait symmetry is commonly used as an informal measure to assess functional gait recovery. As other outcome measures used for the evaluation of clinical change over time, gait symmetry indices must be reliable. However, studies assessing the reliability of symmetry indices are scarce and focused on discrete and local indices (i.e., peak joint angle, step length), which fail to assess overall gait symmetry. On the other hand, the repeatability of global symmetry indices (using multiple continuous waveforms) based on joint angles may be hampered by the sensitivity of these variables to marker placement imprecision. The aim of this study is to evaluate the test–retest intra-rater reliability and measurement error of an alternative global symmetry index. Two 3D gait analyses were performed on separate days (a week interval) on twenty-three healthy adults. Reliability and measurement error were assessed by calculating the intraclass correlation coefficient and the 95% limits of agreement, standard error of measurement and smallest detectable change, respectively. The new symmetry index presented acceptable results in terms of reliability (ICC = 0.71, 95% CI 0.33–0.88) and measurement error (95% LOA between −30.2% and 29.1%, SEM = 10.7% and SDC = 29.7%), thus being a more promising tool to assess overall gait symmetry.
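The reliability quantities in this abstract are linked by standard formulas: SEM = SD·sqrt(1 − ICC) and SDC95 = 1.96·sqrt(2)·SEM. A minimal sketch, back-calculating a between-subject SD of about 19.9% from the reported ICC = 0.71 and SEM = 10.7% (that SD is an assumption for illustration, not a figure from the paper):

```python
import math

def sem(sd, icc):
    # Standard error of measurement from the between-subject SD and the ICC.
    return sd * math.sqrt(1 - icc)

def sdc95(sem_value):
    # Smallest detectable change at the 95% level for a test-retest design;
    # sqrt(2) accounts for the two measurement occasions.
    return 1.96 * math.sqrt(2) * sem_value

s = sem(19.9, 0.71)      # close to the reported SEM of 10.7%
change = sdc95(s)        # close to the reported SDC of 29.7%
```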
More sources

Dissertations / Theses on the topic "Standard Error of Indices"

1

Ratzer, Edward Alexander. "Error-correction on non-standard communication channels." Thesis, University of Cambridge, 2004. https://www.repository.cam.ac.uk/handle/1810/237471.

Full text
Abstract:
Many communication systems are poorly modelled by the standard channels assumed in the information theory literature, such as the binary symmetric channel or the additive white Gaussian noise channel. Real systems suffer from additional problems including time-varying noise, cross-talk, synchronization errors and latency constraints. In this thesis, low-density parity-check codes and codes related to them are applied to non-standard channels. First, we look at time-varying noise modelled by a Markov channel. A low-density parity-check code decoder is modified to give an improvement of over 1dB. Secondly, novel codes based on low-density parity-check codes are introduced which produce transmissions with Pr(bit = 1) ≠ Pr(bit = 0). These non-linear codes are shown to be good candidates for multi-user channels with crosstalk, such as optical channels. Thirdly, a channel with synchronization errors is modelled by random uncorrelated insertion or deletion events at unknown positions. Marker codes formed from low-density parity-check codewords with regular markers inserted within them are studied. It is shown that a marker code with iterative decoding has performance close to the bounds on the channel capacity, significantly outperforming other known codes. Finally, coding for a system with latency constraints is studied. For example, if a telemetry system involves a slow channel some error correction is often needed quickly whilst the code should be able to correct remaining errors later. A new code is formed from the intersection of a convolutional code with a high rate low-density parity-check code. The convolutional code has good early decoding performance and the high rate low-density parity-check code efficiently cleans up remaining errors after receiving the entire block. Simulations of the block code show a gain of 1.5dB over a standard NASA code.
2

Durney, Ann Wells. "Truncation and its effect on standard error of correlation coefficients." Thesis, The University of Arizona, 1990. http://hdl.handle.net/10150/277950.

Full text
Abstract:
A Monte Carlo study was conducted to investigate the effect of truncation of score distributions on systematic bias and random error of correlation coefficient distributions. The findings were twofold: Correlation decreases systematically due to increasing truncation; and the standard error of the correlation coefficient, which is a measure of random error, increases due to increasing truncation.
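The Monte Carlo design of this thesis can be reproduced in miniature: draw bivariate-normal score pairs, truncate on one variable, and track both the mean and the spread of the sample correlation. The sample size, replication count and cutoff below are arbitrary choices for the sketch, not the thesis's settings.

```python
import math
import random

def corr(xs, ys):
    # Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def simulate(rho, cutoff, n=200, reps=400, seed=9):
    # Draw bivariate-normal pairs, keep only x >= cutoff (truncation),
    # and return the mean and SD of the sample correlation over replicates.
    rng = random.Random(seed)
    rs = []
    for _ in range(reps):
        pairs = []
        while len(pairs) < n:
            x = rng.gauss(0, 1)
            if x < cutoff:
                continue
            y = rho * x + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
            pairs.append((x, y))
        rs.append(corr([p[0] for p in pairs], [p[1] for p in pairs]))
    mean = sum(rs) / len(rs)
    sd = math.sqrt(sum((r - mean) ** 2 for r in rs) / (len(rs) - 1))
    return mean, sd

full_mean, full_sd = simulate(0.6, cutoff=-10)    # effectively no truncation
trunc_mean, trunc_sd = simulate(0.6, cutoff=0.5)  # keep roughly the top third of x
# Truncation biases r toward zero and inflates its standard error,
# matching the thesis's twofold finding.
```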
3

Irani, Ramin. "Error Detection for DMB Video Streams." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5086.

Full text
Abstract:
The purpose of this thesis is to detect errors in a Digital Multimedia Broadcasting (DMB) transport stream. DMB uses the MPEG-4 standard for encapsulating Packetized Elementary Streams (PES) and the MPEG-2 standard for assembling them into transport stream packets. Much recent research on video stream error detection focuses on decoding parameters related to individual frames, and processing complexity can be a disadvantage of those methods. In this thesis, we investigated syntax errors caused by corruption in the header of the video transport stream, focusing on video streams that cannot be decoded. The proposed model is implemented by filtering video and audio packets in order to find the errors; the filters inspect sources that can affect video stream playback. The output of this method gives the type, location and duration of the errors. The simplicity of the structure is one advantage of this model: it can be implemented with three simple filters for detecting errors and a calculation unit for computing the duration of an error. Fast processing is another benefit of the proposed model.
4

Miura, Tomoaki. "Evaluation and characterization of vegetation indices with error/uncertainty analysis for EOS-MODIS." Diss., The University of Arizona, 2000. http://hdl.handle.net/10150/284157.

Full text
Abstract:
A set of error/uncertainty analyses was performed on several "improved" vegetation indices (VIs) planned for operational use in the Moderate Resolution Imaging Spectroradiometer (MODIS) VI products onboard the Terra (EOS AM-1) and Aqua (EOS PM-1) satellite platforms. The objective was to investigate the performance and accuracy of the satellite-derived VI products under improved sensor characteristics and algorithms. These include the "atmospheric resistant" VIs that incorporate the "blue" band for normalization of aerosol effects, as well as the most widely used index, the normalized difference vegetation index (NDVI). The analyses were conducted to evaluate specifically: (1) the impact of sensor calibration uncertainties on VI accuracies, (2) the capabilities of the atmospheric resistant VIs and various middle-infrared (MIR) derived VIs to minimize smoke aerosol contamination, and (3) the performances of the atmospheric resistant VIs under "residual" aerosol effects resulting from the assumptions in the MODIS aerosol correction algorithm. The results of these studies showed both the advantages and disadvantages of using the atmospheric resistant VIs for operational vegetation monitoring. The atmospheric resistant VIs successfully minimized optically thin aerosol smoke contamination (aerosol optical thickness (AOT) at 0.67 μm < 1.0) but not optically thick smoke (AOT at 0.67 μm > 1.0). On the other hand, their resistance to "residual" aerosol effects was greater when the effects resulted from the correction of an optically thick aerosol atmosphere. The atmospheric resistant VIs did not successfully minimize the residual aerosol effects from an optically thin aerosol atmosphere (AOT at 0.67 μm ≤ ∼0.15), which was caused mainly by the possible wrong choice of aerosol model used for the AOT estimation and correction. The resultant uncertainties of the atmospheric resistant VIs associated with calibration, which were twice as large as that of the NDVI, increased with increasing AOT.
These results suggest that the atmospheric resistant VIs be computed from partially (Rayleigh/O₃) corrected reflectances under normal atmospheric conditions (e.g., visibility > 10 km). Aerosol corrections should only be performed when biomass burning, urban/industrial pollution, and dust storms (larger AOT) are detected.
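The blue-band "atmospheric resistant" index family discussed in the abstract can be illustrated with the enhanced vegetation index (EVI) alongside the NDVI; the EVI coefficients below are its standard published values, while the reflectances are a made-up vegetated pixel.

```python
def ndvi(nir, red):
    # Normalized difference vegetation index.
    return (nir - red) / (nir + red)

def evi(nir, red, blue, g=2.5, c1=6.0, c2=7.5, l=1.0):
    # Enhanced vegetation index: the blue-band term (c2 * blue) provides the
    # aerosol/atmosphere resistance the abstract discusses.
    return g * (nir - red) / (nir + c1 * red - c2 * blue + l)

# Hypothetical surface reflectances for a healthy vegetated pixel.
nir_r, red_r, blue_r = 0.45, 0.08, 0.04
n_val = ndvi(nir_r, red_r)
e_val = evi(nir_r, red_r, blue_r)
```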
5

Cirineo, Tony, and Bob Troublefield. "STANDARD INTEROPERABLE DATALINK SYSTEM, ENGINEERING DEVELOPMENT MODEL." International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/608398.

Full text
Abstract:
International Telemetering Conference Proceedings / October 30–November 02, 1995 / Riviera Hotel, Las Vegas, Nevada.
This paper describes an Engineering Development Model (EDM) for the Standard Interoperable Datalink System (SIDS). This EDM represents an attempt to design and build a programmable system that can be used to test and evaluate various aspects of a modern digital datalink. First, an investigation was made of commercial wireless components and standards that could be used to construct the SIDS datalink. This investigation led to the construction of an engineering development model, which presently consists of wire-wrap and prototype circuits that implement many aspects of a modern digital datalink.
6

Schlessman, Bradley R. "Type I Error Rates and Power Estimates for Several Item Response Theory Fit Indices." Wright State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=wright1261404833.

Full text
7

Dutschke, Cynthia F. (Cynthia Fleming). "The Characteristics and Properties of the Threshold and Squared-Error Criterion-Referenced Agreement Indices." Thesis, North Texas State University, 1988. https://digital.library.unt.edu/ark:/67531/metadc331415/.

Full text
Abstract:
Educators who use criterion-referenced measurement to ascertain the current level of performance of an examinee, so that the examinee may be classified as either a master or a nonmaster, need to know the accuracy and consistency of their decisions regarding assignment of mastery states. This study examined the sampling distribution characteristics of two reliability indices that use the squared-error agreement function: Livingston's k²(X,Tx) and Brennan and Kane's M(C). The sampling distribution characteristics of five indices that use the threshold agreement function were also examined: Subkoviak's Pc, Huynh's p and k, and Swaminathan's p and k. These seven methods of calculating reliability were also compared under varying conditions of sample size, test length, and criterion or cutoff score. Computer-generated data provided randomly parallel test forms for N = 2000 cases. From this, 1000 samples were drawn, with replacement, and each of the seven reliability indices was calculated. Descriptive statistics were collected for each sample set and examined for distribution characteristics. In addition, the mean value for each index was compared to the population parameter value of consistent mastery/nonmastery classifications. The results indicated that the sampling distributions of all seven reliability indices approach normality with increased sample size. The results also indicated that Huynh's p was the most accurate estimate of the population parameter, with the smallest degree of negative bias. Swaminathan's p was the next best estimate of the population parameter, but it has the disadvantage of requiring two test administrations, while Huynh's p index requires only one.
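The threshold agreement function underlying several of the indices in this abstract (the proportion of consistent mastery classifications, and its chance-corrected kappa) can be sketched directly. The examinee scores and the cutoff below are hypothetical.

```python
def threshold_agreement(form1, form2, cutoff):
    # p0: proportion of examinees classified the same way (master/nonmaster)
    # on two parallel forms; kappa corrects p0 for chance agreement.
    n = len(form1)
    m1 = [s >= cutoff for s in form1]
    m2 = [s >= cutoff for s in form2]
    p0 = sum(a == b for a, b in zip(m1, m2)) / n
    p_master1 = sum(m1) / n
    p_master2 = sum(m2) / n
    # Chance agreement: both classified master, or both nonmaster, at random.
    pc = p_master1 * p_master2 + (1 - p_master1) * (1 - p_master2)
    kappa = (p0 - pc) / (1 - pc) if pc < 1 else 0.0
    return p0, kappa

# Scores of ten examinees on two randomly parallel forms, mastery cutoff = 70.
f1 = [55, 62, 68, 71, 74, 77, 80, 85, 90, 66]
f2 = [58, 60, 72, 69, 75, 79, 78, 88, 91, 64]
p0, kappa = threshold_agreement(f1, f2, 70)
```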
8

Natu, Ambarish Shrikrishna. "Error Resilience in JPEG2000." Thesis, University of New South Wales, Electrical Engineering and Telecommunications, 2003. http://handle.unsw.edu.au/1959.4/18835.

Full text
Abstract:
The rapid growth of wireless communication and widespread access to information has resulted in a strong demand for robust transmission of compressed images over wireless channels. The challenge of robust transmission is to protect the compressed image data against loss, in such a way as to maximize the received image quality. This thesis addresses this problem and provides an investigation of a forward error correction (FEC) technique that has been evaluated in the context of the emerging JPEG2000 standard. Little effort has been made in the JPEG2000 project regarding error resilience. The only techniques standardized are based on the insertion of marker codes in the code-stream, which may be used to restore high-level synchronization between the decoder and the code-stream. This helps to localize errors and prevent them from propagating through the entire code-stream. Once synchronization is achieved, additional tools aim to exploit as much of the remaining data as possible. Although these techniques help, they cannot recover lost data. FEC adds redundancy into the bit-stream in exchange for increased robustness to errors. We investigate unequal protection schemes for JPEG2000 by applying different levels of protection to different quality layers in the code-stream. More particularly, the results reported in this thesis provide guidance concerning the selection of JPEG2000 coding parameters and appropriate combinations of Reed-Solomon (RS) codes for typical wireless bit error rates. We find that unequal protection schemes, together with the use of resynchronization markers and some additional tools, can significantly improve the image quality in deteriorating channel conditions. The proposed channel coding scheme is easily incorporated into the existing JPEG2000 code-stream structure, and experimental results clearly demonstrate the viability of our approach.
9

Kamaras, Konstantinos. "JPEG2000 image compression and error resilience for transmission over wireless channels." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://sirsi.nps.navy.mil/uhtbin/hyperion-image/02Mar%5FKamaras.pdf.

Full text
10

Silvia, E. Suyapa M. "Effects of sampling error and model misspecification on goodness of fit indices for structural equation models /." The Ohio State University, 1988. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487597424138163.

Full text
More sources

Books on the topic "Standard Error of Indices"

1

Edronova, Valentina. Statistics. INFRA-M Academic Publishing LLC., 2023. http://dx.doi.org/10.12737/1577822.

Full text
Abstract:
In this textbook, the methodology of statistics is presented in the unity of quantitative assessment and meaningful analysis of the state, development, forecasting and connections of socio-economic phenomena. The evolution of statistics as a science and its subject, and methods of calculating absolute, relative and average indicators in the assessment of socio-economic phenomena, are considered. The content and types of statistical observation and summary are described.
The characteristics of sample observation and methods of calculating its errors are given. The methods of studying development and forecasting based on time series, as well as indices in assessing the development and relationships of phenomena, and the basics of correlation and regression analysis of relationships, are considered.
Meets the requirements of the Federal State Educational Standard of higher education of the latest generation in the field of training "Economics".
For students studying in economic specialties.
2

Zeng, Lingjia. Standard errors of linear equating for the single-group design. American College Testing Program, 1991.

Find full text
3

Parpola, Simo, and the Neo-Assyrian Text Corpus Project, eds. The standard Babylonian Epic of Gilgamesh: Cuneiform text, transliteration, glossary, indices and sign list. The Neo-Assyrian Text Corpus Project, 1997.

Find full text
4

Ziliak, Stephen Thomas, and Deirdre Nansen McCloskey. The cult of statistical significance: How the standard error costs us jobs, justice, and lives. University of Michigan Press, 2008.

Find full text
5

United States. National Aeronautics and Space Administration, ed. A parallel row-based algorithm with error control for standard-cell placement on a hypercube multiprocessor. Coordinated Science Laboratory, College of Engineering, University of Illinois at Urbana-Champaign, 1989.

Find full text
6

Great Britain. Department of Social Security. Analytical Services Division. The abstract of statistics for social security benefits and contributions and indices of prices and earnings. Government Statistical Service, 2000.

Find full text
7

Banyard, Philip, ed. Understanding and using statistics in psychology: A practical introduction, or, How I came to know and love the standard error. SAGE, 2007.

8

Geological Survey (U.S.), ed. Field evaluation of the error arising from inadequate time averaging in the standard use of depth-integrating suspended-sediment samplers. U.S. Dept. of the Interior, U.S. Geological Survey, 2011.

9

Van Hees, Willem W. S. A comparison of two estimates of standard error for a ratio-of-means estimator for a mapped-plot sample design in Southeast Alaska. U.S. Dept. of Agriculture, Forest Service, Pacific Northwest Research Station, 2002.

10

Blokdyk, Gerardus. Error management theory: Standard Requirements. CreateSpace Independent Publishing Platform, 2018.


Book chapters on the topic "Standard Error of Indices"

1

Gediga, Günther, and Ivo Düntsch. "Standard Errors of Indices in Rough Set Data Analysis." In Transactions on Rough Sets XVII. Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-54756-0_2.

2

Foster, Dean P., Robert A. Stine, and Richard P. Waterman. "Standard Error." In Basic Business Statistics. Springer New York, 1998. http://dx.doi.org/10.1007/978-1-4612-1696-4_4.

3

Foster, Dean P., Robert A. Stine, and Richard P. Waterman. "Standard Error." In Basic Business Statistics. Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4757-2717-3_4.

4

Gooch, Jan W. "Standard Error." In Encyclopedic Dictionary of Polymers. Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-6247-8_15384.

5

Nahler, Gerhard. "standard error." In Dictionary of Pharmaceutical Medicine. Springer Vienna, 2009. http://dx.doi.org/10.1007/978-3-211-89836-9_1319.

6

Stewart, Antony. "Standard error." In Basic Statistics and Epidemiology, 5th ed. CRC Press, 2022. http://dx.doi.org/10.1201/9781003148111-10.

7

Abe, Naohito. "Introduction to Standard Indices." In Springer Texts in Business and Economics. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-97-6305-4_3.

8

Cerin, Ester. "Standard Error of Measurement." In Encyclopedia of Quality of Life and Well-Being Research. Springer Netherlands, 2014. http://dx.doi.org/10.1007/978-94-007-0753-5_2847.

9

Cerin, Ester. "Standard Error of Measurement." In Encyclopedia of Quality of Life and Well-Being Research. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-17299-1_2847.

10

Franzen, Michael D. "Standard Error of the Mean." In Encyclopedia of Clinical Neuropsychology. Springer New York, 2011. http://dx.doi.org/10.1007/978-0-387-79948-3_1248.


Conference papers on the topic "Standard Error of Indices"

1

Zarei, Ali, Amir M. Hajisadeghi, and Hamid R. Zarandi. "Combinational Standard Cells Adjacency Effect Analysis on Soft Error Occurrence Using Device-Level Simulations." In 2024 5th CPSSI International Symposium on Cyber-Physical Systems (Applications and Theory) (CPSAT). IEEE, 2024. http://dx.doi.org/10.1109/cpsat64082.2024.10745376.

2

LaViolette, Marc, and Ruben Perez. "On the Prediction of Pollutant Emission Indices From Gas Turbine Combustion Chambers." In ASME Turbo Expo 2012: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/gt2012-70038.

Abstract:
This paper surveys existing emissions models used in the prediction of NOx. The prediction of jet engine emission indices from fundamental principles has proven difficult due to the complex physical and chemical interactions occurring within combustion chambers. Present-day prediction of engine emission indices during engine development relies on published models, which are based on limited sets of data measured on older combustion chambers where minimizing pollutant emissions was not a major design criterion. Such empirical and semi-empirical models can, however, provide upper emission limits for new engine designs. A database comprising a wider range of experimental data (over 2000 measured points) taken from the literature was used to test the models. Advanced techniques were applied to optimize the coefficients of proportionality in the governing equations of the best models in the literature. Most models tend to consistently over- or under-predict the measured values. In most cases, even though the standard deviation of the predicted values was not reduced, the correlation error was improved by removing this bias.
3

Murray, Alan. "Are Pipelines Being Held to an Unreasonably High Standard of Performance?" In 2018 12th International Pipeline Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/ipc2018-78744.

Abstract:
The media and sections of the public have recently shown an acute interest in pipeline operational performance and incident statistics. Published data for North America show that 99.999% of crude oil and petroleum products shipped by pipelines reach their destination safely. Some pipeline operators claim even better performance, 99.9996% being one example. However, should failing to deliver 4 barrels of product for every million shipped be a legitimate cause for concern? If not, what about the more general case of 1 per one hundred thousand? Is pipeline performance being singled out unreasonably when compared to other threats to public and environmental wellbeing, such as medical malpractice or industrial waste contamination? Evidence from Canada and elsewhere indicates that, during their hospital stay, an appreciable number of patients, one in every 18, experience adverse events such as medication errors, injurious falls, infections, and other medical misadventures. Errors (mostly minor) in fulfilling pharmaceutical prescriptions show an even higher rate, 1 in 4 in one recent study, yet the public appears to be unperturbed. A common thread is determining what constitutes an acceptable level of risk, whether individual or societal, voluntary or involuntary. Besides providing a broader context for pipeline risk, the paper explores the origin and intent of the one-in-a-million (10⁻⁶) environmental screening standard, as well as the concept of setting risk tolerance to be as low as reasonably practicable (ALARP). The question of why many pipeline regulators may be reticent to set, as other industries have, a prescriptive value for ALARP is considered.
4

Parida, Lukesh, Sumedha Moharana, and Sourav Kumar Giri. "Machine Learning Based Optimization Techniques for Predictive Strength of High Performance Concrete: Enhancing Sustainable Development." In IABSE Congress, New Delhi 2023: Engineering for Sustainable Development. International Association for Bridge and Structural Engineering (IABSE), 2023. http://dx.doi.org/10.2749/newdelhi.2023.0567.

Abstract:
The pursuit of sustainable growth in the construction sector requires precise forecasts of material characteristics to optimize resource consumption. This research focuses on utilizing the capabilities of the well-known XGBoost regression algorithm to forecast the compressive strength of High-Performance Concrete (HPC). In this study, 2171 datasets were collected from the literature containing input parameters that influence concrete strength, thereby creating a robust predictive model. The performance indices were assessed using root mean squared error (RMSE) and the R2 score. The findings indicate that the XGBoost model outperforms standard statistical techniques in predictive accuracy. This research intends to improve the precision of compressive strength estimation, facilitating the development of more durable and sustainable construction practices.
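The two performance indices named in this abstract, RMSE and the R2 score, are standard and easy to reproduce. A minimal sketch with made-up strength values, not the paper's data or implementation:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error between observed and predicted values."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_true = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical compressive strengths (MPa): observed vs. model predictions
y_true = [30.0, 45.0, 50.0, 65.0]
y_pred = [28.0, 47.0, 52.0, 62.0]
print(round(rmse(y_true, y_pred), 3))      # 2.291
print(round(r2_score(y_true, y_pred), 4))  # 0.9664
```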
5

Chase, L. L., Stephen A. Payne, and Robert Adair. "Measurement of nonlinear refractive indices of optical crystals and glasses by three-wave mixing at 1.06 µm." In International Laser Science Conference. Optica Publishing Group, 1986. http://dx.doi.org/10.1364/ils.1986.thh1.

Abstract:
We have developed a three-wave mixing apparatus for rapid and accurate measurements of the third-order nonlinear susceptibility of transparent optical materials by comparison with standard samples. The 1.06-µm fundamental of a Nd:YAG laser is mixed with a tunable beam generated from a 560-nm dye laser which is shifted to the 1.06-µm region by second-order Stokes Raman scattering from hydrogen. A small frequency difference (70 cm⁻¹) between the two beams allows the determination of the joint contributions of the electronic hyperpolarizability and the atomic displacements to the nonlinear susceptibility. Discrimination between the incident beams and the three-wave mixing signal is accomplished with a combination of filtering and a double-grating monochromator. Separate sample and reference arms allow for the suppression of noise and errors caused by long- and short-term fluctuations in laser output. The method is easily extended to higher harmonics of the Nd:YAG laser, although absolute measurements must be performed on at least one material at each harmonic in order to provide a reference sample. Measurements have been made on a number of fluoride and oxide crystals, wide band gap semiconductors, and glasses. Experimental trends regarding the composition and structure dependence of the nonlinear index and its anisotropy are discussed.
6

Chen, Jiao, Yuan Li, Jianfeng Yu, and Wenbin Tang. "Profile Tolerances Modeling: A Unified Framework for Representing Geometric Variations for Line Profiles." In ASME 2015 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/imece2015-51154.

Abstract:
Tolerance modeling is the most basic issue in Computer Aided Tolerancing (CAT). If the resultant model cannot accurately represent variations in the tolerance zone, it will negatively influence the performance of subsequent activities, such as tolerance analysis, to a great extent. According to the ASME Y14.5M Standard [1], there is a class of profile tolerances for lines and surfaces which should also be interpreted correctly. Aiming at this class of tolerances, the paper proposes a unified framework called DOFAS for representing them, composed of three parts: a basic DOF (Degrees of Freedom) model for interpreting geometric variations of profiles, an assessment method for filtering out and rejecting those profiles that cannot be accurately represented, and a split algorithm for splitting rejected profiles into sub-profiles to make their variations interpretable. The scope of discussion in this paper is restricted to line profiles; surface profiles will be addressed in forthcoming papers. From the DOF model, two types of errors resulting from rotations of the features are identified and formulated. One type of error results from misalignment between the profile boundary and the tolerance zone boundary (noted as type 1); the other forms if the feature itself exceeds the range of the tolerance zone (noted as type 2). Specifically, it is required that the boundary points of the line profile align with the corresponding boundary lines of the tolerance zone, and that an arbitrary point of the line profile lie within the tolerance zone when the line profile rotates in the tolerance zone. To make the DOF model as accurate as possible, an assessment method and a split algorithm are developed to evaluate and eliminate these two types of errors. Not all line features carry the two types of errors; as such, the assessment method is used as a filter for checking and retaining those features that are consistent with the error conditions.
In general, a feature with simple geometry is error-free and selected by the filter, whereas a feature with complex geometry is rejected. According to the two types of errors, two sub-procedures of the assessment process are introduced. The first is mathematically a scheme for solving the maximum deviation of rotation trajectories of the profile boundary, so that the type 1 error can be neglected if it approaches zero. The other solves the maximum deviation of trajectories of all points of the feature: the type 2 error can be ignored when the retrieved maximum deviation is not greater than a prescribed threshold, so that the feature always stays within the tolerance zone. For features rejected by the filter as inconsistent with the error conditions, the split algorithm, which covers the three cases of occurrence of type 1 error, occurrence of type 2 error, and concurrence of both, is developed to eliminate their errors. By utilizing and analyzing the geometric and kinematic properties of the feature, the split point is recognized and obtained accordingly. Two sub-features are derived from the split point and then substituted into the DOFAS framework recursively until all split features can be represented at the desired resolution. The split algorithm is efficient and self-adapting in that the rules applied ensure a high convergence rate and the expected results. Finally, the implementation with two examples indicates that the DOFAS framework is capable of representing profile tolerances with enhanced accuracy, thus supporting the feasibility of the proposed approach.
7

Azizoglu, Zulkuf, Almostafa Alhadi, and Zoya Heidari. "Experimental Workflow for Quantifying the Performance of Geophysics-Based and Conventional Core-Based Wettability Assessment Methods." In 2023 SPWLA 64th Annual Symposium. Society of Petrophysicists and Well Log Analysts, 2023. http://dx.doi.org/10.30632/spwla-2023-0079.

Abstract:
Conventional wettability assessment methods (e.g., Amott-Harvey and USBM) are often time consuming and require core-scale measurements. We recently developed wettability models based on two-dimensional (2D) nuclear magnetic resonance (2D-NMR) and/or resistivity measurements, which can be applied to well logs for simultaneous assessment of water saturation and wettability. However, they require core-scale verification, which has been challenging due to the uncertainties in wettability distribution inside the core samples as well as the lack of a dependable ground truth on the wettability of a given sample. The objective of this paper is to develop a setup that can provide a wide range of ground-truth wettability indices to (a) test reliability of a variety of wettability assessment techniques and (b) enable the advancement of the geophysics-based methods for wettability assessment. Glass beads are used to create synthetic core samples. We use a siliconizing fluid to alter the wettability of the beads and confirm this alteration with sessile drop tests. Next, we aggregate beads of different wettability to create cylindrical artificial grain packs with a controlled wide range of wettability states. For this purpose, we design and fabricate an experimental fixture that tightly packs the beads between two electrodes. This fixture is designed in a way that enables performing both NMR and resistivity measurements. Then, we saturate the samples with a hydrocarbon/water mixture and perform 2D-NMR and resistivity measurements. We use 2D-NMR measurements to estimate water saturation. Then, we use our newly introduced resistivity-based wettability index model (which uses water saturation and resistivity as inputs) to quantify wettability. Finally, we test the reliability of the estimated water saturation and wettability. We observed that the average relative error between the estimated wettability indices and the fraction of water/hydrocarbon-wet beads is less than 20%. 
Moreover, we showed that water saturation and wettability could be simultaneously estimated by integration of 2D-NMR and electrical-resistivity measurements with average relative errors of less than 10% and 15%, respectively. The results clearly demonstrate that the introduced workflow can be reliably used in the simultaneous quantification of water saturation and wettability index. We also demonstrated that the experimentally obtained resistivity model parameters related to the shape of the grains (i.e., depolarization factor) are consistent with the one calculated from analytical solutions for spherical grain geometry. This observation verifies that the model parameters of the resistivity-based wettability index model are based on geometry and can be estimated via simplifying assumptions. The outcomes of this paper enable the detection of the most reliable geophysical-based wettability assessment method and the comparison of their performance against conventional methods. This comparison might suggest a need for redefining a standard wettability index that can be uniquely estimated. In this work, we used artificial rock samples to enable a direct comparison of the estimated wettability indices and actual wettability fractions, which is not possible in actual core samples. Moreover, the use of artificial samples enables testing of the wettability assessment methods in rocks with different pore-size distributions. Finally, results are promising for in-situ and real-time assessment of wettability using borehole geophysical measurements.
8

Owen, Robert B., P. J. Giarratano, and V. D. Arp. "Use of holography to measure heat transfer in high gradient systems." In OSA Annual Meeting. Optica Publishing Group, 1986. http://dx.doi.org/10.1364/oam.1986.fo7.

Abstract:
The understanding of transient heat transfer into a confined compressible fluid is of considerable value in engine design, nuclear power plant development, and chemical engineering. Interest centers on the thermal profile, particularly as fluid expansivity and compressibility increase while approaching the critical point. Computer models which simulate these phenomena have been developed by NBS and need experimental testing. However, traditional measurement methods are indirect, while optical methods are direct, nonintrusive, and rapid. Unfortunately, the extreme conditions of transient heat transfer and near-critical conditions in the fluid boundary layer induce substantial changes in refractive indices, causing optical beam disturbances large enough to render standard phase measurements nearly useless. Even refractive techniques are difficult to apply. However, the application of holographic microscopy makes it feasible to introduce a diffusive sample beam, minimizing refractive errors. It thereby proved possible to thermally map the upper three-quarters of a boundary layer only a few tenths of a millimeter thick with 1-ms time resolution. In the remaining lower layer, fringe packing was so extreme as to make resolution at visible wavelengths impossible.
9

Inoue, Akira, and Yosuke Tanabe. "Development of Particle Velocity Transfer Path Analysis." In ASME 2012 Noise Control and Acoustics Division Conference at InterNoise 2012. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/ncad2012-1018.

Abstract:
The transfer path analysis (TPA) in terms of sound pressure has been implemented for decades in many application areas, such as cars, trains, and construction machinery. In this article, we propose a transfer path analysis in which particle velocity is employed as the measure of TPA. Sound pressure is a scalar quantity, while particle velocity, the other fundamental quantity of sound, is a vector quantity. The phase differences among the particle velocity vector components generally have to be considered. For TPA, not only the six degrees of freedom of each path motion but also the three degrees of freedom of the particle velocity at the receiver location have to be considered together for effective path rank ordering. We first propose a formulation of particle velocity transfer path analysis in which the formulation of the standard sound pressure transfer path analysis is assumed to hold for each direction of particle velocity. To verify the proposed particle velocity transfer path analysis, we carry out an experiment using a simple test box structure. As a result, we found that the error in the particle velocity vector synthesis is acceptably small, as small as the error in the standard sound pressure synthesis, which indicates that the same synthesis method can be employed. We then perform rank ordering of the particle velocity transmission paths, applying a simple method of path rank ordering. Lastly, we briefly discuss sound energy as a measure of TPA.
10

Kovačević, Vanja, Dražen Kovačević, Danijel Kovačević, Sandra Nikolić, and Nemanja Obradović. ""Delta check" values and its clinic and laboratory use." In Zbornik radova 26. medunarodni kongres Mediteranske federacije za zdravlje i produkciju preživara - FeMeSPRum. Poljoprivredni fakultet Novi Sad, 2024. http://dx.doi.org/10.5937/femesprumns24018k.

Abstract:
The delta check value is the measured difference between the results of consecutive laboratory tests of the same individual. When this value is higher than expected, the result may indicate a problem with the previous or current sample and should be investigated in a timely manner, before the results are released. The classic formula for calculating the delta check is to subtract the previous parameter value from the current parameter value for the same individual. Other approaches to calculating the delta check have also been developed. The delta check percentage is calculated by multiplying the difference between the current and previous parameter values by 100 and then dividing by the current value, or by calculating the ratio of the current value to the previous value. When the difference between the current and previous values is calculated, it must fall within limits called delta check limits. In calculating these limits, we must take into account: intra-individual biological variation for the parameter being examined (coefficient of variation, CVi, the ratio of the standard deviation to the mean value); analytical variation (SD, the standard deviation, which can easily be calculated from control values and represents the average deviation from the mean); pre-analytical variation (CVpre-analytic); improvement or deterioration of the individual's health; and errors in the pre-analytical, analytical, or post-analytical stages of work. The criteria for choosing an analyte for the delta check have several limitations. Analytes with a low coefficient of intra-individual variation are most often used, which reduces the chance of a false alarm but also reduces the chance of detecting an error.
Another limitation is the interval at which the delta check is performed: because it is short, only tests that are repeated frequently at short time intervals are useful. The delta check is a simple method of comparing results that indicates significant changes in the metabolic status of an individual.
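The formulas described in this abstract are straightforward to apply. A minimal sketch of the absolute and percentage delta check follows; the analyte values and the limit are hypothetical illustrations, not clinical recommendations:

```python
def delta_check(current, previous, limit):
    """Absolute delta check: flag the result if |current - previous| exceeds the limit."""
    delta = current - previous
    return delta, abs(delta) > limit

def delta_check_percent(current, previous):
    """Percentage delta check: (current - previous) * 100 / current."""
    return (current - previous) * 100.0 / current

# Hypothetical consecutive serum potassium results (mmol/L) for one patient
delta, flagged = delta_check(5.9, previous=4.1, limit=1.0)
print(round(delta, 2), flagged)                 # 1.8 True -> investigate before reporting
print(round(delta_check_percent(5.9, 4.1), 1))  # 30.5
```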

Reports on the topic "Standard Error of Indices"

1

Fishbein, Bethany, Peter Siegel, Umut Atasever, and Darryl Cooney. Standard Error Estimation in TIMSS. TIMSS & PIRLS International Study Center, Boston College, 2024. https://doi.org/10.6017/lse.tpisc.timss.rs6950.

2

Andrews, J. W. Error analysis for duct leakage tests in ASHRAE standard 152P. Office of Scientific and Technical Information (OSTI), 1997. http://dx.doi.org/10.2172/534521.

3

Orban, Levente. Standardization and Standard Error of the Mean in Psychological Statistics. Orban Foundation, 2025. https://doi.org/10.69642/2905.

Abstract:
This lecture explores the concepts of standardization and the standard error of the mean (SEM) in psychological statistics. Standardization enables comparability across variables measured on different scales, while SEM quantifies the uncertainty of sample means in relation to the population mean. We derive these concepts mathematically and implement them using base R to reinforce statistical reasoning in psychological research.
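The lecture implements these two quantities in base R; the same computations can be sketched in Python for comparison (the sample scores below are invented for illustration):

```python
import math
import statistics

def standardize(xs):
    """Standardize to z-scores: (x - mean) / sample standard deviation."""
    mu = statistics.mean(xs)
    sd = statistics.stdev(xs)  # sample SD, n - 1 in the denominator
    return [(x - mu) / sd for x in xs]

def sem(xs):
    """Standard error of the mean: s / sqrt(n)."""
    return statistics.stdev(xs) / math.sqrt(len(xs))

scores = [10, 12, 14, 16, 18]
print([round(z, 2) for z in standardize(scores)])  # [-1.26, -0.63, 0.0, 0.63, 1.26]
print(round(sem(scores), 3))                       # 1.414
```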
4

Libert, John M. Statistical Studies of Standard Error of the Weight of Evidence: Theoretical Framework. National Institute of Standards and Technology, 2023. http://dx.doi.org/10.6028/nist.tn.2250.

5

Wright, T. Prediction and standard error estimation for a finite universe total when a stratum is not sampled. Office of Scientific and Technical Information (OSTI), 1994. http://dx.doi.org/10.2172/10113638.

6

Leis, Sherry, and Mary Short. Vegetation community monitoring at Pea Ridge National Military Park, Arkansas: 2007–2021. National Park Service, 2023. http://dx.doi.org/10.36967/2299454.

Abstract:
Resource managers at Pea Ridge National Military Park manage the natural communities of the park as a backdrop for interpreting the civil war battle that occurred on March 7–8, 1862. Restoration of the landscape to the vegetation communities that were present at the time of the battle is ongoing. Priorities for restoration include density, form, and vegetation structure, but native representative species are also desired. Heartland Inventory and Monitoring Network ecologists observed plant community sites in park woodlands in 2007, 2012, 2016, and 2021. Climate may influence vegetation and other park natural resources. Temperatures have been increasing at the park, but precipitation and drought indices did not demonstrate significant trends. There was a great degree of interannual variability in precipitation and drought metrics. Phenological data indicated earlier first bloom and leaf-out dates. Overstory canopy, basal area, density, and tree stocking were similar through time, but class 1, midstory trees increased in basal area and density after 2007. The overstory structure remained that of a closed woodland despite prescribed fires and cedar thinning that occurred at the park. Our monitoring data show that fire management goals for overstory reduction have not yet been met. Ground cover was similar through time except for bare soil and deciduous leaf litter. Bare soil and leaf litter experienced a pulse in variability in 2012 and 2021, presumably in response to heterogeneous prescribed fires. Maintaining heterogeneous ground cover may best support biodiversity across the landscape. Ground flora cover (excluding tree regeneration) increased by 88% from 2007 to 2021, meeting a fire management goal. However, ground flora cover was heterogeneous across the sites. 
Concomitant with the increase in ground flora cover, we observed notable increases in alpha diversity (mean site species richness) and gamma diversity (parkwide species richness) across the monitoring period. Although increases may have been related to treatments and environmental factors, we also improved our botanical sampling preparation and included an expert botanist on the crew in 2021. Species composition of the ground flora was assessed via guilds and indicator species. Although tree regeneration was not included in estimates of total ground flora cover, it was highly variable through time. The seedling class comprised the majority of the regeneration stems observed. Although mean small sapling density values increased by 535% over 2007 levels, there was a great deal of variability among the sites, indicating small sapling estimates were contained within the confidence intervals and not truly different through time. Forbs comprised the greatest abundance of the ground flora guilds through time, except in 2012. Forbs also exhibited the greatest variability in all years except for 2012. Woody species increased over our monitoring record but remained low in cover. We analyzed a set of 50 indicator species for Ozark highlands woodlands to understand changes in the ground flora. The number of woodland indicator species observed in each monitoring event was low (ranged from 7–14 species) with the highest number of species observed in 2021. We also calculated invasive species metrics and found the number of invasive species increased from 2 to 11 over the monitoring record. Nepalese browntop (Microstegium vimineum) was the most abundant of these species. Our confidence in the 2021 ground flora observations was high. We found our observer error to be within standard levels, including agreement on species cover estimates. Not all sites have received the number of burns described in the park’s ecological fire management goals. 
Our monitoring data show that one of the fire management goals, increased ground flora cover, has been met, but the other two goals for overstory density reduction and seedling density have not yet been met. Further analysis of species composition of the overstory and tree regeneration will be needed after the fire treatments have been completed to determine if all fire management goals are being met.
7

van Hees, Willem W. S. A comparison of two estimates of standard error for a ratio-of-means estimator for a mapped-plot sample design in southeast Alaska. U.S. Department of Agriculture, Forest Service, Pacific Northwest Research Station, 2002. http://dx.doi.org/10.2737/pnw-rn-532.

8

McDougall, Robert. A New Regional Household Demand System for GTAP. GTAP Working Paper, 2000. http://dx.doi.org/10.21642/gtap.wp14.

Abstract:
The GTAP final demand system has some known defects: the computation of the equivalent variation is not exact; with non-standard demand parameters, the equivalent variation may be grossly in error; the decomposition of the equivalent variation contains a nuisance term. We find a further defect, that the upper-level demand equations are invalid. We revise the model to remove these defects.
9

Settles, Burr. The Reliability of the Duolingo English Test. Duolingo, 2016. http://dx.doi.org/10.46999/hqep1801.

Abstract:
This study reports several reliability measures for the first operational year of the Duolingo English Test (DET). Our results show that during this time, the standard error of measurement (SEM) was stable across the score range, reliability and internal consistency coefficients were both 0.96, and the test-retest reliability coefficient was 0.84. Repeat test-takers exhibited some practice effects, improving their score by 1–3 points on average, but the overall effect size (0.10) was small. These findings indicate that DET scores are reliable.
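For context, in classical test theory the SEM reported in studies like this one relates score spread to reliability via SEM = SD * sqrt(1 - r). A minimal sketch under that standard relation; the SD value is hypothetical, and only the 0.96 reliability figure comes from the abstract:

```python
import math

def sem_from_reliability(sd, reliability):
    """Classical test theory relation: SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

# Hypothetical score SD of 10 points, with the reported reliability of 0.96
print(round(sem_from_reliability(10.0, 0.96), 2))  # 2.0
```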
10

Hawley, Adam, and Luis Gutierrez. PO-015-21607-R01 In-Situ USM Flow Verification Data Analysis. Pipeline Research Council International, Inc. (PRCI), 2022. http://dx.doi.org/10.55274/r0012233.

Abstract:
This project provided additional data analysis and data processing techniques for a data set that was collected under the previous MEAS-6-17A project. The goal of this project was to reduce the bias in the flow verification data and improve the repeatability of the results. The project was successful, as all of the methods developed on this project resulted in an average error better than ±0.05% with a standard deviation of less than 1.1% for the longest injection length, which was a significant improvement over the previous MEAS-6-17A project.