
Journal articles on the topic 'Estimation question type'



Consult the top 50 journal articles for your research on the topic 'Estimation question type.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Supriana, Iping, Ayu Purwarianti, and Wiwin Suwarningsih. "ESTIMATION QUESTION TYPE ANALYZER FOR MULTI CLOSE DOMAIN INDONESIAN QAS." INTERNATIONAL JOURNAL OF RESEARCH SCIENCE & MANAGEMENT 4, no. 6 (2017): 59–66. https://doi.org/10.5281/zenodo.583975.

Full text
Abstract:
We propose an automated estimation scheme for question classification in Indonesian multi-closed-domain question answering systems. The goal is to provide a good question classification system even when only the available language resources can be used. Our strategy is to build patterns and rules for extracting important words and to use the results as features for learning-based question classification. The designed scenarios are: (i) question analysis, representing the key information needed to answer user questions through target focus and target identification; and (ii) question type classification, constructing a taxonomy of questions coded into the system to determine the expected answer type through question processing patterns and rules. The proposed method is evaluated using datasets collected from various Indonesian websites. Test results show that the classification process using the proposed method is very effective.
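
The paper's concrete patterns and taxonomy are not reproduced in this listing; as a rough illustration of the pattern-and-rule idea, here is a minimal sketch in which the Indonesian question words and answer-type labels are assumptions, not the authors' actual rules.

```python
import re

# Hypothetical question-word rules mapping to expected answer types.
# The keywords and type labels are illustrative assumptions only.
RULES = [
    (re.compile(r"\bsiapa\b", re.I), "PERSON"),                # "who"
    (re.compile(r"\bdi ?mana\b", re.I), "LOCATION"),           # "where"
    (re.compile(r"\bkapan\b", re.I), "TIME"),                  # "when"
    (re.compile(r"\bberapa\b", re.I), "QUANTITY"),             # "how many"
    (re.compile(r"\bmengapa\b|\bkenapa\b", re.I), "REASON"),   # "why"
    (re.compile(r"\bbagaimana\b", re.I), "METHOD"),            # "how"
]

def classify_question(question: str) -> str:
    """Return the expected answer type for an Indonesian question."""
    for pattern, answer_type in RULES:
        if pattern.search(question):
            return answer_type
    return "OTHER"

print(classify_question("Siapa presiden pertama Indonesia?"))     # PERSON
print(classify_question("Berapa jumlah provinsi di Indonesia?"))  # QUANTITY
```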
2

Kazmi, Aqdas Ali. "An Econometric Estimation of Tax-discounting in Pakistan." Pakistan Development Review 34, no. 4III (1995): 1067–77. http://dx.doi.org/10.30541/v34i4iiipp.1067-1077.

Full text
Abstract:
The debt neutrality hypothesis, which has been a source of major controversy in the theory of public finance and macroeconomics, has at the same time generated a vast literature on the implications of budgetary deficits and public debt for various subsectors/variables of the economy, such as inflation, interest rates, and the current account deficit. Tax discounting has been one of the fields of research associated with debt neutrality. The econometric estimation of some of the standard models of tax discounting has shown that consumer response to fiscal policy in Pakistan reflects neither the extreme Barro-like rational anticipation of future tax liabilities nor the Buchanan-type extreme fiscal myopia; it broadly follows a middle path between these extremes. The controversy relating to debt neutrality is quite old in economic theory. However, due to its serious and far-reaching implications for the formulation of fiscal policy and macroeconomic management, the issues of debt neutrality have assumed a foremost position in economic theorisation and empirical testing. This controversy is based on two important questions: (a) Who bears the burden of the debt? (b) Should debt be used to finance public expenditure? The first question centres on whether the debt can be shifted forward in time, while the second explores whether taxation is equivalent to debt in its effects on the national economy.
3

Onyango, Ronald, Brian Oduor, and Francis Odundo. "Mean Estimation of a Sensitive Variable under Nonresponse Using Three-Stage RRT Model in Stratified Two-Phase Sampling." Journal of Probability and Statistics 2022 (April 22, 2022): 1–14. http://dx.doi.org/10.1155/2022/4530120.

Full text
Abstract:
The present study addresses the problems of mean estimation and nonresponse under the three-stage RRT model. Auxiliary information on an attribute and a variable is used to propose a generalized class of exponential ratio-type estimators. Expressions for the bias, mean squared error, and minimum mean squared error of the proposed estimator are derived up to the first degree of approximation. The efficiency of the proposed estimator is studied theoretically and numerically using two real datasets. In the numerical analysis, the proposed generalized class of exponential ratio-type estimators outperforms ordinary mean estimators, usual ratio estimators, and exponential ratio-type estimators. Furthermore, the efficiencies of the mean estimators are observed to decrease as the sensitivity level of the survey question, the inverse sampling rate, and the nonresponse rate increase.
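
The authors' generalized three-stage RRT class is too involved for a snippet, but the classical estimator family it builds on is easy to demonstrate. A minimal sketch on synthetic data, assuming a known population mean of the auxiliary variable; the mean, ratio, and exponential ratio-type forms below are the textbook ones, not the proposed class.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic population: auxiliary x correlated with study variable y.
N, n = 10_000, 200
x = rng.gamma(shape=4.0, scale=25.0, size=N)
y = 2.0 * x + rng.normal(0.0, 30.0, size=N)
X_bar = x.mean()                     # population mean of x (assumed known)

def estimators(idx):
    xs, ys = x[idx], y[idx]
    mean_est = ys.mean()                                  # ordinary mean
    ratio_est = ys.mean() * X_bar / xs.mean()             # usual ratio
    exp_est = ys.mean() * np.exp((X_bar - xs.mean()) /    # exponential
                                 (X_bar + xs.mean()))     # ratio type
    return mean_est, ratio_est, exp_est

reps = np.array([estimators(rng.choice(N, n, replace=False))
                 for _ in range(2_000)])
mse = ((reps - y.mean()) ** 2).mean(axis=0)   # empirical MSE per estimator
print(dict(zip(["mean", "ratio", "exp_ratio"], mse.round(2))))
```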
4

Müller-Trede, Johannes. "Repeated judgment sampling: Boundaries." Judgment and Decision Making 6, no. 4 (2011): 283–94. http://dx.doi.org/10.1017/s1930297500001893.

Full text
Abstract:
This paper investigates the boundaries of the recent result that eliciting more than one estimate from the same person and averaging these can lead to accuracy gains in judgment tasks. It first examines its generality, analysing whether the kind of question being asked has an effect on the size of potential gains. Experimental results show that the question type matters. Previous results reporting potential accuracy gains are reproduced for year-estimation questions, and extended to questions about percentage shares. On the other hand, no gains are found for general numerical questions. The second part of the paper tests repeated judgment sampling’s practical applicability by asking judges to provide a third and final answer on the basis of their first two estimates. In an experiment, the majority of judges do not consistently average their first two answers. As a result, they do not realise the potential accuracy gains from averaging.
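
A toy simulation of the effect being tested: each judge gives two estimates whose errors are positively but imperfectly correlated, and averaging them shrinks the mean absolute error. All parameter values are assumptions for illustration, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 1969.0                     # e.g., a year-estimation question
n_judges, rho, sigma = 5_000, 0.6, 12.0

# Two noisy estimates per judge; averaging helps only to the extent
# that the two errors are not perfectly correlated (rho < 1).
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
errors = rng.multivariate_normal([0.0, 0.0], cov, size=n_judges)
first, second = truth + errors[:, 0], truth + errors[:, 1]
averaged = (first + second) / 2.0

mad = lambda est: np.abs(est - truth).mean()
print(f"first answer MAD: {mad(first):.2f}")
print(f"averaged MAD:     {mad(averaged):.2f}")   # smaller when rho < 1
```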
5

Sivasamy, Shyam. "Sample size considerations in research." Endodontology 35, no. 4 (2023): 304–8. http://dx.doi.org/10.4103/endo.endo_235_23.

Full text
Abstract:
“What should be the sample size for my study?” is a common question in the mind of every researcher at some point of the research cycle. Answering this question with confidence is tough even for a seasoned researcher. Sample size determination, an important aspect of the sampling design of a study, is a factor which directly influences the internal and external validity of the study. Unless the sample size is adequate, the results of the study cannot be justified. Conducting a study with too small or too large a sample size has ethical, scientific, practical, and economic strings attached, and has detrimental effects on the research outcomes. A myriad of factors, including the study design, the type of power analysis, the sampling technique employed, and the acceptable limits of error, play a decisive role in estimating the sample size. However, the advent of free-to-use software and websites for sample size estimation has actually diluted, or sometimes complicated, the whole process, as important factors or assumptions related to sample size are overlooked. Engaging a professional biostatistician from the very beginning of the research process would be a wise decision when conducting research. This article highlights the important concepts related to sample size estimation, with emphasis on the factors which influence it.
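
For concreteness, these are the two textbook normal-approximation formulas that sample-size software typically implements for a mean and a proportion; the inputs are illustrative, and real designs also need the power-analysis and design considerations the article discusses.

```python
from math import ceil
from scipy.stats import norm

def sample_size_for_mean(sigma: float, margin: float,
                         confidence: float = 0.95) -> int:
    """n needed to estimate a mean to within +/- margin."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    return ceil((z * sigma / margin) ** 2)

def sample_size_for_proportion(p: float, margin: float,
                               confidence: float = 0.95) -> int:
    """n needed to estimate a proportion to within +/- margin."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    return ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size_for_mean(sigma=15.0, margin=3.0))       # 97
print(sample_size_for_proportion(p=0.30, margin=0.05))    # 323
```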
6

Weiss, Christoph, Sabine Enengl, Simon Hermann Enzelsberger, Richard Bernhard Mayer, and Peter Oppelt. "Does the Porter formula hold its promise? A weight estimation formula for macrosomic fetuses put to the test." Archives of Gynecology and Obstetrics 301, no. 1 (2019): 129–35. http://dx.doi.org/10.1007/s00404-019-05410-7.

Full text
Abstract:
Purpose: Estimating fetal weight using ultrasound measurements is an essential task in obstetrics departments. Most of the commonly used weight estimation formulas underestimate fetal weight when the actual birthweight exceeds 4000 g. Porter et al. published a specially designed formula in an attempt to improve detection rates for such macrosomic infants. In this study, we question the usefulness of the Porter formula in clinical practice and draw attention to some critical issues concerning the derivation of specialized formulas of this type. Methods: A retrospective cohort study was carried out, including 4654 singleton pregnancies with a birthweight ≥ 3500 g, with ultrasound examinations performed within 14 days before delivery. Fetal weight estimations derived using the Porter and Hadlock formulas were compared. Results: Of the macrosomic infants, 27.08% were identified by the Hadlock formula, with a false-positive rate of 4.60%. All macrosomic fetuses were detected using the Porter formula, with a false-positive rate of 100%; 99.96% of all weight estimations using the Porter formula fell within a range of 4300 g ± 10%. The Porter formula only provides macrosomic estimates. Conclusions: The Porter formula does not succeed in distinguishing macrosomic from normal-weight fetuses. High-risk fetuses with a birthweight ≥ 4500 g in particular are not detected more precisely than with the Hadlock formula. For these reasons, we believe that the Porter formula should not be used in clinical practice. Newly derived weight estimation formulas for macrosomic fetuses must not be based solely on a macrosomic data set.
7

Weselovska, Nataliya, and Sergey Shargorodskiy. "METHOD OF EVALUATION OF EFFICIENCY AND RELIABILITY OPERATION OF VIBRATION MACHINES." ENGINEERING, ENERGY, TRANSPORT AIC, no. 4(107) (December 20, 2019): 47–53. http://dx.doi.org/10.37128/2520-6168-2019-4-7.

Full text
Abstract:
The use of new energy-saving technologies has led to significant development of vibration machine designs and their widespread use. In the course of their operation, the question of the efficiency and reliability of this type of machine is rather acute, relating to the availability and possibility of using the reserves of its operation. Machines of this type must meet quality and reliability requirements in order to fulfil their intended purpose. Due to their design features and the complexity of the processes occurring during their operation, classical analytical calculations of durability and reliability are quite approximate in nature and do not provide the necessary accuracy, so the question of the reliability and durability of vibrating equipment is urgent. The estimation of the reliability of vibrating machines has been addressed by Iskovich-Lototsky R. D., Obertyukh R., Sevastyanov I. V., Kanarchuk V. E., Dzhratratano D. J., and others; the techniques they offer are virtually indistinguishable from those adopted in general engineering. This publication proposes a technique for evaluating efficiency and reliability based on quantitative characteristics of a probabilistic and statistical nature. Such quantitative indicators of reliability include the probability of failure, the failure rate, and the time to failure. These indicators are among the most important in the technical diagnostics of the operation of vibrating machines and the estimation of their residual life. The basic calculations, dependencies and analysis of the governing laws are given.
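
As a minimal illustration of the indicators named above, a constant-failure-rate (exponential lifetime) sketch; the failure rate is an assumed number, and the paper's actual probabilistic models may differ.

```python
import numpy as np

lambda_ = 2.0e-4                    # failures per operating hour (assumed)
mtbf = 1.0 / lambda_                # mean time between failures
t = np.array([1_000.0, 5_000.0])    # mission times of interest, hours

reliability = np.exp(-lambda_ * t)  # probability of failure-free operation
failure_prob = 1.0 - reliability
print(f"MTBF = {mtbf:.0f} h")
for ti, r, q in zip(t, reliability, failure_prob):
    print(f"t = {ti:.0f} h: R = {r:.3f}, failure probability = {q:.3f}")
```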
8

Beck, Nathaniel, and Jonathan N. Katz. "What To Do (and Not to Do) with Time-Series Cross-Section Data." American Political Science Review 89, no. 3 (1995): 634–47. http://dx.doi.org/10.2307/2082979.

Full text
Abstract:
We examine some issues in the estimation of time-series cross-section models, calling into question the conclusions of many published studies, particularly in the field of comparative political economy. We show that the generalized least squares approach of Parks produces standard errors that lead to extreme overconfidence, often underestimating variability by 50% or more. We also provide an alternative estimator of the standard errors that is correct when the error structures show complications found in this type of model. Monte Carlo analysis shows that these “panel-corrected standard errors” perform well. The utility of our approach is demonstrated via a reanalysis of one “social democratic corporatist” model.
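
A compact sketch of panel-corrected standard errors for a balanced panel, following the sandwich form Beck and Katz describe; the data are synthetic and the code is illustrative, not the authors' implementation.

```python
import numpy as np

def pcse(X, y, n_units, n_periods):
    """OLS with panel-corrected standard errors (balanced panel,
    rows ordered unit-by-unit: unit 0 periods 0..T-1, unit 1, ...)."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = (y - X @ b).reshape(n_units, n_periods)  # residuals, N x T
    sigma = (e @ e.T) / n_periods                # contemporaneous cov, N x N
    omega = np.kron(sigma, np.eye(n_periods))    # full NT x NT error cov
    bread = np.linalg.inv(X.T @ X)
    cov_b = bread @ X.T @ omega @ X @ bread      # sandwich estimator
    return b, np.sqrt(np.diag(cov_b))

rng = np.random.default_rng(1)
N, T = 10, 20
X = np.column_stack([np.ones(N * T), rng.normal(size=(N * T, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=N * T)
b, se = pcse(X, y, N, T)
print(b.round(3), se.round(3))
```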
9

Shelest, Mariya. "Average delay estimation for one queueing network model with resource reservation." Information and Control Systems, no. 2 (May 11, 2022): 32–41. http://dx.doi.org/10.31799/1684-8853-2022-2-32-41.

Full text
Abstract:
Introduction: An urgent task today is to develop new analysis methods for complex information systems that demand higher standards for maintaining data integrity. One of the important quality characteristics of such systems is the average transaction time; however, there are currently almost no mathematical models and speed estimation tools for such systems. Purpose: To develop and analyze a distributed information system model based on queueing networks. Results: A type of information system that demands higher standards for maintaining data integrity has been described, and the corresponding assumptions for such systems are given. A convenient way to represent such systems as transaction-path dependency graphs has been proposed, with each path represented as one tandem queueing system, and the calculation of their functional characteristics has been provided. This representation makes it possible to simplify the analysis of complex systems, yielding closed-form expressions for the temporal estimation of the system type in question. In addition, two mechanisms for decomposing the proposed graph are considered, with the subsequent calculation of a lower bound for the average transaction time. The accuracy of both approaches is analyzed with simulation modeling methods. Practical relevance: The proposed models allow estimating the speed limits of an information system during the design phase.
10

Sobota, Aleksander. "The method of estimation of influence the type of intersection into environmental conditions." WUT Journal of Transportation Engineering 121 (June 1, 2018): 351–62. http://dx.doi.org/10.5604/01.3001.0014.4617.

Full text
Abstract:
The functioning of a transport system is determined by the quality of service realized by the infrastructure of the different transport branches. In the case of road transport, intersections are very important; these objects are usually bottlenecks in the network. Therefore, the correct selection of the intersection type is really important in the planning and design of infrastructure, and this decision problem has to be solved by the designers who influence that process. The selection of the intersection type is, moreover, a multi-criteria problem. The article therefore addresses the question of whether the intersection type has an influence on environmental conditions. For this purpose, the basic assumptions of the method for selecting the intersection type, and the results of measurements carried out at four types of intersections located on multilane arteries, are presented.
11

MOUNT, DAVID M., NATHAN S. NETANYAHU, CHRISTINE D. PIATKO, RUTH SILVERMAN, and ANGELA Y. WU. "QUANTILE APPROXIMATION FOR ROBUST STATISTICAL ESTIMATION AND k-ENCLOSING PROBLEMS." International Journal of Computational Geometry & Applications 10, no. 06 (2000): 593–608. http://dx.doi.org/10.1142/s0218195900000334.

Full text
Abstract:
Given a set P of n points in R^d, a fundamental problem in computational geometry is concerned with finding the smallest shape of some type that encloses all the points of P. Well-known instances of this problem include finding the smallest enclosing box, minimum volume ball, and minimum volume annulus. In this paper we consider the following variant: Given a set of n points in R^d, find the smallest shape in question that contains at least k points or a certain quantile of the data. This type of problem is known as a k-enclosing problem. We present a simple algorithmic framework for computing quantile approximations for the minimum strip, ellipsoid, and annulus containing a given quantile of the points. The algorithms run in O(n log n) time.
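
The paper's setting is R^d; as a one-dimensional analogue of its k-enclosing problems, the shortest interval containing at least k of n points can be found in O(n log n), since after sorting an optimal interval always spans k consecutive points.

```python
def smallest_k_enclosing_interval(points, k):
    """Shortest interval containing at least k of the given 1-D points."""
    pts = sorted(points)                          # O(n log n)
    best = min(range(len(pts) - k + 1),           # slide a window of k points
               key=lambda i: pts[i + k - 1] - pts[i])
    return pts[best], pts[best + k - 1]

data = [3.1, 9.4, 4.0, 15.2, 4.7, 8.8, 5.1, 30.0]
print(smallest_k_enclosing_interval(data, k=3))   # (4.0, 5.1)
```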
12

Carson, Richard T., and Jordan J. Louviere. "Estimation of Broad-Scale Tradeoffs in Community Policing Policies." Journal of Benefit-Cost Analysis 8, no. 3 (2017): 385–98. http://dx.doi.org/10.1017/bca.2017.24.

Full text
Abstract:
This paper looks at how to measure the tradeoffs in monetary terms that the public is prepared to make with respect to adoption of different community policing options. The approach advanced is a discrete choice experiment in which survey respondents face different policing options which can be described by a set of attributes ranging from costs to outcomes. The main contribution of this paper is to show how to go beyond the usual characterization of the monetized benefits of reducing the level of a specific type of crime to asking the question of whether those benefits differ depending on how that outcome is achieved.
13

Trisakti, Bambang, Atriyon Julzarika, Udhi C. Nugroho, Dipo Yudhatama, and Yudi Lasmana. "CAN THE PEAT THICKNESS CLASSES BE ESTIMATED FROM LAND COVER TYPE APPROACH?" International Journal of Remote Sensing and Earth Sciences (IJReSES) 14, no. 2 (2018): 93. http://dx.doi.org/10.30536/j.ijreses.2017.v14.a2677.

Full text
Abstract:
Indonesia is known as a home of tropical peatlands, found mainly on the Sumatera, Kalimantan and Papua islands. Spatial information on peatland depth is needed for planning agricultural land extensification. The research objective was to develop a preliminary model for estimating peat thickness classes based on a land cover approach and to analyse its applicability using Landsat 8 imagery. Ground data, including land cover, location and thickness of peat, were obtained from various surveys and peatland potential maps (Geology Map and Wetlands Peat Map). The land cover types were derived from a Landsat 8 image. All data were used to build an initial model for estimating peat thickness classes in Merauke Regency. A table of relationships among land cover types, peat potential areas and peat thickness classes was built using the ground survey data and the peatland potential maps that best suited the ground survey data. The table was then used to determine peat thickness classes from the land cover information produced from the Landsat 8 image. The results showed that the estimated peat thickness classes in Merauke Regency consist of two classes: very shallow peatlands and shallow peatlands. Shallow peatlands were distributed in the upper part of Merauke Regency, mainly covered by forest. The number of classes matched the Indonesia Peatlands Map, and the spatial distribution of shallow peatlands was relatively similar in precision and accuracy, but the estimated area of shallow peatlands was greater than that in the Indonesia Peatlands Map. This research showed that peat thickness classes can be estimated qualitatively by the land cover approach; precise estimation of peat thickness could not be done due to the limitations of the in-situ data.
14

Ingremeau, Jean-Jacques, and Olivier Saunier. "Investigations on the source term of the detection of radionuclides in North of Europe in June 2020." EPJ Nuclear Sciences & Technologies 8 (2022): 10. http://dx.doi.org/10.1051/epjn/2022003.

Full text
Abstract:
During the second half of June 2020, small quantities of artificial radionuclides (60Co, 134Cs, 137Cs, 103Ru, 106Ru, 141Ce, 95Nb, 95Zr) were detected in northern Europe (Finland, Sweden, Estonia), the source of the release being unknown. The measured values were close to detection limits and did not pose any health risk. This paper presents the investigations carried out at IRSN in order to identify the origin of the release. The most probable source location and the estimated release magnitude are briefly presented, and this recent set of detections is compared to previous similar ones. The paper mainly focuses on the investigations performed to answer two main questions. First, from which type and part of a nuclear installation could the release have come? Although no certainty is achievable, the most probable source is found to be a spent primary ion-exchange resin. The second question addressed was how this radiological inventory could have been released into the atmosphere; mainly due to the lack of information, no satisfying answer has been found, and what really happened remains unknown.
15

Xiong, Li, Guo-Zheng Wang, and Hu-Chen Liu. "New Community Estimation Method in Bipartite Networks Based on Quality of Filtering Coefficient." Scientific Programming 2019 (May 21, 2019): 1–12. http://dx.doi.org/10.1155/2019/4310561.

Full text
Abstract:
Community detection is an important task in network analysis, in which we aim to find a network partitioning that groups together vertices with similar community-level connectivity patterns. Bipartite networks are a common type of network in which there are two types of vertices, and only vertices of different types can be connected. While there is a range of powerful and flexible methods for dividing a bipartite network into a specified number of communities, it is an open question how to determine exactly how many communities one should use, and the estimation of the number of pure-type communities in a bipartite network has not been fully addressed. In this paper, we propose a method named “biCNEQ” (bipartite network communities number estimation based on quality of filtering coefficient), which ensures that communities are all of pure type, for estimating the number of communities in a bipartite network. This paper makes the following contributions: (1) we show how a unipartite weighted network, which we call the similarity network, can be projected from a bipartite network using a measure of correlation; (2) we reveal the relation between the similarity correlation and a community's edges in the vertices of a unipartite network; (3) we design a measure of filtering quality named QFC (quality of filtering coefficient) to filter the similarity network and construct a binary network, which we call the approximation network; and (4) the number of communities in each type of unipartite network is estimated using Riolo's method with the approximation network as input. Finally, the proposed biCNEQ is demonstrated on both synthetic bipartite networks and a real-world network, and the results show that it can determine the correct number of communities and performs better than two classical one-mode projection methods.
16

De Fraine, Bieke, Jan Van Damme, and Patrick Onghena. "Accountability of Schools and Teachers: What Should Be Taken into Account?" European Educational Research Journal 1, no. 3 (2002): 403–28. http://dx.doi.org/10.2304/eerj.2002.1.3.2.

Full text
Abstract:
The domain of school effectiveness relates to the question of the accountability of schools. It is commonly agreed that a correction should be made for student background in order to achieve fair comparisons between schools. But even then, a fair estimation of schools' value added is not achieved. The composition of the group of students arguably has an effect over and above individual student characteristics. This study addresses the effects of group composition in secondary schools and classes on achievement and well-being. Compositional effects are discussed with reference to type A and type B effects. Type A effects are school effectiveness indices controlling for student background; type B school effects control for both student background and school context.
17

Kim, Yunjong, Seungwoo Choi, and Mun Yong Yi. "Applying Comparable Sales Method to the Automated Estimation of Real Estate Prices." Sustainability 12, no. 14 (2020): 5679. http://dx.doi.org/10.3390/su12145679.

Full text
Abstract:
In this paper, we propose a novel procedure designed to apply the comparable sales method to the automated price estimation of real estate, in particular apartments. Apartments are the most popular residential housing type in Korea. The price of a single apartment is influenced by many factors, making it hard to estimate accurately. Moreover, as an apartment is purchased for living, with a sizable amount of money, it is traded infrequently; thus, its past transaction price may not be particularly helpful to the estimation after a certain period of time. For these reasons, the up-to-date price of an apartment is commonly estimated by certified appraisers, who typically rely on the comparable sales method (CSM). CSM requires comparable properties to be identified and used as references in estimating the current price of the property in question. In this research, we develop a procedure to systematically apply this method to the automated estimation of apartment prices and assess its applicability using nine years of real transaction data from the capital city and the most populated province in South Korea, together with multiple scenarios designed to reflect conditions of low and high fluctuation in housing prices. The results from extensive evaluations show that the proposed approach is superior to the traditional approach of relying on real estate professionals, and also to a baseline machine learning approach.
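
A bare-bones sketch of the comparable sales idea: standardize features, pick the k most similar past transactions, and average their prices with inverse-distance weights. The features and numbers are invented for illustration and omit the time adjustments a real system (or the paper's procedure) would need.

```python
import numpy as np

def comparable_sales_estimate(subject, comps, prices, k=3):
    """Estimate a price from the k most similar past transactions."""
    comps = np.asarray(comps, dtype=float)
    prices = np.asarray(prices, dtype=float)
    mu, sd = comps.mean(axis=0), comps.std(axis=0) + 1e-12
    z_comps = (comps - mu) / sd                  # standardized features
    z_subj = (np.asarray(subject, dtype=float) - mu) / sd
    d = np.linalg.norm(z_comps - z_subj, axis=1)
    nearest = np.argsort(d)[:k]
    weights = 1.0 / (d[nearest] + 1e-12)         # closer comps weigh more
    return float(np.average(prices[nearest], weights=weights))

# Hypothetical columns: floor area (m^2), floor number, building age (yr).
comps = [[84, 10, 5], [84, 3, 5], [60, 12, 20], [100, 15, 2], [82, 9, 6]]
prices = [540, 505, 380, 700, 530]               # past transaction prices
print(comparable_sales_estimate([85, 11, 5], comps, prices))
```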
18

Reznik, Aleksandr L., Vitaliy M. Efimov, and Andrey V. Torgov. "Effective Methods on Speed of Digital Processing Dynamic Sequences of Images." Siberian Journal of Physics 3, no. 3 (2008): 95–103. http://dx.doi.org/10.54362/1818-7919-2008-3-3-95-103.

Full text
Abstract:
The sharply risen performance capabilities of modern production-type computers allow the successful use of hybrid computing schemes in problems related to the concurrent processing of image sequences. These schemes are based, on the one hand, on the most efficient recently developed analytic and numerical methods for the problem solution and, on the other hand, on combining the methods in question with the concurrent processing of not one or two images but a whole sequence of them. A stable method of this type for the estimation of unknown camera parameters and terrain relief reconstruction via the joint simultaneous processing of an arbitrary number of satellite images is presented in this paper.
19

Mao, Zhu, and Michael D. Todd. "Uncertainty Modeling and Quantification for Structural Health Monitoring Features Derived from Frequency Response Estimation." Key Engineering Materials 569-570 (July 2013): 1148–55. http://dx.doi.org/10.4028/www.scientific.net/kem.569-570.1148.

Full text
Abstract:
System identification in the frequency domain plays a fundamental role in many aspects of mechanical and structural engineering. Frequency domain approaches typically involve estimation of a transfer function, whether it is the usual frequency response function (FRF) or an output-to-output transfer model (transmissibility). The field of structural health monitoring, which involves extracting and classifying features mined from in-situ structural performance data for the purposes of damage condition assessment, has exploited many features that inherently derive from estimates of frequency domain models such as the FRF or transmissibility. Structural health monitoring inevitably involves a hypothesis test at the classification stage, such as the common binary question: are the features mined from data derived from a reference condition, or from data derived from a different (test) condition? Inevitably, this decision involves stochastic data, as any such candidate feature is compromised by error, which we categorize as (i) operational and environmental, (ii) measurement, and (iii) computational/estimation. Regardless of source, this noise leads to the propagation of error, resulting in possible false positive (Type I) errors in the classification. As such, the quantification of uncertainty in the estimation of such features is essential to making informed decisions based on a hypothesis test. This paper demonstrates several statistical models that describe the uncertainty in FRF estimation and compares their performance for features derived from them for the purposes of detecting damage, with ultimate performance evaluated by receiver operating characteristics (ROCs). A simulation and a plate subject to single-input/single-output vibration testing serve as the comparison testbeds.
20

Nyinoh, I. W. "Seventy Years on from the Luria and Delbrück Fluctuation Analysis: A Comparison of three Methods for Estimating Mutation Rate." NIGERIAN ANNALS OF PURE AND APPLIED SCIENCES 6 (December 28, 2015): 50–58. http://dx.doi.org/10.46912/napas.8.

Full text
Abstract:
Seventy years ago, Luria and Delbrück developed the fluctuation assay for estimating mutation rates. While this method is slightly dated, it is one of the few methods for estimating mutation rates in batch culture. Mutation rates, when determined, expose information on cellular processes and fundamental mutagenic mechanisms. Formerly, inferences drawn from the fluctuation assay were sufficient to answer a specific question in bacterial genetics; however, contemporary interpretation of results goes far beyond the motive originally intended, and as the fluctuation assay has gained popularity in various scientific disciplines, the analyses of the results obtained are not the same. This study aims to compare the estimation of mutation rates using the Poisson distribution (P0) method with the Ma-Sandri-Sarkar maximum likelihood estimator and the Lea-Coulson median estimator. Mycobacterium smegmatis mc2 155 was used as a model organism for Mycobacterium tuberculosis, and spontaneous mutations that arose in stationary-phase cells exposed to antibiotic stress were investigated. Ten to twenty-four parallel cultures were tested with various anti-tuberculosis drugs: isoniazid, kanamycin, rifampicin and streptomycin. The Minimum Inhibitory Concentrations (MIC) of the drugs were determined to be 8 µg/mL, 0.24 µg/mL, 16 µg/mL and 0.5 µg/mL for isoniazid, kanamycin, rifampicin and streptomycin, respectively. The mutation rates obtained with the methods were very similar. To improve the power of deductions drawn from fluctuation assays, efforts should be made to experimentally determine the relative fitness of wild-type to mutant bacteria. This comparison is only a guide, providing evidence regarding the authenticity of some of the methods currently available to researchers interested in estimating bacterial mutation rates. Keywords: antibiotic resistance, mutation rate, fluctuation assay, fluctuation analysis calculator.
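
Of the three methods compared, the Lea-Coulson median estimator is simple enough to sketch: solve r_median/m - ln(m) = 1.24 for the expected number of mutations m per culture, then divide by the final cell count. The constant 1.24 is the standard Lea-Coulson value; the counts and cell number below are invented.

```python
import numpy as np
from scipy.optimize import brentq

def lea_coulson_mutation_rate(mutant_counts, final_cells):
    """Lea-Coulson median estimator for a fluctuation assay."""
    r_med = float(np.median(mutant_counts))
    # Solve r_med / m - ln(m) = 1.24 for m (mutations per culture).
    m = brentq(lambda m: r_med / m - np.log(m) - 1.24, 1e-8, 1e8)
    return m / final_cells          # mutation rate per cell

# Illustrative mutant counts from 20 parallel cultures (made up).
counts = [0, 1, 1, 2, 0, 3, 5, 1, 0, 2, 1, 8, 0, 1, 2, 4, 1, 0, 2, 1]
rate = lea_coulson_mutation_rate(counts, final_cells=2e8)
print(f"mutation rate ~ {rate:.2e} per cell")
```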
21

Pandey, Mamta, Ratnesh Litoriya, and Prateek Pandey. "Applicability of Machine Learning Methods on Mobile App Effort Estimation: Validation and Performance Evaluation." International Journal of Software Engineering and Knowledge Engineering 30, no. 01 (2020): 23–41. http://dx.doi.org/10.1142/s0218194020500023.

Full text
Abstract:
Software cost estimation is one of the most crucial tasks in the software development life cycle. Some well-proven methods and techniques have been developed for effort estimation in the case of classical software. Mobile applications (apps) differ from conventional software in their nature, size and operational environment; therefore, the established estimation models for traditional desktop or web applications may not be suitable for mobile app development. The objective of this paper is to propose a framework for mobile app project estimation. The research methodology adopted in this work is based on selecting different features of mobile apps from the SAMOA dataset; these features are later used as input vectors to the selected machine learning (ML) techniques. The results of this research experiment are measured in mean absolute residual (MAR). The experimental outcomes are then followed by the proposition of a framework to recommend an ML algorithm as the best match for superior effort estimation of the project in question. This framework uses the Mamdani-type fuzzy inference method to address ambiguities in the decision-making process. The outcome of this work will particularly help mobile app estimators, development professionals, and the industry at large to accurately determine the required effort in their projects.
22

Tripp, Nicolás G., Aníbal E. Mirasso, and Sergio Preidikman. "Numerical analysis of the influence of inertial loading over morphing trailing edge devices." Journal of Intelligent Material Systems and Structures 29, no. 18 (2018): 3533–49. http://dx.doi.org/10.1177/1045389x18783867.

Full text
Abstract:
Larger and more flexible wind turbine blades are currently being manufactured. These highly flexible blades suffer from loading of an aeroelastic nature, which increases fatigue damage. Smart blade concepts are being developed to reduce the aerodynamic loading, and the state of the art favors the discrete deformable trailing edge concept. Many authors have reported adequate performance of this type of actuator in reducing blade vibrations. However, the question of whether the actuator can maintain its authority under strong external loading remains unanswered. To address this question, actuator models that include the loading produced by the blade vibration are required. In this article, a smart morphing trailing edge model is presented that includes the inertial forces produced by the blade dynamics. The model is applied to a commercial actuator and the influence of its parameters is analyzed. Finally, a simple estimation of the inertial loading produced by a 35-m wind turbine blade at the flutter instability condition is analyzed to understand the design requirements of this type of system.
23

Kozhevnikov, V. V. "CALCULATION OF MEASUREMENTS UNCERTAINTY AT CARRYING OUT OF BALLISTIC RESEARCHES." Theory and Practice of Forensic Science and Criminalistics 17 (November 29, 2017): 236–45. http://dx.doi.org/10.32353/khrife.2017.30.

Full text
Abstract:
Today, one of the priority problems for the measurement laboratories of the Expert Service subdivisions of the Ministry of Internal Affairs of Ukraine is obtaining an accreditation certificate under the international standard ISO/IEC 17025:2006. One of the requirements imposed on accredited testing laboratories is the presence of an uncertainty estimation procedure and the ability to apply it. As ballistic research is one of the important directions of research carried out in the expert subdivisions, this paper is devoted to the question of uncertainty calculation in such measurements. In mathematical statistics, two types of parameters characterizing the dispersion of uncorrelated random variables are known: the root-mean-square deviation and the confidence interval. As characteristics of uncertainty, they are applied under the names standard and expanded uncertainty. An elementary estimation of a measurement result and its uncertainty is carried out in the following order: description of the measured quantity; identification of uncertainty sources; quantitative description of the uncertainty components (which can be obtained a posteriori or a priori); and calculation of the standard uncertainty of each source, the combined standard uncertainty and the expanded uncertainty. An a posteriori estimation is possible only in the case of multiple observations of the measured quantity (type A standard uncertainty). An a priori estimation is carried out when multiple observations are not performed; in this case it is necessary to use information from measurements performed before, from the specifications of the measuring equipment, or from reference books (type B standard uncertainty). A short consideration of the uncertainty concept and of the basic stages of estimating a measurement result and its uncertainty makes it possible to turn theoretical knowledge into the practical application of uncertainty estimation, using examples of measurement uncertainty calculation in ballistic ammunition research carried out in two different ways.
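
A minimal numeric version of the type A/type B workflow described above, following the usual GUM recipe; the readings and instrument limits are assumed for illustration (e.g., weighing a projectile on a balance).

```python
import numpy as np

# Type A: repeated observations of the measurand (made-up readings, grams).
obs = np.array([9.52, 9.49, 9.55, 9.51, 9.50, 9.53])
u_a = obs.std(ddof=1) / np.sqrt(len(obs))  # standard uncertainty, type A

# Type B: the balance data sheet gives limits of +/- 0.01 g, treated as a
# rectangular distribution, so u = half-width / sqrt(3).
u_b = 0.01 / np.sqrt(3)

u_c = np.hypot(u_a, u_b)   # combined standard uncertainty (root sum square)
U = 2 * u_c                # expanded uncertainty, coverage factor k = 2
print(f"result: {obs.mean():.3f} g +/- {U:.3f} g (k=2, ~95% coverage)")
```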
24

García, Raymundo Cordero, Edson Antonio Batista, Márcio Afonso Soleira Grassi, and João Onofre Pereira Pinto. "Type-III resolver-to-digital converter using synchronous demodulation / Tipo-III conversor resolver-para-digital utilizando a desmodulação síncrona." Brazilian Journal of Development 8, no. 6 (2022): 45863–77. http://dx.doi.org/10.34117/bjdv8n6-213.

Full text
Abstract:
The resolver is an angular position sensor widely used in applications such as electric/hybrid vehicles, CNC machines, antennas and robotics. However, the estimation of the angular position from resolver outputs is more difficult than the analysis of encoder signals, and it is still an open question. Most algorithms proposed in the literature are based on type-I or type-II angle tracking observers. Some type-III observers have been proposed, but they require a high sampling frequency. This paper explores the use of synchronous demodulation of the resolver outputs to simplify the implementation of a type-III angle tracking observer. The resolver outputs are sampled at the peaks and valleys of the resolver excitation signal, making it easy to obtain the sine and cosine of the angular position. The proposed approach reduces the computational cost and the sampling frequency required to implement the type-III observer. Simulation and experimental results prove the accuracy of the proposed approach.
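
A small simulation of the sampling idea: reading the resolver channels exactly at the excitation peaks recovers sin(theta) and cos(theta) directly, after which atan2 yields the angle a tracking observer would lock onto. The rates and rotor speed are illustrative, and the type-III observer itself is not reproduced here.

```python
import numpy as np

fs, f_exc = 80_000.0, 10_000.0             # sampling and excitation rates, Hz
t = np.arange(0.0, 0.01, 1.0 / fs)
theta = 2 * np.pi * 25.0 * t               # rotor spinning at 25 rev/s
carrier = np.sin(2 * np.pi * f_exc * t)    # excitation signal
sin_out = carrier * np.sin(theta)          # resolver sine channel
cos_out = carrier * np.cos(theta)          # resolver cosine channel

peaks = np.isclose(carrier, 1.0)           # samples taken at carrier peaks
angle = np.arctan2(sin_out[peaks], cos_out[peaks])  # demodulated angle
print(np.allclose(angle, theta[peaks], atol=1e-9))  # True
```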
25

Brzeziński, Mariusz, and Dariusz Pyza. "A Refined Model for Carbon Footprint Estimation in Electric Railway Transport." Energies 16, no. 18 (2023): 6567. http://dx.doi.org/10.3390/en16186567.

Full text
Abstract:
There is a plethora of methods in the global literature that can be used to measure CO2 emissions from electrified transport. But are these methods reliable, and do they offer a true view of exactly how much of this greenhouse gas is produced by electric rail transport? We answer this question by proposing an improved CO2 emission estimation model based on cargo transport. Unlike other works, our study includes four crucial steps: (1) estimation of energy consumption in electrified rail cargo transport; (2) estimation of energy losses in the railway traction system and high-voltage transmission lines; (3) estimation of CO2 emissions in conventional power plants; and (4) determination of the intensity of CO2 emissions from electrified rail cargo transport. Based on our method, we concluded that the intensity of CO2 emissions depends not only on the type of fossil fuel used for energy production but also on the parameters of the cargo train, such as its length and weight or the total number of wagon axles (which depends on wagon type). The obtained intensity of CO2 emissions in electrified rail cargo transport varies slightly from the values reported in the global literature, mainly because of the conditions under which the tests were conducted. Nevertheless, our results shed new light on how CO2 should be measured. We show that the decarbonization of electrified rail cargo transport will not be possible without infrastructure modernization. In addition, based on a case study, we also deliver knowledge on how to reduce the environmental impact of electrified rail cargo transport.
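
A back-of-envelope version of the four-step pipeline with invented numbers, showing how the loss terms enter the final intensity figure.

```python
# All inputs are assumed values for illustration, not the paper's data.
train_energy_kwh = 5_000.0        # traction energy for one freight run
eta_transmission = 0.94           # high-voltage transmission efficiency
eta_traction = 0.90               # traction supply system efficiency
grid_g_co2_per_kwh = 750.0        # coal-heavy generation mix
payload_t, distance_km = 1_200.0, 300.0

# Gross up consumption by the losses, convert to CO2 at the power plant,
# then divide by the transport work (tonne-kilometres).
energy_at_plant = train_energy_kwh / (eta_transmission * eta_traction)
total_co2_g = energy_at_plant * grid_g_co2_per_kwh
intensity = total_co2_g / (payload_t * distance_km)
print(f"{intensity:.1f} g CO2 per tonne-km")    # ~12.3 with these inputs
```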
26

Dudel, Christian, Jan Marvin Garbuszus, Notburga Ott, and Martin Werding. "Matching as Non-Parametric Preprocessing for the Estimation of Equivalence Scales." Jahrbücher für Nationalökonomie und Statistik 237, no. 2 (2017): 115–41. http://dx.doi.org/10.1515/jbnst-2017-0103.

Full text
Abstract:
Empirically analyzing household behavior usually relies on informal data preprocessing. That is, before an econometric model is estimated, observations are selected in such a way that the resulting subset of data is sufficiently homogeneous to be of interest for the specific research question pursued. In the context of estimating equivalence scales for household income, we use matching techniques and balance checking at this initial stage. This can be interpreted as a non-parametric approach to preprocessing data and as a way to formalize informal procedures. To illustrate this, we use German micro-data on household expenditure to estimate equivalence scales as a specific example. Our results show that matching leads to results which are more stable with respect to model specification and that this type of formal preprocessing is especially useful if one is mainly interested in results for specific subgroups, such as low-income households.
27

Ndwandwe, L., J. S. Allison, L. Santana, and I. J. H. Visagie. "Testing for the Pareto type I distribution: a comparative study." METRON 81, no. 2 (2023): 215–56. http://dx.doi.org/10.1007/s40300-023-00252-5.

Full text
Abstract:
Pareto distributions are widely used models in economics, finance and actuarial sciences. As a result, a number of goodness-of-fit tests have been proposed for these distributions in the literature. We provide an overview of the existing tests for the Pareto distribution, focussing specifically on the Pareto type I distribution. To date, only a single overview paper on goodness-of-fit testing for Pareto distributions has been published. However, the mentioned paper has a much wider scope than is the case for the current paper as it covers multiple types of Pareto distributions. The current paper differs in a number of respects. First, the narrower focus on the Pareto type I distribution allows a larger number of tests to be included. Second, the current paper is concerned with composite hypotheses compared to the simple hypotheses (specifying the parameters of the Pareto distribution in question) considered in the mentioned overview. Third, the sample sizes considered in the two papers differ substantially. In addition, we consider two different methods of fitting the Pareto type I distribution: the method of maximum likelihood and a method closely related to moment matching. It is demonstrated that the method of estimation has a profound effect, not only on the powers achieved by the various tests, but also on the way in which numerical critical values are calculated. We show that, when using maximum likelihood, the resulting critical values are shape invariant and can be obtained using a Monte Carlo procedure. This is not the case when moment matching is employed. The paper includes an extensive Monte Carlo power study. Based on the results obtained, we recommend the use of a test based on the phi divergence together with maximum likelihood estimation.
28

Rakotonirina, M. D. L., and Jean-Paul Ngbolua. "Modélisation géologique et estimation d’un gisement de Fer de Bekisopa, Madagascar." Revue Congolaise des Sciences & Technologies 2, no. 4 (2022): 498–504. http://dx.doi.org/10.59228/rcst.023.v2.i4.56.

Full text
Abstract:
This study aims to carry out an in-depth analysis of the geology of the iron deposit in question and to explore its exploitation potential. By combining field observations, drilling operations, geological analyses and the use of modelling tools, our intention is to arrive at a thorough understanding of the composition, distribution and geological characteristics of the Bekisopa iron deposit, discovered by H. Besairie in 1933. The evaluation of the iron reserves was carried out using geostatistical methods, notably kriging. The drilling data, together with the geological information from the three-dimensional models, were used to establish variograms and models of spatial continuity. This approach led to a quantitative estimate of the iron reserves present in the deposit. A total of thirty boreholes were drilled over an area of one square kilometre. These drilling data were crucial for calculating a deposit volume of about 25,000,000 cubic metres, given an average deposit density of 4,500 kg/m3 and an average grade of 40%. This estimate led to an iron resource value of approximately forty million (40,000,000) tonnes. Since the ore lies at a relatively accessible depth near the surface, open-pit mining presents itself as a feasible and attractive option. This type of mining involves extracting ore from large open pits, an economically viable approach when the mineral resources are located near the surface. Keywords: modelling, geological characteristics, Bekisopa, kriging, variogram.
29

Dunn, G., M. Maracy, C. Dowrick, et al. "Estimating psychological treatment effects from a randomised controlled trial with both non-compliance and loss to follow-up." British Journal of Psychiatry 183, no. 4 (2003): 323–31. http://dx.doi.org/10.1192/bjp.183.4.323.

Full text
Abstract:
Background: The Outcomes of Depression International Network (ODIN) trial evaluated the effect of two psychological interventions for the treatment of depression in primary care. Only about half of the patients in the treatment arm complied with the offer of treatment, prompting the question: ‘what was the effect of treatment in those patients who actually received it?’ Aims: To illustrate the estimation of the effect of receipt of treatment in a randomised controlled trial subject to non-compliance and loss to follow-up. Method: We estimated the complier average causal effect (CACE) of treatment. Results: In the ODIN trial the effect of receipt of psychological intervention (an average of about 4 points on the Beck Depression Inventory) is about twice that of offering it. Conclusions: The statistical analysis of the results of a clinical trial subject to non-compliance with the allocated treatment is now reasonably straightforward through estimation of a CACE, and investigators should be encouraged to present the results of analyses of this type as a routine component of a trial report.
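
The CACE logic is compact enough to sketch: under randomization and the usual instrumental-variable assumptions (no defiers, exclusion restriction), it is the intention-to-treat effect on the outcome divided by the effect of assignment on treatment receipt. The toy data below are invented, echoing only the roughly 50% uptake pattern, not the ODIN results.

```python
import numpy as np

def cace(outcome, assigned, received):
    """Complier average causal effect = ITT effect / uptake difference."""
    outcome, assigned, received = map(np.asarray, (outcome, assigned, received))
    itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()
    uptake = received[assigned == 1].mean() - received[assigned == 0].mean()
    return itt / uptake

rng = np.random.default_rng(7)
n = 4_000
z = rng.integers(0, 2, n)                 # randomized offer of treatment
complier = rng.random(n) < 0.5            # about half would take it up
d = z * complier                          # receipt (controls have no access)
y = 10.0 - 4.0 * d + rng.normal(0, 6, n)  # outcome; true effect -4 if treated
print(f"CACE estimate: {cace(y, z, d):.2f}")   # near -4, ~2x the ITT effect
```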
30

Khmelnychyi, L. M., V. V. Vechorka, and S. L. Khmelnychyi. "DEPENDENCE OF THE MILK YIELD OF DAIRY COWS ON LINEAR ESTIMATION BY TYPE." Animal Husbandry of the Steppe of Ukraine 1, no. 1 (2022): 29–35. http://dx.doi.org/10.31867/2786-6750.1.1.2022.29-35.

Full text
Abstract:
Since the determination of sires' breeding value has been based on the classification of daughters at the age of first lactation, studies of the breeding and genetic aspects of linear scoring have been limited to scoring arrays of animals at that same age. Considering the indisputable importance of linear estimation, the existing correlative variability between individual descriptive and complex conformation traits and dairy productivity, and the age-related variability in the development of body conformation, a study was carried out on how the level of linear classification traits assessed at the age of first lactation influences the milk productivity of cows in the subsequent second and third lactations.
 The reliability of linear estimation by type was confirmed by the positive correlations maintained over the further use of the cows. Studies conducted on livestock of the Ukrainian Black-and-White and Red-and-White dairy breeds showed positive correlations between evaluated conformation traits and cow milk yield, with variability in their direction, degree and reliability depending on the recorded lactation. First of all, at the age of first lactation, positive correlation coefficients were found between conformation traits and the milk yield of cows, a significant confirmation of the use of this breeding practice as one of the components in the comprehensive determination of dairy cattle breeding value worldwide. Another important element in the correlative variability of linearly estimated conformation traits with productivity was the establishment of sufficiently high correlation coefficients between the evaluation of the four complexes of linear traits on the 100-score system and milk yield for first lactation within the experimental dairy breeds.
 Within the framework of group traits, a positive relationship was found between first-lactation estimation and the milk yield of the Ukrainian Black-and-White and Red-and-White dairy breeds for: dairy type (r=0.502 and 0.447), body (r=0.385 and 0.309), limbs (r=0.129 and 0.154), and udder (r=0.404 and 0.383), respectively. Examining whether the relationship between the assessment of group conformation traits and milk yield obtained at the age of first lactation is preserved between these traits and the milk yield of subsequent lactations, it was found that, within the compared groups of animals of both breeds, the individual correlation coefficients obtained at the age of first lactation were repeated in the second lactation with less force but at a sufficient level of reliability. The correlation between the group-trait linear estimation of first-calf cows and milk yield for the third lactation did not repeat the level of the relationships obtained at the age of the first and second calvings, although a certain pattern in their direction was followed, with confirmation of reliability at different levels. A significant proportion of the descriptive conformation traits were associated with milk yield for the first lactation, as evidenced by reliable correlation coefficients.
 However, these correlations decrease with age, and by the third lactation such a relationship was almost absent. Descriptive conformation traits that correlated with milk yield at the age of first lactation and repeated these relationships with yield at the second and third lactations belonged to the traits of dairy-type animals and were reliable indicators of cow milking ability. They included: height, chest width, body depth, angularity, rear width, pelvic limb posture, fore udder attachment, rear udder attachment height and central ligament. Thus, the reliable level of positive correlation established between the estimation of group traits of linear classification at the age of first lactation and milk yield for the subsequent second and third lactations testifies to the effectiveness of selecting dairy cattle evaluated by conformation type. The level of correlative variability of some descriptive conformation traits with the milk yield of first-calf cows was not repeated in the combined indicators of the same estimation with yield at the age of the second and third lactations, which is explained by the natural unevenness of age-related variability in the development of body type parts under the influence of genotypic and paratypic factors.
31

Mao, Zhu, and Michael D. Todd. "Optimal Structural Health Monitoring Feature Selection via Minimized Performance Uncertainty." Key Engineering Materials 558 (June 2013): 235–43. http://dx.doi.org/10.4028/www.scientific.net/kem.558.235.

Full text
Abstract:
Power spectral measurements are ubiquitous in generating structural health monitoring (SHM) features, because of their clear physical interpretation and easy computation through the Fourier transform. In most SHM applications, optimal features are desired for whatever level of assessment is required. Optimal in this sense refers to a measure of performance capability to enhance decision-making, because structural health monitoring inevitably involves, at some level, a hypothesis test; in the binary case, the question becomes: are the features extracted from data derived from a baseline condition (where baseline can also mean linear, or any reference condition designated the null hypothesis), or from data derived from a different (test) condition? Inevitably, this decision involves stochastic data, as any candidate feature is compromised by noise, which we may categorize as (i) operational and environmental, (ii) measurement, and (iii) computational/estimation. Regardless of source, this noise leads to the propagation of uncertainty from inception to the final estimation of the feature; in all cases, the resulting distribution of the features can lead to significant false positive (Type I) or false negative (Type II) errors in the classification of the features via the hypothesis test. Frequency domain approaches for SHM typically involve estimation of some form of transfer function, typically the usual frequency response function (FRF). Based upon statistical modeling of the uncertainty of feature estimation, this paper evaluates the performance of two FRF-derived features, namely the dot-product difference (DPD) and the Euclidean distance (ED), and their statistical significance detection qualities are quantitatively compared. In each of the feature evaluations, the performance comparison is executed under the condition of the best trade-off between sensitivity and specificity, adopting receiver operating characteristics (ROCs) as the performance indicator; ROCs are data-driven methods for comparing detection rates to error rates as a function of decision boundaries established between data distributions, independent of the actual underlying distribution. Monte Carlo simulation and lab-scale tests on plate-like structures are both used to validate the optimal feature selection process and demonstrate the performance enhancement.
32

Thurik, A. Roy. "Productivity in Small Business: An Analysis Using African Data." American Journal of Small Business 11, no. 1 (1986): 27–42. http://dx.doi.org/10.1177/104225878601100103.

Full text
Abstract:
Labor and floorspace cost functions are derived for small business trade. Relationships are proposed between the average volume of labor or average floorspace per establishment on the one hand, and average size per establishment, average rental paid, percentage of selling space, and indicators of business type and location on the other. Promising estimation results are reported using South African data from 1979/1980. The method, however, is not restricted to the South African case. A productivity business support system can be developed providing productivity standards for any area in small (service) business. An analysis similar to the one presented here, but relating to the area in question, should precede the development of such a system.
33

Onyango, Ronald, Samuel B. Apima, and Amos Wanjara. "Estimation of finite population mean of a sensitive variable using three-stage ORRT in the presence of non-response and measurement errors." Engineering and Applied Science Letters 6, no. 1 (2023): 37–48. https://doi.org/10.30538/psrp-easl2023.0094.

Full text
Abstract:
The purpose of this study is to present a generalized class of estimators using the three-stage Optional Randomized Response Technique (ORRT) in the presence of non-response and measurement errors on a sensitive study variable. The proposed estimator makes use of dual auxiliary information. The expressions for the bias and mean square error of the proposed estimator are derived using a Taylor series expansion. The proposed estimator's applicability is demonstrated using real data sets. A numerical study is used to compare the efficiency of the proposed estimator with adapted estimators of the finite population mean. The suggested estimator performs better than the adapted ordinary, ratio, and exponential ratio-type estimators in the presence of both non-response and measurement errors. The efficiency of the proposed estimator of the population mean declines as the inverse sampling rate, non-response rate, and sensitivity level of the survey question increase.
APA, Harvard, Vancouver, ISO, and other styles
34

Wang, Hong, Yue Jin Shang, and Wan Xuan Liu. "Analysis of Fatigue Residual Life of Zhuan 8AG Side Frame." Advanced Materials Research 588-589 (November 2012): 84–89. http://dx.doi.org/10.4028/www.scientific.net/amr.588-589.84.

Full text
Abstract:
Whether the Zhuan 8AG type bogie can meet the requirements of increased operating speed has become an important question facing the railway sector. The residual life of the Zhuan 8AG side frame is studied through an interior fatigue test equivalent to actual running conditions. The interior fatigue test data were processed with a three-parameter Weibull distribution, and point and interval estimates of the safe life were given. The fitting results indicate that the test data conform to the Weibull distribution very well. Using a generalized P-S-N formula and a corrected two-dimensional Miner's rule, a residual running mileage prediction formula for the Zhuan 8AG side frame is derived from the interior test and the actual line test. The point and interval estimates of residual life are computed for a given reliability and confidence. The estimates show that, at 95% confidence and 99.9% reliability, the Zhuan 8AG side frame has at least 10.6 years of residual life; its prospects for application at increased speed are therefore optimistic. Because the forces on the side frame differ somewhat between the interior test and actual operation, the region of highest crack incidence in the test differs from that observed in service: in this fatigue test all failures occurred at position C of the side frame, whereas in actual running the most probable crack location is the top surface of the pedestal, which accounts for 50% of all cracks on the side frame. The paper recommends comparative analysis of interior fatigue and actual operating results, so as to simulate actual working conditions more realistically.
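A minimal sketch of the distribution-fitting step, using entirely hypothetical fatigue-life data: scipy's weibull_min implements the three-parameter Weibull (shape, location, scale), and a point estimate of the "safe life" at a target reliability is simply the corresponding low quantile of the fitted distribution.

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical fatigue lives (cycles) from an interior test; illustrative only.
lives = np.array([1.92e6, 2.31e6, 2.55e6, 2.78e6, 3.04e6, 3.41e6, 3.87e6])

# Maximum-likelihood fit of the 3-parameter Weibull: shape c, location, scale.
c, loc, scale = weibull_min.fit(lives)

# Point estimate of the safe life at 99.9 % reliability:
# the 0.1 % quantile of the fitted life distribution.
safe_life = weibull_min.ppf(1 - 0.999, c, loc=loc, scale=scale)
print(f"shape = {c:.2f}, min life = {loc:.3e}, scale = {scale:.3e}")
print(f"life at 99.9% reliability: {safe_life:.3e} cycles")
```

Interval estimates at a stated confidence level, as used in the paper, would typically be obtained by bootstrapping or likelihood-based methods on top of this fit, which the sketch omits.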
APA, Harvard, Vancouver, ISO, and other styles
35

Sugár, V., and Zs Fáczányi. "Renewable energy in a dense urban fabric: solar gain estimation in the case of a turn of the century district." IOP Conference Series: Materials Science and Engineering 1252, no. 1 (2022): 012041. http://dx.doi.org/10.1088/1757-899x/1252/1/012041.

Full text
Abstract:
Buildings are responsible for a significant share of energy consumption. New constructions are already obliged to use less energy, but older buildings are inefficient by comparison. The building energy prescriptions in force in Hungary since 2021 require a considerable share of renewable energy to be produced on site. In a densely built urban fabric, this renewable ratio can mostly be reached using solar technologies. The authors survey the solar characteristics of a downtown area of Budapest. A considerable share of the stock is residential: traditional apartment houses built around the turn of the 19th-20th century. Retrofit solutions and renewable power generation are restricted by the dense fabric and by heritage protection rules. This paper introduces an estimation of the solar energy potential of this characteristic building type, using the Hungarian calculation system that conforms to European Union requirements. The methodology combines 3D modelling and irradiation software simulation. As a result, the connection between the footprint and irradiation data of the characteristic heritage buildings is presented, together with the solar power generation potential of the building type in question. The heritage-protection compatibility of the solar systems is also studied.
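Once an irradiation simulation has produced per-surface values, the final potential estimate reduces to a product of usable area, irradiation, and system losses. A back-of-envelope sketch with entirely hypothetical figures, not the paper's calibrated calculation:

```python
# Back-of-envelope PV yield for one heritage apartment house. All figures
# below are assumptions for illustration, not values from the study.
footprint_m2        = 650.0   # building footprint from the 3D model
usable_roof_frac    = 0.35    # after heritage-protection and shading limits
irradiation_kwh_m2  = 1250.0  # annual irradiation on the usable surfaces
panel_efficiency    = 0.20    # module conversion efficiency
performance_ratio   = 0.80    # wiring, inverter, and temperature losses

annual_yield_kwh = (footprint_m2 * usable_roof_frac *
                    irradiation_kwh_m2 * panel_efficiency * performance_ratio)
print(f"Estimated PV yield: {annual_yield_kwh:,.0f} kWh/year")
```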
APA, Harvard, Vancouver, ISO, and other styles
36

Randolph, Tamela, and Helene Sherman. "Alternative Algorithms: Increasing Options, Reducing Errors." Teaching Children Mathematics 7, no. 8 (2001): 480–84. http://dx.doi.org/10.5951/tcm.7.8.0480.

Full text
Abstract:
An algorithm is “a finite, step-by-step procedure for accomplishing a task that we wish to complete” (Usiskin 1998, p. 7). Algorithms have served as a major focus of mathematics education in the United States for decades. Because school-based mathematics focuses on computation and estimation, the tasks of developing number sense, place-value understanding, and strategies for computing with algorithms remain of great importance to elementary school teachers. “The use of algorithms allows students to look at math as a process rather than as a question answer type activity … they can choose from their toolbox. Algorithms provide a comfort zone for some students and encourage students to pursue better ways as they get comfortable with them” (Mingus and Grassl 1998, p. 56).
APA, Harvard, Vancouver, ISO, and other styles
37

Baranov, T. M., and D. A. Zainagabdinov. "Rigidity of Flanged Joints in Prefabricated Tunnel Linings with Tensile Bonding." World of Transport and Transportation 20, no. 2 (2022): 30–41. http://dx.doi.org/10.30932/1992-3252-2022-20-2-3.

Full text
Abstract:
The article considers the issue of estimating the rigidity of the longitudinal flange joint of prefabricated tunnel linings with tensile bonding. The rigidity of flange joints affects the correctness of the predicted forces in tunnel linings. Tunnel design standards indicate the need to consider the rigidity of joints between segments of prefabricated tunnel linings when calculating forces in load-carrying structures; however, the question of estimating the magnitude of this rigidity, and of methods for accounting for it, remains open. The objective of the research is to study design assumptions and to present results on the estimation of the rigidity of ordinary bolted joints between segments of prefabricated tunnel linings, as well as the effect of this rigidity on the forces in tunnel linings. The issue is relevant both for checking calculations of existing structures and for designing new linings with rigid bolted joints and other tensile bonding elements. The article provides an analytical solution of the problem based on the compatibility of deformations of prefabricated elements, and derives the dependence of the angle of mutual rotation of rigid tunnel lining segments on bending moments, longitudinal forces, and the geometric dimensions of lining elements. The correctness of the conclusions was verified by a series of numerical experiments, which produced refined curves of the same dependences and an estimation of the spatial behaviour of cast-iron tubing in the contact area. Solving the contact and physically nonlinear problem of a flange joint of cast-iron tubing with tensile bonding revealed an initially linear portion of the function relating the angle of rotation of the segments to the forces acting in them, for a specific configuration of elements. A technique for applying the research results to modelling tunnel linings as a plane problem in the GTS NX environment is disclosed. Comparative modelling of the same type of test tasks for annular tunnel linings showed that, under various soil conditions, introducing the joint rigidity parameters increases bending moments in the linings by up to 8% while longitudinal forces remain practically unchanged.
APA, Harvard, Vancouver, ISO, and other styles
38

Irfan, Mohammad, and Ghulam Mohammad Arif. "Landlessness in Rural Areas of Pakistan and Policy Options: A Preliminary Investigation." Pakistan Development Review 27, no. 4II (1988): 567–76. http://dx.doi.org/10.30541/v27i4iipp.567-576.

Full text
Abstract:
The quantification of landlessness is a formidable task. Conceptual ambiguities in the classification of landlessness and data limitations compound the difficulties of estimation. Landlessness, an elusive concept, tends to acquire interpretations that vary with the objectives, context, and estimation procedures adopted in different research endeavours. The denotation and connotation of the concept, the population of interest (or at risk), and the objectives of measurement therefore need to be spelled out very clearly for a meaningful and policy-relevant exercise. The state of landlessness has often been identified using the criterion of ownership of and access to land. While 'ownership' may be clear in certain contexts, 'access' needs further explanation in terms of its nature, extent, and type. A related question is the demarcation of the population, or its subset, whose landlessness is to be estimated: should all inhabitants of an area be regarded as the relevant population, or only those who depend primarily on land for their livelihood? The dependence on land also needs to be specified further, in terms of whether the person is engaged in agricultural operations as a worker or not.
APA, Harvard, Vancouver, ISO, and other styles
39

Giraldo-Soto, Catalina, Aitor Erkoreka, Laurent Mora, Irati Uriarte, and Luis Del Portillo. "Monitoring System Analysis for Evaluating a Building’s Envelope Energy Performance through Estimation of Its Heat Loss Coefficient." Sensors 18, no. 7 (2018): 2360. http://dx.doi.org/10.3390/s18072360.

Full text
Abstract:
The present article investigates building energy monitoring systems used to collect the data needed to estimate the Heat Loss Coefficient (HLC) with existing methods, in order to determine the Thermal Envelope Performance (TEP) of a building. The data requirements of HLC estimation methods are related to commonly used methods for fault detection, calibration, and supervision of energy monitoring systems in buildings. Based on an extended review of experimental tests to estimate the HLC undertaken since 1978, qualitative and quantitative analyses of the Monitoring and Controlling System (MCS) specifications have been carried out. The results show that no Fault Detection and Diagnosis (FDD) methods have been implemented in the reviewed literature. Furthermore, it was not possible to identify a trend in the type of technology used in sensors, hardware, software, and communication protocols, because a high percentage of the reviewed experimental tests do not specify the model, technical characteristics, or selection criteria of the implemented MCSs. Although most actual Building Automation Systems (BAS) may measure the required parameters, further research is still needed to ensure that these data are accurate enough to rigorously apply HLC estimation methods.
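One of the simplest estimators in this family is the steady-state "average method", which divides the accumulated heat input (heating power plus solar gains) by the accumulated indoor-outdoor temperature difference. A minimal sketch with hypothetical weekly monitoring data, offered as a generic illustration rather than any specific experiment from the review:

```python
import numpy as np

def hlc_average_method(q_heating_w, q_solar_w, t_in_c, t_out_c):
    """Average-method Heat Loss Coefficient [W/K]: total heat input divided
    by the accumulated indoor-outdoor temperature difference. Assumes
    quasi-steady conditions over the monitoring period."""
    q = np.asarray(q_heating_w) + np.asarray(q_solar_w)
    dt = np.asarray(t_in_c) - np.asarray(t_out_c)
    return q.sum() / dt.sum()

# Hypothetical daily-averaged monitoring data over one week.
hlc = hlc_average_method(
    q_heating_w=[2100, 2300, 2250, 1980, 2400, 2150, 2050],
    q_solar_w=[300, 150, 220, 400, 100, 260, 310],
    t_in_c=[20.1, 20.0, 19.8, 20.2, 20.0, 19.9, 20.1],
    t_out_c=[4.0, 2.5, 3.1, 6.0, 1.8, 3.4, 4.6],
)
print(f"HLC ≈ {hlc:.0f} W/K")
```

The sensor accuracy question raised in the article matters here directly: errors in the temperature difference propagate straight into the HLC estimate.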
APA, Harvard, Vancouver, ISO, and other styles
40

Mahapatra, Mausumi, Priyanka P. Singh, and Ganeswar Nath. "Laser speckle-based estimation of surface condition for designing quieter material." Laser Physics 34, no. 1 (2023): 016003. http://dx.doi.org/10.1088/1555-6611/ad0ec0.

Full text
Abstract:
Laser speckle is a non-contact, non-interfering, non-destructive, rapid and controllable technique with a large area of coverage that has attracted much attention for the surface analysis of materials. The emergence of different natural cellulosic materials as potential replacements for synthetic ones is promising, but their sustainability from the point of view of material design is still in question. Machining the surface of cellulosic components for sound-dampening materials has a critical effect on their surface condition. Laser speckle is an emerging tool for surface analysis of a variety of materials, with important applications in material design and analysis. As a cutting-edge research tool, ultrasonic wave technology has made a significant contribution to the design and structural monitoring of composite materials. The present work uses date palm leaf fibers for composite reinforcement. The sound-dampening properties, such as sound absorption and transmission, were analyzed on the basis of surface roughness observed with the laser speckle technique and modified by ultrasonically blended surfactants. The surface roughness of the synthesized material was found to increase with sonication time, with an R² value of 0.944, and fluctuations were observed in the roughness data across the surface of the material. The sound absorption coefficient is 0.98 with a transmission loss of 60 dB, for which the material is classified as an ASTM E1050 Class A acoustic material. Further, laser speckle-based roughness estimation is found to be a potential tool for the design of any type of quieter material.
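A common statistic in speckle-based roughness inference is the speckle contrast C = σ_I/⟨I⟩ of the recorded intensity field. The abstract does not state which statistic the authors use, so the sketch below is a generic illustration on a synthetic intensity image, not their pipeline:

```python
import numpy as np

def speckle_contrast(intensity):
    """Global speckle contrast C = std(I)/mean(I). Fully developed speckle
    from a rough surface gives C near 1; smoother surfaces give lower C."""
    i = np.asarray(intensity, dtype=float)
    return i.std() / i.mean()

# Synthetic stand-in for an 8-bit speckle image of a composite surface.
rng = np.random.default_rng(1)
image = rng.exponential(scale=90.0, size=(512, 512)).clip(0, 255)
print(f"C = {speckle_contrast(image):.2f}")
```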
APA, Harvard, Vancouver, ISO, and other styles
41

Myasnikova, Ekaterina, and Konstantin N. Kozlov. "Statistical method for estimation of the predictive power of a gene circuit model." Journal of Bioinformatics and Computational Biology 12, no. 02 (2014): 1441002. http://dx.doi.org/10.1142/s0219720014410029.

Full text
Abstract:
In this paper, a specific aspect of the prediction problem is considered: high predictive power is understood as the ability to reproduce the correct behavior of model solutions at predefined values of a subset of parameters. The problem is discussed in the context of a specific mathematical model, the gene circuit model for the segmentation gap gene system in early Drosophila development. A shortcoming of the model is that it cannot be used to predict the system's behavior in mutants when fitted to wild-type (WT) data. To answer the question of whether the experimental data contain enough information for correct prediction, we introduce two measures of predictive power. The first measure reveals the biologically substantiated low sensitivity of the model to parameters that are responsible for the correct reconstruction of expression patterns in mutants, while the second also takes into account their correlation with the other parameters. It is demonstrated that the model solution obtained by fitting to gene expression data in WT and Kr- mutants simultaneously, which exhibits high predictive power, is characterized by much higher values of both measures than solutions fitted to WT data alone. This result leads us to conclude that the information contained in WT data is insufficient to reliably estimate the large number of model parameters and provide predictions for mutants.
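A generic way to obtain the kind of local parameter sensitivity that underlies the first measure is one-at-a-time finite differencing around the fitted parameter vector. The sketch below uses a hypothetical stand-in objective, not the actual gene circuit model:

```python
import numpy as np

def local_sensitivity(model, theta, i, rel_step=1e-3):
    """Central finite-difference sensitivity of the model output with
    respect to parameter i, evaluated at the fitted vector theta."""
    h = rel_step * max(abs(theta[i]), 1e-12)
    up, dn = theta.copy(), theta.copy()
    up[i] += h
    dn[i] -= h
    return (model(up) - model(dn)) / (2 * h)

# Hypothetical stand-in for a gene-circuit fit score; illustrative only.
model = lambda th: np.sum(th ** 2) + 0.01 * th[0] * th[1]
theta = np.array([0.5, -1.2, 2.0])
print([float(local_sensitivity(model, theta, i)) for i in range(3)])
```

Near-zero sensitivities flag parameters that the data constrain poorly, which is exactly the situation the abstract describes for mutant-relevant parameters fitted on WT data alone.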
APA, Harvard, Vancouver, ISO, and other styles
42

Corbett, John P., Marc D. Breton, and Stephen D. Patek. "A Multiple Hypothesis Approach to Estimating Meal Times in Individuals With Type 1 Diabetes." Journal of Diabetes Science and Technology 15, no. 1 (2019): 141–46. http://dx.doi.org/10.1177/1932296819883267.

Full text
Abstract:
Introduction: It is important to have accurate information regarding when individuals with type 1 diabetes have eaten and taken insulin to reconcile those events with their blood glucose levels throughout the day. Insulin pumps and connected insulin pens provide records of when the user injected insulin and how many carbohydrates were recorded, but it is often unclear when meals occurred. This project demonstrates a method to estimate meal times using a multiple hypothesis approach. Methods: When an insulin dose is recorded, multiple hypotheses were generated describing variations of when the meal in question occurred. As postprandial glucose values informed the model, the posterior probability of the truth of each hypothesis was evaluated, and from these posterior probabilities, an expected meal time was found. This method was tested using simulation and a clinical data set (n = 11) and with either uniform or normally distributed (μ = 0, σ = 10 or 20 minutes) prior probabilities for the hypothesis set. Results: For the simulation data set, meals were estimated with an average error of −0.77 (±7.94) minutes when uniform priors were used and −0.99 (±8.55) and −0.88 (±7.84) for normally distributed priors (σ = 10 and 20 minutes). For the clinical data set, the average estimation error was 0.02 (±30.87), 1.38 (±21.58), and 0.04 (±27.52) for the uniform priors and normal priors (σ = 10 and 20 minutes). Conclusion: This technique could be used to help advise physicians about the meal time insulin dosing behaviors of their patients and potentially influence changes in their treatment strategy.
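The core mechanics of such a multiple-hypothesis estimator can be sketched in a few lines: a discrete set of candidate meal times, a prior over them, a likelihood scoring how well each candidate explains the postprandial glucose, and a posterior-weighted mean. Everything below, including the candidate grid and the stand-in likelihood, is hypothetical and only mirrors the structure described in the abstract:

```python
import numpy as np
from scipy.stats import norm

# Candidate meal offsets (minutes relative to the recorded insulin dose).
candidates = np.arange(-60, 61, 5)

# Prior over candidates: normal, mu = 0, sigma = 20 min (one of the paper's cases).
prior = norm.pdf(candidates, loc=0, scale=20)
prior /= prior.sum()

def log_likelihood(offset):
    """Hypothetical stand-in: score how well a glucose-prediction model
    driven by a meal at `offset` matches the observed CGM trace. A real
    implementation would run a physiologic model and compare residuals."""
    return -0.5 * ((offset - 15.0) / 25.0) ** 2  # pretend the data favor +15 min

log_post = np.log(prior) + np.array([log_likelihood(c) for c in candidates])
post = np.exp(log_post - log_post.max())
post /= post.sum()

expected_meal_time = np.sum(candidates * post)  # posterior mean, in minutes
print(f"Expected meal offset: {expected_meal_time:+.1f} min")
```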
APA, Harvard, Vancouver, ISO, and other styles
43

Jin, Shichao, Yanjun Su, Shang Gao, Tianyu Hu, Jin Liu, and Qinghua Guo. "The Transferability of Random Forest in Canopy Height Estimation from Multi-Source Remote Sensing Data." Remote Sensing 10, no. 8 (2018): 1183. http://dx.doi.org/10.3390/rs10081183.

Full text
Abstract:
Canopy height is an important forest structure parameter for understanding forest ecosystems and improving global carbon stock quantification accuracy. Light detection and ranging (LiDAR) can provide accurate canopy height measurements, but its application at large scales is limited. Using LiDAR-derived canopy height as ground truth to train the Random Forest (RF) algorithm and then predict canopy height from other remotely sensed datasets in areas without LiDAR coverage has been one of the most commonly used methods in large-scale canopy height mapping. However, how variation in location, vegetation type, and spatial scale across study sites influences the RF modelling results is still a question that needs to be addressed. In this study, we selected 16 study sites (100 km² each) with full airborne LiDAR coverage across the United States, and used the LiDAR-derived canopy height along with optical imagery, topographic data, and climate surfaces to evaluate the transferability of the RF-based canopy height prediction method. The results show a series of findings from general to complex. An RF model trained at a certain location or on a certain vegetation type cannot be transferred to other locations or vegetation types. However, by training the RF algorithm using samples from all sites with various vegetation types, a universal model can be achieved for predicting canopy height at different locations and for different vegetation types, with self-predicted R² higher than 0.6 and RMSE lower than 6 m. Moreover, the influence of spatial scale on the RF prediction accuracy is noticeable when the spatial extent of the study site is less than 50 km² or the spatial resolution of the training pixel is finer than 500 m. The canopy height prediction accuracy increases with the spatial extent and the targeted spatial resolution.
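A minimal version of the training step, using scikit-learn's RandomForestRegressor on a synthetic stand-in for the per-pixel predictor table; the real study uses optical, topographic, and climate layers as predictors with LiDAR canopy height as the target:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic per-pixel training table (hypothetical): 6 predictor columns,
# e.g. NDVI, elevation, slope, temperature, precipitation, band ratio.
rng = np.random.default_rng(42)
X = rng.random((5000, 6))
y = 30 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 2, 5000)  # synthetic heights (m)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

pred = rf.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.2f}, "
      f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f} m")
```

The transferability question in the paper amounts to training this model on pixels from one site or vegetation type and scoring it on another, rather than on a random hold-out as above.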
APA, Harvard, Vancouver, ISO, and other styles
44

Buge, Éric, and Étienne Ollion. "Que vaut un député ?" Annales. Histoire, Sciences Sociales 77, no. 4 (2022): 703–37. http://dx.doi.org/10.1017/ahss.2023.3.

Full text
Abstract:
How much are French deputies actually paid for their parliamentary activity? Seemingly innocuous, this question is nevertheless difficult to answer clearly for most of the twentieth century. Drawing on original work in various archives of the Assemblée nationale, this article offers a first estimation of the income that deputies have derived from their mandate, from the beginning of the twentieth century to the present day. It thereby demonstrates the value of these data for the historian, in that it helps answer three related questions: why is it so difficult to access information that is normally public? Where does parliamentary income place elected representatives on the income scale of the French population? Finally, what do changes in the allowance (in level as well as in nature) tell us about the type of activity that being a deputy represents? In doing so, the study sheds light on some fundamental transformations of the political profession.
APA, Harvard, Vancouver, ISO, and other styles
45

Ruddick, Voss, Boss, et al. "A Review of Protocols for Fiducial Reference Measurements of WaterLeaving Radiance for Validation of Satellite Remote-Sensing Data over Water." Remote Sensing 11, no. 19 (2019): 2198. http://dx.doi.org/10.3390/rs11192198.

Full text
Abstract:
This paper reviews the state of the art of protocols for the measurement of water-leaving radiance in the context of fiducial reference measurements (FRM) of water reflectance for satellite validation. Measurement of water reflectance requires the measurement of water-leaving radiance and downwelling irradiance just above water. For the former there are four generic families of method, based on: (1) underwater radiometry at fixed depths; (2) underwater radiometry with vertical profiling; (3) above-water radiometry with skyglint correction; or (4) on-water radiometry with skylight blocked. Each method is described generically in the FRM context with reference to the measurement equation, documented implementations, and the intra-method diversity of deployment platform and practice. Ideal measurement conditions are stated, practical recommendations are provided on best practice, and guidelines for estimating the measurement uncertainty are provided for each protocol-related component of the measurement uncertainty budget. The state of the art for measurement of water-leaving radiance is summarized, future perspectives are outlined, and the question of which method is best adapted to various circumstances (water type, wavelength) is discussed. This review is based on the practice and papers of the aquatic optics community for the validation of water reflectance estimated from satellite data, but can also be relevant for other applications such as the development or validation of algorithms for remote-sensing estimation of water constituents including chlorophyll a concentration, inherent optical properties, and related products.
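For the above-water family (method 3), the core of the skyglint correction is a single subtraction; a sketch under the common assumption of a fixed sea-surface reflectance factor ρ ≈ 0.028 (a typical value for low wind speed and a 40° viewing angle; the protocols reviewed discuss at length how ρ should actually be chosen):

```python
def water_leaving_radiance(lt, lsky, rho=0.028):
    """Above-water method: remove reflected skylight from the total
    upwelling radiance, Lw = Lt - rho * Lsky. rho is the assumed
    sea-surface reflectance factor."""
    return lt - rho * lsky

def reflectance(lw, ed):
    """Remote-sensing reflectance Rrs = Lw / Ed [sr^-1]."""
    return lw / ed

# Hypothetical single-wavelength readings (W m^-2 sr^-1 and W m^-2).
lt, lsky, ed = 0.85, 3.2, 120.0
lw = water_leaving_radiance(lt, lsky)
print(f"Lw = {lw:.3f}, Rrs = {reflectance(lw, ed):.5f} sr^-1")
```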
APA, Harvard, Vancouver, ISO, and other styles
46

Черненко, Альберт, Олександр Матвієвський, Олексій Рудковський, Петро Ванкевич, Назар Баліцький та Віталій Федоренко. "ФОРМУВАННЯ НАВИЧОК БОЙОВОЇ РОБОТИ НА СУЧАСНИХ ЗРАЗКАХ ОЗБРОЄННЯ ІЗ ЗАЛУЧЕННЯМ ТРЕНАЖЕРІВ". Collection of scientific works of Odesa Military Academy, № 18 (3 березня 2023): 103–10. http://dx.doi.org/10.37129/2313-7509.2022.18.103-110.

Full text
Abstract:
This summarizing article considers some aspects of the training of servicemen concerning the combat organization of military equipment crews and their acquisition of stable skills in the effective use of standard armament within their subunits. The process by which servicemen's skills are formed during combat training lessons is analysed. The article examines the possibility of extending skill-building methods to ensure a sufficient level of combat training by involving modern multi-vector simulators, which can reproduce varied training scenarios on monitoring facilities and allow trainees to master them on the assigned types of military equipment under conditions adequate to the combat situation. As an example, the training of operator-type specialists is selected, namely: gunners of tank cannons; gunners of infantry fighting vehicles and self-propelled artillery (gun-artillery) complexes; gunners of anti-tank guided missile complexes; and operators of radar stations, communication facilities, and automated fire control systems of the involved military equipment networks. Questions concerning the specific functions performed by a human operator in the combat application of armament and military equipment are discussed. As a result of evaluating the special training of operators of the 2S6 anti-aircraft self-propelled gun on the existing training and material base, a number of conclusions are drawn that can influence the optimization of gunner training. Keywords: reproduction of conditions maximally approaching combat for personnel; algorithm of the combat work of a military vehicle crew; training indicators; operator-type activity; training and modelling devices; probabilistic estimation of combat work; modern specialized training and educational-methodical base for training military specialists; comparative estimation of the level of crew training; level of methodical mastery of platoon and battery commanders.
APA, Harvard, Vancouver, ISO, and other styles
47

Weissling, B. P., and S. F. Ackley. "Antarctic sea-ice altimetry: scale and resolution effects on derived ice thickness distribution." Annals of Glaciology 52, no. 57 (2011): 225–32. http://dx.doi.org/10.3189/172756411795931679.

Full text
Abstract:
Three ice type regimes at Ice Station Belgica (ISB), during the 2007 International Polar Year SIMBA (Sea Ice Mass Balance in Antarctica) expedition, were characterized and assessed for elevation, snow depth, ice freeboard and thickness. Analyses of the probability distribution functions showed great potential for satellite-based altimetry for estimating ice thickness. In question is the required altimeter sampling density for reasonably accurate estimation of snow surface elevation given inherent spatial averaging. This study assesses an effort to determine the number of laser altimeter 'hits' of the ISB floe, as a representative Antarctic floe of mixed first- and multi-year ice types, for the purpose of statistically recreating the in situ-determined ice-thickness and snow depth distribution based on the fractional coverage of each ice type. Estimates of the fractional coverage and spatial distribution of the ice types, referred to as ice 'towns', for the 5 km² floe were assessed by in situ mapping and photo-visual documentation. Simulated ICESat altimeter tracks, with spot size ~70 m and spacing ~170 m, sampled the floe's towns, generating a buoyancy-derived ice thickness distribution. 115 altimeter hits were required to statistically recreate the regional thickness mean and distribution for a three-town assemblage of mixed first- and multi-year ice, and 85 hits for a two-town assemblage of first-year ice only: equivalent to 19.5 and 14.5 km respectively of continuous altimeter track over a floe region of similar structure. Results have significant implications toward model development of sea-ice sampling performance of the ICESat laser altimeter record as well as maximizing sampling characteristics of satellite/airborne laser and radar altimetry missions for sea-ice thickness.
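The buoyancy-derived thickness mentioned above follows from hydrostatic equilibrium of the ice-snow column. A sketch with typical assumed densities for sea water, sea ice, and snow (the paper's actual density choices are not given in the abstract):

```python
def ice_thickness_from_freeboard(freeboard_m, snow_depth_m,
                                 rho_water=1024.0, rho_ice=915.0,
                                 rho_snow=320.0):
    """Hydrostatic equilibrium: rho_w * (h_i - f) = rho_i * h_i + rho_s * h_s
    =>  h_i = (rho_w * f + rho_s * h_s) / (rho_w - rho_i).
    f is the ice freeboard; densities in kg/m^3 are typical assumptions."""
    return ((rho_water * freeboard_m + rho_snow * snow_depth_m)
            / (rho_water - rho_ice))

# Hypothetical altimeter-derived ice freeboard and snow depth (metres).
print(f"h_i = {ice_thickness_from_freeboard(0.25, 0.30):.2f} m")
```

Because the denominator (rho_w - rho_i) is small, thickness estimates are very sensitive to freeboard and snow depth errors, which is why the required altimeter sampling density is the central question of the study.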
APA, Harvard, Vancouver, ISO, and other styles
48

Knuth, Joseph, and Prabir Barooah. "Distributed collaborative 3D pose estimation of robots from heterogeneous relative measurements: an optimization on manifold approach." Robotica 33, no. 7 (2014): 1507–35. http://dx.doi.org/10.1017/s0263574714000794.

Full text
Abstract:
We propose a distributed algorithm for estimating the 3D pose (position and orientation) of multiple robots with respect to a common frame of reference when the Global Positioning System is not available. This algorithm does not rely on the use of any maps, or on the ability to recognize landmarks in the environment. Instead, we assume that noisy relative measurements between pairs of robots are intermittently available, which can be any one, or combination, of the following: relative pose, relative orientation, relative position, relative bearing, and relative distance. The additional information about each robot's pose provided by these measurements is used to improve over self-localization estimates. The proposed method is similar in spirit to a pose-graph optimization algorithm: pose estimates are obtained by solving an optimization problem on the underlying Riemannian manifold (SO(3) × ℝ³)^{n(k)}. The proposed algorithm is directly applicable to 3D pose estimation, can fuse heterogeneous measurement types, and can handle arbitrary time variation in the neighbor relationships among robots. Simulations show that the errors in the pose estimates obtained using this algorithm are significantly lower than what is achieved when robots estimate their pose without cooperation. Results from experiments with a pair of ground robots with vision-based sensors reinforce these findings. Further, simulations comparing the proposed algorithm with two state-of-the-art existing collaborative localization algorithms identify under what circumstances the proposed algorithm performs better than the existing methods. In addition, the question of trade-offs between the cost (of obtaining a certain type of relative measurement) and the benefit (improvement in localization accuracy) for various types of relative measurements is considered.
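The flavour of fusing self-estimates with relative measurements can be conveyed by a much-simplified, centralized, position-only least-squares version. This toy omits the orientation component on SO(3) and the distributed structure that are central to the actual algorithm:

```python
import numpy as np

# Toy collaborative localization: each robot i has a noisy self-estimate p_i,
# and some pairs (i, j) share noisy relative position measurements
# z_ij ~ p_j - p_i. Stack both as linear constraints and solve least squares.
n = 3  # robots, 3D positions
self_est = np.array([[0.1, 0.0, 0.0],
                     [1.2, 0.1, 0.0],
                     [2.0, 1.1, 0.1]])
rel_meas = {(0, 1): np.array([0.95, 0.00, 0.0]),
            (1, 2): np.array([1.00, 1.05, 0.0])}

rows, rhs = [], []
for i in range(n):                      # self-estimates anchor the solution
    e = np.zeros(n); e[i] = 1.0
    rows.append(e); rhs.append(self_est[i])
for (i, j), z in rel_meas.items():      # relative constraints couple robots
    e = np.zeros(n); e[i], e[j] = -1.0, 1.0
    rows.append(e); rhs.append(z)

A, b = np.array(rows), np.array(rhs)
fused, *_ = np.linalg.lstsq(A, b, rcond=None)  # one 3D position per robot
print(np.round(fused, 3))
```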
APA, Harvard, Vancouver, ISO, and other styles
49

Favre, Philippe, Jess G. Snedeker, and Christian Gerber. "Numerical modelling of the shoulder for clinical applications." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 367, no. 1895 (2009): 2095–118. http://dx.doi.org/10.1098/rsta.2008.0282.

Full text
Abstract:
Research activity involving numerical models of the shoulder is dramatically increasing, driven by growing rates of injury and the need to better understand shoulder joint pathologies to develop therapeutic strategies. Based on the type of clinical question they can address, existing models can be broadly categorized into three groups: (i) rigid body models that can simulate kinematics, collisions between entities or wrapping of the muscles over the bones, and which have been used to investigate joint kinematics and ergonomics, and are often coupled with (ii) muscle force estimation techniques, consisting mainly of optimization methods and electromyography-driven models, to simulate muscular action and joint reaction forces to address issues in joint stability, muscular rehabilitation or muscle transfer, and (iii) deformable models that account for stress–strain distributions in the component structures to study articular degeneration, implant failure or muscle/tendon/bone integrity. The state of the art in numerical modelling of the shoulder is reviewed, and the advantages, limitations and potential clinical applications of these modelling approaches are critically discussed. This review concentrates primarily on muscle force estimation modelling, with emphasis on a novel muscle recruitment paradigm, compared with traditionally applied optimization methods. Finally, the necessary benchmarks for validating shoulder models, the emerging technologies that will enable further advances and the future challenges in the field are described.
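The optimization methods mentioned under (ii) typically take the form of static optimization: minimize a muscle-effort cost subject to the muscles reproducing the required joint moment. A minimal sketch with hypothetical moment arms and cross-sections, not any specific published shoulder model:

```python
import numpy as np
from scipy.optimize import minimize

# Static-optimization muscle force estimation for one joint degree of freedom.
# All numbers below are illustrative assumptions.
moment_arms = np.array([0.03, 0.05, 0.02])   # m, three shoulder muscles
pcsa        = np.array([8.0, 12.0, 5.0])     # cm^2, physiological cross-sections
target_moment = 15.0                         # N*m required at the joint

objective = lambda f: np.sum((f / pcsa) ** 2)           # sum of squared stresses
constraints = {"type": "eq",
               "fun": lambda f: moment_arms @ f - target_moment}
bounds = [(0, None)] * 3                                # muscles can only pull

res = minimize(objective, x0=np.full(3, 50.0),
               bounds=bounds, constraints=constraints)
print(np.round(res.x, 1), "N")  # estimated individual muscle forces
```

The novel muscle recruitment paradigm emphasized in the review replaces exactly this cost-function step, which is why the choice of objective is the crux of the comparison.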
APA, Harvard, Vancouver, ISO, and other styles
50

Kowalska, Magdalena. "Influence of Loading History and Boundary Conditions on Parameters of Soil Constitutive Models." Studia Geotechnica et Mechanica 34, no. 1 (2012): 15–33. http://dx.doi.org/10.1515/sgem-2017-0020.

Full text
Abstract:
Parameters of soil constitutive models are not constant. This mainly concerns strain parameters such as the K, G, or Eoed moduli. Their values are influenced not only by soil type, structure, and consistency, but also by the history of stress and strain states: the current state matters, but so does what happened to the subsoil in the past (through geological and anthropogenic activity) and what impact the planned soil-structure interaction would have. This paper presents an overview of the literature showing how strongly soil constitutive model parameters depend on the loading and boundary conditions of a particular geotechnical problem. Model calibration methods are briefly described, with special attention paid to the author's "Loading Path Method", which allows estimation of optimum parameter values for any soil constitutive model. An example of the use of this method to estimate the strain parameters E and ν of the Coulomb-Mohr elastic-perfectly plastic model is given.
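For the linear-elastic part of such a calibration, E and ν can be read off a triaxial loading path as slopes. A sketch with hypothetical test data, illustrating only this elementary step and not the Loading Path Method itself, which optimizes parameters over arbitrary paths:

```python
import numpy as np

# Hypothetical triaxial loading path: deviator stress q vs axial strain gives
# Young's modulus E; radial vs axial strain gives Poisson's ratio nu.
eps_axial  = np.array([0.000, 0.001, 0.002, 0.003, 0.004])
eps_radial = np.array([0.0000, -0.00033, -0.00066, -0.0010, -0.0013])
q_deviator = np.array([0.0, 30.0, 60.0, 90.0, 120.0])  # kPa

E  = np.polyfit(eps_axial, q_deviator, 1)[0]   # slope, kPa
nu = -np.polyfit(eps_axial, eps_radial, 1)[0]  # negative radial/axial slope
print(f"E ≈ {E / 1000:.1f} MPa, nu ≈ {nu:.2f}")
```

The paper's point is precisely that such slopes are not material constants: repeating the fit over loading paths that mimic the actual boundary-value problem yields different, problem-specific optimum values.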
APA, Harvard, Vancouver, ISO, and other styles