Academic literature on the topic 'Test point optimal'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Test point optimal.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Test point optimal"

1

Evans, Merran A., and Maxwell L. King. "A point optimal test for heteroscedastic disturbances." Journal of Econometrics 27, no. 2 (February 1985): 163–78. http://dx.doi.org/10.1016/0304-4076(85)90085-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

King, Maxwell L. "A point optimal test for autoregressive disturbances." Journal of Econometrics 27, no. 1 (January 1985): 21–37. http://dx.doi.org/10.1016/0304-4076(85)90042-9.

3

Vougas, Dimitrios V. "Modification of the point optimal unit root test." Applied Economics Letters 16, no. 4 (February 5, 2009): 349–52. http://dx.doi.org/10.1080/13504850601018635.

4

Larner, Andrew J. "Defining ‘optimal’ test cut-off using global test metrics: evidence from a cognitive screening instrument." Neurodegenerative Disease Management 10, no. 4 (August 2020): 223–30. http://dx.doi.org/10.2217/nmt-2020-0003.

Abstract:
Aim: To examine the variation of several global metrics of test accuracy with test cut-off for the diagnosis of dementia. These metrics included some based on the receiver operating characteristic curve, such as Youden index, and some independent of receiver operating characteristic curve, such as correct classification accuracy. Materials & methods: Data from a test accuracy study of Mini-Addenbrooke’s Cognitive Examination were used to calculate and plot each global measure against test cut-off. Results: Different ‘optimal’ cut-points were identified for the different global measures, with a spread of ten points in observed optimal cut-off in the 30-point Mini-Addenbrooke’s Cognitive Examination scale. Using these optima gave a large variation in test sensitivity from very high (diagnostic odds ratio) to very low (likelihood to be diagnosed or misdiagnosed), but all had high negative predictive value. Conclusion: The method used to determine the cut-off of cognitive screening instruments may have significant implications for test performance.
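The Youden index discussed in this abstract is straightforward to compute. A minimal sketch (the function name and toy data are our own illustration, not from the study):

```python
import numpy as np

def youden_optimal_cutoff(scores, labels):
    """Return the cut-off maximizing the Youden index J = sensitivity + specificity - 1.

    scores: test scores (here, higher = more likely diseased); labels: 1 = diseased, 0 = healthy.
    """
    best_j, best_c = -1.0, None
    for c in np.unique(scores):
        pred = scores >= c                  # call the test positive at this cut-off
        sens = np.mean(pred[labels == 1])   # true positive rate
        spec = np.mean(~pred[labels == 0])  # true negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

scores = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
cutoff, j = youden_optimal_cutoff(scores, labels)
print(cutoff, j)  # the cut-off 5.0 separates the groups perfectly, so J = 1.0
```

As the abstract notes, other global metrics (correct classification accuracy, diagnostic odds ratio, etc.) plugged into the same loop generally select different 'optimal' cut-offs.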
5

King, Maxwell L. "A Point Optimal Test for Moving Average Regression Disturbances." Econometric Theory 1, no. 2 (August 1985): 211–22. http://dx.doi.org/10.1017/s0266466600011142.

Abstract:
This paper reconsiders King's [12] locally optimal test procedure for first-order moving average disturbances in the linear regression model. It recommends two tests, one for problems involving positively correlated disturbances and one for negatively correlated disturbances. Both tests are most powerful invariant at a point in the alternative hypothesis parameter space that is determined by a function involving the sample size and the number of regressors. Selected bounds for the tests' significance points are tabulated and an empirical comparison of powers demonstrates the overall superiority of the new test for positively correlated moving average disturbances.
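The "most powerful invariant at a point" idea behind point optimal tests is, at heart, the Neyman-Pearson likelihood-ratio test against a single chosen point in the alternative. A minimal sketch in a toy normal-means setting (our own construction, not King's regression-disturbance model):

```python
import numpy as np

def point_optimal_stat(x, theta1):
    """Log likelihood ratio for H0: theta = 0 against the single point H1: theta = theta1,
    for i.i.d. N(theta, 1) data. By the Neyman-Pearson lemma, rejecting for large values
    of this statistic is the most powerful test at theta1.
    """
    return theta1 * x.sum() - len(x) * theta1 ** 2 / 2.0

rng = np.random.default_rng(0)
n, theta1 = 50, 0.3

# Critical value of a 5% test, by Monte Carlo under H0
null_stats = np.array([point_optimal_stat(rng.standard_normal(n), theta1)
                       for _ in range(20000)])
crit = np.quantile(null_stats, 0.95)

# Power at the chosen point theta = theta1
alt_stats = np.array([point_optimal_stat(rng.standard_normal(n) + theta1, theta1)
                      for _ in range(20000)])
power = float(np.mean(alt_stats > crit))
print(round(power, 2))  # well above the 0.05 size: the test has real power at theta1
```

King's papers choose the point theta1 as a function of the sample size and number of regressors so that the test also retains good power elsewhere in the alternative.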
6

Sofronov, G. Yu. "Asymptotically d-Optimal Test of a Change-Point Detection." Theory of Probability & Its Applications 46, no. 3 (January 2002): 547–48. http://dx.doi.org/10.1137/s0040585x97979160.

7

Dastoor, Naorayex K., and Gordon Fisher. "On Point-Optimal Cox Tests." Econometric Theory 4, no. 1 (April 1988): 97–107. http://dx.doi.org/10.1017/s0266466600011889.

Abstract:
This paper is concerned with the general problem of testing one form of covariance structure against another in a normal linear regression. It is shown that all the point-optimal tests recently proposed by King and his associates can be interpreted as special cases of a Cox test for non-nested hypotheses. This provides a synthesis of a whole range of point-optimal tests as well as demonstrating that King and his associates have exposed a class of Cox tests which have an exact distribution.
8

Sofronov, G. Yu. "Asymptotically d-optimal Test of A Posteriori Change-Point Detection." Theory of Probability & Its Applications 49, no. 2 (January 2005): 367–71. http://dx.doi.org/10.1137/s0040585x97981111.

9

Tang, Xiaofeng, Aiqiang Xu, and Shuangcheng Niu. "KKCV-GA-Based Method for Optimal Analog Test Point Selection." IEEE Transactions on Instrumentation and Measurement 66, no. 1 (January 2017): 24–32. http://dx.doi.org/10.1109/tim.2016.2614752.

10

Gao, Yuan, Chenglin Yang, Shulin Tian, and Fang Chen. "Entropy Based Test Point Evaluation and Selection Method for Analog Circuit Fault Diagnosis." Mathematical Problems in Engineering 2014 (2014): 1–16. http://dx.doi.org/10.1155/2014/259430.

Abstract:
By simplifying the tolerance problem and treating faulty voltages at different test points as independent variables, the integer-coded table technique has been proposed to simplify the test point selection process. Usually, simplifying the tolerance problem may induce a wrong solution, while the independence assumption results in an overly conservative one. To address these problems, this paper considers the tolerance problem thoroughly and, at the same time, the dependency relationship between different test points. A heuristic graph search method is proposed to facilitate the test point selection process. First, the information-theoretic concept of entropy is used to evaluate the optimality of a test point; the entropy is calculated using the ambiguity sets and the faulty voltage distribution determined by component tolerances. Second, the selected optimal test point is used to expand the current graph node, using the dependence relationship between the test point and the graph node. Simulated results indicate that the proposed method finds the optimal set of test points more accurately than other methods; it is therefore a good solution for minimizing the size of the test point set. To simplify and clarify the proposed method, only catastrophic and some specific parametric faults are discussed in this paper.

Dissertations / Theses on the topic "Test point optimal"

1

Wang, Liqiong. "Point optimal unit root tests." Thesis, University of York, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.538630.

2

Kaštánek, Martin. "Vstupní díl UHF přijímače s velmi nízkou spotřebou." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217183.

Abstract:
The purpose of this work was to design the input stages of a receiver for the 430-440 MHz band. A model of the chosen transistor, the BFP540, was created in simulation software, and ways to decrease the transistor's consumption while preserving gain were investigated through simulation. As a compromise between consumption and amplifier gain, an optimal operating point of UCE = 1.2 V and IC = 2 mA was found for this transistor. It was verified in a test circuit with noise-matched microstrips. The findings were then used in the construction of a tuner for a UHF receiver, where, owing to the amplifier's power supply, the operating point of the input amplifier was forced to UCE = 2.65 V and IC = 2.0 mA for greater effectiveness. Because the intermediate frequency is 10.7 MHz, image frequency suppression is provided by a third-order helical filter. Mixing down to the intermediate frequency is again performed by a BFP540 transistor. Receiver selectivity is provided by a 10.7 MHz crystal IF filter with a bandwidth of 15 kHz. The designed input stage enables reception of SSB, FM, and digital modulation types, and the IF output bandwidth is adapted to this requirement; to receive a particular modulation, the IF signal path must be completed with an appropriate IF filter.
3

Liu, Zhi-Hong. "Mixed-signal testing of integrated analog circuits and modules." Ohio : Ohio University, 1999. http://www.ohiolink.edu/etd/view.cgi?ohiou1181174339.

4

Gao, Lijun. "Information Points and Optimal Discharging Speed: Effects on the Saturation Flow at Signalized Intersections." University of Toledo / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1430482821.

5

St-Onge, Christina. "La vraisemblance de patrons de réponses : étude de la précision des indices d'ajustement des scores individuels, de leurs points critiques et du taux optimal d'aberrance." Doctoral thesis, Université Laval, 2008. http://hdl.handle.net/20.500.11794/19733.

Abstract:
This doctoral research on Item Response Theory (IRT)-based person-fit statistics (PFS) comprises three studies, organized around two key concepts: the detection rates and the critical values of PFS. The first and third studies examine detection rates; the second focuses on the critical values of a PFS, the lz statistic. In the first article, we observed that the PFS were more accurate when used with parametrically estimated ICCs (ML2P and ML3P), independent of sample size; it therefore seems necessary to verify model-data fit before carrying out appropriateness assessment with IRT-based PFS. In the second article, following the development of a table of critical values, confidence intervals were calculated for each critical value, and these results suggest the critical values are precise. When the critical values were tested, the observed type I error rates were conservative (most markedly at the 0.01 level), while the detection rates at the .05 and .10 type I error levels were slightly lower than those reported in the literature. In the third article, we investigated the optimal aberrance phenomenon: the detection rate of PFS increases with the aberrance rate of the response patterns until a peak is reached, after which further increases in the aberrance rate decrease the detection rate. These results help explain a phenomenon that had never been formally studied before.
6

Souza, Rafael Ramos de [UNESP]. "Um método primal-dual de pontos interiores/exteriores com estratégias de teste quadrático e determinação de direções de busca combinadas no problema de fluxo de potência ótimo reativo." Universidade Estadual Paulista (UNESP), 2016. http://hdl.handle.net/11449/142853.

Abstract:
The reactive optimal power flow problem is concerned with the optimization of a specific criterion associated with the transmission system while enforcing the power balance in each transmission bus, as well as operational and physical constraints associated with generation and transmission systems. It is a nonlinear, non-convex and large optimization problem. In this work we consider the active losses minimization in the transmission system as a criterion for the optimal power flow problem. The solution of the problem is investigated by proposing a modified log-barrier primal-dual interior/exterior point method with a quadratic test strategy and new search direction procedures. The quadratic test is proposed as an alternative strategy to the Cholesky procedure for calculating the positivity of the Hessian matrix of the problem.The new search directions investigated in the paper are determined by combining the search directions calculated in the predictor and corrector steps, respectively, and also by using information associated with the complementarity conditions. The method proposed is implemented in Matlab and applied to solving the reactive optimal power flow problem for 9 and 39-bus systems, as well as for the IEEE 14, 30, 57 and 118-bus test systems. The performance of the method with the proposed strategies for search directions is evaluated in terms of the number of iterations and computational times. The results are promising and allow the application of the present method with the proposed search strategies for solving problems of larger dimensions.
7

Li, Chenxue. "Generalized Confidence Intervals for Partial Youden Index and its Corresponding Optimal Cut-Off Point." 2013. http://scholarworks.gsu.edu/math_theses/133.

Abstract:
In the field of diagnostic test studies, the accuracy of a diagnostic test is essential in evaluating the performance of the test. The receiver operating characteristic (ROC) curve and the area under the curve (AUC) are widely used in such evaluation procedures. Meanwhile, the Youden index is also introduced into practice to measure the accuracy of the diagnostic test from another aspect. The Youden index maximizes the sum of sensitivity and specificity, assuring decent true positive and negative rates. It draws one's attention due to its merit of finding the optimal cut-off points of biomarkers. Similar to Partial ROC, a new index, called "Partial Youden index" can be defined as an extension of Youden's Index. It is more meaningful than regular Youden index since the regular one is just a special case of the Partial Youden Index. In this thesis, we focus on construction of generalized confidence intervals for the Partial Youden Index and its corresponding optimal cut-off points. Extensive simulation studies are conducted to evaluate the finite sample performances of the new intervals.
8

Wu, Ful-Chiang (吳復強). "Determining the Optimum Cut-off Point of Diagnostic Tests by Taguchi Method." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/84901573133033180104.

Abstract:
Master's thesis, 萬能科技大學 (Vanung University of Science and Technology), Graduate Institute of Business Management (in-service program), academic year 105.
A diagnostic test is a medical test applied to a patient to determine the presence of a specific disease; early and accurate diagnosis can decrease the morbidity and mortality of disease. Applying a diagnostic test in the assessment of a disease may lead to errors, and therefore the accuracy of a diagnostic test is measured in terms of two probabilities: sensitivity and specificity. Sensitivity is the probability of a positive result when the individual has the disease, and specificity is the probability of a negative result when the individual does not have the disease. Several indices have been studied for evaluating diagnostic performance, such as sensitivity and specificity, receiver operating characteristic (ROC) curves, the area under the ROC curve, the Youden index, the likelihood ratio, and the diagnostic odds ratio. The Youden index is a single statistic that captures the performance of a diagnostic test; it is defined at every point of a ROC curve, and its maximum value may be used as a criterion for selecting the optimum cut-off point. Taguchi's robust design aims to reduce the impact of noise on product or process quality, leading to greater customer satisfaction and higher operational performance; its objective is to minimize the total quality loss in products or processes. The SN ratio recommended by Taguchi assumes errors with the same loss coefficient when optimizing the digital dynamic problem; however, the losses due to the two types of errors are not equal. The problem of two error probabilities (false negative rate and false positive rate) in diagnostic tests can be viewed as a digital dynamic system in the Taguchi method. The purpose of this study is to obtain the optimum cut-off point for diagnostic tests using the concept of Taguchi's quality loss function. A loss model for diagnostic tests is proposed to account for the different loss coefficients of false negatives and false positives. The Youden_Taguchi index (JT) and optimum cut-off point are derived for the normal, lognormal, Gamma, and Weibull distributions.
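The unequal-loss idea in this abstract can be sketched numerically. A simplified illustration assuming normal score distributions and made-up loss coefficients (not the thesis's actual loss model): choose the cut-off that minimizes expected loss when a false negative costs more than a false positive.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def expected_loss(cut, mu_h, mu_d, sigma, k_fn, k_fp, prevalence):
    """Expected quality loss at a cut-off, for normal score distributions.

    Scores above `cut` are called positive; mu_d > mu_h.
    k_fn and k_fp are the loss coefficients for false negatives and false positives.
    """
    fn_rate = norm_cdf(cut, mu_d, sigma)        # diseased individuals scoring below the cut
    fp_rate = 1.0 - norm_cdf(cut, mu_h, sigma)  # healthy individuals scoring above the cut
    return prevalence * k_fn * fn_rate + (1.0 - prevalence) * k_fp * fp_rate

cuts = np.linspace(0.0, 10.0, 1001)
losses = [expected_loss(c, mu_h=3.0, mu_d=7.0, sigma=1.0, k_fn=5.0, k_fp=1.0, prevalence=0.5)
          for c in cuts]
best_cut = cuts[int(np.argmin(losses))]
print(best_cut)  # below the midpoint 5.0: costlier false negatives pull the cut-off down
```

With equal loss coefficients the minimum sits at the symmetric midpoint; weighting false negatives more heavily shifts the optimum toward higher sensitivity, which is the effect the thesis formalizes through Taguchi's quality loss function.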
9

"Finding the minimum test set with the optimum number of internal probe points." Chinese University of Hong Kong, 1996. http://library.cuhk.edu.hk/record=b5888785.

Abstract:
by Kwan Wai Wing Eric.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1996. Includes bibliographical references.
Contents:
Chapter 1, Introduction: background; E-beam testing and test generation algorithms; motivation of this research; the out-of-kilter algorithm; outline of the remaining chapters.
Chapter 2, Electron beam testing: background and theory; principles and instrumentation; implications of internal IC testing; advantages of electron beam testing.
Chapter 3, An exhaustive method to minimize test sets: basic principles (controllability and observability; the single stuck-at fault model); the fault dictionary (input format; critical path generation; probe point insertion; formation of the fault dictionary).
Chapter 4, Mathematical model, the out-of-kilter algorithm: network model; linear programming model; kilter states; flow change; potential change; summary and conclusion.
Chapter 5, Applying the mathematical method to minimize test sets: implementation of the OKA on the fault dictionary; minimizing the test set and optimizing internal probings/probe points; fixing the number of internal probings/probe points; the true minimum test set and optimum probing/probe point.
Chapter 6, Implementation and worked examples: generation of the fault dictionary; finding the minimum test set without internal probe points, with optimum internal probing, with optimum internal probe points, and with the number of internal probings fixed at 2; program description.
Chapter 7, A realistic approach to finding the minimum solution: problems arising in the exhaustive method; improvements on existing test generation algorithms; reducing the search set (building the fault dictionary from an existing test generation algorithm or by random generation).
Chapter 8, Conclusions: summary of results; further research.
References; appendices: fault dictionaries of circuits SC1 and SC7; simple circuit layouts.
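Finding a minimum test set that detects all faults is a covering problem. The thesis solves it exactly with the out-of-kilter network-flow algorithm; the flavor can be shown with a simple greedy approximation over a hypothetical fault dictionary (our own toy example, not from the thesis):

```python
def greedy_min_test_set(fault_dict, faults):
    """Greedy approximation to the minimum test set: repeatedly pick the test
    vector that detects the most not-yet-covered faults. (The thesis solves
    the problem exactly via the out-of-kilter algorithm; greedy only
    approximates the optimum.)

    fault_dict[t] is the set of faults that test vector t detects.
    """
    uncovered, chosen = set(faults), []
    while uncovered:
        best = max(fault_dict, key=lambda t: len(fault_dict[t] & uncovered))
        if not fault_dict[best] & uncovered:
            raise ValueError("some faults are not detectable by any test vector")
        chosen.append(best)
        uncovered -= fault_dict[best]
    return chosen

# Hypothetical fault dictionary for a small circuit
fault_dict = {
    "t1": {"f1", "f2", "f3"},
    "t2": {"f3", "f4"},
    "t3": {"f4", "f5"},
    "t4": {"f5"},
}
tests = greedy_min_test_set(fault_dict, {"f1", "f2", "f3", "f4", "f5"})
print(tests)  # ['t1', 't3'] covers all five faults with two test vectors
```

Adding internal probe points, as in the thesis, enlarges the fault sets each test vector can distinguish, which is why probing can shrink the minimum test set further.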
10

Taamouti, Abderrahim. "Problèmes d'économétrie en macroéconomie et en finance : mesures de causalité, asymétrie de la volatilité et risque financier." Thèse, 2007. http://hdl.handle.net/1866/1507.


Books on the topic "Test point optimal"

1

Prussing, John E. Optimal Spacecraft Trajectories. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198811084.001.0001.

Abstract:
Optimal spacecraft trajectories are given a modern comprehensive treatment of the theory and important results. In most cases “optimal” means minimum propellant. Less propellant required results in more payload delivered to the destination. Both necessary and sufficient conditions for an optimal solution are analysed. Numerous illustrative examples are included and problems are provided at the ends of the chapters along with references. Newer topics such as cooperative rendezvous and second-order conditions are considered. Seven appendices are included to supplement the text, some with problems. Both classical results and newer research results are included. A new test for a conjugate point is demonstrated. The book is both a graduate-level textbook and a scholarly reference book.
2

Prussing, John E. Second-Order Conditions. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198811084.003.0009.

Abstract:
Second-order conditions for both parameter optimization problems and optimal control problems are analysed. A new conjugate point test procedure is discussed and illustrated. For an optimal control problem we examine the second variation of the cost: the first variation subject to constraints provides first-order necessary conditions for a minimum of J, while second-order conditions provide sufficient conditions for a minimum.
3

Sikorski, Krzysztof A. Optimal Solution of Nonlinear Equations. Oxford University Press, 2001. http://dx.doi.org/10.1093/oso/9780195106909.001.0001.

Abstract:
Optimal Solution of Nonlinear Equations is a text/monograph designed to provide an overview of optimal computational methods for the solution of nonlinear equations, fixed points of contractive and noncontractive mappings, and for the computation of the topological degree. It is of interest to any reader working in the area of information-based complexity. The worst-case settings are analyzed here. Several classes of functions are studied with special emphasis on tight complexity bounds and methods which are close to or achieve these bounds. Each chapter ends with exercises, including open-ended, research-based exercises.
4

Spinrad, Richard W., Kendall L. Carder, and Mary Jane Perry, eds. Ocean Optics. Oxford University Press, 1994. http://dx.doi.org/10.1093/oso/9780195068436.001.0001.

Abstract:
Since the publication of Jerlov's classic volume on optical oceanography in 1968, the ability to predict or model the submarine light field, given measurements of the inherent optical properties of the ocean, has improved to the point that model fields are very close to measured fields. In the last three decades, remote sensing capabilities have fostered powerful models that can be inverted to estimate the inherent optical properties closely related to substances important for understanding global biological productivity, environmental quality, and most nearshore geophysical processes. This volume presents an eclectic blend of information on the theories, experiments, and instrumentation that now characterize the ways in which optical oceanography is studied. Through the course of this interdisciplinary work, the reader is led from the physical concepts of radiative transfer to the experimental techniques used in the lab and at sea, to process-oriented discussions of the biochemical mechanisms responsible for oceanic optical variability. The text will be of interest to researchers and students in physical and biological oceanography, biology, geophysics, limnology, atmospheric optics, and remote sensing of ocean and global climate change.
APA, Harvard, Vancouver, ISO, and other styles
5

Jeffrey, Waincymer. Part IX Costs, Funding, and Ideas for Optimization, 28 Optimizing the use of Mediation in International Arbitration: A Cost–Benefit Analysis of ‘Two Hat’ Versus ‘Two People’ Models. Oxford University Press, 2016. http://dx.doi.org/10.1093/law/9780198783206.003.0029.

Full text
Abstract:
This chapter considers the question of whether an arbitrator may also adopt a mediation function or whether the dual roles are antithetical. It tests that hypothesis by engaging in a cost-benefit analysis of differing scenarios when mediation is utilized in an arbitral context. The prime comparison is between parallel mediation with a separate neutral and the alternative of a dual-role neutral. The three key points are: there should be much more mediation occurring at the international level, regarding both potential and actual arbitral disputes; a commercially minded arbitrator concerned for the parties’ good faith should encourage mediation where appropriate, in particular, when an adjudicated outcome will not be in the interests of either, usually because the dispute is a small part of a long-term relationship that can risk that relationship no matter who wins; and, while informed party autonomy should always support a dual-role neutral, in most factual permutations, informed parties could be expected to prefer parallel mediation provided there is full cooperation between mediator and arbitrator. The chapter argues that the relative benefits of the use of dual-role neutrals would be greatly outweighed by the costs in fairness and efficiency, and the inevitable need for a sub-optimal design of either or both dispute processes. The benefits would also be separately outweighed by the risks of significant disruption to any ensuing arbitration if a dual-role neutral fails to achieve a settlement.
APA, Harvard, Vancouver, ISO, and other styles
6

Newnham, Robert E. Properties of Materials. Oxford University Press, 2004. http://dx.doi.org/10.1093/oso/9780198520757.001.0001.

Full text
Abstract:
Crystals are sometimes called 'Flowers of the Mineral Kingdom'. In addition to their great beauty, crystals and other textured materials are enormously useful in electronics, optics, acoustics and many other engineering applications. This richly illustrated text describes the underlying principles of crystal physics and chemistry, covering a wide range of topics and illustrating numerous applications in many fields of engineering using the most important materials today. Tensors, matrices, symmetry and structure-property relationships form the main subjects of the book. While tensors and matrices provide the mathematical framework for understanding anisotropy, on which the physical and chemical properties of crystals and textured materials often depend, atomistic arguments are also needed to quantify the property coefficients in various directions. The atomistic arguments are partly based on symmetry and partly on the basic physics and chemistry of materials. After introducing the point groups appropriate for single crystals, textured materials and ordered magnetic structures, the directional properties of many different materials are described: linear and nonlinear elasticity, piezoelectricity and electrostriction, magnetic phenomena, diffusion and other transport properties, and both primary and secondary ferroic behavior. With crystal optics (its roots in classical mineralogy) having become an important component of the information age, nonlinear optics is described along with piezo-optics, magneto-optics, and analogous linear and nonlinear acoustic wave phenomena. Enantiomorphism, optical activity, and chemical anisotropy are discussed in the final chapters of the book.
APA, Harvard, Vancouver, ISO, and other styles
7

Krishnan, Kannan M. Principles of Materials Characterization and Metrology. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198830252.001.0001.

Full text
Abstract:
Characterization enables a microscopic understanding of the fundamental properties of materials (Science) to predict their macroscopic behavior (Engineering). With this focus, the book presents a comprehensive discussion of the principles of materials characterization and metrology. Characterization techniques are introduced through elementary concepts of bonding, electronic structure of molecules and solids, and the arrangement of atoms in crystals. Then, the range of electrons, photons, ions, neutrons and scanning probes, used in characterization, including their generation and related beam-solid interactions that determine or limit their use, are presented. This is followed by ion-scattering methods, optics, optical diffraction, microscopy, and ellipsometry. Generalization of Fraunhofer diffraction to scattering by a three-dimensional arrangement of atoms in crystals, leads to X-ray, electron, and neutron diffraction methods, both from surfaces and the bulk. Discussion of transmission and analytical electron microscopy, including recent developments, is followed by chapters on scanning electron microscopy and scanning probe microscopies. It concludes with elaborate tables to provide a convenient and easily accessible way of summarizing the key points, features, and inter-relatedness of the different spectroscopy, diffraction, and imaging techniques presented throughout. The book uniquely combines a discussion of the physical principles and practical application of these characterization techniques to explain and illustrate the fundamental properties of a wide range of materials in a tool-based approach. 
Based on forty years of teaching and research, and including worked examples, test-your-knowledge questions, and exercises, the book has a wide target readership: it is expected to appeal to the teaching of undergraduate and graduate students, and to post-docs, in multiple disciplines of science, engineering, biology and art conservation, and to professionals in industry.
APA, Harvard, Vancouver, ISO, and other styles
8

Skiba, Grzegorz. Fizjologiczne, żywieniowe i genetyczne uwarunkowania właściwości kości rosnących świń. The Kielanowski Institute of Animal Physiology and Nutrition, Polish Academy of Sciences, 2020. http://dx.doi.org/10.22358/mono_gs_2020.

Full text
Abstract:
Bones are multifunctional passive organs of movement that support soft tissue and directly attached muscles. They also protect internal organs and are a reserve of calcium, phosphorus and magnesium. Each bone is covered with periosteum, and the adjacent bone surfaces are covered by articular cartilage. Histologically, the bone is an organ composed of many different tissues. The main component is bone tissue (cortical and spongy) composed of a set of bone cells and intercellular substance (mineral and organic); it also contains fat, hematopoietic (bone marrow) and cartilaginous tissue. Bone is a tissue that even in adult life retains the ability to change shape and structure depending on changes in its mechanical and hormonal environment, as well as self-renewal and repair capabilities. This process is called bone turnover. The basic processes of bone turnover are: • bone modeling (incessant changes in bone shape during individual growth) following resorption and tissue formation at various locations (e.g. bone marrow formation) to increase mass and skeletal morphology. This process occurs in the bones of growing individuals and stops after reaching puberty • bone remodeling (processes involved in maintaining bone tissue by resorbing and replacing old bone tissue with new tissue in the same place, e.g. repairing micro-fractures). It is a process involving the removal and internal remodeling of existing bone and is responsible for maintaining tissue mass and architecture of mature bones. Bone turnover is regulated by two types of transformation: • osteoclastogenesis, i.e. formation of cells responsible for bone resorption • osteoblastogenesis, i.e. formation of cells responsible for bone formation (bone matrix synthesis and mineralization) Bone maturity can be defined as the completion of basic structural development and mineralization leading to maximum mass and optimal mechanical strength.
The highest rate of increase in pig bone mass is observed in the first twelve weeks after birth. This period of growth is considered crucial for optimizing the growth of the skeleton of pigs, because the degree of bone mineralization in later life stages (adulthood) depends largely on the amount of bone minerals accumulated in the early stages of their growth. The development of technique makes it possible to determine the condition of the skeletal system (or individual bones) in living animals by methods used in human medicine, or after their slaughter. For in vivo determination of bone properties, dual-energy X-ray absorptiometry or computed tomography scanning techniques are used. Both methods allow the quantification of mineral content and bone mineral density. The most important property from a practical point of view is the bone’s bending strength, which is directly determined by the maximum bending force. The most important factors affecting bone strength are: • age (growth period), • gender and the associated hormonal balance, • genotype and modification of genes responsible for bone growth, • chemical composition of the body (protein and fat content, and the proportion between these components), • physical activity and related bone load, • nutritional factors: – protein intake influencing synthesis of organic matrix of bone, – content of minerals in the feed (Ca, P, Zn, Ca/P, Mg, Mn, Na, Cl, K, Cu ratio) influencing synthesis of the inorganic matrix of bone, – mineral/protein ratio in the diet (Ca/protein, P/protein, Zn/protein), – feed energy concentration, – energy source (content of saturated fatty acids - SFA, content of polyunsaturated fatty acids - PUFA, in particular ALA, EPA, DPA, DHA), – feed additives, in particular: enzymes (e.g. phytase releasing minerals bound in phytin complexes), probiotics and prebiotics (e.g.
inulin improving the function of the digestive tract by increasing absorption of nutrients), – vitamins that regulate metabolism and biochemical changes occurring in bone tissue (e.g. vitamins D3, B6, C and K). This study was based on the results of research experiments from available literature, and studies on growing pigs carried out at the Kielanowski Institute of Animal Physiology and Nutrition, Polish Academy of Sciences. The tests were performed in total on 300 pigs of Duroc, Pietrain, Puławska breeds, line 990 and hybrids (Great White × Duroc, Great White × Landrace), PIC pigs, slaughtered at different body weights during the growth period from 15 to 130 kg. Bones for biomechanical tests were collected after slaughter from each pig. Their length, mass and volume were determined. Based on these measurements, the specific weight (density, g/cm3) was calculated. Then each bone was cut in the middle of the shaft and the outer and inner diameters were measured both horizontally and vertically. Based on these measurements, the following indicators were calculated: • cortical thickness, • cortical surface, • cortical index. Bone strength was tested by a three-point bending test. The obtained data enabled the determination of: • bending force (the magnitude of the maximum force at which disintegration and disruption of bone structure occurs), • strength (the amount of maximum force needed to break/crack the bone), • stiffness (quotient of the force acting on the bone and the amount of displacement occurring under the influence of this force). Investigation of changes in physical and biomechanical features of bones during growth was performed on pigs of the synthetic 990 line growing from 15 to 130 kg body weight. The animals were slaughtered successively at a body weight of 15, 30, 40, 50, 70, 90, 110 and 130 kg.
After slaughter, the following bones were separated from the right half-carcass: humerus, femur, tibia and fibula, as well as the 3rd and 4th metacarpal and the 3rd and 4th metatarsal bones. The features of bones were determined using methods described in the methodology. Describing bone growth with the Gompertz equation, it was found that the earliest slowdown of the growth curve was observed for the metacarpal and metatarsal bones. This means that these bones matured most quickly. The established data also indicate that the rib is the slowest-maturing bone. The femur, humerus, tibia and fibula were between the values of these features for the metatarsal, metacarpal and rib bones. The rate of increase in bone mass and length differed significantly between the examined bones, but in all cases it was lower (coefficient b < 1) than the growth rate of the whole body of the animal. The fastest growth rate was estimated for the rib mass (coefficient b = 0.93). Among the long bones, the humerus (coefficient b = 0.81) was characterized by the fastest rate of weight gain and the femur by the slowest (coefficient b = 0.71). The lowest rate of bone mass increase was observed in the foot bones, with the metacarpal bones having a slightly higher value of coefficient b than the metatarsal bones (0.67 vs 0.62). The third bone had a lower growth rate than the fourth bone, regardless of whether they were metatarsal or metacarpal. The value of the bending force increased as the animals grew. Regardless of the growth point tested, the highest values were observed for the humerus, tibia and femur, smaller for the metatarsal and metacarpal bones, and the lowest for the fibula and rib.
The rate of change in the value of this indicator increased at a similar rate to the body weight of the animals in the case of the fibula and the fourth metacarpal bone (b = 0.98), more slowly in the case of the metatarsal bones, the third metacarpal bone and the tibia (b = 0.81–0.85), and slowest for the femur, humerus and rib (b = 0.60–0.66). Bone stiffness increased as the animals grew. Regardless of the growth point tested, the highest values were observed for the humerus, tibia and femur, smaller for the metatarsal and metacarpal bones, and the lowest for the fibula and rib. The rate of change in the value of this indicator changed faster than the increase in weight of the pigs in the case of the metacarpal and metatarsal bones (coefficient b = 1.01–1.22), slightly slower in the case of the fibula (coefficient b = 0.92), and definitely slower in the case of the tibia (b = 0.73), rib (b = 0.66), femur (b = 0.59) and humerus (b = 0.50). Bone strength increased as the animals grew. Regardless of the growth point tested, bone strength was in the order: femur > tibia > humerus > 4th metacarpal > 3rd metacarpal > 3rd metatarsal > 4th metatarsal > rib > fibula. The rate of increase in strength of all examined bones was greater than the rate of weight gain of the pigs (coefficient b = 2.04–3.26). As the animals grew, bone density increased; however, the growth rate of this indicator for the majority of bones was slower than the rate of weight gain (coefficient b ranged from 0.37 for the humerus to 0.84 for the fibula). The exception was the rib, whose density increased at a similar pace to the body weight of the animals (coefficient b = 0.97). The study on the influence of breed and feeding intensity on bone characteristics (physical and biomechanical) was performed on pigs of the breeds Duroc and Pietrain and the synthetic line 990 during a growth period of 15 to 70 kg body weight.
Animals were fed ad libitum or in a restricted (dosed) system. After slaughter at a body weight of 70 kg, three bones were taken from the right half-carcass: the femur, third metatarsal and third metacarpal, and subjected to the determinations described in the methodology. The weight of bones of animals fed ad libitum was significantly lower than in pigs fed restrictively. All bones of the Duroc breed were significantly heavier and longer than those of the Pietrain and line 990 pigs. The average values of bending force for the examined bones took the following order: III metatarsal bone (63.5 kg) < III metacarpal bone (77.9 kg) < femur (271.5 kg). The feeding system and breed of pigs had no significant effect on the value of this indicator. The average values of bone strength took the following order: III metatarsal bone (92.6 kg) < III metacarpal (107.2 kg) < femur (353.1 kg). Feeding intensity and breed of animals had no significant effect on the value of this feature of the bones tested. The average bone density took the following order: femur (1.23 g/cm3) < III metatarsal bone (1.26 g/cm3) < III metacarpal bone (1.34 g/cm3). The density of bones of animals fed ad libitum was higher (P<0.01) than in animals fed with a dosing system. The density of the examined bones within the breeds took the following order: Pietrain > line 990 > Duroc. The differences between the “extreme” breeds were: 7.2% (III metatarsal bone), 8.3% (III metacarpal bone), 8.4% (femur). The average bone stiffness took the following order: III metatarsal bone (35.1 kg/mm) < III metacarpal (41.5 kg/mm) < femur (60.5 kg/mm). This indicator did not differ between the groups of pigs fed at different intensities, except for the metacarpal bone, which was stiffer in pigs fed ad libitum (P<0.05). The femur of animals fed ad libitum showed a tendency (P<0.09) to be stiffer, with a force of 4.5 kg required to displace it by 1 mm.
Breed differences in stiffness were found for the femur (P<0.05) and III metacarpal bone (P<0.05). For the femur, the highest value of this indicator was found in Pietrain pigs (64.5 kg/mm), lower in line 990 pigs (61.6 kg/mm) and the lowest in Duroc pigs (55.3 kg/mm). In turn, the III metacarpal bone of Duroc and Pietrain pigs had similar stiffness (39.0 and 40.0 kg/mm, respectively), smaller than that of line 990 pigs (45.4 kg/mm). The thickness of the cortical bone layer took the following order: III metatarsal bone (2.25 mm) < III metacarpal bone (2.41 mm) < femur (5.12 mm). The feeding system did not affect this indicator. Breed differences (P<0.05) for this trait were found only for the femur: Duroc (5.42 mm) > line 990 (5.13 mm) > Pietrain (4.81 mm). The cross-sectional area of the examined bones was arranged in the following order: III metatarsal bone (84 mm2) < III metacarpal bone (90 mm2) < femur (286 mm2). The feeding system had no effect on the value of this bone trait, with the exception of the femur, which in animals fed the dosing system was 4.7% higher (P<0.05) than in pigs fed ad libitum. Breed differences (P<0.01) in the cross-sectional area were found only in the femur and III metatarsal bone. The value of this indicator was the highest in Duroc pigs, lower in line 990 animals and the lowest in Pietrain pigs. The cortical index of individual bones was in the following order: III metatarsal bone (31.86) < III metacarpal bone (33.86) < femur (44.75). However, its value did not significantly depend on the intensity of feeding or the breed of pigs.
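The allometric coefficients b reported in this abstract (e.g. b = 0.93 for rib mass, b = 0.71 for femur mass) are the slopes of the trait-versus-body-weight relation on a log-log scale. A sketch of how such a coefficient can be estimated (illustrative only; the study itself fitted the Gompertz equation, and its exact procedures may differ):

```python
import math

def allometric_b(body_weights, trait_values):
    """Estimate b in the allometric relation y = a * W**b by ordinary
    least squares on log y = log a + b * log W. A coefficient b < 1
    means the trait grows more slowly than the whole body."""
    lw = [math.log(w) for w in body_weights]
    ly = [math.log(y) for y in trait_values]
    n = len(lw)
    mw, my = sum(lw) / n, sum(ly) / n
    sxy = sum((x - mw) * (y - my) for x, y in zip(lw, ly))
    sxx = sum((x - mw) ** 2 for x in lw)
    return sxy / sxx  # slope = allometric coefficient b
```

On synthetic data generated with a known exponent, the estimator recovers it exactly, since a pure power law is linear on the log-log scale.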
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Test point optimal"

1

Felder, Stefan, and Thomas Mayrhofer. "The Optimal Cutoff Point of a Diagnostic Test." In Medical Decision Making, 121–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-18330-0_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mabey, David, and Rosanna Peeling. "The Optimal Features of a Rapid Point-of-Care Diagnostic Test." In Revolutionizing Tropical Medicine, 81–87. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2019. http://dx.doi.org/10.1002/9781119282686.ch3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Vo, Dieu Ngoc, and Peter Schegner. "An Improved Particle Swarm Optimization for Optimal Power Flow." In Meta-Heuristics Optimization Algorithms in Engineering, Business, Economics, and Finance, 1–40. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2086-5.ch001.

Full text
Abstract:
This chapter proposes a newly improved particle swarm optimization (IPSO) method for solving the optimal power flow (OPF) problem. The proposed IPSO is particle swarm optimization with a constriction factor and with each particle's velocity guided by a pseudo-gradient. The pseudo-gradient determines the direction for the particles so that they can quickly move toward the optimal solution. The proposed method has been tested on benchmark functions and the IEEE 14-bus, IEEE 30-bus, IEEE 57-bus, and IEEE 118-bus systems, in which the IEEE 30-bus system is tested with different objective functions including a quadratic function, valve point effects, and multiple fuels. The test results have shown that the proposed method can efficiently obtain better total costs than the conventional PSO method. Therefore, the proposed IPSO could be a useful method for implementation in the OPF problem.
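The constriction-factor update with a pseudo-gradient guide described in this abstract can be sketched as follows. This is an illustrative reading of the idea, not the authors' exact formulation; the sphere test function and all parameter values are assumptions:

```python
import numpy as np

def ipso(f, lo, hi, n_particles=30, n_iter=200, seed=0):
    """Constriction-factor PSO with a pseudo-gradient guiding velocities:
    a particle whose last move improved its fitness keeps stepping along
    the sign of that move; otherwise the plain update applies."""
    rng = np.random.default_rng(seed)
    dim = lo.size
    phi = 4.1
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))  # ~0.7298
    c1 = c2 = phi / 2.0

    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    fit = np.array([f(p) for p in x])
    x_prev, fit_prev = x.copy(), fit.copy()
    pbest, pbest_fit = x.copy(), fit.copy()
    g = pbest[np.argmin(pbest_fit)].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x))
        # pseudo-gradient: improved particles continue along the sign of
        # their last displacement with the current speed
        improved = fit < fit_prev
        direction = np.sign(x - x_prev)
        step = np.where(improved[:, None], direction * np.abs(v), v)
        x_prev, fit_prev = x.copy(), fit.copy()
        x = np.clip(x + step, lo, hi)
        fit = np.array([f(p) for p in x])
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = x[better], fit[better]
        g = pbest[np.argmin(pbest_fit)].copy()
    return g, float(pbest_fit.min())
```

On a smooth test function the pseudo-gradient term simply accelerates particles that are already descending, which is the intuition the chapter gives for faster convergence.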
APA, Harvard, Vancouver, ISO, and other styles
4

Patel, Sarosh R., and Tarek Sobh. "Optimal Design of Three-Link Planar Manipulators Using Grashof’s Criterion." In Prototyping of Robotic Systems, 70–83. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0176-5.ch003.

Full text
Abstract:
The design of robotic manipulators is dictated by a set of pre-determined task descriptions and performance parameters. These performance parameters are often defined in terms of workspace dexterity, manipulability, and accuracy. Many serial manipulator applications require that the manipulator have full dexterity about a work piece or a pre-defined trajectory, that is, to approach the given point within the workspace with all possible orientations about that point. Grashof’s criterion defines the mobility of four-link closed chain mechanisms in relation to their link lengths. A simple assumption can convert a three-link serial manipulator into a four-link closed chain so that its mobility can be studied using Grashof’s criterion. With the help of Grashof’s criterion, it is possible not only to predict and simulate the mobility of a manipulator during its design, but also to map and identify the fully-dexterous regions within its workspace. Mapping of the dexterous workspace is helpful in efficient task placement and path planning. Next, the authors propose a simple algorithm using Grashof’s criterion for determining the optimal link lengths of a three-link manipulator, in order to achieve full dexterity at the desired regions of the workspace. Finally, the authors test the generated design by applying joint angle limitations.
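Grashof's condition itself is easy to state in code. The sketch below checks the classic condition s + l ≤ p + q and then, as a hypothetical illustration of the "simple assumption" mentioned in the abstract, treats the base-to-target distance of a three-link planar arm as a virtual fourth link (the chapter's exact dexterity test may differ):

```python
def grashof_class(l1, l2, l3, l4):
    """Classify a four-link chain by Grashof's criterion: with s and l the
    shortest and longest links and p, q the other two, s + l < p + q means
    at least one link can rotate fully relative to the others."""
    s, p, q, l = sorted([l1, l2, l3, l4])
    if s + l < p + q:
        return "Grashof"
    if s + l == p + q:
        return "change point"
    return "non-Grashof"

def fully_dexterous(a1, a2, a3, d):
    """Hypothetical helper: fixing the end-effector at distance d from the
    base closes the three links into a four-link chain; requiring a Grashof
    chain lets the arm sweep all orientations about the point."""
    return grashof_class(a1, a2, a3, d) == "Grashof"
```

Scanning `d` over the workspace with `fully_dexterous` is one way to map the fully-dexterous region the abstract describes.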
APA, Harvard, Vancouver, ISO, and other styles
5

Polprasert, Jirawadee, Weerakorn Ongsakul, and Vo Ngoc Dieu. "Improved Pseudo-Gradient Search Particle Swarm Optimization for Optimal Power Flow Problem." In Sustaining Power Resources through Energy Optimization and Engineering, 177–207. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9755-3.ch008.

Full text
Abstract:
This paper proposes an improved pseudo-gradient search particle swarm optimization (IPG-PSO) for solving optimal power flow (OPF) with non-convex generator fuel cost functions. The objective of the OPF problem is to minimize generator fuel cost considering valve point loading, voltage deviation and voltage stability index, subject to power balance constraints, generator operating constraints, transformer tap setting constraints, shunt VAR compensator constraints, and load bus voltage and line flow constraints. The proposed IPG-PSO method is PSO improved by a chaotic weight factor, with the particles' movement guided in an appropriate direction by pseudo-gradient search. Test results on the IEEE 30-bus and 118-bus systems indicate that the IPG-PSO method is superior to other methods in terms of lower generator fuel cost, smaller voltage deviation, and lower voltage stability index.
APA, Harvard, Vancouver, ISO, and other styles
6

Asher, Anthony, and John De Ravin. "The Age Pension Means Tests: Contorting Australian Retirement." In Who Wants to Retire and Who Can Afford to Retire? IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.91856.

Full text
Abstract:
Most Australian retirees are likely to be subject to the Age Pension assets or income test at some point. Evidence is that many retirees adapt their consumption to increase Age Pension entitlements, but long-term implications are difficult to determine—even if the current rules were to remain in place. This chapter evaluates the current approach to means testing against the principles set out in a Department of Social Services discussion paper on this topic. We evaluate the implied “effective marginal tax rates” (EMTRs) on the assets of part pensioners who are subject to the assets test. We find that depending on a variety of parameters such as assumed future earnings rates, demographic status, drawdown strategy and the base level of assets held, the EMTRs are high enough to explain material distortions to savings decisions of those still in employment, and the spending and investment decisions of retirees. Optimal decisions in this context require contorted retirement strategies that do not appear to be in anyone’s interest. Some possible remedies are suggested, which should include incorporating the value of the principal residence within the assets test. The chapter therefore illustrates the application of principled analysis to policy issues of this sort.
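The EMTR logic the authors apply can be illustrated with a toy calculation. The 7.8% p.a. figure corresponds to the post-2017 assets-test taper of $3.00 per fortnight per $1,000 of assessable assets above the threshold; the earnings rate is an assumption, and the chapter's own modelling is considerably richer:

```python
def emtr_on_assets(extra_assets, earn_rate, taper_pa=0.078):
    """Effective marginal tax rate on extra assessable assets for a
    part-pensioner on the assets-test taper: pension clawed back per
    year, as a fraction of what those assets earn per year."""
    extra_income = extra_assets * earn_rate   # what the extra savings earn
    pension_lost = extra_assets * taper_pa    # what the taper claws back
    return pension_lost / extra_income
```

An extra $1,000 earning 5% yields $50 a year but forfeits $78 of pension, an EMTR of 156%: holding the extra assets leaves the retiree worse off, which is exactly the distortion to savings and spending decisions the chapter documents.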
APA, Harvard, Vancouver, ISO, and other styles
7

Khoa, Truong Hoang, Pandian Vasant, Balbir Singh Mahinder Singh, and Vo Ngoc Dieu. "Swarm-Based Mean-Variance Mapping Optimization (MVMOS) for Solving Non-Convex Economic Dispatch Problems." In Advances in Computational Intelligence and Robotics, 211–51. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-8291-7.ch007.

Full text
Abstract:
Practical Economic Dispatch (ED) problems have non-convex objective functions with complex constraints due to the effects of valve point loadings, multiple fuels, and prohibited zones. This leads to difficulty in finding the global optimal solution of the ED problems. This chapter proposes a new swarm-based Mean-Variance Mapping Optimization (MVMOS) for solving the non-convex ED. The proposed algorithm is a new population-based meta-heuristic optimization technique; its special feature is a mapping function applied for the mutation. The proposed MVMOS is tested on several test systems, and comparisons of the obtained numerical results between MVMOS and other optimization techniques are carried out. The comparisons show that the proposed method is more robust and provides better solution quality than most of the other methods. Therefore, MVMOS is very favorable for solving non-convex ED problems.
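The "mapping function applied for the mutation" is the defining ingredient of MVMO-family methods. Below is a sketch of the classic h-mapping (after Erlich's original MVMO; the MVMOS variant in this chapter may differ in detail) for a decision variable normalized to [0, 1]:

```python
import math

def mvmo_map(u, mean, s1, s2):
    """Transform a uniform random u in [0, 1] so that offspring values
    concentrate around `mean` (the running mean of the best solutions
    found so far); s1 and s2 are shape factors. The endpoints are
    preserved: u = 0 maps to 0 and u = 1 maps to 1."""
    def h(x):
        return mean * (1.0 - math.exp(-x * s1)) + (1.0 - mean) * math.exp(-(1.0 - x) * s2)
    h0, h1, hx = h(0.0), h(1.0), h(u)
    return hx + (1.0 - h1 + h0) * u - h0
```

A draw of u = 0.5 with mean = 0.9 is pulled up toward 0.9, which is how the mutation concentrates the search around good solutions while never leaving the variable's bounds.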
APA, Harvard, Vancouver, ISO, and other styles
8

Bäck, Thomas. "An Experiment in Meta-Evolution." In Evolutionary Algorithms in Theory and Practice. Oxford University Press, 1996. http://dx.doi.org/10.1093/oso/9780195099713.003.0013.

Full text
Abstract:
So far, the basic knowledge about setting up the parameters of Evolutionary Algorithms stems from a lot of empirical work and few theoretical results. The standard guidelines for parameters such as crossover rate, mutation probability, and population size as well as the standard settings of the recombination operator and selection mechanism were presented in chapter 2 for the Evolutionary Algorithms. In the case of Evolution Strategies and Evolutionary Programming, the self-adaptation mechanism for strategy parameters solves this parameterization problem in an elegant way, while for Genetic Algorithms no such technique is employed. Chapter 6 served to identify a reasonable choice of the mutation rate, but no theoretically confirmed knowledge about the choice of the crossover rate and the crossover operator is available. With respect to the optimal population size for Genetic Algorithms, Goldberg presented some theoretical arguments based on maximizing the number of schemata processed by the algorithm within fixed time, arriving at an optimal size λ* = 3 for serial implementations and extremely small string length [Gol89b]. However, as indicated in section 2.3.7 and chapter 6, it is by no means clear whether the schema processing point of view is appropriately preferred to the convergence velocity investigations presented in section 2.1.7 and chapter 6. As pointed out several times, we prefer the point of view which concentrates on a convergence velocity analysis. Consequently, the search for useful parameter settings of a Genetic Algorithm constitutes an optimization problem by itself, leading to the idea of using an Evolutionary Algorithm on a higher level to evolve optimal parameter settings of Genetic Algorithms. Due to the existence of two logically different levels in such an approach, it is reasonable to call it a meta-evolutionary algorithm. 
By concentrating on meta-evolution in this chapter, we will radically deviate from the biological model, where no two-level evolution process is to be observed but the self-adaptation principle can well be identified (as argued in chapter 2). However, there are several reasons why meta-evolution promises to yield some helpful insight into the working principles of Evolutionary Algorithms: First, meta-evolution provides the possibility to test whether the basic heuristic and the theoretical knowledge about parameterizations of Genetic Algorithms is also evolvable by the experimental approach, thus allowing us to confirm the heuristics or to point at alternatives.
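The two-level structure described here is straightforward to prototype. In the sketch below (a toy illustration, not Bäck's experimental setup), an outer loop evolves the mutation and crossover rates of an inner genetic algorithm that is scored on the OneMax problem:

```python
import random

def inner_ga(p_m, p_c, n_bits=30, pop=20, gens=40, seed=0):
    """Inner level: a plain GA on OneMax (maximize the number of 1-bits),
    a toy stand-in for the objective functions studied in the chapter."""
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
    for _ in range(gens):
        parents = sorted(popn, key=sum, reverse=True)[: pop // 2]  # truncation selection
        children = []
        while len(children) < pop:
            a, b = rng.sample(parents, 2)
            if rng.random() < p_c:                       # one-point crossover
                cut = rng.randrange(1, n_bits)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            child = [bit ^ (rng.random() < p_m) for bit in child]  # bit-flip mutation
            children.append(child)
        popn = children
    return max(sum(ind) for ind in popn)

def meta_ga(meta_pop=8, meta_gens=5, seed=1):
    """Outer level: evolve (p_m, p_c) pairs, scoring each setting by a full
    run of the inner GA -- the two logical levels of meta-evolution."""
    rng = random.Random(seed)
    params = [(rng.uniform(0.0, 0.2), rng.uniform(0.4, 1.0)) for _ in range(meta_pop)]
    score = lambda t: inner_ga(*t)
    for _ in range(meta_gens):
        elite = sorted(params, key=score, reverse=True)[: meta_pop // 2]
        mutants = [(min(0.5, max(0.0, pm + rng.gauss(0, 0.02))),
                    min(1.0, max(0.0, pc + rng.gauss(0, 0.1))))
                   for pm, pc in elite]
        params = elite + mutants
    return max(params, key=score)
```

The expense is obvious even in the toy: every outer-level fitness evaluation costs a complete inner-level run, which is why meta-evolution is used to study parameterizations rather than as a production optimizer.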
APA, Harvard, Vancouver, ISO, and other styles
9

Urbina, Ezio Nicolas Bruno, and Elisa Spallarossa. "BIM Tools for the Energy Analysis of Urban Transformation Projects and the Application to the Development of Healthcare Infrastructures." In Advances in Civil and Industrial Engineering, 540–74. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-7091-3.ch024.

Full text
Abstract:
The aim of INDICATE was the creation of an innovative interactive software capable of providing designers, urban planners, and companies a “decision support system” in all urban development phases of a city: from the construction of a single building to the design of a master plan. Galliera test site experience was developed following the idea of using the INDICATE tool on Genoa site to understand how the tool can work in the planning stages, as if the preliminary design of the new hospital were not already defined, to understand the optimal solution to select from the energy point of view. The chapter shows how a tool such as INDICATE would have proven absolutely useful to comprehend the different energetic and economic impacts of different options and of different new building shapes. The experience gained with the INDICATE project and BIM implementation within other projects and realities of the authors could also be adopted to develop the implementation of the BIM in another important hospital in Genoa, the Giannina Gaslini Pediatric Hospital.
APA, Harvard, Vancouver, ISO, and other styles
10

Sikorski, Krzysztof A. "Fixed Points- Noncontractive Functions." In Optimal Solution of Nonlinear Equations. Oxford University Press, 2001. http://dx.doi.org/10.1093/oso/9780195106909.003.0007.

Full text
Abstract:
In this chapter we consider the approximation of fixed points of noncontractive functions with respect to the absolute error criterion. In this case the functions may have multiple and/or whole manifolds of fixed points. We analyze methods based on sequential function evaluations as information. The simple iteration usually does not converge in this case, and the problem becomes much more difficult to solve. We prove that even in the two-dimensional case the problem has infinite worst case complexity. This means that no methods exist that solve the problem with arbitrarily small error tolerance for some “bad” functions. In the univariate case the problem is solvable, and a bisection envelope method is optimal. These results are in contrast with the solution under the residual error criterion. The problem then becomes solvable, although with exponential complexity, as outlined in the annotations. Therefore, simplicial and/or homotopy continuation and all methods based on function evaluations exhibit exponential worst case cost for solving the problem in the residual sense. These results indicate the need of average case analysis, since for many test functions the existing algorithms computed ε-approximations with polynomial in 1/ε cost.
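In the univariate case, the bisection-envelope idea reduces to bisecting on g(x) = f(x) − x, which is what makes the problem solvable there even when simple iteration diverges. A minimal sketch of this idea only, not the book's envelope construction or its error analysis:

```python
import math

def fixed_point_bisection(f, a, b, tol=1e-8):
    """Find x with f(x) = x by bisecting on g(x) = f(x) - x.
    Requires g(a) >= 0 >= g(b), which holds e.g. when f maps [a, b]
    into itself; works even when the iteration x -> f(x) fails to
    converge, since only the sign of g is used."""
    g = lambda x: f(x) - x
    if not (g(a) >= 0.0 >= g(b)):
        raise ValueError("need f(a) >= a and f(b) <= b")
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(m) >= 0.0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)
```

For f = cos on [0, 1] this converges to the fixed point near 0.739085 in roughly 27 bisection steps, halving the enclosing interval each time.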
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Test point optimal"

1

Salamin, Sami, Hussam Amrouch, and Jorg Henkel. "Selecting the Optimal Energy Point in Near-Threshold Computing." In 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2019. http://dx.doi.org/10.23919/date.2019.8715211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ruiz, F. Daniel, Jesus Urena, Jose M. Villadangos, Isaac Gude, Juan J. Garcia, Alvaro Hernandez, and Ana Jimenez. "Optimal test-point positions for calibrating an ultrasonic LPS system." In Factory Automation (ETFA 2008). IEEE, 2008. http://dx.doi.org/10.1109/etfa.2008.4638416.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Devadze, David, and Hamlet Meladze. "Algorithm of Solution an Optimal Control Problem for Elliptic Differential Equations with m-Point Bitsadze-Samarski Conditions." In 2018 IEEE East-West Design & Test Symposium (EWDTS). IEEE, 2018. http://dx.doi.org/10.1109/ewdts.2018.8524775.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Abashidze, Marina, and Vakhtang Beridze. "Solution of an Optimal Control Problem for Helmholtz Equations with m- Point Nonlocal Boundary Conditions by Means Mathcad." In 2018 IEEE East-West Design & Test Symposium (EWDTS). IEEE, 2018. http://dx.doi.org/10.1109/ewdts.2018.8524137.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jachowicz, Ryszard, Jerzy Weremczuk, Daniel Paczesny, and Grzegorz Tarapata. "MEMS Based Dew Point Hygrometer With Optimal Self Adjusted Detection Threshold." In 2008 Second International Conference on Integration and Commercialization of Micro and Nanosystems. ASMEDC, 2008. http://dx.doi.org/10.1115/micronano2008-70134.

Full text
Abstract:
A new dew point temperature hygrometer, based on a semiconductor MEMS detector, is presented in the paper. Details of the MEMS detector construction are given in the report. The basic idea of the detector control algorithms is also discussed, with particular attention to the subalgorithm for self-adjusted detector threshold operation. The excellent dynamic parameters of the new hygrometer (2–5 dew point detections and temperature measurements per second), proved by hygrometer tests, are presented and described; the presented hygrometer is thus 10–100 times faster than conventional hygrometers. At the end of the paper two medical applications are demonstrated with clinical test results. The first application is in dermatology, measuring the TransEpidermal Water Loss (TEWL) factor of human skin. The second is focused on measuring humidity in the human nasal cavity and throat during breathing. Both cases require fast humidity measurements, with a time constant of about 0.5 s.
APA, Harvard, Vancouver, ISO, and other styles
6

Shao, Tiefu, and Sundar Krishnamurthy. "A Hybrid Method for Surrogate Model Updating in Engineering Design Optimization." In ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/detc2007-35482.

Full text
Abstract:
This paper addresses the critical issues of effectiveness, efficiency, and reliability in simulation-based design optimization under surrogate model uncertainty. Specifically, it presents a novel method to build surrogate models iteratively with sufficient fidelity for accurately capturing global optimal design solutions at minimal cost. The salient feature of the proposed method lies in its unique preference for concentrating high fidelity on the potential global optimal regions of the surrogate model. The proposed method is a synergistic integration of the multiple preference point method, which updates the surrogate model at the current local optimal points predicted with data-mining techniques in a genetic algorithm setup, and the maximum variance point method, which updates the surrogate model at the point with the maximum prediction variance. Through comparison studies on thirty optimization scenarios derived from 15 test functions, the proposed method demonstrates a tangible reliability improvement. The experimental results indicate that the proposed method can be a reliable updating method in surrogate-model-based design optimization for efficiently locating the global optimal point(s) in various optimization scenarios featuring single or multiple global optima that may lie at the corners of the design space, inside the design space, or on its boundaries.
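A hedged sketch of the maximum variance point updating step described above (not the authors' code; the one-dimensional test function and the Gaussian-process surrogate are assumptions for illustration): fit a surrogate on the current samples, then evaluate the expensive function at the candidate with the largest prediction standard deviation.

```python
# Sketch: iterative surrogate updating at the maximum prediction variance point,
# using a Gaussian process as the surrogate. Test function f is a stand-in for
# an expensive simulation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

f = lambda x: np.sin(3 * x) + 0.5 * x            # "expensive" function stand-in
X = np.array([[0.0], [1.0], [2.0]])              # initial sample points
y = f(X).ravel()

cand = np.linspace(0.0, 2.0, 101).reshape(-1, 1)  # candidate points
for _ in range(5):
    gp = GaussianProcessRegressor(alpha=1e-8, normalize_y=True).fit(X, y)
    _, std = gp.predict(cand, return_std=True)
    x_new = cand[np.argmax(std)]                  # max prediction variance point
    X = np.vstack([X, x_new])                     # update surrogate data set
    y = np.append(y, f(x_new[0]))
```

Because prediction variance collapses near sampled points, each update lands in the least-explored region, which is the exploration half of the hybrid scheme the abstract describes.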
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Xiong, Ji Zhou, Jun Yu, and Ju Cao. "A Primal-Dual Interior-Point QP Method and its Extension for Engineering Optimization." In ASME 1996 Design Engineering Technical Conferences and Computers in Engineering Conference. American Society of Mechanical Engineers, 1996. http://dx.doi.org/10.1115/96-detc/dac-1049.

Full text
Abstract:
Presented in this paper is a primal-dual infeasible-interior-point quadratic programming (QP) algorithm and its extension to nonlinear programming, suited for engineering design and structural optimization, where the number of variables is very large and function evaluations are computationally expensive. Computational experience in solving both test problems and optimal structural design problems with the algorithm demonstrated that it finds an approximate optimal solution in few iterations and function evaluations, and that the obtained solution is usually an interior feasible solution, so the resulting method is very efficient and effective.
APA, Harvard, Vancouver, ISO, and other styles
8

Licht, Christian, and Martin Böhle. "Development of an Operation Point Detection System for Centrifugal Pumps by Classifying the Time Signal of a Single Vibration Sensor." In ASME 2014 4th Joint US-European Fluids Engineering Division Summer Meeting collocated with the ASME 2014 12th International Conference on Nanochannels, Microchannels, and Minichannels. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/fedsm2014-21075.

Full text
Abstract:
In industry it is very common for pumps not to operate at their optimal operation point. The consequences are higher operating costs and higher loads on individual components of the entire plant. For this reason a method to detect the current operation point of a centrifugal pump has been developed. The objective is to apply this method during operation, without changing anything in the system. The method monitors the vibrations at different operation points along the pump curve by analyzing the time signals of a single vibration acceleration sensor. To identify the current operation point, the vibrations of an unknown point are compared to those of the known points. Extensive laboratory tests have been conducted for this contribution, for which a dedicated test loop was designed. In a first step, time signals from several pump curves were recorded. The recorded time signals are reduced to a few succinct values by means of time signal analysis. To classify these values, a support vector machine is used.
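The classification pipeline described in the abstract can be sketched as follows (a hedged illustration, not the authors' code: the synthetic vibration signals, the choice of RMS and peak amplitude as the "succinct values", and the two class labels are all assumptions):

```python
# Sketch: reduce vibration time signals to a few succinct features and
# classify the pump operation point with a support vector machine.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def features(signal):
    # Reduce a raw time signal to succinct values: RMS and peak amplitude.
    return [np.sqrt(np.mean(signal ** 2)), np.max(np.abs(signal))]

def synth_signal(amplitude):
    # Synthetic vibration record: a 50 Hz sine of given amplitude plus noise.
    t = np.linspace(0.0, 1.0, 500)
    return amplitude * np.sin(2 * np.pi * 50 * t) + 0.05 * rng.standard_normal(t.size)

# Training data for two known operation points with distinct vibration levels:
# class 0 ~ near-optimal operation, class 1 ~ off-design operation.
X = [features(synth_signal(a)) for a in [0.5] * 20 + [2.0] * 20]
y = [0] * 20 + [1] * 20
clf = SVC(kernel="rbf").fit(X, y)

# Classify an unknown record by comparing its features with the known points.
pred = clf.predict([features(synth_signal(1.9))])[0]
```

The dimensionality reduction step matters: classifying two-element feature vectors instead of 500-sample raw signals keeps the SVM training set small and robust.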
APA, Harvard, Vancouver, ISO, and other styles
9

Shi, Wei-dong, Hong-liang Wang, Ling Zhou, Ping-ping Zou, and Guo-tao Wang. "Optimization Design of New-Type Deep Well Pump Based on Latin Square Test and Numerical Simulation." In ASME 2010 3rd Joint US-European Fluids Engineering Summer Meeting collocated with 8th International Conference on Nanochannels, Microchannels, and Minichannels. ASMEDC, 2010. http://dx.doi.org/10.1115/fedsm-icnmm2010-30189.

Full text
Abstract:
In order to develop a high-efficiency, high-head deep well pump of the 150QJ20 type, an L18 (3⁷) orthogonal experiment was performed with seven factors at three levels each, including blade number, outlet angle, and outlet width, and 18 impellers were designed. The whole flow field of the new-type two-stage deep well pump at the design operating point was simulated in FLUENT using the standard turbulence model, the SIMPLEC algorithm, and a second-order upwind scheme, and grid independence was analyzed. Efficiency and head were obtained for the 18 design schemes. The effects of the geometric parameters on efficiency and head were investigated using the Latin square test method, and the primary and secondary design factors were identified by variance analysis. Based on the test results, an optimized design was proposed. After manufacture and testing, the final optimized model pump achieved an efficiency of 66.59% at the rated flow point with a single-stage head of 10.9 m, matched to a 5.5 kW motor; compared with the Chinese national standard (GB/T 2816-2002), which specifies 64% efficiency at the rated flow point and a 7.5 kW motor, the efficiency and head were significantly improved. The product shows good energy-saving and material-saving characteristics and can replace traditional deep well pumps in the future; its comprehensive technical indicators reach internationally advanced levels. The results are instructive for the design of new-type deep well pumps aiming at maximum impeller head.
APA, Harvard, Vancouver, ISO, and other styles
10

Mojaddam, Mohammad, Ali Hajilouy-Benisi, and Mohammad Reza Movahhedy. "Optimal Design of the Volute for a Turbocharger Radial Flow Compressor." In ASME Turbo Expo 2014: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/gt2014-26849.

Full text
Abstract:
In this research, design methods for radial flow compressor volutes are reviewed, the main criteria in preliminary volute design are identified, and the most effective ones are selected. The effective parameters, i.e. the spiral cross-section area, the circumferential area distribution, the exit cone, and the tongue area of the compressor volute, are studied parametrically to identify their optimum values. A numerical model is prepared and verified against experimental data obtained from the designed turbocharger test rig. Different volutes are modeled and numerically evaluated using the same impeller and vaneless diffuser. For each model, the volute total pressure ratio, the static pressure recovery and total pressure loss coefficients, and the radial force on the impeller are calculated for different mass flow rates at the design point and at off-design conditions. The volute which shows better performance and yields a lower net radial force on the impeller at the desired mass flow rates is selected as the optimal one. The results show how the volute design approach differs between the design point and off-design conditions: improving the pressure ratio and reducing the total pressure loss at the design point may result in worse performance at off-design conditions, as well as an increased radial force on the impeller.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Test point optimal"

1

Gaponenko, Artiom, and Andrey Golovin. Electronic magazine with rating system of an estimation of individual and collective work of students. Science and Innovation Center Publishing House, October 2017. http://dx.doi.org/10.12731/er0043.06102017.

Full text
Abstract:
"The electronic magazine with rating system of an estimation of individual and collective work of students" (EM) is implemented as a Microsoft Excel document with macros. EM automates all calculation operations connected with the points scored by students in each form of current assessment. EM provides automatic calculation of each student's rating relative to the maximum number of points received in the given educational group: a rating of "1" is assigned to the student who has scored the maximum number of points as of a given date, while for the other students the share of their points in this maximum is indicated. Grades are assigned in letter format according to the requirements of the European Credit Transfer System (ECTS) for international recognition of educational outcomes, using a corresponding grading scale. The list of students is placed on the first page of the magazine and automatically displayed on all subsequent pages. For each page, an optimal print size is set, with the current date and time entered automatically. Because the complexity of each task is taken into account, EM is a universal tool that can be used for any subject.
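The rating rule described in the abstract is simple to state in code (a hedged sketch, not the spreadsheet macros themselves; the student names and scores are made up for illustration): each student's rating is the share of their points in the group maximum, so the top student receives exactly 1.

```python
# Sketch of the rating rule: rating = student's points / group maximum.
scores = {"Ivanov": 78, "Petrova": 92, "Sidorov": 61}

top = max(scores.values())  # group maximum as of the current date
ratings = {name: round(pts / top, 2) for name, pts in scores.items()}
# The student with the group maximum (here Petrova) gets rating 1.0;
# every other rating is that student's share of the maximum.
```

The same normalization is what the Excel macros would compute per assessment date, with the letter grade then read off an ECTS-style scale.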
APA, Harvard, Vancouver, ISO, and other styles
