Academic literature on the topic 'Expense method'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Expense method.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Expense method"

1

Yin, Su Feng, Jian Hui Wu, Xiao Jing Wang, and Sha Li. "Application of Factor Analysis in the Research of Hospital Medical Expenses." Applied Mechanics and Materials 50-51 (February 2011): 973–76. http://dx.doi.org/10.4028/www.scientific.net/amm.50-51.973.

Full text
Abstract:
Case records of inpatients at a hospital in Tangshan in 2007 and 2008 are chosen for factor analysis. This paper aims at finding the latent factors influencing inpatients' medical expenses. By analysing the factors that control and influence inpatients' medical expenses, the practical value of factor analysis in the study of medical expenses is evaluated, and an analysis method in line with the structure and characteristics of hospital medical expenses is tentatively explored. The paper comes to the conclusion that the basic-expense factor, the operation factor and the examination factor are the common factors that control and influence inpatients' medical expenses. It is implied that factor analysis is a suitable method for research on hospital medical expenses.
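As a rough illustration of the technique this abstract describes, the sketch below performs an unrotated principal-factor extraction on synthetic inpatient expense items. The expense categories, factor structure, and all numbers are invented for the example, not taken from the Tangshan data.

```python
import numpy as np

def principal_factor_loadings(X, n_factors):
    """Extract unrotated factor loadings from the correlation matrix
    of the expense items via eigendecomposition."""
    R = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)       # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]          # re-sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
    explained = eigvals[:n_factors].sum() / eigvals.sum()
    return loadings, explained

# Synthetic expense items driven by two latent factors (invented data)
rng = np.random.default_rng(0)
n = 500
basic = rng.normal(size=n)   # latent "basic expense" factor
exam = rng.normal(size=n)    # latent "examination" factor
X = np.column_stack([
    basic + 0.1 * rng.normal(size=n),  # bed fee
    basic + 0.1 * rng.normal(size=n),  # nursing fee
    exam + 0.1 * rng.normal(size=n),   # laboratory tests
    exam + 0.1 * rng.normal(size=n),   # imaging
])
loadings, explained = principal_factor_loadings(X, n_factors=2)
```

With two strong latent drivers and little noise, the first two factors account for nearly all of the shared variance, mirroring the paper's finding that a few common factors dominate inpatient expenses.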
APA, Harvard, Vancouver, ISO, and other styles
2

Laitinen, Erkki Kalervo. "Matching of expenses in financial reporting: a matching function approach." Journal of Financial Reporting and Accounting 18, no. 1 (December 19, 2019): 19–50. http://dx.doi.org/10.1108/jfra-01-2019-0009.

Full text
Abstract:
Purpose: The purpose of this study is to introduce a matching function approach to analyze matching in financial reporting. Design/methodology/approach: The matching function is first analyzed analytically. It is specified as a multiplicative Cobb-Douglas-type function of three categories of expenses (labor expense, material expense and depreciation). The specified matching function is solved by the generalized reduced gradient (GRG) method for 10-year time series from 8,226 Finnish firms. The coefficient of determination of the logarithmic model (CODL) is compared with the linear revenue-expense correlation coefficient (REC) that is generally used in previous studies. Findings: Empirical evidence showed that REC is outperformed by CODL. CODL was found to be independent of, or weakly negatively dependent on, the matching elasticity of labor expense, positively dependent on the material expense elasticity and negatively dependent on the depreciation elasticity. Therefore, the differences in matching accuracy between industries emphasizing different expense categories are significant. Research limitations/implications: The matching function is a general approach to assessing matching accuracy, but in this study it is specified multiplicatively for three categories of expenses. Moreover, only one algorithm is tested in the empirical estimation of the function, and the analysis is concentrated on ten-year time series from a limited sample of Finnish firms. Practical implications: The matching function approach provides a large set of important information for considering the matching process in practice. It can also prove a useful method for accounting standard-setters and other specialists such as managers, consultants and auditors. Originality/value: This is the first study to apply the new matching function approach.
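A minimal numerical sketch of the core idea, with invented data: a Cobb-Douglas-type matching function is fitted on the log scale by ordinary least squares (a stand-in for the paper's generalized reduced gradient solver), and the log model's coefficient of determination (CODL) is computed alongside the plain revenue-expense correlation (REC) used in earlier studies.

```python
import numpy as np

# Synthetic firm: revenue generated by a Cobb-Douglas-type matching function
# R_t = A * Labor_t^a * Material_t^b * Depreciation_t^c (with noise)
rng = np.random.default_rng(1)
T = 40
labor = rng.uniform(50, 150, T)
material = rng.uniform(100, 300, T)
depr = rng.uniform(10, 40, T)
A, a, b, c = 2.0, 0.4, 0.5, 0.1
revenue = A * labor**a * material**b * depr**c * np.exp(0.05 * rng.normal(size=T))

# Ordinary least squares on the log-linear form
# (standing in for the paper's GRG estimation)
Xmat = np.column_stack([np.ones(T), np.log(labor), np.log(material), np.log(depr)])
y = np.log(revenue)
beta, *_ = np.linalg.lstsq(Xmat, y, rcond=None)
resid = y - Xmat @ beta
codl = 1.0 - resid.var() / y.var()          # CODL: R^2 of the log model

# REC: the plain linear revenue-expense correlation from earlier studies
rec = np.corrcoef(revenue, labor + material + depr)[0, 1]
```

On data actually generated by a multiplicative matching function, CODL measures the fit of that functional form directly, while REC only captures a linear association.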
3

Mirzamohammadi, Saeed, Saeed Karimi, and Mir Saman Pishvaee. "A novel cost allocation method applying fuzzy DEMATEL technique." Kybernetes 49, no. 10 (November 22, 2019): 2569–87. http://dx.doi.org/10.1108/k-07-2019-0513.

Full text
Abstract:
Purpose: The purpose of this paper is to develop a new systematic method for a multi-unit organization to cope with the cost allocation problem, as an extension of the reciprocal method. As uncertainty is an inherent characteristic of business environments, allowing for changes in the parameters involved is almost a necessity. The outputs of the model determine the total value of each unit, business line or product. Design/methodology/approach: In the proposed method, contrary to existing models, business units are able to transfer their costs to other units, and the total costs of support units need not be transferred completely. The DEMATEL approach, which identifies all relationships between different parts of a system, is applied to compute the effects of the expenses the units pay to each other. Moreover, a fuzzification approach is used to capture experts' linguistic judgments about the related data. Findings: Being closer to real-world problems than previous approaches, the proposed systematic approach encompasses the other cost allocation models. Practical implications: By applying the proposed model to a system such as a multi-unit organization, the total price of each unit or business line can be obtained. Moreover, the cost allocation process guides the related decision-makers in better managing the expenses that each unit pays the others. Originality/value: In existing studies, business units cannot transfer expenses to support units. In the proposed method, business units are able to transfer expenses to other units, and support units' total expenses need not be transferred completely. Moreover, treating the parameters involved as fuzzy numbers brings the proposed model closer to real-world problems.
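The deterministic core of DEMATEL is compact enough to sketch. The direct-influence scores below are hypothetical expert judgments among three organizational units, and the fuzzification step described in the paper is omitted.

```python
import numpy as np

# Hypothetical direct-influence scores (0-4) among three units: how strongly
# each row unit passes expenses/influence to each column unit
A = np.array([[0.0, 3.0, 2.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])

s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
D = A / s                                  # normalized direct-relation matrix
T = D @ np.linalg.inv(np.eye(3) - D)       # total-relation matrix: D + D^2 + ...

influence = T.sum(axis=1)    # how much each unit affects the others
dependence = T.sum(axis=0)   # how much each unit is affected by the others
```

Row sums of the total-relation matrix T indicate how strongly each unit drives the others' expenses; column sums indicate how strongly it is driven.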
4

Che, Shuping, Qiaojun Wu, Yanpeng Zhang, and Jie Geng. "Maintenance Expense Optimization Method for High Temperature Reactor Equipment Group." IOP Conference Series: Earth and Environmental Science 555 (August 29, 2020): 012019. http://dx.doi.org/10.1088/1755-1315/555/1/012019.

Full text
5

Laitinen, Erkki Kalervo. "Implied expense theory in financial reporting: a steady-state approach." Journal of Financial Reporting and Accounting 16, no. 1 (March 12, 2018): 49–83. http://dx.doi.org/10.1108/jfra-05-2016-0032.

Full text
Abstract:
Purpose: The purpose of this study is to use a steady-state model structure to investigate earnings management (EM) theoretically in the context of different expense theories. Empirically, the objective is to apply the theoretical model to investigate the implicit choice of expense theories for reporting expenses. The study aims to present a new approach to analyzing EM. Design/methodology/approach: The study makes use of ten-year time-series data, originally from 1,015 Finnish public and private firms, to estimate the parameters of the steady-state model and to investigate which expense theories the firms implicitly follow in financial reporting. The parameters are estimated using the restricted least squares regression method. The final sample included data from 631 firms fulfilling the restrictions for consistency of the estimates. Findings: The paper provides empirical insights into the expense theories that Finnish firms implicitly follow in financial reporting. Evidence shows that the reporting of expenses mainly follows the units-of-revenue and the rate-of-return theories; only a small number of firms follow the interest expense theory. Research limitations/implications: The study is based on a steady-state approach, and the results may therefore lack generalizability, as only 62% of the original sample firms obtained consistent estimates. Researchers are encouraged to use more general models for further theoretical and empirical work. Practical implications: The paper includes implications for a new approach to EM. It also shows how to analyze different expense theories in the context of EM, both theoretically and empirically. Originality/value: This paper develops a new approach to investigating EM.
6

Poere, Daniel De, and Stevanie. "Analisis Perencanaan Pajak Penghasilan Terkait Dengan Beban Pajak Penghasilan Pasal 21 Yang Ditanggung Perusahaan Studi Kasus Pada PT. Kharisma Buana Mandiri." Jurnal Ilmiah Akuntansi Kesatuan 4, no. 1 (July 27, 2018): 001–6. http://dx.doi.org/10.37641/jiakes.v4i1.95.

Full text
Abstract:
Tax planning is a legal method used by companies to save on tax payments. Article 21 income tax planning is tax planning applied to employees' salaries. When this tax is an expense borne by the company, it disadvantages the company because it cannot be claimed as a deductible fiscal expense. The research objective is to find out the effect of income tax planning for Article 21 tax borne by the company under the gross-up method. The results show that the company does not use the gross-up method in calculating its Article 21 income tax. The author therefore revised the fiscal income statement used in the previous calculation. Based on this revision, the company would be able to save Rp 1.665.028, obtained as the difference between the fiscal correction on Article 21 tax expenses and the increase in Article 21 tax after being grossed up. This means the company can reduce its expenditure.
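The gross-up mechanism the abstract refers to reduces to a fixed-point calculation: the employer pays a tax allowance that is itself taxable, so the allowance must exactly cover the tax it triggers. The flat 5% rate and salary figure below are purely illustrative; Indonesia's Article 21 rates are progressive in practice.

```python
def gross_up_allowance(taxable_base, rate):
    """Tax allowance t solving t = rate * (taxable_base + t): the allowance
    is set just large enough to cover the tax it itself triggers."""
    return rate * taxable_base / (1.0 - rate)

base = 100_000_000  # annual taxable income in IDR (illustrative)
rate = 0.05         # hypothetical flat rate; Article 21 is progressive
allowance = gross_up_allowance(base, rate)
tax_due = rate * (base + allowance)   # equals the allowance itself
```

Because the grossed-up allowance is treated as salary, it becomes deductible, which is the source of the saving the study quantifies.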
7

Grytsay, O., M. Pankiv, D. Kut, and G. Wojtan. "Analysis of the enterprise operating expenses and ways of improvement of their accounting." Economics, Entrepreneurship, Management 8, no. 1 (July 2021): 43–58. http://dx.doi.org/10.23939/eem2021.01.043.

Full text
Abstract:
The activities of industrial enterprises involve continuous consumption of certain types of resources, so expense accounting plays a crucial role in determining enterprise efficiency. Accounting procedures occupy a key place in the information support system of any enterprise, since the original information on the enterprise's activities, in the form of financial statements and internal documentation (source documents, journals and ledgers), is essential to meet the needs of internal and external users. It is particularly important to account for operating expenses, which provides information about the use of materials, labor and other resources involved in the production process. Properly organized accounting and analytical information on operating expenses ensures a fair and accurate assessment of the enterprise's production process and supports effective management decision-making. The purpose of the paper is to analyze operating expenses and the organization and methodology of the accounting process at the enterprise, and to develop practical recommendations for improving the accounting for operating activity. This goal necessitates the solution of the following tasks: determine the nature and classification of operating expenses; analyze the dynamics and structure of operating expenses; and develop suggestions for improving the accounting and analytical information on operating expenses. As to the research methods: the authors applied a literature review to justify the relevance of the chosen research topic. On the basis of the comparative method, the main economic indicators of activity were determined, in particular the dynamics and structure of operating expenses by economic elements and by costing items. On the basis of systems analysis and synthesis, proposals were formed on the composition of production costing items and on introducing a classification of expenses by groups in terms of the relevant items of general production expenses, administrative expenses, marketing expenses and other operating expenses.
8

Leou, Rong-Ceng. "A new method for unit maintenance scheduling considering reliability and operation expense." International Journal of Electrical Power & Energy Systems 28, no. 7 (September 2006): 471–81. http://dx.doi.org/10.1016/j.ijepes.2006.02.009.

Full text
9

Lu, Bi Hong, and Yu Kai Li. "DCM Oriented Operation and Maintenance Expense Model for High-Speed Train." Key Engineering Materials 620 (August 2014): 644–49. http://dx.doi.org/10.4028/www.scientific.net/kem.620.644.

Full text
Abstract:
Manufacturing is in transition to a manufacturing service industry, and Life Cycle Cost (LCC) is becoming an important index to consider. Operation and Maintenance Expense (OME) is the most complicated of all the LCC expense units, and it also accounts for the largest proportion. The authors propose an LCC modeling idea: expenses attach to processes, while processes consume resources. Taking the air-supply and brake system of a high-speed train as an example, the authors conduct the system's maintenance FMEA and a dependability-based risk assessment. According to the assessment results, the authors match the system's important maintenance projects, including the brake control system and the main air compressor, with their respective maintenance modes. The paper puts forward the expense model of one specific maintenance task on the basis of its resource structure. Based on the main air compressor's dependability, the authors predict its Corrective Maintenance (CM) repair count using Monte Carlo simulation. Combined with the repair counts of Preventive Maintenance (PM) and condition-based maintenance, the authors obtain the high-speed train's OME model. This method has important significance for modeling the high-speed train's LCC and may support expense research for large equipment with complicated processes.
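One building block of the approach, predicting corrective-maintenance (CM) repair counts by Monte Carlo simulation from a reliability model, can be sketched as follows. The Weibull parameters and operating horizon are invented for illustration; the paper derives them from the air compressor's dependability data.

```python
import math
import random

def expected_cm_repairs(beta, eta, horizon, n_runs=20000, seed=42):
    """Monte Carlo estimate of the expected number of corrective-maintenance
    repairs over `horizon` operating hours, assuming Weibull(beta, eta)
    times between failures and repair-as-new after each failure."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        t, failures = 0.0, 0
        while True:
            # inverse-transform sample of a Weibull time to next failure
            t += eta * (-math.log(1.0 - rng.random())) ** (1.0 / beta)
            if t > horizon:
                break
            failures += 1
        total += failures
    return total / n_runs

# Illustrative shape/scale for the compressor and a 20,000-hour horizon
expected_repairs = expected_cm_repairs(beta=1.5, eta=4000.0, horizon=20000.0)
```

Multiplying the expected repair count by the resource cost of one CM job then gives the CM slice of an OME model of this kind.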
10

Harnovinsah, Harnovinsah, and Septyana Mubarakah. "DAMPAK TAX ACCOUNTING CHOICES TERHADAP TAX AGGRESSIVE." Jurnal Akuntansi 20, no. 2 (March 3, 2017): 267. http://dx.doi.org/10.24912/ja.v20i2.58.

Full text
Abstract:
This study analyzes tax accounting choices, deferred tax expense, and firm size as indicators of tax aggressiveness. Tax accounting choices here refer to management's selection of the straight-line depreciation method and the FIFO inventory method, policies applied in compiling financial statements. The sample comprises 50 manufacturing companies listed on the Indonesia Stock Exchange (IDX) during the 2010-2014 period, selected by purposive random sampling using specific criteria. Tax accounting choices are measured with dummy variables for the straight-line and FIFO methods, deferred tax expense is measured as the ratio of deferred tax expense to total assets, and firm size is measured as the natural logarithm of total assets. The results show that the straight-line method has a significant negative effect on tax aggressiveness, while the FIFO method has no effect. Deferred tax expense and firm size each have a significant negative effect on tax aggressiveness. Accordingly, the straight-line method and deferred tax expense can serve as indicators of tax aggressiveness.
More sources

Dissertations / Theses on the topic "Expense method"

1

Persson, Ulrika, and Anna Svensson. "TO EXPENSE OR NOT TO EXPENSE - HOW DOES IT MATTER? : A Qualitative Study Concerning R&D and Credit Granting." Thesis, Umeå University, Umeå School of Business, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-18599.

Full text
Abstract:
This study concerns the implications of the discretion in the cut-off point in the accounting method for research and development. Our research problem targets the following issues to reduce the existing research gap:

- "Does the choice of accounting method for research and development matter when a creditor evaluates a company for a credit granting decision?"
- "How does the accounting method for research and development matter in a credit granting decision?"

Our study aims to answer these questions by investigating and analyzing the credit granting assessment and by interviewing creditors at the major banks in Sweden. Fictitious case scenarios provide in-depth information about how the accounting methods matter for a credit granting decision.

We develop this study by gathering existing material regarding accounting standards, the accounting method and the credit granting assessment. Previous studies about credit granting and the accounting methods supplement the theoretical material.

The approach to this study is hermeneutic, trying to grasp the entire picture of the respondents' opinions about the accounting methods. To gain detailed and extensive information from the respondents, we use a qualitative approach with semi-structured interviews. The research sample consists of experienced creditors at the largest banks in Sweden, to ensure relevant and informative answers to our questions. We utilise four case scenarios to encourage the respondents to elaborate upon the accounting methods for R&D. This provides detailed knowledge about how the accounting methods matter for a credit granting decision.

The respondents state that abnormal values in the R&D account are suspicious and that these values are investigated and adjusted if necessary. From this summarised statement, we draw the conclusion that the accounting method for R&D matters in a credit granting decision. However, we also establish that other factors are more influential on the decision. Furthermore, we find that the creditors examine the content of the R&D account because the methods and their content have different impacts on the financial statements. The expense method has a negative impact on the credit granting decision if the company cannot carry the costs, while the recognition method gives the appearance of stronger financial statements. However, the recognition method also gives rise to suspicion if the company relies on previous achievements. We conclude that, depending on the amount of R&D, both methods can be perceived as an advantage or a disadvantage for a credit granting decision; however, our main finding suggests that a revaluation of abnormal values in the R&D account occurs.

On the strength of our findings, we believe that the research has accomplished the objective of the study and has contributed to existing knowledge on the subject.
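The first-year income-statement difference between the two R&D treatments discussed above can be made concrete with a small, purely hypothetical example (all figures invented):

```python
def first_year_profit(revenue, other_costs, rd_outlay, amort_years):
    """First-year profit under the two R&D treatments: expensing the
    outlay immediately vs capitalizing it and amortizing straight-line."""
    profit_expensed = revenue - other_costs - rd_outlay
    profit_capitalized = revenue - other_costs - rd_outlay / amort_years
    return profit_expensed, profit_capitalized

# Invented figures: 1,000 revenue, 700 other costs, 200 R&D, 5-year life
pe, pc = first_year_profit(1000.0, 700.0, 200.0, 5)
```

The expense method depresses first-year profit by the full outlay, which is why a company that cannot carry the costs looks weaker, while capitalization spreads the hit and makes the statements appear stronger.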
2

Jedličková, Kateřina. "Vývoj obvyklé ceny u rodinných domů v lokalitě Tišnovsko v určitém časovém období." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2016. http://www.nusl.cz/ntk/nusl-241274.

Full text
Abstract:
The thesis aims to determine the normal (market) price of detached houses in the Tišnov region (Tišnovsko) and also maps the current market situation for detached houses in this location. Further tasks are to determine the cost (expense) price of detached houses under the valid valuation regulations and the indicative price using the comparative method of the pricing regulation. A sub-task is to determine the prices of detached houses by the method of direct comparison with market valuation.
3

Licková, Věra. "Analýza vybraných vlivů na výši obvyklé ceny rodinných domů ve Šlapanicích." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-233092.

Full text
Abstract:
The thesis presents an analysis of selected influences on the valuation of houses in Šlapanice, including a report on the present state of the real estate market in that locality. It applies the cost (expense) method and comparative methods, both regulation-based and market-based, to determine the value of five family houses in Šlapanice. The work also considers the impact that the locality and its territorial plan have on real estate valuation. In the light of the acquired information, the real estate prices are compared with regard to significant changes in the territorial plan concerning joining the neighbouring cadastral area of the city of Brno.
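A heavily simplified sketch of the cost (expense) method mentioned above, with invented figures; the Czech valuation regulations prescribe far more detailed wear, equipment, and location coefficients:

```python
def expense_method_value(replacement_cost, age, lifespan):
    """Cost (expense) approach: replacement cost less straight-line
    physical depreciation for the building's age."""
    wear = min(age / lifespan, 1.0)
    return replacement_cost * (1.0 - wear)

# Invented figures: 20-year-old house, 100-year life, CZK 5,000,000 to rebuild
value = expense_method_value(replacement_cost=5_000_000.0, age=20, lifespan=100)
```

Straight-line wear of 20% leaves a cost-based value of CZK 4,000,000 in this toy case; the comparative methods in the thesis would then be set against such a figure.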
4

Černý, Michal. "Porovnání cen bytového domu v k.ú. Bučovice stanovených dle platných oceňovacích předpisů." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-232506.

Full text
Abstract:
The diploma thesis introduces two different options for property valuation, with an emphasis on different forms of ownership of an apartment building. In the first case, the building is seen as a whole and a combination of the yield and cost methods of valuation is used. In the second case, the building is divided into individual residential units and valued by the comparison method. Both methods are conducted in accordance with Decree No. 3/2008 Coll., implementing certain provisions of Act No. 151/1997 Coll., on the valuation of property and amending certain laws, as amended (the Valuation Decree), following the changes made by Decree No. 456/2008 Coll. and Decree No. 460/2009 Coll. In conclusion, the derived values are compared and evaluated.
5

Vokounová, Martina. "Účetní, finanční a daňové aspekty leasingu." Master's thesis, Vysoká škola ekonomická v Praze, 2009. http://www.nusl.cz/ntk/nusl-11739.

Full text
Abstract:
This thesis deals with leasing as one of the methods of financing investments. It sets out the conditions for including rent in tax expenses and the accounting treatment for the lessor and the lessee. The development of leasing is also covered, and not only for tax purposes. The last part of the thesis compares leasing with other forms of acquiring investments, illustrated with a practical example.
6

Braimah, Nuhu. "An investigation into the use of construction delay and disruption analysis methodologies." Thesis, University of Wolverhampton, 2008. http://hdl.handle.net/2436/38824.

Full text
Abstract:
Delay and disruption (DD) to contractors' progress, often resulting in time and cost overruns, are a major source of claims and disputes in the construction industry. At the heart of the matter in dispute is often the question of the extent of each contracting party's responsibility for the delayed project completion and the extra cost incurred. Various methodologies have been developed over the years as aids to answering this question. Whilst much has been written about DD, there is limited information on the extent of use of these methodologies in practice. The research reported in this thesis was initiated to investigate these issues in the UK, towards developing a framework for improving DD analysis. The methodology adopted was a mixed-methods approach involving, first, a detailed review of the relevant literature, followed by an industry-wide survey on the use of these methodologies and the associated problems. Following this, interviews were conducted to investigate the identified problems in more depth. The data collected were analysed, with the aid of SPSS and Excel, using a variety of statistical methods including descriptive statistics, relative index analysis, Kendall's concordance and factor analysis. The key finding was that the DD analysis methodologies reported in the literature as having major weaknesses are the most widely used in practice, mainly due to deficiencies in programming and record-keeping practice. To facilitate the use of more reliable methodologies, which ensure more successful claims resolution with fewer chances of disputes, a framework has been developed comprising (i) best-practice recommendations for promoting better record-keeping and programming practice and (ii) a model for assisting analysts in their selection of an appropriate delay analysis methodology for any claims situation. This model was validated by means of expert review via a survey, and the findings suggest that the model is valuable and suitable for use in practice. Finally, areas for further research were identified.
7

Bečvářová, Hedvika. "Srovnání vybraných způsobů ocenění pro nemovitost typu byt v lokalitě Písek a okolí." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-232631.

Full text
Abstract:
This master's thesis focuses on a comparison of selected methods used for the valuation of flats. The thesis defines the basic notions connected with valuation and describes the valuation methods. The valuated flats are situated in Písek and its surroundings. The work includes a description of the area and maps the local market situation for flat units. Flats with different layouts were selected.
8

Koullias, Stefanos. "Methodology for global optimization of computationally expensive design problems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49085.

Full text
Abstract:
The design of unconventional aircraft requires early use of high-fidelity physics-based tools to search the unfamiliar design space for optimum designs. Current methods for incorporating high-fidelity tools into early design phases for the purpose of reducing uncertainty are inadequate due to the severely restricted budgets that are common in early design as well as the unfamiliar design space of advanced aircraft. This motivates the need for a robust and efficient global optimization algorithm. This research presents a novel surrogate model-based global optimization algorithm to efficiently search challenging design spaces for optimum designs. The algorithm searches the design space by constructing a fully Bayesian Gaussian process model from a set of observations and then using the model to make new observations in promising areas where the global minimum is likely to occur. The algorithm is incorporated into a methodology that reduces failed cases and infeasible designs and provides large reductions in the objective function values of design problems. Results on four sets of algebraic test problems are presented, and the methodology is applied to an airfoil section design problem and a conceptual aircraft design problem. The method is shown to solve more nonlinearly constrained algebraic test problems than state-of-the-art algorithms and obtains the largest reduction in the takeoff gross weight of a notional 70-passenger regional jet versus competing design methods.
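The surrogate-model loop described in the abstract can be caricatured in a few lines: fit a cheap interpolant to the designs evaluated so far, then spend the next expensive evaluation where the interpolant predicts a minimum. The radial-basis interpolant and 1-D test function below are simplified stand-ins for the thesis's fully Bayesian Gaussian process and aircraft design problems.

```python
import numpy as np

def expensive_f(x):
    """Stand-in for a costly simulation: a multimodal 1-D test function."""
    return np.sin(3.0 * x) + 0.5 * (x - 0.6) ** 2

def fit_rbf(X, y, eps=2.0):
    """Fit a Gaussian radial-basis interpolant to the evaluated designs
    (a cheap stand-in for a fully Bayesian Gaussian process)."""
    K = np.exp(-eps * (X[:, None] - X[None, :]) ** 2)
    w = np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)
    return lambda q: np.exp(-eps * (q[:, None] - X[None, :]) ** 2) @ w

# Sequential design: evaluate the true function where the surrogate
# predicts a minimum, then refit and repeat
X = np.array([-2.0, 0.0, 2.0])
y = expensive_f(X)
grid = np.linspace(-2.0, 2.0, 401)
for _ in range(8):
    surrogate = fit_rbf(X, y)
    x_next = grid[np.argmin(surrogate(grid))]
    X = np.append(X, x_next)
    y = np.append(y, expensive_f(x_next))

best_x, best_y = X[np.argmin(y)], y.min()
```

A real implementation would add an exploration term (for example, expected improvement under the GP's predictive variance) so the search balances exploiting the surrogate's minimum against probing unsampled regions.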
9

Isaacs, Amitay. "Development of optimization methods to solve computationally expensive problems." Awarded by: University of New South Wales - Australian Defence Force Academy, Engineering & Information Technology, 2009. http://handle.unsw.edu.au/1959.4/43758.

Full text
Abstract:
Evolutionary algorithms (EAs) are population-based heuristic optimization methods used to solve single- and multi-objective optimization problems. They can simultaneously search multiple regions to find globally optimal solutions. As EAs do not require gradient information for the search, they can be applied to optimization problems involving functions of real, integer, or discrete variables. One of the drawbacks of EAs is that they require evaluations of numerous candidate solutions for convergence. Most real-life engineering design optimization problems involve highly nonlinear objective and constraint functions arising out of computationally expensive simulations. For such problems, the computational cost of optimization using EAs can become quite prohibitive. This has stimulated the research into improving the efficiency of EAs reported herein. In this thesis, two major improvements are suggested for EAs. The first is the use of spatial surrogate models to replace the expensive simulations for the evaluation of candidate solutions; the other is a novel constraint handling technique. These modifications to EAs are tested on a number of numerical benchmarks and engineering examples using a fixed number of evaluations, and the results are compared with a basic EA. In addition, the spatial surrogates are used in a truss design application. A generic framework for using spatial surrogate modeling is proposed. Multiple types of surrogate models are used for better approximation performance, and a prediction-accuracy-based validation is used to ensure that the approximations do not misguide the evolutionary search. Two EAs are proposed using spatial surrogate models for evaluation and evolution. For numerical benchmarks, the spatial surrogate assisted EAs obtain significantly better (even orders of magnitude better) results than the EA, and on average 5-20% improvements in the objective value are observed for the engineering examples.
Most EAs use constraint handling schemes that prefer feasible solutions over infeasible solutions. In the proposed infeasibility driven evolutionary algorithm (IDEA), a few infeasible solutions are maintained in the population to augment the evolutionary search through the infeasible regions along with the feasible regions to accelerate convergence. The studies on single- and multi-objective test problems demonstrate the faster convergence of IDEA over the EA. In addition, the infeasible solutions in the population can be used for trade-off studies. Finally, a discrete structures optimization (DSO) algorithm is proposed for sizing and topology optimization of trusses. In DSO, topology optimization and sizing optimization are separated to speed up the search for the optimum design. The optimum topology is identified using a strain-energy-based material removal procedure. The topology optimization process correctly identifies the optimum topology for 2-D and 3-D trusses using fewer than 200 function evaluations. The sizing optimization is performed later to find the optimum cross-sectional areas of the structural elements. In surrogate-assisted DSO (SDSO), spatial surrogates are used to accelerate the sizing optimization. The truss designs obtained using SDSO are very close (within 7% of the weight) to the best reported in the literature, using only a fraction of the function evaluations (less than 7%).
APA, Harvard, Vancouver, ISO, and other styles
10

Gong, Zitong. "Calibration of expensive computer models using engineering reliability methods." Thesis, University of Liverpool, 2018. http://livrepository.liverpool.ac.uk/3028587/.

Full text
Abstract:
The prediction ability of complex computer models (also known as simulators) relies on how well they are calibrated to experimental data. History Matching (HM) is a form of model calibration for computationally expensive models. HM sequentially cuts down the input space to find the fitting input domain that provides a reasonable match between model output and experimental data. A considerable number of simulator runs are required for typical model calibration; hence, HM involves Bayesian emulation to reduce the cost of running the original model. Despite this, the generation of samples from the reduced domain at every iteration has remained an open and complex problem: research has shown that the fitting input domain can be disconnected, have nontrivial topology, or be orders of magnitude smaller than the original input space. Analogous to a failure set in the context of engineering reliability analysis, this work proposes to use Subset Simulation - a technique widely used in engineering reliability computations and rare event simulation - to generate samples on the reduced input domain. Unlike direct Monte Carlo, Subset Simulation progressively decomposes a rare event, which has a very small probability of occurrence, into a sequence of less rare nested events. The original Subset Simulation uses a Modified Metropolis algorithm to generate the conditional samples that belong to the intermediate, less rare events. This work also considers different Markov Chain Monte Carlo algorithms and compares their performance in the context of expensive model calibration. Numerical examples are provided to show the potential of the embedded Subset Simulation sampling schemes for HM. The 'climb-cruise engine matching' example illustrates that the proposed HM using Subset Simulation can be applied to realistic engineering problems. 
As further improvements of the proposed method, a classification method is used to ensure that the emulation on each disconnected region gets updated, and uncertainty quantification of expert-estimated correlation matrices helps to identify a mathematically valid (positive semi-definite) correlation matrix between the resulting inputs and observations. Further research is required to explicitly address the model discrepancy as well as to take the correlation between model outputs into account.
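The nested-event decomposition at the heart of Subset Simulation can be sketched for a toy rare event, assuming a 1-D standard-normal input and a simple random-walk Metropolis kernel. This is only a sketch of the general technique: the thesis compares several MCMC variants and applies the scheme to History Matching, none of which is reproduced here.

```python
import math
import random

def subset_simulation(g, threshold, n=1000, p0=0.1, seed=1):
    """Bare-bones Subset Simulation for P(g(X) > threshold), X ~ N(0, 1).

    The rare event is decomposed into nested, less rare events defined by
    intermediate levels (the top-p0 quantile of each population); conditional
    samples are generated with a random-walk Metropolis kernel that is kept
    inside the current level set.
    """
    rng = random.Random(seed)
    samples = [rng.gauss(0.0, 1.0) for _ in range(n)]
    prob = 1.0
    nk = int(p0 * n)
    while True:
        vals = sorted((g(x) for x in samples), reverse=True)
        level = vals[nk - 1]           # intermediate (less rare) threshold
        if level >= threshold:         # final level reached: count directly
            exceed = sum(1 for x in samples if g(x) > threshold)
            return prob * exceed / n
        prob *= p0
        seeds = sorted(samples, key=g, reverse=True)[:nk]
        samples = []
        for s in seeds:                # grow a short chain from each seed
            x = s
            for _ in range(n // nk):
                cand = x + rng.gauss(0.0, 1.0)
                accept = rng.random() < min(1.0, math.exp((x * x - cand * cand) / 2))
                if accept and g(cand) >= level:
                    x = cand           # stay inside the current level set
                samples.append(x)

p_hat = subset_simulation(lambda x: x, 3.0)
```

For this toy problem the exact answer is the normal tail probability P(X > 3) ≈ 1.35e-3, which direct Monte Carlo with the same 1,000 samples would estimate very poorly.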
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Expense method"

1

Kazakova, Nataliya. Internal audit of estimated reserves and liabilities as a method for diagnosing corporate risks. ru: INFRA-M Academic Publishing LLC., 2020. http://dx.doi.org/10.12737/1089678.

Full text
Abstract:
The monograph is devoted to the study of methods of diagnostics and control of corporate risks associated with the formation and use of estimated reserves and liabilities in commercial organizations. The research results are aimed at creating a corporate system for identifying and controlling corporate risks using estimated reserves and estimated liabilities. The methodological recommendations offered by the authors on the verification of accrued expenses make it possible to identify the risks of inefficient use of expenses, including fraudulent actions. The methodological tools are supplemented with empirical materials obtained during testing of the internal audit methodology in industrial organizations and audit companies, as well as in the course of research work. It will be useful for researchers, teachers, and applicants for scientific degrees, and can also be used in the system of additional professional education and advanced training, and for the self-development of management personnel of financial and economic services in business and government structures.
APA, Harvard, Vancouver, ISO, and other styles
2

Kravchenko, Igor', Maksim Glinskiy, Sergey Karcev, Viktor Korneev, and Diana Abdumuminova. Resource-saving plasma technology in the repair of processing equipment. ru: INFRA-M Academic Publishing LLC., 2020. http://dx.doi.org/10.12737/1083289.

Full text
Abstract:
The monograph presents a methodological basis for selecting a coating method and designing technological processes for hardening and restoring the worn surfaces of parts, using systems engineering analysis and information support for the technologist. A mathematical model of plasma spraying of materials with different thermal conductivities is given, along with criteria for evaluating the technical and technological capabilities of the plasma coating method. The methods and results of experimental studies are described, and the conditions and causes of the loss of efficiency of processing equipment in the agro-industrial complex (APK) are analyzed. A scientific and methodical approach is proposed to justify the expediency of restoring and hardening the working bodies and parts of expensive imported technological equipment, together with mathematical models describing the physical processes in plasma coating for various applications. The structure of an algorithm for solving the task of hardening and restoring worn parts by plasma methods on the basis of an integrated CAE system is presented. This monograph is intended for employees of scientific research institutions, specialists of machine-building production and enterprises of technical service, as well as teachers, postgraduates and students of agricultural engineering areas of training.
APA, Harvard, Vancouver, ISO, and other styles
3

Office, General Accounting. Medicare: HCFA can improve methods for revising physician practice expense payments : report to Congressional committees. Washington, D.C. (P.O. Box 37050, Washington, D.C. 20013): The Office, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

H, Sunshine Jonathan, and National Center for Health Statistics (U.S.), eds. Determinants of financially burdensome family health expenses: United States, 1980. Hyattsville, Md: U.S. Dept. of Health and Human Services, Public Health Service, National Center for Health Statistics, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mistrorigo, Alessandro. Phonodia. Venice: Edizioni Ca' Foscari, 2018. http://dx.doi.org/10.30687/978-88-6969-236-9.

Full text
Abstract:
This essay focuses on the ‘voice’ as it sounds in a specific type of recording: a poet performing one of his or her own poems by reading it aloud. Nowadays such recordings are quite common on the Internet, while before the digital turn of the 1990s they could be found only in particular poetry collections that came with a music cassette or a CD. These cultural objects, like other and more ancient analogue sources, were quite expensive to produce and acquire. However, all of them contain the same type of recording, sharing the same characteristic: the author’s voice reading one of his or her poems aloud. Bearing in mind this specific cultural object and its characteristics, this study analyses the «intermedial relation» that occurs between a poetic text and its recorded version in the author’s voice. This relation occurs especially when the two elements (text and voice) are juxtaposed and experienced simultaneously; indeed, some online archives dedicated to this type of recording present this configuration, forcing the user to receive both text and voice in the same space and at the same time. This specific configuration not only activates the intermedial relation but also hybridises the status of both the reader, who becomes a «reader-listener», and the author, who becomes an «author-reader». Using an interdisciplinary approach that combines philosophy, psychology, anthropology, linguistics and cognitive sciences, the essay proposes a method for «critically listening» to the ways some Spanish poets vocalise their poems. In addition, the book presents the Phonodia web archive built at Ca’ Foscari University of Venice as a paradigmatic answer to editorial problems related to online multimedia archives dedicated to these specific recordings. 
An extensive part of the book is dedicated to the twenty-eight interviews with the Spanish contemporary poets who became part of Phonodia and agreed to discuss their personal relation to ‘voice’ and how this element works in their creative practice.
APA, Harvard, Vancouver, ISO, and other styles
6

Boote, Richard. An historical treatise of an action or suit at law: And of the proceedings used in the King's Bench and Common Pleas, from the original processes to the judgments in both courts : wherein the reason and usage of the old obscure and formal parts of our writs and pleadings, such especially as have reference, or relate to the ancient method of practice, as well before the Statute of nisi prius as afterwards, are duly considered, in order to shew from whence they arose : also an account of the alterations that have been made from time to time for regulating the course of practice in the several courts : with such remarks and observations, as tend to explain and illustrate the present mode of practice : and pointing out such particulars as would contract the proceedings, and render them more concise, plain, and significant, and less expensive to the suitors. London: Printed for W. Johnston ... [and 4 others], 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tull, Jethro. The Horse-Hoeing Husbandry, or a Treatise on the Principles of Tillage and Vegetation: Wherein Is Taught a Method of Introducing a Sort of Vineyard ... Their Product and Diminish the Common Expense. Forgotten Books, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Arras, John D., James Childress, and Matthew Adams. Nice Story, but So What? Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780190665982.003.0004.

Full text
Abstract:
This chapter explores the recent shift that has occurred in bioethics away from the pursuit of objectivity or truth. Instead, the emphasis has increasingly been on narrative ethics, an approach that argues for a view of ethics as being primarily local and contingent. The chapter begins by outlining the central features of narrative ethics, explaining both the conception of narrative as the grounds for moral principles and the connections between narrative ethics and postmodernism. The chapter then outlines some problems that arise for the method of narrative ethics, such as the threat of subjectivism and the dangers of fetishizing “little narratives” at the expense of broader social understanding and critique.
APA, Harvard, Vancouver, ISO, and other styles
9

Scanlon, William J. Medicare: Hcfa Can Improve Methods for Revising Physician Practice Expense Payments. Diane Pub Co, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kulkarni, Kunal, James Harrison, Mohamed Baguneid, and Bernard Prendergast, eds. Haematology. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780198729426.003.0011.

Full text
Abstract:
One of the great British contributions to medicine has been the development of the prospective randomized clinical trial as a method of assessing whether novel treatments demonstrate superiority over established therapy. This replacement of clinician preference, clinical impression, and anecdote by the design and rigorous evaluation of the results of well-designed studies has been enthusiastically embraced by haematologists the world over. The training of haematologists has always involved an understanding of the pathological and scientific processes that underlie blood disorders, engendering a rational clinical approach, and treatments used in the management of haematological disorders are toxic and difficult to use, involving considerable clinical expertise and expense. The widespread use of randomized clinical trials is therefore extremely beneficial to haematologists. The studies summarized within this chapter are examples of how research has influenced day-to-day clinical practice with immense and progressive benefit to patients.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Expense method"

1

Shi, L., and K. Rasheed. "A Survey of Fitness Approximation Methods Applied in Evolutionary Algorithms." In Computational Intelligence in Expensive Optimization Problems, 3–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-10701-6_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hartmann, Wolfgang M. "Computationally Expensive Methods in Statistics: An Introduction." In Applied Parallel Computing. State of the Art in Scientific Computing, 928–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11558958_112.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Slawig, Thomas, Malte Prieß, and Claudia Kratzenstein. "Surrogate-Based and One-Shot Optimization Methods for PDE-Constrained Problems with an Application in Climate Models." In Solving Computationally Expensive Engineering Problems, 1–24. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-08985-0_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Peña-Miguel, Noemí, María Cristina Fernández-Ramos, and Joseba Iñaki De La Peña. "A Minimum Pension for Older People via Expenses Rate." In Mathematical and Statistical Methods for Actuarial Sciences and Finance, 489–93. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-89824-7_87.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ortega-Recalde, Oscar, Julian R. Peat, Donna M. Bond, and Timothy A. Hore. "Estimating Global Methylation and Erasure Using Low-Coverage Whole-Genome Bisulfite Sequencing (WGBS)." In Methods in Molecular Biology, 29–44. New York, NY: Springer US, 2021. http://dx.doi.org/10.1007/978-1-0716-1294-1_3.

Full text
Abstract:
Whole-genome bisulfite sequencing (WGBS) is a popular method for characterizing cytosine methylation because it is fully quantitative and has base-pair resolution. While WGBS is prohibitively expensive for experiments involving many samples, low-coverage WGBS can accurately determine global methylation and erasure at similar cost to high-performance liquid chromatography (HPLC) or enzyme-linked immunosorbent assays (ELISA). Moreover, low-coverage WGBS has the capacity to distinguish between methylation in different cytosine contexts (e.g., CG, CHH, and CHG), can tolerate low-input material (<100 cells), and can detect the presence of overrepresented DNA originating from mitochondria or amplified ribosomal DNA. In addition to describing a WGBS library construction and quantitation approach, here we detail computational methods to predict the accuracy of low-coverage WGBS using empirical bootstrap samplers and theoretical estimators similar to those used in election polling. Using examples, we further demonstrate how non-independent sampling of cytosines can alter the precision of error calculation and provide methods to improve this.
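The empirical bootstrap estimation of precision mentioned in the abstract can be sketched as follows. The data here are simulated and all names are illustrative assumptions, not the chapter's actual protocol.

```python
import random

def bootstrap_global_methylation(calls, n_boot=500, seed=0):
    """Empirical bootstrap of the global methylation level from per-cytosine
    calls (1 = methylated, 0 = unmethylated). Returns the point estimate and
    an approximate 95% percentile interval, i.e. the precision one can
    expect from this coverage depth."""
    rng = random.Random(seed)
    n = len(calls)
    point = sum(calls) / n
    reps = sorted(
        sum(rng.choice(calls) for _ in range(n)) / n for _ in range(n_boot)
    )
    return point, reps[int(0.025 * n_boot)], reps[int(0.975 * n_boot)]

# 1,000 simulated low-coverage calls at a true global level of ~70%
sim = random.Random(42)
calls = [1 if sim.random() < 0.7 else 0 for _ in range(1000)]
mean, lo, hi = bootstrap_global_methylation(calls)
```

Resampling whole reads rather than individual cytosines, as the chapter's point about non-independent sampling suggests, would widen the interval for clustered methylation.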
APA, Harvard, Vancouver, ISO, and other styles
6

Paszyński, Maciej, Rafał Grzeszczuk, David Pardo, and Leszek Demkowicz. "Deep Learning Driven Self-adaptive Hp Finite Element Method." In Computational Science – ICCS 2021, 114–21. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77961-0_11.

Full text
Abstract:
The finite element method (FEM) is a popular tool for solving engineering problems governed by Partial Differential Equations (PDEs). The accuracy of the numerical solution depends on the quality of the computational mesh. We consider the self-adaptive hp-FEM, which generates optimal mesh refinements and delivers exponential convergence of the numerical error with respect to the mesh size. Thus, it enables solving difficult engineering problems with the highest possible numerical accuracy. We replace the computationally expensive kernel of the refinement algorithm with a deep neural network in this work. The network learns how to optimally refine the elements and modify the orders of the polynomials. In this way, the deterministic algorithm is replaced by a neural network that selects similar quality refinements in a fraction of the time needed by the original algorithm.
APA, Harvard, Vancouver, ISO, and other styles
7

Fobbe, F., H. C. Koennecke, P. Heidt, M. Dietzel, and K. J. Wolf. "Color-Coded Duplex Sonography: Is this New Method Capable of Replacing More Expensive or Invasive Examinations?" In CAR’89 Computer Assisted Radiology / Computergestützte Radiologie, 229. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/978-3-642-52311-3_40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mukherjee, Nabanita, Karoline A. Lambert, David A. Norris, and Yiqun G. Shellman. "Enrichment of Melanoma Stem-Like Cells via Sphere Assays." In Methods in Molecular Biology, 185–99. New York, NY: Springer US, 2021. http://dx.doi.org/10.1007/978-1-0716-1205-7_14.

Full text
Abstract:
Sphere assays are widely used in vitro techniques to enrich and evaluate the stem-like cell behavior of both normal and cancer cells. Utilizing three-dimensional in vitro sphere culture conditions provides a better representation of tumor growth in vivo than the more common monolayer cultures. We describe how to perform primary and secondary sphere assays, used for the enrichment and self-renewability studies of melanoma/melanocyte stem-like cells. Spheres are generated by growing melanoma cells at low density in nonadherent conditions with stem cell media. We provide protocols for preparing inexpensive and versatile polyHEMA-coated plates, setting up primary and secondary sphere assays in almost any tissue culture format, and quantification methods using standard inverted microscopy. Our protocol is easily adaptable to laboratories with basic cell culture capabilities, without the need for expensive fluidic instruments.
APA, Harvard, Vancouver, ISO, and other styles
9

Toye, Philip, Henry Kiara, Onesmo ole-MoiYoi, Dolapo Enahoro, and Karl M. Rich. "The management and economics of east coast fever." In The impact of the International Livestock Research Institute, 239–73. Wallingford: CABI, 2020. http://dx.doi.org/10.1079/9781789241853.0239.

Full text
Abstract:
Abstract This book chapter tackles the management and economics of east coast fever. At about the time of ILRAD's establishment in 1973, a vaccination procedure was being developed at the East African Veterinary Research Organization (EAVRO) at Muguga, Kenya. The infection-and-treatment method (ITM) is an immunization procedure against ECF. It involves inoculation of live sporozoites of T. parva, usually in the form of a semi-purified homogenate of T. parva-infected ticks, combined with simultaneous treatment with a dose of a long-acting formulation of the antibiotic oxytetracycline. Whilst safe and very effective when administered correctly, production and delivery of this live ECF vaccine is complicated, expensive and time consuming, and at the time of ILRAD's founding, there were doubts as to whether such a procedure was commercially viable. The future for ILRI in the pathology and immunoparasitology of theileriosis will be guided by the vaccine, balanced against the evolving prospects for a subunit vaccine. The future in the epidemiology and economics of ECF management will be developing and evaluating current or novel control methods.
APA, Harvard, Vancouver, ISO, and other styles
10

Eremin, Sergey. "Immunochemical methods for detection of organophosphorus compounds." In ORGANOPHOSPHORUS NEUROTOXINS, 219–30. ru: Publishing Center RIOR, 2020. http://dx.doi.org/10.29039/33_219-230.

Full text
Abstract:
Organophosphorus compounds (OP) are found in environmental objects and food products. Due to their high toxicity and inhibition of cholinesterase activity, it is necessary to control residual amounts of OP. The most common methods for determining OP are gas and liquid chromatography with various detection methods. However, chromatographic analysis is lengthy, requires complex sample preparation and expensive equipment, which limits its use for screening a large number of samples and continuous monitoring of the content of OP. To detect the OP, it is necessary to use High Throughput Screening methods, using simple, fast and inexpensive analysis methods. Currently, immunochemical methods are increasingly used to determine OP. These methods are based on the recognition of the analyte (antigen) by specific receptors (antibodies) with the formation of the antigen-antibody complex and the measurement of the analytical signal generated by the immunochemical test system in response to complex formation, which leads to high sensitivity and specificity of the analysis.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Expense method"

1

Saverin, Joseph, David Marten, George Pechlivanoglou, Christian Oliver Paschereit, and Arne van Garrel. "Implementation of the Multi-Level Multi-Integration Cluster Method to the Treatment of Vortex Particle Interactions for Fast Wind Turbine Wake Simulations." In ASME Turbo Expo 2018: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/gt2018-76554.

Full text
Abstract:
A method for the treatment of the evolution of the wake of aerodynamic bodies has been implemented. A vortex particle method approach has been used, whereby the flow field is discretized into numerical volumes which possess a given circulation. A lifting line formulation is used to determine the circulation of the trailing and shed vortex elements. Upon their release, vortex particles are allowed to convect freely under the action of the blade, the freestream, and other particles. Induced velocities are calculated with a regularized form of the Biot-Savart kernel, adapted for vortex particles. Vortex trajectories are integrated in a Lagrangian sense. Provision is made in the model for the rate of change of the circulation vector and for viscous particle interaction; however, these features are not exploited in this work. The validity of the model is tested by comparing results of the numerical simulation to the experimental measurements of the Mexico rotor. A range of tip speed ratios is investigated, and the blade loading and induced wake velocities are compared to experiment and finite-volume numerical models. The computational expense of this method scales quadratically with the number of released wake particles N. This results in an unacceptable computational expense after a limited simulation time. A recently developed multilevel algorithm has been implemented to overcome this computational expense. This method approximates the Biot-Savart kernel in the far field by polynomial interpolation onto a structured grid node system. The error of this approximation is seen to be arbitrarily controlled by the polynomial order of the interpolation. It is demonstrated that by using this method the computational expense scales linearly. 
The model’s ability to quickly produce results of comparable accuracy to finite volume simulations is illustrated and emphasizes the opportunity for industry to move from low fidelity, less accurate blade-element-momentum methods towards higher fidelity free vortex wake models while keeping the advantage of short problem turnaround times.
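A direct O(N²) evaluation of a regularized Biot-Savart kernel over vortex particles, of the kind the multilevel algorithm above is designed to accelerate, might look like the sketch below. The specific regularization (a smoothing length added to the squared distance) and all names are illustrative assumptions, not the paper's kernel.

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def induced_velocities(pos, strength, sigma=0.1):
    """Direct O(N^2) summation of a regularized Biot-Savart kernel over
    vortex particles: u_i = sum_j K(x_i - x_j) x alpha_j, with the
    singularity smoothed by the length sigma."""
    n = len(pos)
    vel = []
    for i in range(n):
        u = [0.0, 0.0, 0.0]
        for j in range(n):
            if i == j:
                continue
            r = tuple(pos[i][d] - pos[j][d] for d in range(3))
            rho2 = r[0] ** 2 + r[1] ** 2 + r[2] ** 2 + sigma ** 2
            coef = -1.0 / (4.0 * math.pi * rho2 ** 1.5)
            c = cross(r, strength[j])
            for d in range(3):
                u[d] += coef * c[d]
        vel.append(tuple(u))
    return vel

# Two particles with z-aligned vorticity: each induces a purely tangential
# (y-direction) velocity on the other, equal in magnitude and opposite in sign.
pos = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
alpha = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]
vel = induced_velocities(pos, alpha)
```

The double loop makes the quadratic scaling explicit: doubling the particle count quadruples the work, which is why a far-field interpolation scheme pays off so quickly.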
APA, Harvard, Vancouver, ISO, and other styles
2

Tao, Shuxia, Xi Cao, and Peter Bobbert. "Accurate and efficient band gap predictions of metal halide perovskites using the DFT-1/2 method: GW accuracy with DFT expense." In 3rd International Conference on Perovskite Thin Film Photovoltaics, Photonics and Optoelectronics. Valencia: Fundació Scito, 2017. http://dx.doi.org/10.29363/nanoge.abxpvperopto.2018.005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Corral, Roque, and Javier Crespo. "Development of an Edge-Based Harmonic Balance Method for Turbomachinery Flows." In ASME 2011 Turbo Expo: Turbine Technical Conference and Exposition. ASMEDC, 2011. http://dx.doi.org/10.1115/gt2011-45170.

Full text
Abstract:
A harmonic balance method for modeling unsteady nonlinear periodic flows in turbomachinery is presented. The method solves the Reynolds Averaged Navier-Stokes equations in the time domain and may be implemented in a relatively simple way into an existing code, including all the standard convergence acceleration techniques used for steady problems. The application of the method to vibrating airfoils and rotor-stator interaction is discussed. It is demonstrated that the time spectral scheme may achieve the same temporal accuracy at a lower computational cost, at the expense of using more memory.
APA, Harvard, Vancouver, ISO, and other styles
4

Pal, Gopalendu, Ankur Gupta, Michael F. Modest, and Daniel C. Haworth. "Comparison of Accuracy and Computational Expense of Radiation Models in Simulation of Non-Premixed Turbulent Jet Flames." In ASME/JSME 2011 8th Thermal Engineering Joint Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/ajtec2011-44585.

Full text
Abstract:
The accuracy and computational expense of various radiation models in the simulation of turbulent jet flames are compared. Both nonluminous and luminous methane-air non-premixed turbulent jet flames are simulated using a comprehensive combustion solver. The combustion solver consists of a finite-volume/probability density function-based flow–chemistry solver interfaced with a high-accuracy spectral radiation solver. Flame simulations were performed using various k-distribution-based spectral models and radiative transfer equation (RTE) solvers, such as P-1, P-3, finite volume/discrete ordinates method (FVM/DOM), and Photon Monte Carlo (PMC) methods, with/without the consideration of turbulence-radiation interaction (TRI). TRI is found to drop the peak temperature by close to 150 K for a luminous flame (optically thicker) and 25–100 K for a nonluminous flame (optically thinner). RTE solvers are observed to have stronger effects on peak flame temperature, total radiant heat source and NO emission than the spectral models. P-1 is found to be the computationally least expensive RTE solver and the FVM the most expensive for any spectral model. For optically thinner flames all radiation models yield excellent accuracy. For optically thicker flames P-3 and FVM predict radiation more accurately than the P-1 method when compared to the benchmark line-by-line (LBL) PMC.
APA, Harvard, Vancouver, ISO, and other styles
5

Pettersson, Marcus, and Johan O¨lvander. "Adaptive Complex Method for Efficient Design Optimization." In ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/detc2007-34773.

Full text
Abstract:
Box’s Complex method for direct search has shown promise when applied to simulation-based optimization. In direct search methods, like Box’s Complex method, the search starts with a set of points, where each point is a solution to the optimization problem. In the Complex method, the number of points must be at least one plus the number of variables. However, in order to avoid premature termination and increase the likelihood of finding the global optimum, more points are often used, at the expense of the required number of evaluations. The idea in this paper is to gradually remove points during the optimization in order to achieve an adaptive Complex method for more efficient design optimization. The proposed method shows encouraging results when compared to the Complex method with a fixed number of points and to a quasi-Newton method.
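One reflection-and-contraction step of Box's Complex method, as described above, can be sketched as follows. The reflection factor 1.3 is the value Box originally suggested; the toy objective, the contraction limit, and all names are illustrative, and this is the basic method rather than the paper's adaptive point-removing variant.

```python
def complex_step(points, f, alpha=1.3):
    """One iteration of Box's Complex method: reflect the worst point
    through the centroid of the remaining points by a factor alpha; if the
    reflection does not improve on the worst point, contract it toward the
    centroid (here at most 10 times)."""
    pts = sorted(points, key=f)
    worst, rest = pts[-1], pts[:-1]
    dim = len(worst)
    centroid = [sum(p[d] for p in rest) / len(rest) for d in range(dim)]
    new = tuple(c + alpha * (c - w) for c, w in zip(centroid, worst))
    for _ in range(10):
        if f(new) < f(worst):
            break
        new = tuple((x + c) / 2 for x, c in zip(new, centroid))
    return rest + [new]

f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2   # illustrative objective
pts = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (3.0, 3.0)]
for _ in range(60):
    pts = complex_step(pts, f)
best = min(pts, key=f)
```

The paper's adaptive variant would additionally drop points from `pts` as the search progresses, trading robustness early on for fewer evaluations later.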
APA, Harvard, Vancouver, ISO, and other styles
6

Oza, Kunjal, and Hae Chang Gea. "Two-Level Approximation Method for Reliability-Based Design Optimization." In ASME 2004 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2004. http://dx.doi.org/10.1115/detc2004-57463.

Full text
Abstract:
In order to model uncertainties and achieve the required reliability, Reliability-Based Design Optimization (RBDO) has evolved as a dominant design tool. Many methods have been introduced to solve the RBDO problem. However, the computational expense associated with the probabilistic constraint evaluation still limits the applicability of RBDO to practical engineering problems. In this paper, a Two-Level Approximation method (TLA) is proposed. At the first level, a reduced second-order approximation is used for a better optimization solution; at the second level, a linear approximation is used for faster reliability assessment. The optimal solution is obtained iteratively. The proposed method is tested on several numerical examples, and the results are compared to evaluate its cost-effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
7

Wu, Xian-Jun, Jian-Hua Cao, and Shi-Jian Zhu. "A Rectified Statistical Energy Analysis Method Using Finite Element Analysis." In ASME 2006 International Mechanical Engineering Congress and Exposition. ASMEDC, 2006. http://dx.doi.org/10.1115/imece2006-13867.

Full text
Abstract:
Two parameters, namely the modal number (Nm) and the modal overlap factor (Mo), are used as indicators of the reliability of SEA prediction. It is suggested that Mo > 1 and Nm > 5 can be used to judge whether an SEA prediction of plate vibration is reliable. In the mid-frequency range, those conditions are hard to satisfy. Although applying finite element analysis to the whole structure takes a long time and great expense, and may even be impossible, it is economical to compute a local single part and coupled parts. If the values of the input power and coupling loss factor are replaced by values calculated by FEM, the precision of SEA can be greatly improved. A calculation example is given and the validity is demonstrated.
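The two reliability indicators can be computed directly from a subsystem's modal density and damping. The sketch below assumes the common conventions Nm = n(f)·Δf and Mo = (π/2)·f·η·n(f); the paper may define the prefactor differently, and the plate numbers used are illustrative.

```python
import math

def sea_indicators(freq, bandwidth, modal_density, eta):
    """Reliability indicators for an SEA subsystem in a frequency band:
    modal number Nm = n(f) * bandwidth (expected modes in band) and modal
    overlap Mo = (pi/2) * f * eta * n(f) (half-power bandwidth times modal
    density). Returns (Nm, Mo, reliable)."""
    nm = modal_density * bandwidth
    mo = (math.pi / 2.0) * freq * eta * modal_density
    return nm, mo, (mo > 1.0 and nm > 5.0)

# A plate with n(f) = 0.05 modes/Hz and loss factor eta = 0.01: SEA is not
# yet reliable in the 1 kHz third-octave band (Mo < 1), but is at 4 kHz.
nm1, mo1, ok1 = sea_indicators(1000.0, 232.0, 0.05, 0.01)
nm2, mo2, ok2 = sea_indicators(4000.0, 928.0, 0.05, 0.01)
```

In the unreliable band the paper's remedy applies: replace the SEA input power and coupling loss factors with FEM-computed values for the local parts.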
APA, Harvard, Vancouver, ISO, and other styles
8

Schemmann, Christoph, Marius Geller, and Norbert Kluck. "A Multi-Fidelity Sampling Method for Efficient Design and Optimization of Centrifugal Compressor Impellers." In ASME Turbo Expo 2018: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/gt2018-75160.

Full text
Abstract:
The optimization of a centrifugal compressor impeller is a challenge for established strategies and algorithms, as the interactions between the geometric design parameters and the aerodynamic and structural performance of the system are highly complex. Furthermore, many geometrically valid designs are unusable in terms of structural mechanics or flow physics. Due to the complex parameter correlations, a simple limitation of the parametric space is no option, as possibly beneficial parameter combinations could be ruled out. To obtain a meaningful optimization result, the complete operating range of the compressor has to be taken into account, which adds further complexity in terms of the optimization process and the computational expense. The combination of these issues leads to a complicated optimization scenario. The aim of the presented work is the reduction of the computational expense required to generate a high-quality metamodel for optimization. This goal shall be achieved by the development of a multi-fidelity sampling method. The basic idea is to use preliminary low-fidelity information from empirical data or fast analytical methods to identify promising regions of the parameter space. Then the samples of the DOE are concentrated in these areas while still maintaining good coverage of the whole applicable design space. This ensures that no beneficial designs are ruled out which were not recommended by preliminary information. The points of the resulting DOE are computed by CFD and FEA computations and used to generate the metamodel which is used for the optimization. The method is tested by generating a metamodel used for compressor optimization. The results are compared to an optimization using a metamodel based on a conventional Latin hypercube sampling.
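The multi-fidelity sampling idea, concentrating DOE points where a cheap model looks promising while retaining global coverage, can be sketched in 1-D. The 60/40 split, the grid scoring, and all names are illustrative assumptions, not the paper's method (which draws from Latin hypercube designs and evaluates with CFD/FEA).

```python
import random

def biased_doe(low_fidelity, bounds, n, promising_fraction=0.6, seed=0):
    """Multi-fidelity-flavoured sampling sketch (1-D): score a coarse grid
    with a cheap model, take the best 30% of grid points as the 'promising'
    region, and draw promising_fraction of the DOE there while the rest
    covers the whole design space."""
    rng = random.Random(seed)
    lo, hi = bounds
    grid = [lo + (hi - lo) * i / 99 for i in range(100)]
    promising = sorted(grid, key=low_fidelity)[:30]   # best 30% of the grid
    p_lo, p_hi = min(promising), max(promising)
    samples = []
    for _ in range(n):
        if rng.random() < promising_fraction:
            samples.append(rng.uniform(p_lo, p_hi))   # concentrate samples
        else:
            samples.append(rng.uniform(lo, hi))       # keep global coverage
    return samples

cheap = lambda x: abs(x - 2.0)    # low-fidelity stand-in; optimum near x = 2
pts = biased_doe(cheap, (0.0, 10.0), 200)
```

The uniform share of the draw is what prevents beneficial designs outside the low-fidelity model's recommendation from being ruled out, mirroring the paper's stated requirement.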
9

Muldoon, Frank, and Sumanta Acharya. "Mass Conservation in the Immersed Boundary Method." In ASME 2005 Fluids Engineering Division Summer Meeting. ASMEDC, 2005. http://dx.doi.org/10.1115/fedsm2005-77301.

Abstract:
The immersed boundary approach for modeling complex geometries in incompressible flows is examined critically from the perspective of satisfying boundary conditions and mass conservation. The system of discretized equations for mass and momentum can be inconsistent if the real velocities are used in defining the forcing terms that satisfy the boundary conditions. As a result, the velocity is generally not divergence free, and the pressure at locations in the vicinity of the immersed boundary is not physical. However, the use of the pseudo velocities in defining the forcing (as frequently done when the governing equations are solved using a fractional step or projection method), combined with the use of the specified velocity on the immersed boundary, is shown to result in a consistent set of equations that allows a divergence-free velocity but, depending on the time step used to obtain a steady-state solution, has the undesirable effect of allowing significant permeability of the immersed boundary. An improvement is shown if the pressure gradient is integrated in time using the Crank-Nicolson scheme instead of the backward Euler scheme. However, even with this improvement, a significant reduction in the time step, and hence an increase in computational expense, is still required for sufficient satisfaction of the boundary conditions.
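The core mechanism the abstract discusses — building the direct forcing from the pseudo velocity so that the specified value is imposed on immersed nodes — can be illustrated in one dimension. This is a minimal diffusion-only sketch under invented parameters, not the paper's full fractional-step/projection scheme, and it has no pressure equation:

```python
import numpy as np

def step(u, nu, dx, dt, ib_mask, u_ib):
    """One explicit diffusion step with direct immersed-boundary forcing.
    The forcing is defined from the *pseudo* velocity (the unforced update),
    so immersed nodes land on the specified value u_ib."""
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u_pseudo = u + dt * nu * lap                          # advance without forcing
    f = np.where(ib_mask, (u_ib - u_pseudo) / dt, 0.0)    # direct forcing term
    return u_pseudo + dt * f                              # imposes u_ib on masked nodes

n = 64
u = np.sin(2 * np.pi * np.arange(n) / n)
mask = np.zeros(n, dtype=bool)
mask[n // 2] = True                                       # one immersed node
u = step(u, nu=0.1, dx=1.0 / n, dt=1e-5, ib_mask=mask, u_ib=0.0)
print(u[n // 2])  # ~0.0: the specified boundary value is imposed
```

In the full incompressible problem, the subsequent pressure projection perturbs these forced values, which is the source of the boundary permeability the paper analyzes.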
10

George, Pradeep, and Madara Ogot. "A Compromise Method for the Design of Parametric Polynomial Surrogate Models." In ASME 2005 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2005. http://dx.doi.org/10.1115/detc2005-85469.

Abstract:
This study presents a compromise approach to the augmentation of response surface (RS) designs to achieve a desired level of accuracy. RS models are frequently used as surrogate models in the multidisciplinary design optimization of complex mechanical systems. Augmentation is necessitated by the high computational expense typically associated with each function evaluation: previous results from lower-fidelity models are incorporated into the higher-fidelity RS designs. The compromise approach yields higher-quality parametric polynomial response surface approximations than traditional augmentation. Using the D-optimality criterion as a measure of RS design quality, the method simultaneously considers several polynomial models during the RS design, resulting in good-quality designs for all models under consideration, as opposed to good-quality designs only for lower-order models, as in the case of traditional augmentation. Several numerical examples and one engineering example are presented to illustrate the efficacy of the approach.
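A compromise D-optimal augmentation in the spirit of the abstract — scoring candidate points against several polynomial models at once rather than one — might be sketched as follows. The greedy candidate-exchange loop, the two-factor grid, and the choice of "maximize the worst model's log-determinant" as the compromise rule are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def model_matrix(X, order):
    """Columns: intercept, linear terms, and (order 2) quadratic/interaction terms."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    if order == 2:
        d = X.shape[1]
        cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def log_d_criterion(X, order):
    """log det(M'M) for design X under the given polynomial model (D-criterion)."""
    M = model_matrix(X, order)
    sign, logdet = np.linalg.slogdet(M.T @ M)
    return logdet if sign > 0 else -np.inf

def augment(X_old, candidates, n_add, orders=(1, 2)):
    """Greedily add candidate points maximizing the *worst* log-D value over
    all models under consideration -- a compromise between the augmentations
    that would be D-optimal for each single model alone."""
    X = X_old.copy()
    pool = list(range(len(candidates)))
    for _ in range(n_add):
        best = max(pool, key=lambda i: min(
            log_d_criterion(np.vstack([X, candidates[i]]), o) for o in orders))
        X = np.vstack([X, candidates[best]])
        pool.remove(best)
    return X

rng = np.random.default_rng(1)
grid = np.array([(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)], dtype=float)
cands = rng.uniform(-1.0, 1.0, size=(50, 2))
X_aug = augment(grid, cands, n_add=5)   # 9 existing points + 5 augmented points
```

Because every added row can only grow the information matrix, the D-criterion of the augmented design is never worse than that of the original for any of the models considered.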

Reports on the topic "Expense method"

1

Yan, Yujie, and Jerome F. Hajjar. Automated Damage Assessment and Structural Modeling of Bridges with Visual Sensing Technology. Northeastern University, May 2021. http://dx.doi.org/10.17760/d20410114.

Abstract:
Recent advances in visual sensing technology have gained much attention in the field of bridge inspection and management. Coupled with advanced robotic systems, state-of-the-art visual sensors can be used to obtain accurate documentation of bridges without the need for any special equipment or traffic closure. The captured visual sensor data can be post-processed to gather meaningful information for the bridge structures and hence to support bridge inspection and management. However, state-of-the-practice data post-processing approaches require substantial manual operations, which can be time-consuming and expensive. The main objective of this study is to develop methods and algorithms to automate the post-processing of the visual sensor data towards the extraction of three main categories of information: 1) object information such as object identity, shapes, and spatial relationships - a novel heuristic-based method is proposed to automate the detection and recognition of main structural elements of steel girder bridges in both terrestrial and unmanned aerial vehicle (UAV)-based laser scanning data. Domain knowledge on the geometric and topological constraints of the structural elements is modeled and utilized as heuristics to guide the search as well as to reject erroneous detection results. 2) structural damage information, such as damage locations and quantities - to support the assessment of damage associated with small deformations, an advanced crack assessment method is proposed to enable automated detection and quantification of concrete cracks in critical structural elements based on UAV-based visual sensor data. In terms of damage associated with large deformations, based on the surface normal-based method proposed in Guldur et al. (2014), a new algorithm is developed to enhance the robustness of damage assessment for structural elements with curved surfaces. 3) three-dimensional volumetric models - the object information extracted from the laser scanning data is exploited to create a complete geometric representation for each structural element. In addition, mesh generation algorithms are developed to automatically convert the geometric representations into conformal all-hexahedron finite element meshes, which can be finally assembled to create a finite element model of the entire bridge. To validate the effectiveness of the developed methods and algorithms, several field data collections have been conducted to collect both the visual sensor data and the physical measurements from experimental specimens and in-service bridges. The data were collected using both terrestrial laser scanners combined with images, and laser scanners and cameras mounted to unmanned aerial vehicles.
2

Torres, Marissa, and Norberto Nadal-Caraballo. Rapid tidal reconstruction with UTide and the ADCIRC tidal database. Engineer Research and Development Center (U.S.), August 2021. http://dx.doi.org/10.21079/11681/41503.

Abstract:
The quantification of storm surge is vital for flood hazard assessment in communities affected by coastal storms. The astronomical tide is an integral component of the total still water level needed for accurate storm surge estimates. Coastal hazard analysis methods, such as the Coastal Hazards System and the StormSim Coastal Hazards Rapid Prediction System, require thousands of hydrodynamic and wave simulations that are computationally expensive. In some regions, the inclusion of astronomical tides is neglected in the hydrodynamics, and tides are instead incorporated within the probabilistic framework. There is a need for a rapid, reliable, and accurate tide prediction methodology to provide spatially dense reconstructed or predicted tidal time series for historical, synthetic, and forecasted hurricane scenarios. A methodology is proposed to combine the tidal harmonic information from the spatially dense Advanced Circulation (ADCIRC) hydrodynamic model tidal database with a rapid tidal reconstruction and prediction program. In this study, the Unified Tidal Analysis (UTide) program was paired with results from the tidal database. This methodology will produce reconstructed (i.e., historical) and predicted tidal heights for coastal locations along the United States eastern seaboard and beyond and will contribute to the determination of accurate still water levels in coastal hazard analysis methods.
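The reconstruction step the abstract describes amounts to superposing harmonic constituents whose amplitudes and phases come from the tidal database. A minimal sketch, assuming four standard constituents with invented amplitudes and phases (the real workflow uses UTide with ADCIRC-derived constants for each location):

```python
import numpy as np

# Standard constituent periods (hours); amplitudes/phases below are invented.
CONSTITUENTS = {                       # name: angular frequency (rad/hour)
    "M2": 2 * np.pi / 12.4206012,
    "S2": 2 * np.pi / 12.0,
    "K1": 2 * np.pi / 23.9344697,
    "O1": 2 * np.pi / 25.8193417,
}

def tidal_height(t_hours, amps, phases_deg):
    """eta(t) = sum_k A_k * cos(omega_k * t - phi_k), summed over constituents."""
    eta = np.zeros_like(t_hours, dtype=float)
    for name, omega in CONSTITUENTS.items():
        eta += amps[name] * np.cos(omega * t_hours - np.radians(phases_deg[name]))
    return eta

t = np.arange(0.0, 48.0, 0.5)                              # two days, half-hourly
amps = {"M2": 0.50, "S2": 0.12, "K1": 0.08, "O1": 0.05}    # metres (illustrative)
phases = {"M2": 30.0, "S2": 45.0, "K1": 10.0, "O1": 0.0}   # degrees (illustrative)
eta = tidal_height(t, amps, phases)
```

Because the constituent constants are precomputed and stored, evaluating this sum at any location and time is effectively free compared with rerunning the hydrodynamic model, which is the point of the rapid-reconstruction approach.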
3

Lumpkin, Shamsie, Isaac Parrish, Austin Terrell, and Dwayne Accardo. Pain Control: Opioid vs. Nonopioid Analgesia During the Immediate Postoperative Period. University of Tennessee Health Science Center, July 2021. http://dx.doi.org/10.21007/con.dnp.2021.0008.

Abstract:
Background: Opioid analgesia has become the mainstay for acute pain management in the postoperative setting. However, the use of opioid medications comes with significant risks and side effects. Due to increasing numbers of prescriptions to those with chronic pain, opioid medications have become more expensive while becoming less effective due to the buildup of patient tolerance. The idea of opioid-free analgesic techniques has rarely been broached in many hospitals. Emerging research has shown that opioid-sparing approaches have resulted in lower reported pain scores across the board, as well as significant cost reductions to hospitals and insurance agencies. In addition to providing adequate pain relief, the predicted cost burden of an opioid-free or opioid-sparing approach is significantly less than that of traditional methods.
Methods: The following groups were considered in the inclusion criteria: those who speak the English language, all races and ethnicities, male or female, home medications, those who are at least 18 years of age and able to provide written informed consent, and those undergoing inpatient or same-day surgical procedures. The exclusion criteria were: those who are non-English speaking, those who are less than 18 years of age, those who are not undergoing surgical procedures while admitted, those who are unable to provide a numeric pain score due to clinical status, those who are unable to provide written informed consent, and those who decline participation in the study. Data were extracted by one reviewer and verified by the remaining two group members. Extraction was divided as equally as possible among the 11 listed references. Discrepancies in data extraction were discussed between the article reviewer, project editor, and group leader.
Results: We identified nine primary sources addressing the use of ketamine as an alternative to opioid analgesia and postoperative pain control. Our findings indicate a positive correlation between perioperative ketamine administration and postoperative pain control. While this information provides insight on opioid-free analgesia, it also revealed the limited amount of research conducted in this area of practice. The strategies of several of the clinical trials limited ketamine administration to a small niche of patients. The included studies provided evidence for lower pain scores, reductions in opioid consumption, and better patient outcomes.
Implications for Nursing Practice: Based on the results of the studies' randomized controlled trials and meta-analyses, ketamine is shown to be an adequate analgesic alternative to opioids postoperatively. The cited resources showed that ketamine can be used as a sole agent or combined effectively with reduced doses of opioids for multimodal therapy. There were noted limitations in some of the research articles: not all of the cited studies included definitive evidence of proper blinding techniques or randomization methods, and the small sample sizes and specific patient populations in several of the studies can skew the data in one direction or another; therefore, the clinical results cannot be generalized across patient populations.
