Academic literature on the topic 'Ascertained price'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Ascertained price.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Ascertained price"

1. Tyurina, Elina, Aleksandr Mednikov, and Svetlana Sushko. "Competitiveness of Advanced Technologies for Production of Electricity and Alternative Liquid Fuels." E3S Web of Conferences 69 (2018): 02008. http://dx.doi.org/10.1051/e3sconf/20186902008.

Abstract:
Technical and economic aspects of the combined production of synthetic liquid fuel and electric power within a single energy-technology installation (ETI) are considered. The range of prices for alternative liquid fuel (ALF) produced by such installations, depending on the cost of consumed fuel, the price of supplied electric power and the level of capital investment, has been ascertained. The studies suggest that combined production of dimethyl ether is more efficient, from both the energy and the economic viewpoints, than methanol production. In addition, a threshold oil price was identified above which production of ALF, i.e. dimethyl ether, is more economically efficient than production of motor fuel from oil.
2. Egger, Sam, Suzan Burton, Rebecca Ireland, and Scott C. Walsberger. "Observed retail price of Australia's market-leading cigarette brand before and up to 3 years after the implementation of plain packaging." Tobacco Control 28, e2 (November 28, 2018): e86-e91. http://dx.doi.org/10.1136/tobaccocontrol-2018-054577.

Abstract:
Objective: Despite claims by tobacco companies that plain packaging would lead to lower cigarette prices, recommended and observed real cigarette prices in Australia rose in the 9–11 months after plain packaging was introduced. However, little is known about longer-term trends in prices. In this report, we assess whether inflation (Consumer Price Index; CPI) and tax adjusted ('CPI-tax-adjusted') prices of the market-leading Australian cigarette brand changed in the 3-year period after plain packaging, and whether price changes were associated with retailer characteristics. Method: Cigarette prices were ascertained from a panel of tobacco retailers at three time points: (1) in November 2012 (n=857) (before full implementation of plain packaging, compulsory in retail outlets from December 2012), (2) between October 2014 and February 2015 (n=789) and (3) between November 2015 and March 2016 (n=579). Generalised estimating equations were used to estimate the percentage change in mean CPI/tax-adjusted cigarette prices over time. Results: CPI/tax-adjusted mean stick prices rose by 13.7% (95% CI 13.0 to 16.0) and 15.2% (95% CI 14.3 to 16.0) at 2.1 and 3.1 years after plain packaging was introduced, respectively. Increases in mean CPI/tax-adjusted stick prices varied by outlet type (p<0.001), socioeconomic status (p=0.013), remoteness of the retailer's area (p=0.028) and whether twin packs were sold (p=0.009). Conclusions: Contrary to tobacco company predictions of a fall in prices, the price of the market-leading Australian cigarette brand increased significantly in the 3 years after plain packaging was introduced, and these increases were above the combined effects of inflation and increases in excise/customs duty.
3. Shitile, Tersoo Shimonkabir, and Abubakar Sule. "Welfare Effect of Monetary Financing." Applied Economics and Finance 6, no. 5 (August 13, 2019): 145. http://dx.doi.org/10.11114/aef.v6i5.4444.

Abstract:
This study ascertained the direction and asymmetric pass-through of the central bank's monetary financing to welfare in Nigeria, using annual time series data covering the period 1970 to 2018. The study drew on both Monetarist and Keynesian theoretical postulations to provide insights into the policy significance of monetary financing. For the empirical analysis, the study applied both the linear Autoregressive Distributed Lag (ARDL) and the non-linear ARDL (NARDL) techniques. Unlike the ARDL equation, the estimated NARDL equation established that welfare losses respond negatively to both positive and negative changes in monetary financing, but the impact of a negative monetary financing shock (7.11) is greater than that of a positive shock (2.87). In addition, the study found that it takes about 9 to 11 quarters for positive and negative changes in monetary financing to fully release their effects on welfare loss. The results also revealed that welfare loss is driven by the oil price, which is suggestive of oil price pass-through to domestic prices (the exchange rate and consumer prices). The study therefore supports monetary financing in proper amounts and under proper conditions to boost aggregate nominal demand, but not so far as to spur fully-fledged monetary policy capture in the process.
4. Murshed, Muntasir, Haider Mahmood, Tarek Tawfik Yousef Alkhateeb, and Mohga Bassim. "The Impacts of Energy Consumption, Energy Prices and Energy Import-Dependency on Gross and Sectoral Value-Added in Sri Lanka." Energies 13, no. 24 (December 12, 2020): 6565. http://dx.doi.org/10.3390/en13246565.

Abstract:
Drifting away from the neoclassical conjecture that economic growth depends solely on capital and labor inputs, this paper aimed to evaluate the dynamic impacts of energy consumption, energy prices and imported-energy dependency on both the gross and the sectoral value-added figures of Sri Lanka. The analysis used robust econometric methods that account for structural breaks in the data. The results, in a nutshell, indicated that energy consumption homogeneously contributes to gross, agricultural, industrial and services value-addition in Sri Lanka. However, positive oil price shocks and greater shares of imported energy in total energy consumption are found to dampen growth, especially in the context of gross, industrial and services value addition. The joint growth-inhibiting impacts of oil price movements and energy import-dependency are also ascertained. On the other hand, the causality estimates reveal bidirectional causal associations between energy consumption and gross value-added and between energy consumption and industrial value-added. In contrast, no causal impact of energy consumption on agricultural and services value-added is evidenced. Hence, these findings carry key policy implications for energy policy reforms aimed at ensuring that the economic growth performance of Sri Lanka is sustained in the future.
5. Zhang, Ying, and Bi Wen Shen. "Quoted Risk Cost Study Based on AHP." Advanced Materials Research 255-260 (May 2011): 3943–47. http://dx.doi.org/10.4028/www.scientific.net/amr.255-260.3943.

Abstract:
Construction bidding means agreeing a fixed-price contract for future work first and completing the transaction later, which makes bidding a high-risk activity. Quoted risk is objective, uncertain and loss-bearing, so a bidder must assess it correctly in order to manage a project successfully. The article builds a unit-price risk assessment model based on AHP: it gathers and organizes subjective judgments, ascertains hierarchic layers according to correlation and subordination relationships, forms a layered system of risk factors, derives the judgment matrix and weight vector, quantifies the risk loss of every subproject, and obtains the quoted risk ratio. This ensures that bidders can find the critical risk-control points and trace each risk back to the subproject where it arises, so they can manage risk actively and prospectively. The model is illustrated with the risk assessment of a residential-building bid in Hangzhou.
6. Baaquie, Belal E. "Statistical microeconomics and commodity prices: theory and empirical results." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 374, no. 2058 (January 13, 2016): 20150104. http://dx.doi.org/10.1098/rsta.2015.0104.

Abstract:
A review is made of the statistical generalization of microeconomics by Baaquie (Baaquie 2013 Phys. A 392, 4400–4416. (doi:10.1016/j.physa.2013.05.008)), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is given by the unequal-time correlation function and is modelled by the Feynman path integral based on an action functional. The correlation functions of the model are defined using the path integral. The existence of the action functional for commodity prices that was postulated in Baaquie (2013) has been empirically ascertained in Baaquie et al. (Baaquie et al. 2015 Phys. A 428, 19–37. (doi:10.1016/j.physa.2015.02.030)). The model's action functionals for different commodities have been empirically determined and calibrated using the unequal-time correlation functions of the market commodity prices via a perturbation expansion (Baaquie et al. 2015). Nine commodities drawn from the energy, metal and grain sectors are empirically studied, and their auto-correlation for up to 300 days is described by the model to an accuracy of R² > 0.90, using only six parameters.
7. Falatoonzadeh, Hamid, J. Richard Conner, and Rulon D. Pope. "Risk Management Strategies to Reduce Net Income Variability for Farmers." Journal of Agricultural and Applied Economics 17, no. 1 (July 1985): 117–30. http://dx.doi.org/10.1017/s0081305200017131.

Abstract:
The most useful and practical strategy available for reducing the variability of net farm income is ascertained. Of the many risk management tools presently available, five of the most commonly used are simultaneously incorporated in an empirically tested model. Quadratic programming provides the basis for decision-making in risk management, wherein expected utility is assumed to be a function of the mean and variance of net income. Results demonstrate that farmers can reduce production and price risks when a combination strategy including a diversified crop production plan and participation in the futures market and the Federal Crop Insurance Program (FCIP) is implemented.
8. Kaur, Prabhdeep, and Jaspal Singh. "Impact of ETF Listing on the Returns Generated by Underlying Stocks: Indian Evidence." Management and Labour Studies 46, no. 3 (February 28, 2021): 263–88. http://dx.doi.org/10.1177/0258042x21991015.

Abstract:
The advent of exchange-traded funds (ETFs) has rendered index trading much more affordable than their futures counterparts. The present study examines the impact of ETF listing on the prices of the constituent securities of the index the ETF aims to track. The sample comprises all equity ETFs listed in India from 1 January 2002 to 31 March 2019. Event study analysis has been used to examine whether the listing of ETFs bore any price impact on the constituent stocks of ETFs; to ensure robustness, both parametric and non-parametric tests have been employed. The estimates obtained from the event study analysis revealed that the constituent stocks generated insignificant returns for the periods January 2002 to March 2009 and April 2009 to March 2013, but positive and significant cumulative average abnormal returns (CAARs) post ETF listing for the period April 2013 to March 2019, thus providing evidence of a positive price impact. The permission granted to pension funds, insurers and the Employees' Provident Fund Organisation (EPFO) to invest their funds in ETFs, as well as the reduction in the Securities Transaction Tax (STT), account for the observed price differential. An analysis of the factors accounting for the variation in valuation effects ascertained that stocks that were thinly traded prior to ETF listing, and those forming part of ETFs with a larger asset base, experienced a positive price impact following ETF listing. JEL Codes: G11, G14
9. Amuda Yusuf, Ganiyu, and Sarajul Fikri Mohamed. "Perceived benefits of adopting standard-based pricing mechanism for mechanical and electrical services installations." Construction Economics and Building 14, no. 2 (June 3, 2014): 104–19. http://dx.doi.org/10.5130/ajceb.v14i2.3864.

Abstract:
Cost is an important measure of project success, and clients expect a reliable forecast at the early stage of construction projects to inform their business decisions. This study investigated current practices in managing the cost of mechanical and electrical (M&E) services in buildings. The perceptions of practitioners on the benefits of adopting a standard-based pricing mechanism (SBPM) for M&E services, as is used for building fabrics and finishes, were ascertained. The methodology adopted for the study comprised semi-structured interviews and a questionnaire survey; inferential statistical techniques were used to analyse the data collected. The results revealed that M&E services tender documents are often based on lump-sum contracts. Practitioners are of the opinion that the adoption of an SBPM could enhance the quality of M&E services price forecasts; ensure active post-contract cost monitoring and control; encourage collaborative working relationships; enhance efficient whole-life-cycle cost management; improve risk management; and facilitate an efficient tendering process. The study suggested the development of a local Standard Method of Measurement for M&E services and proposed strategies to facilitate the adoption of the SBPM as a basis for forecasting the contract price of mechanical and electrical services in buildings.
10. Javadi, Masoumeh, Mousa Marzband, Mudathir Funsho Akorede, Radu Godina, Ameena Saad Al-Sumaiti, and Edris Pouresmaeil. "A Centralized Smart Decision-Making Hierarchical Interactive Architecture for Multiple Home Microgrids in Retail Electricity Market." Energies 11, no. 11 (November 14, 2018): 3144. http://dx.doi.org/10.3390/en11113144.

Abstract:
The principal aim of this study is to devise a combined market operator and distribution network operator structure for multiple home-microgrids (MH-MGs) connected to an upstream grid. There are three distinct types of players with opposing intentions that can participate as consumers and/or prosumers (as buyers or sellers) in the market. All price-making players can compete with each other to obtain the greatest possible profit, while consumers aim to minimize the market-clearing price. For modeling the interactions among participants and implementing this comprehensive structure, a multi-objective problem is solved using static, non-cooperative game theory. The proposed structure is a hierarchical bi-level controller, and its performance in the optimal control of MH-MGs with distributed energy resources has been evaluated. The outcome of this algorithm provides the best and most suitable power allocation among the different players in the market while satisfying each player's goals. Furthermore, the amount of profit gained by each player is ascertained. Simulation results demonstrate a 169% increase in the total payoff compared to the imperialist competition algorithm, which proves the effectiveness, extensibility and flexibility of the presented approach in encouraging participants to join the market and boost their profits.

Dissertations / Theses on the topic "Ascertained price"

1. Kvašňovský, Tomáš. "Rozdíl mezi cenou zjištěnou rekreační chaty a rekreačního domku o stejné velikosti v Moravskoslezském kraji." Master's thesis, Vysoké učení technické v Brně, Ústav soudního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-233047.

Abstract:
The master's thesis defines basic terms and selected pricing methods, and also defines buildings used for family recreation. The main part of the thesis concentrates on the description of selected buildings for individual recreation and their setting in built-up and unbuilt areas in three locations of the Moravian-Silesian Region. The individual locations are described and their values determined; a comparison and evaluation of the pricing results then follows.
2. Formanová, Barbora. "Vliv lokality na výši obvyklé ceny bytu v Břeclavi." Master's thesis, Vysoké učení technické v Brně, Ústav soudního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-402094.

Abstract:
This diploma thesis deals with the influence of locality on the usual price of a flat in the town of Břeclav. First, the basic relevant terms related to the valuation of immovable property are defined and the individual valuation methods are described. The next part formulates the problems related to the stated goal of the thesis, including a proposed approach and hypotheses. An analysis of the territory of the town of Břeclav is then carried out, focusing on civic amenities and urban development; this section also describes the selected methods for valuing housing units. The following section is devoted to descriptions of the selected housing units and to their actual valuation by the comparative method according to the price regulation and by the direct comparison method, including an expert estimate of their price. The last part is devoted to a recapitulation and discussion of all resulting prices determined in this work, together with an evaluation of the impact of the selected localities on the estimated usual price of the valued apartments.
3. Petrovičová, Lucia. "Srovnání postupu ocenění rodinných domů v ČR a SR." Master's thesis, Vysoké učení technické v Brně, Ústav soudního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-318556.

Abstract:
The topic of this diploma thesis is a comparison of the valuation methods for family houses in the Czech and Slovak Republics, the legislation governing valuation in both countries, and the definition of particular terms: family house, value and price. The basic methods for the valuation of immovable property used in the Czech and Slovak Republics are described. An important part of the thesis is an analysis of the real estate market, on the basis of which two comparable localities were chosen. The valuation methods were then applied to specific cases: the chosen family houses were valued at the usual price, time price and ascertained price. The result of the thesis is a comparison of the valuation methods used in the Czech and Slovak Republics.
4. Vondrová, Monika. "Analýza vybraných faktorů ovlivňujících zjištěnou a obvyklou cenu bytů v Ústí nad Orlicí." Master's thesis, Vysoké učení technické v Brně, Ústav soudního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-232630.

Abstract:
The aim of this master's thesis is the valuation of 11 flat-type real estate properties in Ústí nad Orlicí by selected valuation methods. Subsequently, these methods are compared and selected factors influencing the ascertained price and the usual price are analysed.

Book chapters on the topic "Ascertained price"

1. Weich, Scott, and Martin Prince. "Cohort studies." In Practical Psychiatric Epidemiology, 155–76. Oxford University Press, 2003. http://dx.doi.org/10.1093/med/9780198515517.003.0009.

Abstract:
A cohort study is one in which the outcome (usually disease status) is ascertained for groups of individuals defined on the basis of their exposure. At the time exposure status is determined, all must be free of the disease. All eligible participants are then followed up over time. Since exposure status is determined before the occurrence of the outcome, a cohort study can clarify the temporal sequence between exposure and outcome, with minimal information bias. The historical and the population cohort study (Box 9.1) are efficient variants of the classical cohort study described above, which nevertheless retain the essential components of the cohort study design. The exposure can be dichotomous [i.e. exposed (to obstetric complications at birth) vs. not exposed], or graded as degrees of exposure (e.g. no recent life events, one to two life events, three or more life events). The use of grades of exposure strengthens the results of a cohort study by supporting or refuting the hypothesis that the incidence of the disease increases with increasing exposure to the risk factor; a so-called dose–response relationship. The essential features of a cohort study are:
♦ participants are defined by their exposure status rather than by outcome (as in the case–control design);
♦ it is a longitudinal design: exposure status must be ascertained before outcome is known.

The classical cohort study
In a classical cohort study participants are selected for study on the basis of a single exposure of interest. This might be a relatively rare occupational exposure, such as ionizing radiation (through working in the nuclear power industry). Care must be taken in selecting the unexposed cohort; perhaps those working in similar industries, but without any exposure to radiation. The outcome in this case might be leukaemia. All those in the exposed and unexposed cohorts would need to be free of leukaemia (hence 'at risk') on recruitment into the study. The two cohorts would then be followed up for (say) 10 years and the rates at which they develop leukaemia compared directly. Classical cohort studies are rare in psychiatric epidemiology. This may be in part because this type of study is especially suited to occupational exposures, which have previously been relatively little studied as causes of mental illness. However, this may change as the high prevalence of mental disorders in the workplace and their negative impact upon productivity are increasingly recognized. The UK Gulf War Study could be taken as one rather unusual example of the genre (Unwin et al. 1999). Health outcomes, including mental health status, were compared between those who were deployed in the Persian Gulf War in 1990–91, those who were later deployed in Bosnia, and an 'era control group' who were serving at the time of the Gulf war but were not deployed. There are two main variations on this classical cohort study design; they are popular as they can, depending on circumstances, be more efficient than the classical cohort design.

The population cohort study
In the classical cohort study, participants are selected on the basis of exposure, and the hypothesis relates to the effect of this single exposure on a health outcome. However, a large cohort or panel of subjects is sometimes recruited and followed up, often over many years, to study multiple exposures and outcomes. No separate comparison group is required, as the comparison group is generally an unexposed sub-group of the panel. Examples include the British Doctors Study, in which over 30,000 British doctors were followed up for over 20 years to study the effects of smoking and other exposures on health (Doll et al. 1994), and the Framingham Heart Study, in which residents of a town in Massachusetts, USA have been followed up for 50 years to study risk factors for coronary heart disease (Wolf et al. 1988). The Whitehall and Whitehall II studies in the UK (Fuhrer et al. 1999; Stansfeld et al. 2002) were based again on an occupationally defined cohort, and have led to important findings concerning workplace conditions and both physical and psychiatric morbidity. Birth cohort studies, in which everyone born within a certain chronological interval is recruited, are another example of this type of study. In birth cohorts, participants are commonly followed up at intervals of 5–10 years. Many recent panel studies in the UK and elsewhere have been funded on condition that investigators archive the data for public access, in order that the dataset might be more fully exploited by the wider academic community. Population cohort studies can test multiple hypotheses, and are far more common than any other type of cohort study. The scope of the study can readily be extended to include mental health outcomes. Thus, both the British Doctors Study (Doll et al. 2000) and the Framingham Heart Study (Seshadri et al. 2002) have gone on to report on aetiological factors for dementia and Alzheimer's disease as the cohorts passed into the age groups most at risk for these disorders. A variant of the population cohort study is one in which those who are prevalent cases of the outcome of interest at baseline are also followed up, effectively as a separate cohort, in order (a) to study the natural history of the disorder by estimating its maintenance (or recovery) rate, and (b) to study risk factors for maintenance (non-recovery) over the follow-up period (Prince et al. 1998).

Historical cohort studies
In the classical cohort study outcome is ascertained prospectively: new cases are ascertained over a follow-up period, after the exposure status has been determined. However, it is possible to ascertain both outcome and exposure retrospectively. This variant is referred to as a historical cohort study (Fig. 9.1). A good example is the work of David Barker in testing his low birth weight hypothesis (Barker et al. 1990; Hales et al. 1991). Barker hypothesized that risk for midlife vascular and endocrine disorders would be determined to some extent by the 'programming' of the hypothalamo-pituitary axis through foetal growth in utero. Thus 'small for dates' babies would have higher blood pressure levels in adult life, and greater risk for type II diabetes (through insulin resistance). A prospective cohort study would have recruited participants at birth, when exposure (birth weight) would be recorded, and followed them up over four or five decades to examine the effect of birth weight on the development of hypertension and type II diabetes. Barker took the more elegant (and feasible) approach of identifying hospitals in the UK where, several decades previously, birth records had been meticulously recorded. He then traced the babies as adults (where they still lived in the same area) and measured directly their status with respect to outcome. The 'prospective' element of such studies is that exposure was recorded well before outcome, even though both were ascertained retrospectively with respect to the timing of the study.

The historical cohort study has also proved useful in psychiatric epidemiology, where it has been used in particular to test the neurodevelopmental hypothesis for schizophrenia (Jones et al. 1994; Isohanni et al. 2001). Jones et al. studied associations between adult-onset schizophrenia and childhood sociodemographic, neurodevelopmental, cognitive, and behavioural factors in the UK 1946 birth cohort: 5,362 people born in the week 3–9 March 1946, and followed up intermittently since then. Subsequent onsets of schizophrenia were identified in three ways:
(a) routine data: cohort members were linked to the register of the Mental Health Enquiry for England, in which mental health service contacts between 1974 and 1986 were recorded;
(b) cohort data: hospital and GP contacts (and the reasons for these contacts) were routinely reported at the intermittent resurveys of the cohort;
(c) all cohort participants identified as possible cases of schizophrenia were given a detailed clinical interview (Present State Examination) at age 36.
Milestones of motor development were reached later in cases than in non-cases, particularly walking. Cases also had more speech problems than non-cases. Low educational test scores at ages 8, 11, and 15 years were a risk factor. A preference for solitary play at ages 4 and 6 years predicted schizophrenia. A health visitor's rating of the mother as having below-average mothering skills and understanding of her child at age 4 years was a predictor of schizophrenia in that child. Jones concluded: 'differences between children destined to develop schizophrenia as adults and the general population were found across a range of developmental domains. As with some other adult illnesses, the origins of schizophrenia may be found in early life'. Jones' findings were largely confirmed in a very similar historical cohort study in Finland (Isohanni et al. 2001), a 31-year follow-up of the 1966 North Finland birth cohort (n = 12,058). Onsets of schizophrenia were ascertained from a national hospital discharge register. The ages at learning to stand, walk and become potty-trained were each related to the subsequent incidence of schizophrenia and other psychoses. Earlier milestones reduced, and later milestones increased, the risk in a linear manner. These developmental effects were not seen for non-psychotic outcomes. The findings support hypotheses regarding psychosis as having a developmental dimension, with precursors apparent in early life. There are many conveniences to this approach for the contemporary investigator:
♦ The exposure data has already been collected for you.
♦ The follow-up period has already elapsed.
♦ The design maintains the essential feature of the cohort study, namely that information bias with respect to the assessment of the exposure should not be a problem.
♦ As with the Barker hypothesis example, historical cohort studies are particularly useful for investigating associations across the life course, when there is a long latency between hypothesized exposure and outcome.
Despite these important advantages, such retrospective studies are often limited by reliance on historical data that were collected routinely for other purposes; often these data will be inaccurate or incomplete. Also, information about possible confounders, such as smoking or diet, may be inadequate.

Conference papers on the topic "Ascertained price"

1. Yılmazcan, Dilek, and Cansu Dağ. "Financial Regulations in the Field of Energy Policies." In International Conference on Eurasian Economies. Eurasian Economists Association, 2018. http://dx.doi.org/10.36880/c10.02036.

Abstract:
The goals set by governments in the energy field can be various, and financial regulations also vary depending on the geopolitical location, resources, economic structure and other prioritized policies of each country. Modern energy policies basically prioritize energy safety, efficiency, diversity and environmental friendliness. In this study, financial regulations in the energy field around the world are analyzed and their impact is ascertained. The energy end-user price is calculated by taking taxes, CO2 emission pricing and subsidies into account. CO2 emission pricing, resulting from emission caps and trading or from carbon taxes, affects investment decisions in the energy industry by changing the costs of competing sources. In addition, the major types of energy subsidies are fossil-fuel subsidies and renewable energy subsidies. The financial policy tools in the EU can be listed as energy taxation, the EU emission trading system and incentives for renewable energy. Legal regulations affecting the energy field in Turkey can be examined in three categories: energy taxation, tax expenditures and support mechanisms. Tax expenditures and support mechanisms covering tax exemptions, exceptions, reductions and similar practices in the energy field are provided to both producers and consumers. Ultimately, the effectiveness of energy policies depends on the decisions of many industries and individuals, especially in transportation, industry and housing. The regulations discussed in this study will be the most important tool for guiding the rational choices of agents regarding generation, distribution, consumption and savings, provided they are planned in accordance with energy policies.
2. Majerus, J. N., D. A. Tenney, M. L. Mimnagh, S. P. Lamphear, and J. A. Jannone. "Blending Hierarchical Economic Decision Matrices (EDM) With FE and Stochastic Modeling: II — Detailing EDM." In ASME 1996 Design Engineering Technical Conferences and Computers in Engineering Conference. American Society of Mechanical Engineers, 1996. http://dx.doi.org/10.1115/96-detc/cie-1617.

Abstract:
The purpose of the Detailing EDM is to "fine-tune" a previously ameliorated design. First, in order to determine the best fillet radius for formability, forging simulations are conducted using commercial software. Once the fillet dimension is ascertained, the 3-D model is generated and the maximum stresses are determined for a trial force. Two different commercial programs were used to determine the three-dimensional stresses. The only statistical quantities involved are the loading (Gaussian distribution), the material "strength" in the Analytical Criteria for Failure (ACF), and possibly the boundary conditions in the 3-D models. This paper takes the ACF to be the resistance to fatigue-fracture under complete reversal of loads at 5 × 10⁸ cycles. The paper overviews three different methods of combining stochastic behaviour with FE analysis, and presents a methodology for using the interference method with non-symmetrical distributions. The EDM are then presented for the three product criteria of forging: formability, prime cost and 3-D reliability with respect to the selected ACF.
3. Nordin, N. "Concept+ 2.0 - The Revolution of Small and Marginal Fields." In Digital Technical Conference. Indonesian Petroleum Association, 2020. http://dx.doi.org/10.29118/ipa20-f-287.

Abstract:
The development of small and marginal fields is becoming progressively more important in mature oil and gas provinces. In the low oil price environment that began in 2015, the challenging economic climate required going beyond conventional boundaries to develop small and marginal fields; hence a concept was tested further through the "Concept+ Workflow". In 2019, the "Concept+2.0 Workflow" was introduced, incorporating Design to Value (D2V) principles. Among the elements in the "Concept+2.0 Workflow" is design replication through the "Design One Build Many (D1BM)" concept, with the primary focus on one standardized, fit-for-purpose topsides design. With the right mindset, such a development is not only technically feasible but also economically attractive and value-creative. The first element of this approach is the Lightweight Structure (LWS) with a targeted topsides weight; a long-term EPCIC contract is established based on the generic D1BM fit-for-purpose functional facilities design to provide economies of scale. The second element is Target Cost Setting and a Target Schedule Ready for Rig (RFR), which ascertains the EPCIC contractor's capability to compete and provides the optimum Design to Value approach for the development. Throughout, an integrated approach and disciplined project management have been essential for maintaining cost-effective and timely execution of Concept+ as well as Concept+2.0. Results have clearly illustrated the success of this approach in Concept+, leading to the successful execution of various small and marginal fields as well as the capture of best practices and lessons learnt to be emulated. This paper reviews the above-mentioned applications and highlights the seamless integration of the efforts of various departments and multi-disciplinary teams.
4. Richards, S., and H. Perez-Blanco. "Mitigating the Variability of Wind and Solar Energy Through Pumped Hydroelectric Storage." In ASME Turbo Expo 2012: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/gt2012-68121.

Abstract:
Renewable power production is both variable and difficult to forecast accurately, which can make its integration into an electric grid problematic. If an area's demand for electricity can be met without using renewable generation, the addition of renewable generation does not warrant a further increase in generation capacity. However, to effectively integrate large amounts of additional renewable generation, a more flexible generation fleet will likely be required. One way of increasing a generation fleet's flexibility is through the adoption of pumped hydroelectric storage (PHS; see the glossary for definitions of select terms). Like traditional hydropower generation, PHS is capable of quickly varying its power output, but it is also capable of operating in reverse to store excess energy for later use. This paper addresses many of the operational aspects of combining PHS, which is currently used to store excess energy from traditional generators, with wind and solar power generation. PJM, a grid operator in the Middle Atlantic States, defines the capacity value of renewable generation as the percentage of installed generating capacity that the generator can reliably contribute during summer peak hours. Existing wind generators inside PJM have an average capacity value of 13% and existing solar generators a capacity value of 38%. The chief reason for these capacity values is that renewable power production does not usually coincide with the hours of peak electricity demand during the summer. If PHS were used to firm renewable power generation, it would translate into increased utilization of renewable generation, displacing the least efficient and most costly generators. A computer model with one-minute granularity was constructed in order to study the operational requirements of PHS facilities. PJM electricity demand, power prices and wind power production data for 2010 were used in conjunction with NREL-simulated solar power production as input to the model. Various PHS operational strategies are being tested to ascertain their effectiveness at firming and time-shifting renewable generation. Preliminary results show the profound effects of increased penetration of renewable energy on an electric grid. The results also demonstrate a niche for even greater PHS operational flexibility, i.e. variable-speed or unidirectional ternary machine (UTM) PHS.
5. Khalid, Ali, Qasim Ashraf, Khurram Luqman, Ayoub Hadj Moussa, Agha Ghulam Nabi, and Umair Ahmed Baig. "Precise Bottom Hole Pressure Management to Reach Target Depth in a Narrow Windowed Ultra HP-HT Well: A Case for Automated Managed Pressure Drilling." In International Petroleum Technology Conference. IPTC, 2021. http://dx.doi.org/10.2523/iptc-21410-ms.

Abstract:
With the energy sector in crisis the world over, oil and gas operators continue to seek more effective and efficient methods of reaching potential prospects. With sharply declining oil prices, it is imperative that operators minimize non-productive time in the drilling of all wells, and many are actively pursuing riskier exploration to establish a strong foothold in this volatile market. One such area of interest is HPHT and beyond wells. An HPHT prospect carries high-risk, high-reward potential, so newer and more advanced methods are being deployed to successfully drill and complete HPHT wells. The Makran coastal belt in southwestern Pakistan is one such area, containing a potential ultra-HPHT prospect. Operators had attempted to drill about 9 wells in the locality but never managed to reach target depth, as drilling operations were plagued by a large number of problems: high-pressure influxes, stuck pipe while controlling influxes, circulation losses with high mud weights and ECDs, differential sticking against permeable formations, inefficient bottom-hole pressure control due to mud-weight reduction at high temperatures, and swabbing from the formation due to an insufficient trip margin. The operator faced an extremely narrow drilling window in the target section: the maximum formation pressure was estimated at around 2.29 SG, while the maximum fracture pressure of the formation was estimated at around 2.35 SG in EMW. It was abundantly clear that drilling with a conventional mud system would be impossible and impractical on all forthcoming wells. As it was of paramount importance to precisely manage the wellbore pressure profile, the operator decided to apply managed pressure drilling (MPD) on a candidate well. By applying MPD techniques, the operator expected to drill the section with an underbalanced mud weight and manoeuvre the bottom-hole pressure just above the pore pressure line, thereby avoiding circulation losses; to detect influxes early and control them without ever needing to shut in the well; to account for mud density variations with temperature by executing an advanced thermal hydraulics model in real time; to mitigate swabbing from the formation, again by maintaining a constant bottom-hole pressure while tripping; and finally to ascertain the downhole pressure environment by conducting dynamic formation pressure tests. The successful application of MPD enabled the operator to reach target depth for the first time in the history of the area. The paper studies the planning, design and execution of MPD on the subject well.
6. Yaïci, Wahiba, and Hajo Ribberink. "Feasibility Study of Medium- and Heavy-Duty Compressed Renewable/Natural Gas Vehicles in Canada." In ASME 2020 14th International Conference on Energy Sustainability. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/es2020-1617.

Abstract:
Concerns about environmental degradation and finite natural resources necessitate cleaner sources of energy for the transportation sector. In Canada, natural gas is currently being appraised as a potential alternative fuel for medium- and heavy-duty vehicles owing to its relatively low cost compared with conventional fuels. Compressed natural gas vehicles (CNGVs) are being mooted as inexpensive for fleet owners, particularly because they could significantly reduce harmful emissions into the environment. A short feasibility study was conducted to ascertain the potential for reduced emissions and the savings opportunities presented by CNGVs in both medium- and heavy-duty applications. The study discussed in the present paper examined long-haul trucking and refuse trucks, respectively, with emphasis on individual vehicle operating economics and emissions reduction, and on the identification of practical considerations both for the individual application and for CNGVs as a whole. A financial analysis of the annual cost savings achievable when an individual diesel vehicle is replaced with a CNG vehicle is also presented. This paper drew substantial references from published case studies for relevant data on maintenance costs, fuel economy, range and annual distance travelled. It relied on a summary report from Argonne National Laboratory's GREET (Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation) [1] for its discussion of relative fuel-efficiency penalties for heavy-duty CNGVs. The fuel cost figures were mostly drawn from motor fuel data of the Ontario Ministry of Transportation, one of the few available sources of compressed natural gas fuel prices. Finally, the GHGenius life-cycle analysis tool [2] was employed to determine fuel-cycle emissions in Canada for comparison purposes. The study produced remarkable findings. Results showed that, compared with diesel-fuelled vehicles, emissions from CNG heavy- and medium-duty vehicles were reduced by up to 8.7% (well-to-pump) and 11.5% (pump-to-wheels), respectively. Overall, the most beneficial application appeared to be long-haul trucking, based on the long distances covered and the higher fuel economy achieved (derived from economies of scale), while refuse trucks appeared to have relatively marginal annual savings. However, these annual savings are a conservative estimate that will ultimately be modified by a number of factors likely to be predisposed in favour of natural gas vehicles. Significantly, the prospect of using renewable natural gas as a fuel was found to improve the value proposition of refuse trucks in particular, certainly from an emissions standpoint, with a reduction of up to 100%, and speculatively from operational savings as well.
