Dissertations / Theses on the topic 'Marginal cost analysis'

Consult the top 29 dissertations / theses for your research on the topic 'Marginal cost analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Koslowski, Frank Johannes. "Assessing marginal abatement cost for greenhouse gas emissions from livestock production in China and Europe : accounting for uncertainties." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/25435.

Full text
Abstract:
Climate change is probably the most challenging threat to mankind. International agreements have acknowledged that anthropogenic GHG emissions must be reduced significantly to adhere to a maximum global warming of 2°C. The livestock sector plays a key role in achieving this target, as it is a significant source of GHG emissions. While the livestock sector offers significant GHG reduction potential, it is currently neglected in international and national mitigation efforts. Therefore, scientific research must guide mitigation policy decisions with evidence of cost-efficient abatement potential that can be achieved through various mitigation technologies. Marginal Abatement Cost Curves (MACC) are an analytical tool for informing policy makers about the cost-effectiveness (CE) of mitigation. MACCs provide a relatively clear representation of a complicated issue: their graphical design prioritises the various mitigation options in terms of their CE of abatement and enables assessment of total GHG reduction under a budget constraint. However, developing a MACC involves considerable data collection, depends on various interdisciplinary information sources, and the methodology is subject to several limitations. These factors can result in uncertainties in marginal abatement cost (MAC) results, the assessment of which is often neglected in the MACC literature. This research shows the main GHG emission sources in livestock production and possible mitigation options to reduce GHG emissions from these sources. After elaborating the MACC methodology, the advantages, disadvantages and limitations of the engineering MACC are shown. This clarifies the relevance of assessing and reporting the uncertainty of MACCs. Two engineering MACCs are developed that show the CE abatement potentials available in the Chinese livestock sector and the European Union 15 (EU-15) dairy sector in 2020, with emphasis on dietary mitigation options. The need to assess the CE of abatement for individual mitigation options is highlighted by separate derivation of technical and economic abatement potential for the EU-15 dairy sector. For the Chinese MACC a scenario analysis (SA), and for the European MACC a Monte Carlo (MC) simulation, are utilised to show the relevance of assessing uncertainty in MACCs. To provide further evidence, the overall range of CE estimates for eight mitigation options found in the relevant MACC literature is presented. This allows the generation of probability distribution functions of CE for each mitigation option with kernel density estimation (KDE). The results from this study show the significance of livestock and dairy production related GHG emissions in China and Europe, respectively. In China, baseline GHG emissions of livestock production are projected to increase significantly, while those of EU-15 dairy production are predicted to decrease by 2020. It was found that enteric fermentation is the largest GHG emission source from dairy production and should be the focus of mitigation policies. Both case studies identified mitigation options that offer abatement potential at high CE. Priority should be given to biomass gasification, breeding techniques and feed supplements such as tea saponins and probiotics for the Chinese livestock sector, and to animal selection, reduced tillage and dietary probiotics for the EU-15 dairy sector.
The scenario analysis reveals that mid-term projections for the Chinese livestock sector vary strongly, and utilising key variables from different projections has a significant impact on MAC results, changing the ranking of the mitigation options. The MC simulation shows the contribution of some model inputs to the uncertainty of abatement at negative cost, and a high model output uncertainty regarding the measures' CE for most mitigation options. However, the ranking of the mitigation options remains stable. The range of MAC estimates for the eight mitigation options in the agricultural sector is large, and variables like 'study quality' or 'study location' do not change this. The KDE was further used to rank the mitigation options based on their probability of being reported as cost-negative, and shows that measures affecting soil N2O and carbon sequestration are reported to be more cost-efficient than measures focusing on manure management. Based on these findings, the impact of study designs on MAC estimates and the lack of communication of uncertainty in the MACC literature are discussed. Uncertainties underpinning MACC results can have significant impacts on CE and abatement potentials. To increase the utilisation of MACCs by knowledge users, MACC research must prioritise the assessment, quantification and reporting of uncertainties, compare results within the scientific literature, and publish the data and assumptions behind the MACC transparently.
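As a rough illustration of the MACC logic described above — ranking measures by their cost-effectiveness and accumulating abatement under a carbon-price budget constraint — here is a minimal sketch; the measure names and all numbers are invented, not taken from the thesis.

```python
# Minimal engineering-MACC sketch: rank measures by cost per tonne CO2e abated
# (negative = net saving), then accumulate abatement along the curve.
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    cost_per_tco2e: float   # EUR per tonne CO2e abated
    abatement_mt: float     # abatement potential, Mt CO2e per year

measures = [
    Measure("feed supplement", -12.0, 1.8),
    Measure("animal selection", -5.0, 2.5),
    Measure("anaerobic digestion", 18.0, 3.1),
    Measure("manure management", 42.0, 1.2),
]

measures.sort(key=lambda m: m.cost_per_tco2e)   # cheapest abatement first
cumulative = 0.0
for m in measures:
    cumulative += m.abatement_mt
    print(f"{m.name:22s} {m.cost_per_tco2e:7.1f} EUR/tCO2e   "
          f"cumulative {cumulative:4.1f} MtCO2e")

# Total abatement available at or below a given carbon-price budget:
budget = 20.0   # EUR/tCO2e
print(sum(m.abatement_mt for m in measures if m.cost_per_tco2e <= budget),
      "MtCO2e at <=", budget, "EUR/tCO2e")
```

Uncertainty analyses of the kind the thesis performs would then perturb cost_per_tco2e and abatement_mt (scenario sets or Monte Carlo draws) and examine how the ranking and the cumulative curve shift.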
2

Thureson, Disa. "Cost-Benefit Analysis of climate policy and long term public investments." Doctoral thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-48241.

Full text
Abstract:
This compilation dissertation consists of four essays with the common theme of welfare analysis of long-term public investments. The first two essays focus on the analysis of climate change mitigation, i.e., the social cost of carbon dioxide. The third essay focuses on cost-benefit analysis (CBA) of transport investment projects, while the last essay takes a broader perspective on welfare analysis.

Essay 1: The Temporal Aspects of the Social Cost of Greenhouse Gases. The purpose of Essay 1 is to investigate the temporal aspects of the social cost of greenhouse gases. I find that the calculation period should ultimately be modeled to be consistent with the discount rate, and that the "global-warming potential" concept is unsuitable for calculating the social cost of GHGs other than carbon dioxide.

Essay 2: Avoiding path dependence of distributional weights: Lessons from climate change economic assessments. In Essay 2, I explore shortcomings in income weighting in the evaluation of climate change policy. In short, in previous versions of two of the most important existing models, regional economic growth is double counted. The proposed alternative approaches yield about 20–40% higher values of SCCO2 than the old approach.

Essay 3: Does uncertainty make cost-benefit analyses pointless? In Essay 3, the aim is to investigate to what extent CBA improves the selection of projects when uncertainties are taken into account, using a simulation-based approach on real data on infrastructure investments. The results indicate that, in line with previous literature, CBA is a rather robust tool and considerably increases the quality of decision making compared with a random selection mechanism, even when high levels of uncertainty are considered.

Essay 4: Household Production and the Elasticity of Marginal Utility of Consumption. In Essay 4, I develop a new model to show that the omission of household production in a previous model leads to bias when the elasticity of marginal utility of consumption, EMUC, is estimated. I further offer new, unbiased estimates based on current evidence on the included parameters, suggesting a lower bound for EMUC of about 0.9.
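Essay 1's point that the calculation period must be chosen consistently with the discount rate can be made concrete with a toy computation; the damage path and discount rates below are invented for illustration only.

```python
# Toy SCC computation: the social cost of carbon is the discounted sum of the
# marginal damages caused by one extra tonne emitted today. A lower discount
# rate keeps distant damages material, so the calculation horizon must be long
# enough to capture them. Damage numbers are stylised, not estimates.
import numpy as np

years = np.arange(300)                            # calculation period in years
marginal_damage = 0.5 * np.exp(-years / 150.0)    # $/tCO2 per year, invented decay

for r in (0.01, 0.03, 0.05):
    scc = np.sum(marginal_damage * (1.0 + r) ** (-years))
    print(f"discount rate {r:.0%}: SCC ~ {scc:5.1f} $/tCO2")
```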
3

Rodrigues, Paula Fernanda Morais Andrade. "Metodologia para adaptação de curvas de custo marginal de abatimento." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/100/100136/tde-22052018-181657/.

Full text
Abstract:
The Paris Agreement calls on each country to describe and report on its post-2020 climate actions. In this context, Marginal Abatement Cost Curves (MACC) can be useful to countries and decision makers, as they clearly show the cost (in monetary units per mass of CO2e) of implementing greenhouse gas (GHG) emission mitigation technologies and their associated emission reduction potential (in mass of CO2e). They can be used for any jurisdiction, such as a country, city or state. They can also be applied to several areas, such as transportation, buildings, air pollution, agriculture or manufacturing. In view of this diversity of studies and applications, the objective of the present work was to develop a methodology for adapting MACC from studies published in the literature to any jurisdiction or year of interest. This allows these MACC to be "re-used" without the need for new studies. The development of the adaptation methodology was based on a meta-analysis and harmonization of literature data. The methodology was applied to Brazil, considering the industrial cement and steel subsectors. It was also implemented in the Access® software (and called re-MACC) so that the whole MACC adaptation process could be performed automatically. Analyzing a total of 178 low-carbon technologies for the cement and steel subsectors, the results showed that Brazil could have reduced its CO2e emissions by approximately 52.4% in 2014, had it adopted them, generating monetary savings of 1,835 US$/tCO2e. The methodology proved capable of harmonizing data for any jurisdiction or year of interest; however, refinements are needed to make it even more accurate.
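The abstract does not spell out the adaptation arithmetic, but the kind of harmonisation it implies can be sketched as currency, price-level and activity-scale adjustments; every factor below is an invented placeholder and not the thesis's re-MACC procedure.

```python
# Hypothetical harmonisation step: convert a measure's cost and abatement from
# a source study to a target jurisdiction and year. fx, deflator and
# activity_ratio are invented placeholders for illustration.
def adapt_measure(cost, abatement, fx=0.9, deflator=1.18, activity_ratio=2.4):
    """cost in source currency per tCO2e; abatement in MtCO2e per year."""
    return cost * fx * deflator, abatement * activity_ratio

cost, abatement = adapt_measure(-15.0, 0.8)
print(f"adapted measure: {cost:.1f} US$/tCO2e, {abatement:.2f} MtCO2e")
```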
4

Shu, Gary. "Economics and policies for carbon capture and sequestration in the western United States : a marginal cost analysis of potential power plant deployment." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62874.

Full text
Abstract:
Thesis (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division; and, (M.C.P.)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 2010.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 91-94).
Carbon capture and sequestration (CCS) is a technology that can significantly reduce power-sector greenhouse gas (GHG) emissions from coal-fired power plants. CCS technology is currently in development and entails construction and operating costs higher than are currently competitive in the private market. A question that policymakers and investors face is whether a CCS plant, once built, will operate economically and be able to sell its power output. One way of measuring this utilization rate is to calculate capacity factors of possible CCS power plants. To investigate the economics of CCS generation, a marginal cost dispatch model was developed to simulate the power grid in the Western Interconnection. Hypothetical generic advanced coal power plants with CCS were inserted into the power grid, and annual capacity factors were calculated for a variety of scenarios, including a carbon emission pricing policy. I demonstrate that CCS power plants, despite higher marginal costs due to the operating costs of the additional capture equipment, are competitive on a marginal cost basis with other generation on the power grid at modest carbon emissions prices. CCS power plants were able to achieve baseload-level capacity factors at prices of $10 to $30 per ton of CO2. However, for investment in CCS power plants to be economically competitive, the higher capital costs must be recovered over the plant lifetime, which only occurs at much higher carbon prices. To cover the capital costs of first-of-a-kind CCS power plants in the Western Interconnection, carbon emissions prices were calculated to be much higher, in the range of $130 to $145 per ton of CO2 for most sites in the initial scenario. Two sites require carbon prices of $65 per ton of CO2 or less to cover capital costs. Capacity factors and the impact of carbon prices vary considerably by plant location because of differences in spare transmission capacity and local generation mix.
by Gary Shu.
M.C.P.
S.M. in Technology and Policy
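The mechanism the abstract turns on — a carbon price raising each plant's marginal cost in proportion to its emissions rate, which reorders the merit-order dispatch and lifts the CCS plant's capacity factor — can be sketched as follows. The plant data and load profile are invented, not the thesis's Western Interconnection model.

```python
# Stylised merit-order dispatch: at each hourly load, plants run cheapest-first
# using marginal cost + carbon price * emissions rate. The CCS plant's annual
# capacity factor rises with the carbon price. All numbers are invented.
import numpy as np

# (name, marginal cost $/MWh, emissions tCO2/MWh, capacity MW)
plants = [
    ("coal",     25.0, 0.95, 800),
    ("coal+CCS", 38.0, 0.10, 500),
    ("gas CC",   45.0, 0.40, 600),
    ("gas peak", 70.0, 0.55, 400),
]

def capacity_factor(name, co2_price, loads):
    order = sorted(plants, key=lambda p: p[1] + co2_price * p[2])
    cap = {p[0]: p[3] for p in plants}[name]
    served = 0.0
    for load in loads:
        remaining = load
        for plant, _mc, _er, plant_cap in order:
            used = min(plant_cap, remaining)
            remaining -= used
            if plant == name:
                served += used
    return served / (cap * len(loads))

hourly_load = np.random.default_rng(0).uniform(900, 2100, 8760)  # MW, invented
for price in (0, 10, 30):
    print(f"CO2 price ${price:>2}/ton: CCS capacity factor "
          f"{capacity_factor('coal+CCS', price, hourly_load):.2f}")
```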
5

Stewart, Paul Andrew. "Intertemporal Considerations in Supply Offer Development in the wholesale electricity market." Thesis, University of Canterbury. Management, 2007. http://hdl.handle.net/10092/863.

Full text
Abstract:
Over the last 20 years, electricity markets around the world have gradually been deregulated, creating wholesale markets in which generating companies compete for the right to supply electricity, through an offering system. This thesis considers the optimisation of the offering process from the perspective of an individual generator, subject to intertemporal constraints including fuel limitations, correlated rest-of-market behaviour patterns and unit operational decisions. Contributions from the thesis include a Pre-Processing scheme that results in considerable computational benefits for a two-level Dynamic Programming method, in addition to the development of a new process that combines the techniques of Decision Analysis and Dynamic Programming.
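The thesis's two-level dynamic program over offers and intertemporal constraints is far richer than anything that fits here, but the flavour of a fuel-limited offering problem can be shown with a toy DP; all prices, costs and limits are invented.

```python
# Toy intertemporal DP: allocate a limited fuel budget across trading periods
# to maximise profit, given known period prices and a per-period output cap.
from functools import lru_cache

prices = [42.0, 55.0, 38.0, 61.0, 47.0]   # $/MWh in each trading period
fuel_budget = 6                            # fuel units available over the horizon
max_per_period = 2                         # units usable in any one period
cost_per_unit = 30.0                       # marginal cost per fuel unit

@lru_cache(maxsize=None)
def best(t, fuel):
    """Maximum profit from period t onward with 'fuel' units remaining."""
    if t == len(prices) or fuel == 0:
        return 0.0
    return max(q * (prices[t] - cost_per_unit) + best(t + 1, fuel - q)
               for q in range(min(max_per_period, fuel) + 1))

print(f"optimal profit over the horizon: {best(0, fuel_budget):.1f}")
```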
6

Bångman, Gunnel. "Equity in welfare evaluations : The rationale for and effects of distributional weighting." Doctoral thesis, Örebro University, Department of Business, Economics, Statistics and Informatics, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-309.

Full text
Abstract:

This thesis addresses the issue of weighted cost-benefit analysis (WCBA). WCBA is a welfare evaluation model in which income distribution effects are valued by distributional weighting. The method was developed as early as the 1970s. Interest in and applications of this method have increased in the past decade, e.g. in the evaluation of global environmental problems. There are, however, still unsolved problems regarding the application of this method. One such issue is the choice of approach for estimating the distributional weights. The literature on WCBA suggests a couple of approaches, but gives no clues as to which one is most appropriate to use, either from a theoretical or from an empirical point of view. Accordingly, the choice of distributional weights may be an arbitrary one. In the first paper in this thesis, the consequences of the choice of distributional weights for project decisions have been studied. Different sets of distributional weights have been compared across a variety of strategically chosen income distribution effects. The distributional weights examined are those that correspond to the WCBA approaches commonly suggested in the literature on the topic. The results indicate that the choice of distributional weights matters for the ranking of projects only when the income distribution effects concern target populations with low incomes. The results also show that not only the mean income but also the span of incomes of the target population affects the result of the distributional weighting when very progressive non-linear distributional weights are applied. This may cause the distributional weighting to indicate an income distribution effect even though the project effect is evenly distributed across the population.
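The span effect described above can be illustrated numerically. A minimal sketch, assuming distributional weights of the common form w_i = (mean income / income_i)^e, where e is the inequality-aversion parameter; the incomes, benefits and values of e are invented.

```python
# Weighted CBA sketch: an evenly distributed project benefit is aggregated
# under increasingly progressive non-linear weights. With convex weights
# (large e), the weighted total exceeds the unweighted one, so the weighting
# signals a distributional effect even though the benefit is uniform.
import numpy as np

incomes = np.array([12_000.0, 25_000.0, 40_000.0, 90_000.0])
benefits = np.full(4, 100.0)            # identical benefit per household

for e in (0.0, 0.5, 1.0, 2.0):          # e = 0 reproduces unweighted CBA
    weights = (incomes.mean() / incomes) ** e
    print(f"e = {e:3.1f}: weighted net benefit = {np.sum(weights * benefits):7.1f}")
```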

One rationale for distributional weighting, commonly referred to when applying WCBA, is that the marginal utility of income decreases with income. In the second paper, this hypothesis is tested. My study contributes to this literature by employing stated preference data on compensating variation (CV) in a model that is flexible as to the functional form of the marginal utility. The results indicate that the marginal utility of income decreases linearly with income.

Under certain conditions, a decreasing marginal utility of income corresponds to risk aversion. Thus the hypothesis that the marginal utility of income decreases with income can be tested by analysing individuals' behaviour in gambling situations. The third paper examines the role of risk aversion, defined by the von Neumann-Morgenstern expected utility function, in people's concern about the problem of 'sick' buildings. The analysis is based on data on the willingness to pay (WTP) for having the indoor air quality (IAQ) at home examined and diagnosed by experts, and the WTP for acquiring an IAQ at home that is guaranteed to be good. The results indicate that some households are willing to pay for an elimination of the uncertainty about the IAQ at home, even though they are not willing to pay for an elimination of the risks of building-related ill health. The probability of paying for an elimination of the uncertainty about the indoor air quality at home solely because of risk aversion is estimated at 0.3-0.4. Risk aversion seems to be a more common motive among young people for the decision to pay for a diagnosis of the IAQ at home.

Another commonly cited rationale for distributional weighting is the existence of unselfish motives for economic behaviour, such as social inequality aversion or altruism. In the fourth paper, the hypothesis that people have altruistic preferences, i.e. that they care about other people's well-being, is tested. The WTP for a public project that ensures good indoor air quality in all buildings has been measured in three different ways for three randomly drawn sub-samples, capturing different motives for economic behaviour (pure altruism, paternalism and selfishness). The significance of the different questions, and of the different motives, is analysed using an independent-samples test of the mean WTPs of the sub-samples, a chi-square test of the association between the WTP and sample group membership, and an econometric analysis of the decision to pay for the public project. No evidence of altruism, either pure altruism or paternalism, is found in this study.

7

Ajayi, Victor A. "Essays on deregulation in the electricity generation sector." Thesis, Loughborough University, 2017. https://dspace.lboro.ac.uk/2134/27614.

Full text
Abstract:
Over the past three decades, power sector reform has been a key pillar of policy agendas in more than half of the countries across the world. This thesis concerns the empirical investigation of the economic performance of the international electricity generation industry. Drawing on stochastic frontier analysis techniques, the thesis considers the influence of reform as an exogenous factor in shifting frontier technology as well as in shaping the inefficiency function directly, through determinants and heteroscedasticity variables. The first essay uses an extensive panel dataset of 91 countries over the period 1980 to 2010 to measure the impact of deregulation on efficiency and total productivity growth using a stochastic input distance frontier (SIDF). Three specific issues are addressed in the first essay: (1) the relationship between deregulation and technical efficiency, (2) the extent of the rank correlation of the country intercepts with deregulation via their position on the frontier, (3) the trend of total factor productivity and its components. We establish a positive impact of deregulation on efficiency and some compelling evidence that the country intercepts also capture the influence of deregulation apart from efficiency. In particular, the technical efficiency index from the first paper reveals that most OECD European countries are consistently efficient. Building on this finding, the second essay investigates cost efficiency in electricity generation in the OECD power sector while accounting for the impact of electricity market regulatory indicators. Empirical models are developed for the cost function in translog form and analysed using panel data on 25 countries during the period 1980 to 2009. We show that it is necessary to model latent country-specific heterogeneity in addition to time-varying inefficiency. The estimated economies of scale are adjusted to take account of the importance of the quasi-fixed capital input in determining cost behaviour, and adjusted economies of scale are verified for the OECD generation sector. The findings suggest a significant impact of electricity market regulatory indicators on cost. Cost complementarity between generation and emissions is found to be significant, indicating the possibility of reducing emissions without necessarily reducing electricity generation. Finally, the third essay examines the performance of the electric power industry using a consistent state-level electricity generation dataset for the contiguous US states from 1998 to 2014. We estimate stochastic production frontiers for five competing models in order to identify the determinants of technical inefficiency and marginal effects. We find evidence of positive impacts of deregulation on technical efficiency across the models estimated. Our preferred model shows that deregulated states are more efficient in electricity generation than non-deregulated states. The marginal effects show that deregulation has a positive and monotonic effect on technical efficiency.
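For readers unfamiliar with the frontier machinery these essays use, here is a compact sketch of a half-normal stochastic production frontier, y = b0 + b1·x + v − u with v ~ N(0, σv²) and u ~ |N(0, σu²)|, fitted by maximum likelihood to simulated data. The essays' inefficiency-determinant and heteroscedasticity extensions are not attempted here.

```python
# Half-normal stochastic frontier (Aigner-Lovell-Schmidt): the composed error
# e = v - u has density (2/sigma) * phi(e/sigma) * Phi(-e*lambda/sigma),
# with sigma^2 = sv^2 + su^2 and lambda = su/sv. Data below are simulated.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                                  # log input
y = 1.0 + 0.8 * x + rng.normal(0, 0.2, n) - np.abs(rng.normal(0, 0.3, n))

def negloglik(theta):
    b0, b1, log_sv, log_su = theta
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma, lam = np.hypot(sv, su), su / sv
    eps = y - b0 - b1 * x
    ll = (np.log(2.0) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = minimize(negloglik, x0=[0.0, 0.5, np.log(0.1), np.log(0.1)],
               method="Nelder-Mead", options={"maxiter": 2000})
b0, b1 = res.x[:2]
print(f"estimated frontier: y = {b0:.2f} + {b1:.2f} x")
```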
8

Celebi, Emre. "MODELS OF EFFICIENT CONSUMER PRICING SCHEMES IN ELECTRICITY MARKETS." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/811.

Full text
Abstract:
Suppliers in competitive electricity markets regularly respond to prices that change hour by hour or even more frequently, but most consumers respond to price changes on a very different time scale, i.e., they observe and respond to changes in price as reflected on their monthly bills. This thesis examines mixed complementarity programming models of equilibrium that can bridge the speed-of-response gap between suppliers and consumers, yet adhere to the principle of marginal cost pricing of electricity. It develops a computable equilibrium model to estimate the time-of-use (TOU) prices that can be used in retail electricity markets. An optimization model for the supply side of the electricity market, combined with a price-responsive geometric distributed-lag demand function, computes the TOU prices that satisfy the equilibrium conditions. Monthly load duration curves are approximated and discretized in the context of the supplier's optimization model. The models are formulated and solved by the mixed complementarity problem approach. It is intended that the models will be useful (a) in the regular exercise of setting consumer prices (i.e., TOU prices that reflect the marginal cost of electricity) by a regulatory body (e.g., the Ontario Energy Board) for jurisdictions (e.g., Ontario) where consumers' prices are regulated but suppliers offer into a competitive market, (b) for forecasting in markets without price regulation, but where consumers pay a weighted average of the wholesale price, (c) in evaluation of policies regarding time-of-use pricing compared with single pricing, and (d) in assessment of the welfare changes due to the implementation of TOU prices.
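The equilibrium idea — a TOU price in each demand block equal to the supplier's marginal cost at the demand that price itself induces — can be caricatured with a damped fixed-point iteration. The linear demand and rising marginal cost below are invented stand-ins for the thesis's mixed complementarity formulation.

```python
# Toy TOU equilibrium: iterate price -> demand -> marginal cost until the
# price equals marginal cost at the induced load, separately per block.
def marginal_cost(q):           # supply-side MC rising with load (invented)
    return 10.0 + 0.02 * q

def demand(p, base):            # price-responsive block demand (invented)
    return max(base - 8.0 * p, 0.0)

for base in (1200.0, 2000.0):   # off-peak vs peak block of the load curve
    p = 30.0                    # initial price guess
    for _ in range(200):        # damping keeps the iteration stable
        p = 0.5 * p + 0.5 * marginal_cost(demand(p, base))
    print(f"block base {base:6.0f}: TOU price {p:5.2f} $/MWh, "
          f"load {demand(p, base):6.1f} MWh")
```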
9

Marčiulionytė, Asta. "Įmonės pelno-išlaidų-veiklos apimties vertinimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2007~D_20140626_161708-33077.

Full text
Abstract:
The main purpose of a business enterprise is profitability, and this purpose usually guides managers' choices when they tackle problems concerning sales prices and various expenses. Cost-volume-profit evaluation gives a summary view of the planning process and an understanding of how costs change, so the resulting information is indispensable for ensuring the rational management of an enterprise. The purpose of this work is, after analysing cost-volume-profit methodology, to perform a cost-volume-profit evaluation of an enterprise and to build a model that allows the relationships among an enterprise's costs, activity volume and profit to be planned and analysed effectively. The object of this work is thus the profitability of an enterprise and its dependence on the enterprise's revenue, costs and activity volume. To achieve this aim, the following objectives are set: to analyse cost-volume-profit evaluation methodology, to formulate assumptions for cost-volume-profit evaluation modelling, to perform an enterprise cost-volume-profit evaluation, and to construct a cost-volume-profit evaluation model. The work consists of three main parts: methodological, analytical and results. The first part examines aspects of cost-volume-profit evaluation methodology: the assumptions of the analysis, and the significance and possibilities of the evaluation. The second part covers the analysis of assumptions for cost-volume-profit evaluation modelling: the modelling methods are investigated and compared, and... [to full text]
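The core CVP arithmetic that the thesis builds on is compact enough to show directly; all figures are invented.

```python
# Cost-volume-profit sketch: break-even volume = fixed costs / unit
# contribution margin; profit is linear in volume beyond that point.
fixed_costs = 50_000.0      # EUR per period
price = 25.0                # EUR per unit
variable_cost = 15.0        # EUR per unit

contribution = price - variable_cost
break_even = fixed_costs / contribution
print(f"break-even volume: {break_even:.0f} units")

volume = 7_500              # planned activity volume
print(f"profit at {volume} units: {volume * contribution - fixed_costs:,.0f} EUR")
print(f"volume for 20,000 EUR target profit: "
      f"{(fixed_costs + 20_000.0) / contribution:.0f} units")
```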
10

Lunday, Brian Joseph. "Resource Allocation on Networks: Nested Event Tree Optimization, Network Interdiction, and Game Theoretic Methods." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/77323.

Full text
Abstract:
This dissertation addresses five fundamental resource allocation problems on networks, all of which have applications to support Homeland Security or industry challenges. In the first application, we model and solve the strategic problem of minimizing the expected loss inflicted by a hostile terrorist organization. An appropriate allocation of certain capability-related, intent-related, vulnerability-related, and consequence-related resources is used to reduce the probabilities of success in the respective attack-related actions, and to ameliorate losses in case of a successful attack. Given the disparate nature of prioritizing capital and material investments by federal, state, local, and private agencies to combat terrorism, our model and accompanying solution procedure represent an innovative, comprehensive, and quantitative approach to coordinate resource allocations from various agencies across the breadth of domains that deal with preventing attacks and mitigating their consequences. Adopting a nested event tree optimization framework, we present a novel formulation for the problem as a specially structured nonconvex factorable program, and develop two branch-and-bound schemes based respectively on utilizing a convex nonlinear relaxation and a linear outer-approximation, both of which are proven to converge to a global optimal solution. We also investigate a fundamental special-case variant for each of these schemes, and design an alternative direct mixed-integer programming model representation for this scenario. Several range reduction, partitioning, and branching strategies are proposed, and extensive computational results are presented to study the efficacy of different compositions of these algorithmic ingredients, including comparisons with the commercial software BARON. The developed set of algorithmic implementation strategies and enhancements are shown to outperform BARON over a set of simulated test instances, where the best proposed methodology produces an average optimality gap of 0.35% (compared to 4.29% for BARON) and reduces the required computational effort by a factor of 33. A sensitivity analysis is also conducted to explore the effect of certain key model parameters, whereupon we demonstrate that the prescribed algorithm can attain significantly tighter optimality gaps with only a near-linear corresponding increase in computational effort. In addition to enabling effective comprehensive resource allocations, this research permits coordinating agencies to conduct quantitative what-if studies on the impact of alternative resourcing priorities. The second application is motivated by the author's experience with the U.S. Army during a tour in Iraq, during which combined operations involving U.S. Army, Iraqi Army, and Iraqi Police forces sought to interdict the transport of selected materials used for the manufacture of specialized types of Improvised Explosive Devices, as well as to interdict the distribution of assembled devices to operatives in the field. In this application, we model and solve the problem of minimizing the maximum flow through a network from a given source node to a terminus node, integrating different forms of superadditive synergy with respect to the effect of resources applied to the arcs in the network. Herein, the superadditive synergy reflects the additional effectiveness of forces conducting combined operations, vis-à-vis unilateral efforts. 
We examine linear, concave, and general nonconcave superadditive synergistic relationships between resources, and accordingly develop and test effective solution procedures for the underlying nonlinear programs. For the linear case, we formulate an alternative model representation via Fourier-Motzkin elimination that reduces average computational effort by over 40% on a set of randomly generated test instances. This test is followed by extensive analyses of instance parameters to determine their effect on the levels of synergy attained using different specified metrics. For the case of concave synergy relationships, which yields a convex program, we design an inner-linearization procedure that attains solutions on average within 3% of optimality with a reduction in computational effort by a factor of 18 in comparison with the commercial codes SBB and BARON for small- and medium-sized problems; and outperforms these solvers on large-sized problems, where both failed to attain an optimal solution (and often failed to detect a feasible solution) within 1800 CPU seconds. Examining a general nonlinear synergy relationship, we develop solution methods based on outer-linearizations, inner-linearizations, and mixed-integer approximations, and compare these against the commercial software BARON. Considering increased granularities for the outer-linearization and mixed-integer approximations, as well as different implementation variants for both these approaches, we conduct extensive computational experiments to reveal that, whereas both these techniques perform comparably with respect to BARON on small-sized problems, they significantly improve upon its performance for medium- and large-sized problems. Our best-performing procedure reduces the computational effort by a factor of 461 for the subset of test problems for which the commercial global optimization software BARON could identify a feasible solution, while also achieving solutions of objective value 0.20% better than BARON. The third application is likewise motivated by the author's military experience in Iraq, both from several instances involving coalition forces attempting to interdict the transport of a kidnapping victim by a sectarian militia as well as, from the opposite perspective, instances involving coalition forces transporting detainees between internment facilities. For this application, we examine the network interdiction problem of minimizing the maximum probability of evasion by an entity traversing a network from a given source to a designated terminus, while incorporating novel forms of superadditive synergy between resources applied to arcs in the network. Our formulations examine either linear or concave (nonlinear) synergy relationships. Conformant with military strategies that frequently involve a combination of overt and covert operations to achieve an operational objective, we also propose an alternative model for sequential overt and covert deployment of subsets of interdiction resources, and conduct theoretical as well as empirical comparative analyses between models for purely overt (with or without synergy) and composite overt-covert strategies to provide insights into absolute and relative threshold criteria for recommended resource utilization.
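The flavour of synergy in a flow-interdiction setting can be conveyed with a toy max-flow computation; the graph, effectiveness coefficients and synergy term are all invented, and networkx's generic maximum-flow routine stands in for the dissertation's nonlinear programming models.

```python
# Toy interdiction sketch: resources of two types applied to an arc reduce its
# capacity; a superadditive cross term rewards combined operations. Compare
# unilateral and combined deployments on one arc of an invented network.
import networkx as nx

arcs = {("s", "a"): 10.0, ("s", "b"): 8.0, ("a", "t"): 7.0,
        ("b", "t"): 9.0, ("a", "b"): 4.0}

def interdicted_max_flow(effort):
    """effort: {arc: (r1, r2)} units of two resource types applied to arcs."""
    G = nx.DiGraph()
    for (u, v), cap in arcs.items():
        r1, r2 = effort.get((u, v), (0.0, 0.0))
        reduction = 0.8 * r1 + 0.6 * r2 + 0.5 * r1 * r2   # synergy: r1*r2 term
        G.add_edge(u, v, capacity=max(cap - reduction, 0.0))
    value, _ = nx.maximum_flow(G, "s", "t")
    return value

print(interdicted_max_flow({}))                        # no interdiction: 16.0
print(interdicted_max_flow({("s", "a"): (3, 0)}))      # unilateral: 15.6
print(interdicted_max_flow({("s", "a"): (3, 3)}))      # combined, synergy: 9.3
```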
In contrast to existing static models, in a fourth application, we present a novel dynamic network interdiction model that improves realism by accounting for interactions between an interdictor deploying resources on arcs in a digraph and an evader traversing the network from a designated source to a known terminus, wherein the agents may modify strategies in selected subsequent periods according to respective decision and implementation cycles. We further enhance the realism of our model by considering a multi-component objective function, wherein the interdictor seeks to minimize the maximum value of a regret function that consists of the evader's net flow from the source to the terminus; the interdictor's procurement, deployment, and redeployment costs; and penalties incurred by the evader for misperceptions as to the interdicted state of the network. For the resulting minimax model, we use duality to develop a reformulation that facilitates a direct solution procedure using the commercial software BARON, and examine certain related stability and convergence issues. We demonstrate cases for convergence to a stable equilibrium of strategies for problem structures having a unique solution to minimize the maximum evader flow, as well as convergence to a region of bounded oscillation for structures yielding alternative interdictor strategies that minimize the maximum evader flow. We also provide insights into the computational performance of BARON for these two problem structures, yielding useful guidelines for other research involving similar non-convex optimization problems. For the fifth application, we examine the problem of apportioning railcars to car manufacturers and railroads participating in a pooling agreement for shipping automobiles, given a dynamically determined total fleet size. This study is motivated by the existence of such a consortium of automobile manufacturers and railroads, for which the collaborative fleet sizing and efforts to equitably allocate railcars amongst the participants are currently orchestrated by the TTX Company in Chicago, Illinois. In our study, we first demonstrate potential inequities in the industry standard resulting either from failing to address disconnected transportation network components separately, or from utilizing the current manufacturer allocation technique that is based on average nodal empty transit time estimates. We next propose and illustrate four alternative schemes to apportion railcars to manufacturers, respectively based on total transit time that accounts for queuing; two marginal cost-induced methods; and a Shapley value approach. We also provide a game-theoretic insight into the existing procedure for apportioning railcars to railroads, and develop an alternative railroad allocation scheme based on capital plus operating costs. Extensive computational results are presented for the ten combinations of current and proposed allocation techniques for automobile manufacturers and railroads, using realistic instances derived from representative data of the current business environment. We conclude with recommendations for adopting an appropriate apportionment methodology for implementation by the industry.
Ph.D.
11

Maioli, Sara. "Investment under uncertainty, market structure and price-cost margins : empirical analysis." Thesis, University of Strathclyde, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.426356.

Full text
12

Andersson, Mats. "Empirical Essays on Railway Infrastructure Costs in Sweden." Doctoral thesis, Stockholm, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4398.

Full text
13

Du, Plessis J. S. "Addressing diminishing profit margins within the Dutoit Group : a value chain analysis." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/97273.

Full text
Abstract:
Thesis (MBA)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: Rapid urbanisation, coupled with growing per capita incomes and a rapidly rising middle class, is triggering rapid growth in urban food markets. Despite these opportunities, agriculture in South Africa is confronted with diminishing profit margins, because direct production costs are increasing at rates above the revenue generated from agricultural products. This research assignment aims to define the attributes of an effective agricultural value chain in South Africa, given the challenges faced. To achieve this goal, the research focused on the results of an in-depth analysis of the Dutoit Group's deciduous fruit value chain. To be able to perform a value chain analysis, it is of utmost importance first to understand the meaning of the concepts, as well as their origin and the evolution of their application. This is achieved through a comprehensive study of the literature. Three value chain analysis tools were used for the research: an industry analysis, value chain maps and benchmarking. Through the literature review, the importance and relevance of these three tools are also explored, with reasons why they can be regarded as adequate for a proper in-depth analysis. An overview of the Dutoit Group's history, focusing on the specific key events influencing the evolution of its value chains, is also discussed, together with its business philosophy, business model and main accomplishments. This provides context for the environment in which the value chain analysis was performed. The main focus of the fourth chapter is the application of the three value chain analysis tools to the Dutoit Group's deciduous fruit value chain, utilising primary and secondary data collected through interviews with specific value chain actors, observations, management information and literature obtained from the public domain. Through the application of the three value chain analysis tools, the effectiveness of the Dutoit Group's deciduous fruit value chain was evaluated, and strengths and weaknesses identified. The research results showed that the Dutoit Group's internal deciduous fruit value chain has been effective in addressing the risk of diminishing profit margins. In addition, the results showed that the key attributes of an effective value chain able to address the current challenges are effective integration, strong relationships between value chain actors, high levels of productivity and strong leadership.
14

Wallertz, Christoffer, and Karolina Henningsson. "Rental or cooperative apartment : A cost and risk analysis of the housing market in Malmö." Thesis, Linnéuniversitetet, Ekonomihögskolan, ELNU, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-20857.

Full text
Abstract:
This thesis analyses the housing market situation in Malmö. The motivation for the research is the ever-relevant choice between two types of housing: cooperative apartments and rentals. Cost and risk are compared between the two in order to see which type of accommodation is preferable from cost and risk perspectives. A theoretical framework dealing with the costs and risks associated with housing is the starting point of the thesis. Theory on the different costs associated with the two types of housing is presented, as well as risk aspects such as market risk, credit risk and fluctuations in interest rates. The data used in the research are individual data from 993 households living in Malmö, making it possible to map out the cost and risk for the two types of housing and compare them with the housing market situation in Sweden. At first glance it seems slightly more expensive to live in a rental compared with a cooperative apartment. However, when return on capital, the risk premium and value changes are included, this first impression changes. The risk is slightly higher when living in a cooperative apartment than in a rental, due to the higher risk associated with fluctuations in interest rates. However, the current initial economic situation is better for households in cooperative apartments than for households in rentals, implying that these households on average are better able to handle the higher risk associated with changes in housing costs.
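The cost comparison the abstract summarises can be made concrete with a back-of-the-envelope user-cost calculation; every figure below is invented rather than taken from the Malmö data.

```python
# Annual user cost of owning a cooperative apartment (fee, forgone return on
# the down payment, interest, expected value change) vs renting. The spread
# across interest rates illustrates the owner's interest-rate risk.
price = 1_500_000.0                   # SEK, cooperative apartment
down_payment = 0.15 * price
loan = price - down_payment
monthly_fee = 3_400.0                 # SEK, cooperative fee
required_return = 0.04                # forgone return on own capital
value_change = 0.01                   # expected annual price appreciation

def owner_cost(interest_rate):
    capital_cost = loan * interest_rate + down_payment * required_return
    return capital_cost + 12 * monthly_fee - value_change * price

rent = 12 * 7_800.0                   # SEK per year, comparable rental
for i in (0.02, 0.04, 0.06):
    print(f"interest {i:.0%}: owning {owner_cost(i):9,.0f} SEK/yr "
          f"vs renting {rent:9,.0f} SEK/yr")
```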
15

Ibn-Mohammed, Taofeeq. "Optimal ranking and sequencing of non-domestic building energy retrofit options for greenhouse gas emissions reduction." Thesis, De Montfort University, 2014. http://hdl.handle.net/2086/10501.

Full text
Abstract:
Whether it is based on current emissions data or future projections of further growth, the building sector currently represents the largest and single most important contributor to greenhouse gas (GHG) emissions globally. This notion is also supported by the Intergovernmental Panel on Climate Change, whose projection scenarios for 2030 indicate that emissions from buildings will be responsible for about one-third of total global emissions. As such, improving the energy efficiency of buildings has become a top priority worldwide. A significant majority of the buildings that exist now will still exist in 2030 and beyond; therefore the greatest energy savings and carbon footprint reductions can be made through the retrofit of existing buildings. A wide range of retrofit options is readily available, but methods to identify optimal solutions for a particular abatement project still constitute a major technical challenge. Investments in building energy retrofit technologies usually involve decision-making processes targeted at reducing operational energy consumption and maintenance bills. For this reason, retrofit decisions by building stakeholders are typically driven by financial considerations. However, recent trends towards environmentally conscious and resource-efficient design and retrofit have focused on the environmental merits of these options, emphasising a lifecycle approach to emissions reduction. Retrofit options available for energy savings have different performance characteristics, and building stakeholders are required to establish an optimal solution in which competing objectives such as financial costs, energy consumption and environmental performance are taken into account. These key performance parameters cannot be easily quantified and compared by building stakeholders, since they lack the resources to perform an effective decision analysis. In part, this is due to the inadequacy of existing methods to assess and compare performance indicators: current methods quantify these parameters in isolation when making decisions about energy conservation in buildings. To effectively manage the reduction of lifecycle environmental impacts, it is necessary to link financial cost with both operational and embodied emissions. This thesis presents a novel deterministic decision support system (DSS) for the evaluation of economically and environmentally optimal retrofits of non-domestic buildings. The DSS integrates the key variables of economic and net environmental benefit to produce optimal decisions. These variables are used within an optimisation scheme that consists of integrated modules for data input and sensitivity analysis, and takes into account a set of retrofit options that satisfies a range of criteria (environmental, demand, cost and resource constraints); a hierarchical course of action; and the evaluation of 'best' case scenarios based on marginal abatement cost methods and Pareto optimisation. The steps involved in the system development are presented and its usefulness is evaluated using case study applications. The results of the applications are analysed and presented, verifying the feasibility of the DSS whilst encouraging further improvements and extensions. The usefulness of the DSS as a tool for policy formulation and for developments that can trigger innovations in retrofit product development processes and sustainable business models is also discussed.
The methodology developed provides stakeholders with an efficient and reliable decision process that is informed by both environmental and financial considerations. Overall, the development of the DSS which takes a whole-life CO2 emission accounting framework and an economic assessment view-point, successfully demonstrates how value is delivered across different parts of the techno-economic system, especially as it pertains to financial gains, embodied and operational emissions reduction potential.
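One step the DSS description mentions, Pareto screening of retrofit options against competing cost and abatement objectives, is simple enough to sketch; the option data are invented.

```python
# Keep only non-dominated retrofit options: an option is dropped if another
# option costs no more and abates at least as much (and differs in one of them).
options = [
    ("LED lighting",   1_200.0, 4.1),   # (name, lifecycle cost, tCO2e saved/yr)
    ("glazing",        5_400.0, 3.2),
    ("boiler upgrade", 3_100.0, 5.0),
    ("PV array",       7_800.0, 6.4),
]

def pareto_front(opts):
    front = []
    for name, cost, abate in opts:
        dominated = any(c <= cost and a >= abate and (c, a) != (cost, abate)
                        for _, c, a in opts)
        if not dominated:
            front.append(name)
    return front

print("non-dominated options:", pareto_front(options))  # glazing is dominated
```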
16

Wieck, Christine [Verfasser]. "Determinants, distribution, and development of marginal costs in dairy production: An empirical analysis for selected regions of the European Union / Christine Wieck." Aachen : Shaker, 2005. http://d-nb.info/1186581921/34.

Full text
17

Burgess, Peter Mark. "A quantitative forward modelling analysis of the controls on passive rift-margin stratigraphy." Thesis, University of Oxford, 1994. http://ora.ox.ac.uk/objects/uuid:1249833d-ef11-4327-bdbd-5d0c40faa29e.

Full text
Abstract:
A quantitative forward model has been developed to investigate the controls on the deposition, erosion, and preservation of passive rift margin stratigraphy. The model includes thermal subsidence, variable absolute sea level, flexural isostasy, subaerial and submarine deposition on fluvial and marine equilibrium profiles, and the facility to vary sediment supply through time. Results from the quantitative model can be used to reproduce elements of the sequence stratigraphic depositional model. Sensitivity tests demonstrate that variables such as sediment supply and fluvial profile behaviour are likely to be of equal importance to thermal subsidence and eustasy in passive margin stratigraphy. Sensitivity tests with the quantitative model also demonstrate the problems associated with attempting to use a discretised stratigraphic model to investigate unforced cyclicity resulting from complex interactions in stratigraphic systems. Although the model appears capable of producing such unforced cyclical behaviour, this cyclicity is shown to be due to a numerical instability within the model which occurs with certain initial conditions and assumptions. The applicability of the model to observed stratigraphy is tested by comparing specific model output with patterns of stratigraphy from the North American Atlantic margin. The results from this test demonstrate that although the model is in many respects simplistic compared with the complexities of natural systems, it is nevertheless capable of reproducing some of the basic elements of the observed stratigraphic patterns.
18

Desmet, Alain. "Ophiolites et séries basaltiques crétacées des régions caraïbes et nordandines : bassins marginaux, dorsales ou plateaux océaniques ?" Nancy 1, 1994. http://www.theses.fr/1994NAN10313.

Full text
Abstract:
In the Cretaceous, the Caribbean and northern Andean regions contain basic magmatic series, volcanic or ophiolitic. The analytical petrological study (major elements, trace elements, rare earths, microprobe) of several series from Costa Rica, Colombia and Ecuador has allowed their magmatic and dynamic identification. Comparison of the lavas with certain present-day oceanic volcanic series has led to an overall magmatic and geodynamic reinterpretation. In Costa Rica, the Santa Elena peninsula is formed of a large tholeiitic ophiolite nappe with peridotites, gabbroic cumulates and various dolerites (N-MORB). The Murcielago islands are covered with T-MORB ferrobasalts. Santa Elena represents a remnant of Cretaceous oceanic crust emplaced around 70 Ma, and Murcielago a sliver of oceanic plateau welded to Central America. In the Cretaceous, Colombia offers, from north to south of the Western Cordillera, a wide range of oceanic formations: the Boqueron de Toyo series, with basaltic volcanism and diorite-tonalite intrusions (92 Ma), records the activity of an immature island arc. The Altamira series, with gabbroic cumulates and primitive basalts, illustrates the opening, around 80 Ma, of a basin behind that arc. The Bolivar massif, with its tholeiitic cumulates (type I or IIa), corresponds to oceanic crust. The section from Buenaventura to Buga, with its stacked nappes rich in oceanic sediments and T-MORB-type basalts, suggests terranes built as an oceanic plateau and accreted to the South American margin. In Ecuador, the Upper Cretaceous of the Western Cordillera offers an analogous situation: slivers of oceanic crust are scattered along a major ophiolitic suture overprinted by the Macuchi volcanic arc. The Quebrada San Juan series is the equivalent of the Bolivar one. The (T-MORB) basalts of the coastal Grupo Pinon also correspond to oceanic plateau material accreted to the South American framework.
19

Zatloukalová, Nikola. "Podnikatelský plán založení čerpací stanice." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2014. http://www.nusl.cz/ntk/nusl-224784.

Full text
Abstract:
The main goal of this thesis is to create a business plan for the establishment of a petrol station operating as a BENZINA franchise. Entrepreneurship, the requirements of a business plan, franchising and definitions of the analyses used are described in the theoretical part. The necessary analyses are performed in the analytical part. The proposed business plan and the schedule of its implementation are described in the last part of the thesis.
20

Jurčíková, Kateřina. "Metody stanovení transferových cen." Master's thesis, Vysoká škola ekonomická v Praze, 2009. http://www.nusl.cz/ntk/nusl-15380.

Full text
Abstract:
This thesis focuses on the analysis of the use of the five basic transfer pricing methods mentioned in the OECD Transfer Pricing Guidelines under the conditions of the Czech Republic (comparable uncontrolled price method, resale price method, cost plus method, profit split method, transactional net margin method). The problems connected with the application of these methods are discussed, along with suggestions for how to address them. In this respect, the thesis contains comparability, functional and economic analyses. The thesis includes an example of transfer pricing documentation using the cost plus method. The ways and criteria for selecting an appropriate method are also described, and some tax aspects of determining transfer prices are addressed.
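The cost plus method named in the abstract reduces to marking up the supplier's cost base by a margin observed in comparable independent transactions; a minimal sketch with invented figures follows.

```python
# Cost plus method sketch: arm's-length transfer price = (direct + indirect
# costs) * (1 + comparable markup). Figures are invented for illustration.
direct_costs = 820_000.0                          # CZK, production costs
indirect_costs = 140_000.0                        # CZK, allocated overheads
comparable_markups = [0.06, 0.08, 0.09, 0.11]     # gross markups of comparables

cost_base = direct_costs + indirect_costs
markup = sorted(comparable_markups)[len(comparable_markups) // 2]
print(f"cost base {cost_base:,.0f} CZK, markup {markup:.0%}, "
      f"transfer price {cost_base * (1 + markup):,.0f} CZK")
```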
21

Júnior, Rosendo Fernandes da Silva. "There was change in Competitiveness Public and Private Banks in Local Markets Brazilians after the year 2000? A Competitive Analysis for the year 2010, considering all public banks (scenario # 1), and considering only CAIXA as the only Public Bank (scenario 2). Following, antitrust analysis in Sector Brazilian Banking: fusion simulation application from Bank of Brazil and CAIXA." Universidade Federal do Ceará, 2014. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=16228.

Full text
Abstract:
In 2008, the world faced an economic crisis that shook the pillars of, and confidence in, the global banking sector, and banks organised themselves into a defensive process of asset protection. In Brazil, the Federal Government encouraged public banks to provide credit in the market, seeking to soften the effects of investment flight and cash repatriation on the recomposition of assets. The key question of this work is: was there a change in the competitiveness of public and private banks in local Brazilian markets after the year 2000? More than a decade has passed, and we repeat this verification with 2010 data, following Bresnahan and Reiss (1991a), and compare the results with the work of Coelho, Pinho and Rezende (2011). Given the high concentration in the banking sector, how would a simulated merger between the two largest Brazilian public banks behave? This work is divided into three articles. In Article 1, we check whether there was a change in the competitiveness of public and private banks in the 2010 decade. We find significant changes, with a shift in the characterisation of costs and in the process by which the price-cost margin is structured, which leads us to infer a change in the composition and strategies of public and private banks in a new competitive view of the sector. Public banks do not affect the behaviour of private banks in local markets, but the market size required for the entry of a new competitor was reduced by the change in the cost structure and by regional effects. And what if we considered a market with only one public bank? In Article 2, we redo the analysis treating CAIXA as the only public bank and find results similar to our revised 2010 analysis, with more emphasis on regional effects, both in the cost reduction for the North region and in the negative shift of the demand shifters for the Southeast, South and Centre-West regions, as well as a practically null influence of the single public bank, CAIXA, on the reduction of private banks' profits. In Article 3, we present a merger simulation analysis of the Brazilian banking sector. The central objective is to capture the effects of a merger between Banco do Brasil and CAIXA in the twelve most significant market segments/portfolios. The post-merger equilibrium results are obtained with the PCAIDS model (Proportionality-Calibrated Almost Ideal Demand System) proposed by Epstein and Rubinfeld (2002), which simulates the merger of two firms in an oligopolistic market. The results of the simulation exercise confirm the expected increases in segment "prices". This result is consistent with the expectation that mergers imply market price increases and, absent economic efficiency gains, can impose losses on consumers.
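PCAIDS calibrates a full Almost Ideal Demand System from market shares, an industry price elasticity and one brand elasticity; a complete implementation is beyond the scope of this listing, but the share-proportionality assumption at its core, and the upward pricing pressure it implies, can be sketched briefly. All shares, margins and prices below are invented, not taken from the dissertation:

```python
# Hypothetical pre-merger shares in one product segment; proportionality means
# diversion from i is allocated to rivals in proportion to their shares.
s = {"Banco do Brasil": 0.30, "CAIXA": 0.25, "others": 0.45}

def diversion(si, sj):
    """Share-proportional diversion ratio, the calibration device PCAIDS rests on."""
    return sj / (1.0 - si)

d_bb_to_cx = diversion(s["Banco do Brasil"], s["CAIXA"])

# Gross upward pricing pressure index: diversion * rival margin * relative price.
margin, rel_price = 0.20, 1.0   # assumed identical prices and 20% margins
guppi_bb = d_bb_to_cx * margin * rel_price
print(f"Diversion BB -> CAIXA: {d_bb_to_cx:.3f}, GUPPI: {guppi_bb:.1%}")
```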
APA, Harvard, Vancouver, ISO, and other styles
22

Csenki, Attila. "Marginal cost analysis of single-item maintenance policies with several decision variables." 2004. http://hdl.handle.net/10454/3203.

Full text
Abstract:
The marginal cost approach for the analysis of repair/replacement models was introduced by Berg in 1980 and has since been applied to many maintenance policies of various complexity. All models hitherto analysed in the literature by the marginal cost approach have one single decision variable only, this being, typically, the age of the current item at the time of ordering or replacement. This paper is concerned with the extension of the marginal cost technique to maintenance policies with several decision variables. After addressing the general framework appropriate for the multi-parameter case, we exemplify the workings of the technique by analysing a two-variable maintenance model involving replacement and minimal repair. We demonstrate that the marginal cost approach is an attractive and intuitively appealing technique also for models with several decision variables. Just as in the single-parameter situation, the approach is amenable to economic interpretation, a welcome feature for users of maintenance models with a prime interest in its economic (rather than its mathematical) aspects. As an added bonus of the marginal cost approach, in our example, some otherwise necessary tools from the theory of stochastic processes are dispensable.
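As a minimal numerical sketch of the single-decision-variable setting the paper extends (not the paper's own two-variable repair/replacement model), the classic age-replacement policy can be optimised directly; at the optimal age the marginal cost of postponing replacement equals the long-run average cost rate. Weibull lifetime and cost parameters below are assumed for illustration:

```python
# Age replacement: replace preventively at age T (cost c_p) or at failure (cost c_f).
# Long-run cost rate: C(T) = (c_p*S(T) + c_f*F(T)) / integral_0^T S(t) dt.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

shape, scale = 2.5, 10.0          # assumed Weibull lifetime parameters
c_p, c_f = 1.0, 5.0               # preventive vs. failure replacement cost

S = lambda t: np.exp(-(t / scale) ** shape)   # survival function
F = lambda t: 1.0 - S(t)                      # lifetime distribution

def cost_rate(T):
    expected_cycle_length, _ = quad(S, 0.0, T)
    return (c_p * S(T) + c_f * F(T)) / expected_cycle_length

res = minimize_scalar(cost_rate, bounds=(0.1, 3 * scale), method="bounded")
print(f"optimal replacement age T* = {res.x:.2f}, minimal cost rate = {res.fun:.4f}")
```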
APA, Harvard, Vancouver, ISO, and other styles
23

Mekaroonreung, Maethee. "Production Economics Modeling and Analysis of Polluting firms: The Production Frontier Approach." Thesis, 2012. http://hdl.handle.net/1969.1/ETD-TAMU-2012-08-11724.

Full text
Abstract:
As concern grows about energy and environmental issues, energy and environmental modeling and related policy analysis are critical for today's society. Polluting firms such as coal power plants play an important role in providing the electricity that drives the U.S. economy, while also producing pollution that damages the environment and human health. This dissertation models and estimates polluting firms' production using nonparametric methods. First, the frontier production function of polluting firms is characterized by weak disposability between outputs and pollutants, reflecting the opportunity cost of reducing pollutants. The StoNED method is extended to estimate a weakly disposable frontier production function accounting for random noise in the data. The method is applied to U.S. coal power plants under the Acid Rain Program to find the average technical inefficiency and the shadow prices of SO2 and NOx. Second, polluting firms' production processes are modeled to characterize both the output production process and the pollution abatement process. Using the law of conservation of mass applied to the pollution abatement process, this dissertation develops a new frontier pollutant function, which is then used to find the corresponding marginal abatement cost of pollutants. The StoNEZD method is applied to estimate a frontier pollutant function considering the vintage of capital owned by the polluting firms, and to estimate the average NOx marginal abatement cost for U.S. coal power plants under the current Clean Air Interstate Rule NOx program. Last, the effect of technical change on marginal abatement costs is investigated using an index decomposition technique. The StoNEZD method is extended to estimate sequential frontier pollutant functions reflecting innovation in pollution reduction, and is then applied to estimate the effect of technical change on the marginal abatement cost of U.S. coal power plants under the current Clean Air Interstate Rule NOx program.
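The first stage of StoNED is convex nonparametric least squares (CNLS), which fits a monotone, concave frontier shape without choosing a functional form. A toy single-input version, omitting the weak-disposability structure and the noise/inefficiency decomposition the dissertation actually handles, can be written as a quadratic program. This sketch assumes the cvxpy package and uses simulated data:

```python
# CNLS: minimise squared residuals subject to monotonicity (beta >= 0) and
# concavity (Afriat inequalities: each fitted hyperplane lies below all others
# at that observation's input level).
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 30
x = np.sort(rng.uniform(1, 10, n))
y = 3 * np.log(x) + rng.normal(0, 0.2, n)    # "true" concave technology + noise

alpha = cp.Variable(n)
beta = cp.Variable(n, nonneg=True)
resid = y - (alpha + cp.multiply(beta, x))

constraints = [alpha[i] + beta[i] * x[i] <= alpha[j] + beta[j] * x[i]
               for i in range(n) for j in range(n) if i != j]

prob = cp.Problem(cp.Minimize(cp.sum_squares(resid)), constraints)
prob.solve()
print("fitted frontier-shape values:", (alpha.value + beta.value * x).round(2))
```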
APA, Harvard, Vancouver, ISO, and other styles
24

Amon-Armah, Frederick. "ECONOMIC ANALYSIS OF NUTRIENT MANAGEMENT PRACTICES FOR WATER QUALITY PROTECTION." 2012. http://hdl.handle.net/10222/15581.

Full text
Abstract:
The main purpose of this study was to evaluate the effect of alternative cropping systems on farm net returns and on nitrate-N and sediment yields in the Thomas Brook Watershed (TBW). The study involved integrated bio-physical and economic optimization modelling. Crop yield and nitrate-N pollution response functions were estimated and then used in a trade-off analysis between farm returns and environmental quality improvement. Five crop rotation systems were evaluated at seven fertilizer levels under conventional tillage (CT) and no-till (NT) systems. Nitrate-N leaching, as well as the estimated maximum economic rate of N (MERN) fertilizer level and the marginal abatement costs, depended on crop type, rotation system, and tillage type. The most cost-effective cropping systems that met Health Canada's maximum limit on nitrate-N in water were corn-corn-corn-alfalfa-alfalfa under NT among corn-based systems, potato-winter wheat-carrot-corn under CT among vegetable horticulture-based systems, and potato-barley-winter wheat-potato-corn under NT among potato-based systems.
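The MERN concept has a simple closed form under an assumed quadratic yield response: profit is maximised where the marginal value product of nitrogen equals its price. All coefficients and prices below are made up for illustration:

```python
# Quadratic yield response y(N) = a + b*N + c*N^2 (kg/ha), c < 0.
# FOC: p_crop * (b + 2*c*N) = p_N  =>  MERN = (p_N / p_crop - b) / (2*c).

a, b, c = 4000.0, 45.0, -0.12     # assumed response coefficients
p_crop = 0.18                     # crop price ($/kg)
p_N = 1.10                        # nitrogen price ($/kg N)

mern = (p_N / p_crop - b) / (2 * c)
agronomic_max = -b / (2 * c)      # rate maximising yield, ignoring fertilizer cost
print(f"MERN = {mern:.0f} kg N/ha (vs. agronomic maximum {agronomic_max:.0f} kg N/ha)")
```

The gap between the agronomic maximum and the MERN is exactly why economically optimal rates leach less nitrate-N than yield-maximising ones.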
APA, Harvard, Vancouver, ISO, and other styles
25

Yu, Wei. "Empirical analysis of dynamics in demand and pricing." Thesis, 2015. https://hdl.handle.net/2144/14576.

Full text
Abstract:
This doctoral dissertation provides a framework for analyzing consumer's demand and firm's pricing strategy under a dynamic setting. The results bring new methods and empirical evidence to the existing literature within this realm. The first chapter evaluates retailers' choice of service even when service is not observed. Retailer optimization over service alters manufacturers' price setting and thus provides the required identification. Using new data containing wholesale prices from China's second largest wireless carrier, I construct and estimate a dynamic structural model including both demand and supply. I find that service has a significantly positive effect on expanding market demand; however, its impact subsides over time. I argue that this pattern provides a potential explanation for Apple's initial exclusive contract with China Unicom and subsequent contract arrangements. The second chapter is a joint work with Gautam Gowrisankaran and Marc Rysman. We develop and implement a new method for calculating price-cost margins in a durable goods environment. We study the industry of digital camcorders and analyze how margins differ across products, firms and time. We are particularly interested in the extent to which falling marginal costs explain falling prices. Using demand estimates and our new method, we generate non-parametric distributions of marginal costs that each firm expects for each product. We show that marginal cost falls dramatically by an average of $300 and that the price-cost margin is strongly correlated with quality. We also find that the market share is an important driver for the dynamic effects in our model. The last chapter investigates firm's price adjustment processes, with a particular focus on the micro-level determinants of the frequency of price changes. Using the same data as in the first chapter, I construct and estimate a model of the frequency of price adjustments within products. I find that the price of a high quality product tends to adjust more often. Older products are more likely to change price than newer ones. Also, firms are more responsive to seasonal effects than to market competition.
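The dissertation's margin calculation is dynamic and non-parametric; as the simplest static point of comparison, the textbook logit/Bertrand first-order condition already lets one back out marginal costs from demand estimates. The numbers below are illustrative, not estimates from the dissertation:

```python
# Single-product firms, logit demand, Bertrand-Nash pricing:
#   p_j - mc_j = 1 / (alpha * (1 - s_j)),
# where alpha is the price coefficient and s_j the market share.

alpha = 0.02          # assumed price sensitivity, per dollar
products = {"camcorder A": (400.0, 0.15),   # (price, market share)
            "camcorder B": (250.0, 0.10)}

for name, (p, s) in products.items():
    mc = p - 1.0 / (alpha * (1.0 - s))
    print(f"{name}: implied mc = ${mc:.0f}, price-cost margin = {(p - mc) / p:.1%}")
```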
APA, Harvard, Vancouver, ISO, and other styles
26

KAFKOVÁ, Martina. "Účetní výkazy a výkaznictví." Master's thesis, 2010. http://www.nusl.cz/ntk/nusl-48254.

Full text
Abstract:
This thesis deals with the preparation and subsequent analysis of the financial and management statements of a company operating in the transport sector. It evaluates the need for their preparation and their predictive ability.
APA, Harvard, Vancouver, ISO, and other styles
27

Macario, Ana L. G. "Frequency response function analysis of the equatorial margin of Brazil using gravity and bathymetry." Thesis, 1989. http://hdl.handle.net/1957/29232.

Full text
Abstract:
The overall objective of this study is to address questions concerning the long-term mechanical strength of the lithosphere across the equatorial margin of Brazil. The approach used in this study consists of calculating frequency response function estimates, also called admittance, using gravity and bathymetry data. These experimental estimates are then compared to theoretical admittance curves for Airy and thin elastic plate models, from which estimates of the flexural rigidity or, equivalently, the effective elastic thickness may be made. Twelve profiles, each 256 km long, were extracted from gridded gravity and bathymetry data (data sources: project EQUANT, Defense Mapping Agency, National Geophysical Data Center files and GEOS 3/SEASAT altimeter data). Three profiles were specifically used for testing truncation errors introduced by four different data treatment procedures (before Fourier transforming the data): detrending, applying 10% cosine tapering, mirror imaging and the use of the first derivatives. The method I adopted is similar to the one used by McNutt (1983) and consists of testing how reliably a given admittance estimate can be recovered as a function of the data treatment procedure. A "predicted" gravity anomaly was obtained by convolving each bathymetric profile with a theoretical admittance filter. The edges of this anomaly were then subjected to the same treatment as the corresponding bathymetric profile before Fourier transforming both profiles and calculating admittance. The stability of the long-wavelength admittance estimates in the presence of noise was also investigated by introducing Gaussian noise, in the range of -50 to +50 mGals, into the "predicted" gravity signal. The results indicate that relatively unbiased long-wavelength admittance estimates can be obtained by using the first derivative of the data sets. In addition, it is shown that the mirroring technique, used in previous admittance studies across Atlantic-type margins, leads to overestimated admittance values and, therefore, overestimated flexural rigidities. Neither the theoretical curves for the Airy model nor the plate flexure model can explain the experimental admittance estimates. Not only are the experimental admittance estimates higher than the predicted values, but they also have a narrower peak than the theoretical curves. This raises the question of the applicability of highly simplified isostatic models for tectonic provinces such as Atlantic-type continental margins. The following reasons may explain the discrepancies between the experimental and theoretical admittance estimates: (1) The abrupt nature of the transition between oceanic and continental crust controlled by the Romanche Fracture Zone - Unlike the eastern North American continental margin, which was formed as a result of extensive rifting and pulling apart, the obliquely-rifted equatorial margin of Brazil has undergone a complex tectonic evolutionary process, where additional components such as shear and right-lateral wrenching were present.
Therefore, representing the margin as a thin homogeneous elastic plate might be reasonable when the transition is gradual (for which the uniform flexural rigidity assumption seems reasonable) but is probably not a good approximation when it is as abrupt as at the equatorial margin of Brazil. (2) Presence of subsurface loads - Previous studies have shown that estimates of the average flexural rigidity of continental lithosphere using the admittance approach are biased when subsurface loads are present. In principle, the proximity of the Romanche Fracture Zone and associated volcanism suggests that shallow buried loads, caused by intrusive bodies, might be present in the area. This could partially account for the mismatch between theoretical curves and experimental admittance estimates. (3) "Masked" estimates - The admittance estimates presented here are likely to reflect the combination of two different signals: one related to the compensation of the Barreirinhas/Piaui-Camocim sub-basin, which has no topographic/bathymetric expression, and the other related to the topography/bathymetry and its compensation, which is what is of interest in admittance studies. Since the wavelengths of these signals do not differ by much (around 80-100 km for the basin), it is possible that some overlapping occurs in the averaging process. The combination of these signals could yield anomalous results, masking the admittance estimates in the diagnostic waveband. In addition, I present a two-dimensional cross section obtained by forward modelling the gravity anomaly along a profile using the line integral method. The uniform sedimentary infill of the Barreirinhas/Piaui-Camocim basin is enough to account for the gravity low over the inner shelf, and no Moho topography is required. A plausible explanation for this "rootless" basin structure is that the lithosphere is capable of supporting the sediment infill load and thus has finite flexural rigidity (the basin is locally uncompensated).
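The experimental admittance itself is a short computation once profiles are in hand: Fourier transform gravity and bathymetry along a profile and form Z(k) = Re(G(k) B*(k)) / |B(k)|^2. A sketch on a synthetic 256 km profile follows; the tapering, mirroring and first-derivative edge treatments studied in the thesis, and the averaging over many profiles, are omitted, and all profile values are fabricated:

```python
import numpy as np

n, dx = 256, 1.0e3                       # 256 samples at 1 km spacing: a 256 km profile
x = np.arange(n) * dx
rng = np.random.default_rng(1)
b = 500.0 * np.sin(2 * np.pi * x / 64e3) + 200.0 * np.sin(2 * np.pi * x / 20e3)
g = 0.04 * b + rng.normal(0.0, 2.0, n)   # synthetic gravity (mGal), true Z ~ 0.04 mGal/m

B = np.fft.rfft(b - b.mean())
G = np.fft.rfft(g - g.mean())
k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)                # wavenumber (rad/m)

Z = (G[1:] * np.conj(B[1:])).real / np.abs(B[1:]) ** 2  # admittance, skipping k = 0
wavelength_km = 2 * np.pi / k[1:] / 1e3
for lam, z in zip(wavelength_km[:5], Z[:5]):
    print(f"wavelength {lam:6.1f} km: Z = {z:.4f} mGal/m")
```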
Graduation date: 1990
APA, Harvard, Vancouver, ISO, and other styles
28

Braz, Augusta Santos. "Implementação de instrumentos de apoio à gestão numa pequena empresa de retalho." Master's thesis, 2010. http://hdl.handle.net/10071/3847.

Full text
Abstract:
This business project was developed for application in the company C – Produtos para Tabacarias, Lda., a small retail company based in the municipality of Lisbon, with the main objective of providing the company with decision-support tools that can form a fundamental support for its management. Based on the literature review, the stages of management accounting systems and the main characteristics of the most frequently referenced costing methods were analysed; some of the traditional methods for determining product/service costs were also addressed, with particular attention to the homogeneous cost centres method (Méthode des Sections Homogènes, MSH) and the ABC method (Activity Based Costing). After a description of their characteristics, a comparative and critical analysis of the two models follows. Finally, several decision-support tools are presented and then assessed for their suitability to company C. With the implemented system, the company can now easily determine the contribution margin of each of the segments in which it operates, and thus monitor, and take appropriate measures to achieve, the objectives set by management. In addition, it is now possible to calculate the company's breakeven point and margin of safety, so that those responsible for management know the overall sales position relative to the coverage of fixed costs and the contribution of the various segments to that amount, as well as the margin of safety that allows them to assess risk, overall and by segment, in order to withstand a possible downturn in sales. The development of this project led to the conclusion that no system is perfect and that, weighing the advantages and limitations of each, the implementation of management tools in a company must take into account the concrete reality of each organisation.
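The decision tools mentioned (contribution margin, breakeven point, margin of safety) reduce to a few cost-volume-profit identities. A sketch with invented figures and hypothetical segment names, not the company's actual data:

```python
# contribution margin = sales - variable costs
# breakeven sales     = fixed costs / contribution margin ratio
# margin of safety    = (sales - breakeven sales) / sales

segments = {"tobacco shops": (180_000, 120_000),   # (sales, variable costs)
            "accessories":   ( 90_000,  50_000)}
fixed_costs = 60_000

total_sales = sum(s for s, _ in segments.values())
total_cm = sum(s - v for s, v in segments.values())
cm_ratio = total_cm / total_sales

breakeven = fixed_costs / cm_ratio
safety = (total_sales - breakeven) / total_sales

for name, (s, v) in segments.items():
    print(f"{name}: contribution margin = {s - v:,} ({(s - v) / s:.0%} of sales)")
print(f"breakeven sales = {breakeven:,.0f}, margin of safety = {safety:.1%}")
```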
APA, Harvard, Vancouver, ISO, and other styles
29

Koster, Roman. "Ökonomische Analyse forstlicher Bestandesbehandlung." Doctoral thesis, 2020. http://hdl.handle.net/21.11130/00-1735-0000-0005-14F4-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles