
Doctoral dissertations on the topic "Minimisation des coût"


Consult the 32 best doctoral dissertations on the topic "Minimisation des coût".


1

Calvin, Christophe. "Minimisation du sur-coût des communications dans la parallélisation des algorithmes numériques". PhD thesis, Grenoble INPG, 1995. http://tel.archives-ouvertes.fr/tel-00005034.

Abstract:
The aim of this thesis is to study possible ways of minimising the communication overhead incurred when numerical algorithms are parallelised on distributed-memory parallel machines. The first avenue explored is to optimise the communication schemes for the data and results used in parallel versions of computational kernels. In particular, we propose new algorithms for transposing square matrices allocated by blocks, on various interconnection network topologies. We also study the total exchange (all-to-all) problem, a communication scheme that occurs frequently in parallel numerical algorithms (for instance in the conjugate gradient algorithm), and propose efficient total exchange algorithms for toroidal topologies. The second avenue explored is to overlap communication with computation. We study basic algorithmic principles for hiding communications as much as possible, based in particular on techniques for interleaving computation and communication phases and on local task reordering to optimise the overlap. These techniques are illustrated on parallel Fourier transform algorithms. Implementations of these algorithms on several distributed-memory parallel machines (Cray T3D, IBM SP2, Intel iPSC-860 and Paragon) show the reduction in execution time that these methods provide.
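The block transposition and total exchange schemes discussed above correspond to all-to-all collectives on today's message-passing systems. The sketch below is only a minimal illustration of that communication pattern, assuming an mpi4py environment, a square matrix distributed by block-rows and an arbitrary (non-torus-specific) network; it is not a reproduction of the topology-aware algorithms developed in the thesis.

```python
# Minimal sketch: transpose a square matrix distributed by block-rows using one
# all-to-all exchange followed by local block transposes (illustration only;
# the block size b and the row-major layout are assumptions of this example).
# Run under MPI with P ranks, e.g.: mpiexec -n 4 python block_transpose.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
P = comm.Get_size()
rank = comm.Get_rank()
b = 2  # local block size; the global matrix is (P*b) x (P*b)

# This rank owns block-row `rank` of the global matrix A.
local_rows = np.arange(rank * b * P * b, (rank + 1) * b * P * b,
                       dtype='d').reshape(b, P * b)

# Cut the block-row into P contiguous b x b blocks, one destined for each rank.
send = np.ascontiguousarray(local_rows.reshape(b, P, b).swapaxes(0, 1))
recv = np.empty_like(send)
comm.Alltoall(send, recv)  # block (rank -> j) is exchanged for block (j -> rank)

# recv[j] is A[block-row j, block-col rank]; transposing each received block and
# stacking them yields block-row `rank` of A^T.
transposed_row = np.hstack([blk.T for blk in recv])
```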
2

Echague, Eugénio. "Optimisation globale sans dérivées par minimisation de modèles simplifiés". Versailles-St Quentin en Yvelines, 2013. http://www.theses.fr/2013VERS0016.

Abstract:
In this thesis, we study two global derivative-free optimisation methods: the method of moments and surrogate methods. The method of moments is implemented as a solver for the sub-problems of a derivative-free optimisation method and tested successfully on an engine calibration problem. We also explore its dual approach and study the approximation of a function, up to a constant, by a sum of squares of polynomials. Concerning the surrogate methods, we construct a new approximation using Sparse Grid interpolation, which builds an accurate model from a limited number of function evaluations. This model is then locally refined near the points with low function values. The numerical performance of this new method, called GOSgrid, is tested on classical optimisation test functions and finally on an inverse parameter identification problem, showing good results compared with some other existing global optimisation methods in terms of the number of function evaluations.
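As a rough illustration of the surrogate-then-refine idea (build a cheap model from few evaluations, then sample the true function near the model's minimiser), here is a small sketch. It uses SciPy's RBFInterpolator as a stand-in surrogate rather than the Sparse Grid model of GOSgrid, and the toy objective, sample sizes and refinement radius are assumptions made for the example.

```python
# Surrogate-assisted global search sketch (illustrative stand-in for the
# Sparse-Grid-based GOSgrid method described above).
import numpy as np
from scipy.interpolate import RBFInterpolator

def objective(x):
    # Expensive black-box function, replaced here by a cheap toy quadratic.
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(20, 2))   # initial design of experiments
y = objective(X)

for _ in range(5):
    surrogate = RBFInterpolator(X, y)            # cheap model of all samples so far
    cand = rng.uniform(-1.0, 1.0, size=(2000, 2))
    best = cand[np.argmin(surrogate(cand))]      # surrogate's predicted minimiser
    # Local refinement: spend a few true evaluations around the promising point.
    local = best + rng.normal(scale=0.1, size=(5, 2))
    X = np.vstack([X, local])
    y = np.concatenate([y, objective(local)])

print("best point found:", X[np.argmin(y)], "value:", y.min())
```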
3

Bel Haj Ali, Wafa. "Minimisation de fonctions de perte calibrée pour la classification des images". PhD thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00934062.

Abstract:
Image classification is now a challenge of considerable scale: it concerns, on the one hand, the millions or even billions of images found across the web and, on the other, images used in critical real-time applications. Such classification generally relies on learning methods and classifiers that must be both accurate and fast. These learning problems now touch a large number of application domains: the web (profiling, targeting, social networks, search engines), "Big Data" and, of course, computer vision tasks such as object recognition and image classification. This thesis belongs to the latter category and presents supervised learning algorithms based on the minimisation of so-called "calibrated" loss (error) functions for two types of classifiers: k-Nearest Neighbours (kNN) and linear classifiers. These learning methods were tested on large image databases and then applied to biomedical images. The thesis first reformulates a boosting algorithm for kNN classifiers and then presents a second learning method for these NN classifiers that uses a Newton descent approach for faster convergence. In a second part, the thesis introduces a new stochastic Newton descent learning algorithm for linear classifiers, which are known for their simplicity and computational speed. Finally, these three methods are applied in a medical setting to the classification of cells in biology and pathology.
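For context, a standard example of a classification-calibrated surrogate loss is the logistic loss, minimised over a real-valued scoring function f; this is textbook material and not necessarily the exact loss family studied in the thesis.

```latex
% Logistic loss: a classification-calibrated surrogate for the 0/1 loss,
% for labels y_i in {-1,+1} and a real-valued score f(x_i).
\ell\bigl(y, f(x)\bigr) = \log\!\bigl(1 + e^{-y f(x)}\bigr),
\qquad
\min_{f \in \mathcal{F}} \; \frac{1}{m} \sum_{i=1}^{m} \ell\bigl(y_i, f(x_i)\bigr)
```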
4

Boudjlida, Khaled. "Méthodes d’optimisation numérique pour le calcul de stabilité thermodynamique des phases". Thesis, Pau, 2012. http://www.theses.fr/2012PAUU3025/document.

Abstract:
Thermodynamic phase equilibrium modelling is an essential issue for petroleum and process engineering, and phase stability analysis is a highly important problem among phase equilibrium calculations. The stability computation establishes whether a given mixture is in one or several phases. If a mixture splits into two or more phases, the stability calculations provide valuable initialisations for the flash calculations and allow the validation of multiphase flash calculations. The phase stability problem is solved as an unconstrained minimisation of the tangent plane distance (TPD) function to the Gibbs free energy surface. A phase is thermodynamically stable if the TPD function is non-negative at all its stationary points, while a negative value indicates an unstable case. The TPD surface is non-convex and may be highly non-linear in the compositional space; for this reason, phase stability calculations may be extremely difficult under certain conditions, mainly in the vicinity of singularities. Two types of singularities can be distinguished: (i) the stability test limit locus (STLL), and (ii) the intrinsic limit of stability (spinodal). Geometrically, the TPD surface exhibits a saddle point, corresponding to a non-trivial solution (at the STLL) or a trivial one (at the spinodal). In the immediate vicinity of these singularities the number of iterations of any minimisation method increases dramatically and divergence may occur; this difficulty is more severe at the STLL than at the spinodal. The work presented herein is structured as follows: (i) after the introduction of the tangent plane distance to the Gibbs free energy surface, several iterative methods (gradient and convergence acceleration methods, second-order Newton and quasi-Newton methods) are presented and their behaviour analysed, especially near singularities; (ii) following the analysis of Hessian matrix eigenvalues and conditioning, of problem scaling, and of the TPD surface representation, the phase stability computation is solved by minimising modified objective functions. The latter are chosen in such a manner that any stationary point of the TPD function (including saddle points) becomes a global minimum of the modified function; at the STLL the Hessian matrix is then no longer indefinite but positive definite, which leads to better convergence properties, as shown in various examples for representative synthetic and naturally occurring mixtures. Finally, (iii) the so-called Tunneling global optimisation method is applied to the stability analysis. This method consists in destroying the minima already found (by placing poles) and tunnelling to another valley of the objective function to find a new minimum with a smaller objective value; the process is repeated until the criteria for the global minimum are fulfilled. Several carefully chosen examples demonstrate the robustness and efficiency of the Tunneling method for minimising the TPD function as well as the modified objective functions.
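For reference, the reduced tangent plane distance minimised in such stability analyses is commonly written in terms of fugacity coefficients as below (standard form from the phase-stability literature, e.g. Michelsen's work); the modified objective functions actually developed in the thesis are not reproduced here.

```latex
% Reduced tangent plane distance for a trial composition x and feed composition z;
% \varphi_i denotes the fugacity coefficient of component i. The mixture is stable
% if tpd(x) >= 0 at every stationary point.
\mathrm{tpd}(\mathbf{x}) = \sum_{i=1}^{n} x_i
\left[ \ln x_i + \ln \varphi_i(\mathbf{x}) - \ln z_i - \ln \varphi_i(\mathbf{z}) \right]
```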
5

Rezig, Wafa. "Problèmes de multiflots : état de l'art et approche par décomposition décentralisée du biflot entier de coût minimum". PhD thesis, Université Joseph Fourier (Grenoble), 1995. http://tel.archives-ouvertes.fr/tel-00346082.

Abstract:
We consider linear multicommodity flow models, emphasising their many applications, notably in scheduling and production management. It is well known that these problems, stated as linear programs, are hard to solve, unlike their single-commodity counterparts. Classical solution methods already provide approximate solutions in the continuous case; they include price-directive decomposition, resource-directive decomposition and partitioning techniques. If an integrality constraint on the flows is added, these problems become extremely difficult. We focus on a particular case of multicommodity flow problems, namely the minimum-cost integer two-commodity flow (biflow). We have developed a heuristic solution approach based on a mixed decomposition principle, operating iteratively both through resource allocation and through cost adjustment. The implementation of this approach shows promising results on randomly generated pure biflow problems. We then considered a second application to more structured biflow problems, which have been proposed for modelling the travelling salesman problem. This application leads, on the one hand, to the use of an algorithm for finding a Hamiltonian circuit in a graph and, on the other hand, to the development of heuristic techniques for constructing good tours.
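The minimum-cost integer two-commodity flow problem referred to above is usually stated in the following textbook arc-flow form; the decomposition heuristic of the thesis operates on models of this kind.

```latex
% Minimum-cost integer biflow on a directed graph G=(V,A), with arc costs c_{ij}^k,
% arc capacities u_{ij} and node balances b_i^k for commodities k = 1, 2.
\min \sum_{k=1}^{2} \sum_{(i,j)\in A} c_{ij}^{k}\, x_{ij}^{k}
\quad \text{s.t.}\quad
\sum_{j:(i,j)\in A} x_{ij}^{k} - \sum_{j:(j,i)\in A} x_{ji}^{k} = b_i^{k}
\;\; \forall i \in V,\; k \in \{1,2\},
\qquad
\sum_{k=1}^{2} x_{ij}^{k} \le u_{ij} \;\; \forall (i,j)\in A,
\qquad
x_{ij}^{k} \in \mathbb{Z}_{\ge 0}.
```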
6

Oukacha, Ouazna. "Optimisation de consommation pour un véhicule de type voiture". Thesis, Toulon, 2017. http://www.theses.fr/2017TOUL0016/document.

Abstract:
The present thesis is a study of an optimal control problem having a non-differentiable, but Lipschitz, cost function. It is inspired by the minimisation of the energy consumption of a car-like vehicle or robot along a road whose profile is known in advance. The problem is stated by means of a simple model of the longitudinal dynamics and a running cost that comprises both an absolute value function and a function that accounts for the efficiency of the energy conversion process. A regularity result that excludes chattering phenomena from the set of solutions is proven; it is valid for the class of control-affine systems, to which the considered problem belongs. Three case studies are detailed and analysed. The optimal trajectories are shown to be made of bang, inactivated and, in some cases, backward arcs.
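Schematically, the class of problems described above (control-affine dynamics with a Lipschitz but possibly non-differentiable running cost) can be written as follows; the functions f, g, \varphi and \psi are placeholders, not the thesis's specific vehicle model.

```latex
% Control-affine optimal control problem with a non-smooth running cost,
% e.g. a term proportional to |u| plus a state-dependent efficiency term.
\dot{x}(t) = f\bigl(x(t)\bigr) + u(t)\, g\bigl(x(t)\bigr),
\qquad
\min_{u(\cdot)} \int_{0}^{T} \ell\bigl(x(t), u(t)\bigr)\,\mathrm{d}t,
\qquad
\ell(x, u) = \varphi(x)\,\lvert u \rvert + \psi(x).
```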
7

Closset, Mélanie. "Analyse de la stabilité de molécules anticancéreuses en dose-banding pour un développement clinique aux implications éthiques et économiques". Electronic Thesis or Diss., Université de Lille (2022-....), 2024. http://www.theses.fr/2024ULILS087.

Abstract:
Anticancer dose-banding is a standardization of chemotherapy doses on the basis of a dose calculated individually according to a specific anthropometric or physiological parameter. Given the narrow therapeutic index of anticancer molecules, the initial definition of dose-banding tolerates a maximum deviation of ± 5% from the individual dose. In practice, the applicability of dose-banding depends on various criteria, including the long-term stability of chemotherapy preparations. Dose-banding allows anticancer preparations to be produced in batches and in advance in centralized production units, and represents an opportunity to rationalize, streamline and secure pharmacy processes while reducing patient waiting times in care units. It is developing in a context split between the common use of individualized doses and the gradual generalization of the concept of “personalized” medicine. The aim of this interdisciplinary study is threefold: (i) to carry out long-term physicochemical stability studies on preparations of anticancer molecules; (ii) to compare the production costs of gemcitabine bags based on individualized and serially produced preparations, using retrospective data; and (iii) to examine the ethical justification for dose-banding as a care practice and its position within the diversity of care strategies. On a technical level, this work enabled us to carry out three physicochemical stability studies demonstrating the long-term stability of 5-fluorouracil and gemcitabine under preparation and storage conditions typical of dose-banding; it also highlighted a potential bias associated with the use of closed-system transfer devices in stability studies. From an economic point of view, the value to the pharmacy department of implementing dose-banding for the batch production of gemcitabine preparations was confirmed. On an ethical level, the development of a concept of the “right” chemotherapy dose clarified the justification for a practical application of dose-banding while integrating it with other approaches to care.
8

Alatorre-Frenk, Claudio. "Cost minimisation in micro-hydro systems using pumps-as-turbines". Thesis, University of Warwick, 1994. http://wrap.warwick.ac.uk/36099/.

Abstract:
The use of reverse-running pumps as turbines (PATs) is a promising technology for small-scale hydropower. This thesis reviews the published knowledge about PATs and deals with some areas of uncertainty that have hampered their dissemination, especially in 'developing' countries. Two options for accommodating seasonal flow variations using PATs are examined and compared with using conventional turbines (that have flow control devices). This has been done using financial parameters, and it is shown that, under typical conditions, PATs are more economic. The various published techniques for predicting the turbine-mode performance of a pump without expensive tests are reviewed; a new heuristic one is developed, and it is shown (using the same financial parameters and a large set of test data in both modes of operation) that the cost of prediction inaccuracy is negligible under typical circumstances. The economics of different ways of accommodating water hammer are explored. Finally, the results of laboratory tests on a PAT are presented, including cavitation tests, for which a theoretical framework is also set out.
9

Jabbari, Sabegh Amir Hosein. "Resource hiring cost minimisation for constrained MapReduce computations in the cloud". Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/213188/1/Amir%20Hosein_Jabbari%20Sabegh_Thesis.pdf.

Abstract:
This research tackles the problem of reducing the cost of cloud-based MapReduce computations while satisfying their deadlines. It is a multi-constrained combinatorial optimisation problem, which is challenging to solve. The minimisation goal is achieved by pre-planning and dynamic scheduling of virtual machine provisioning during computations. The proposed optimisation models and algorithms have been implemented and quantitatively evaluated in comparison with existing approaches. They are shown to reduce the costs of cloud-based computations by hiring fewer VMs and scheduling them more efficiently.
10

Woodford, Spencer. "The Minimisation of Combat Aircraft Life Cycle Cost through Conceptual Design Optimisation". Thesis, Cranfield University, 1999. http://hdl.handle.net/1826/3496.

Abstract:
In an effort to increase the cost-effectiveness of military equipment, a method has been developed to perform conceptual studies on combat aircraft, resulting in designs of specified capability optimised for minimum Life Cycle Cost (LCC). Consequently, the cost design loop can be considered as being closed, allowing the automated production of a consistent set of cost and performance data for different aircraft solutions. The design engineer can thus make informed, unbiased design decisions, leading to a more efficient use of shrinking Defence budgets. Because of the vast scale to which the cost model could be developed, 'deep overheads' are not included, restricting the use of the tool to the comparison of similar weapons systems (combat aircraft) with a common set of design objectives and performance constraints. The aircraft conceptual design tool is based on classical design methods, recently adapted and updated, and validated with existing aircraft data. The engine performance and sizing modules have been developed from detailed thermodynamic models, whilst the LCC model is an amalgamation and update of several different methods, each written for a different phase in the system life cycle. The aircraft synthesis models, optimisation tool and LCC algorithms are described, and validation results are presented where possible. The software cost model was used to generate a series of results, mimicking the early stages of an aircraft design selection procedure, and allowing a demonstration of the various trade-off studies that can be performed. Results from the selection process are presented and discussed, overall study conclusions are drawn, and areas for further work suggested. Published data for real aircraft and engines are included in the Appendices, together with detailed aircraft parameter and cost output data generated by the model.
11

Ramos, Gabriel de Oliveira. "Regret minimisation and system-efficiency in route choice". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/178665.

Abstract:
Multiagent reinforcement learning (MARL) is a challenging task in which self-interested agents concurrently learn a policy that maximises their utilities. Learning here is difficult because agents must adapt to each other, which makes their objective a moving target; as a consequence, no convergence guarantees exist for the general MARL setting. This thesis exploits a particular MARL problem, namely route choice (where selfish drivers aim at choosing routes that minimise their travel costs), to deliver convergence guarantees. We are particularly interested in guaranteeing convergence to two fundamental solution concepts: the user equilibrium (UE, when no agent benefits from unilaterally changing its route) and the system optimum (SO, when average travel time is minimum). The main goal of this thesis is to show that, in the context of route choice, MARL can be guaranteed to converge to the UE as well as to the SO under certain conditions. Firstly, we introduce a regret-minimising Q-learning algorithm, which we prove converges to the UE. Our algorithm works by estimating the regret associated with agents' actions and using this information as the reinforcement signal for updating the corresponding Q-values; we also establish a bound on the agents' regret. We then extend this algorithm to deal with non-local information provided by a navigation service. Using such information, agents can improve their regret estimates, thus performing better empirically. Finally, in order to mitigate the effects of selfishness, we present a generalised marginal-cost tolling scheme in which drivers are charged in proportion to the cost they impose on others. We then devise a toll-based Q-learning algorithm, which we prove converges to the SO and is fairer than existing tolling schemes.
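The following is a minimal sketch of the regret-as-reinforcement idea described above, in the spirit of (but not identical to) the algorithm proposed in the thesis; the route set, cost model, learning rate and exploration rate are illustrative assumptions.

```python
# Regret-minimising Q-learning sketch for a single driver choosing among routes.
# The driver tracks the average cost of each route and uses the negated regret of
# the chosen route (its cost minus the best alternative's cost) as the reward.
import random

routes = ["A", "B", "C"]
Q = {r: 0.0 for r in routes}
avg_cost = {r: 0.0 for r in routes}
counts = {r: 0 for r in routes}
alpha, epsilon = 0.1, 0.2

def choose_route():
    if random.random() < epsilon:
        return random.choice(routes)          # explore
    return max(Q, key=Q.get)                  # exploit current estimates

def update(route_taken, observed_costs):
    # observed_costs maps each route to the travel cost observed or estimated
    # for this episode (non-local information would refine these estimates).
    for r, c in observed_costs.items():
        counts[r] += 1
        avg_cost[r] += (c - avg_cost[r]) / counts[r]
    regret = avg_cost[route_taken] - min(avg_cost.values())   # >= 0
    Q[route_taken] += alpha * (-regret - Q[route_taken])      # reward = -regret

# Toy episode loop with made-up, congestion-free costs (illustration only).
true_cost = {"A": 10.0, "B": 12.0, "C": 15.0}
for _ in range(200):
    r = choose_route()
    update(r, {k: v + random.gauss(0.0, 1.0) for k, v in true_cost.items()})
print("learned Q-values:", Q)
```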
12

Roberts, Theari. "Near-optimum cost minimisation of transporting bioenergy carriers from source to intermediate distributors". Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/4117.

Abstract:
Thesis (MScEng (Industrial Engineering))--University of Stellenbosch, 2010.
The world is facing an energy crisis, with worldwide energy consumption rising at an alarming rate, and the effects of fossil fuels on the environment are also causing concern. For these two reasons there is a strong drive to find 'cleaner', renewable and sustainable energy sources. The Cape Winelands District Municipality (CWDM) area was identified as the study area for a bioenergy project, which aims to determine the feasibility of producing bioenergy from lignocellulosic biomass and transporting it as economically as possible to a number of electricity plants within the study area. From the CWDM project a number of research topics were identified. The aim of this thesis is to determine the best location for one or more processing plants so as to maximise the potential profit of the entire system. This is achieved by minimising the overall life cycle cost of the project, taking into account the costs of establishing and maintaining the crops, harvesting, transportation, conversion and generation, with a strong focus on transport costs. In conjunction with a Geographical Information Systems (GIS) specialist, and taking into account factors such as electricity demand, heat sales and substation locations, 14 possible plant locations were identified. The possible supply points for each of the 14 plant locations were then analysed with GIS to yield data on elevation, road distances and slope. Transport costs were calculated using the Vehicle Cost Schedule (VCS) from the Road Freight Association (RFA) together with fuel consumption calculations, taking into account slope, laden and unladen transport, and different transport commodities. These calculations, together with the other life cycle costs, were combined with the GIS results in an EXCEL file. From this a transportation optimisation model was developed, and the equivalent yearly life cycle cost of each of the 14 demand points was minimised by means of LINGO software. Initial runs were done for 2.5 MW capacity plants. From the high-profit areas identified, a single area was chosen and further runs were done on it to determine the effect of different plant capacities on the life cycle costs and on the farm gate price that can be paid to the farmer, as well as the effect of farmer participation at different plant capacities. The results indicate that it is currently possible to pay a farmer between R 300.00 and R 358.00 for a ton of biomass. They also show that higher participation from farmers in the CWDM project leads to lower costs and higher farm gate prices, since transport costs are lower. Although all the costs within the life cycle vary over time, the transport cost is the only cost that varies spatially, and this has a major effect on the overall system cost. The thesis found that generating electricity from woody biomass is feasible for all areas that were considered, as well as for all variations considered during the sensitivity analysis. For the recommended plant size of 5 MW, the transport of logs is optimal.
13

Blair, Alistair. "Resource-aware cloud-based elastic content delivery network with cost minimisation and QoS guarantee". Thesis, Ulster University, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.674640.

Abstract:
The distribution of digital multimedia, namely audio, video, documents, images and Web pages, is commonplace across today's Internet. The successful distribution of such multimedia, in particular video, can be achieved using a number of proven architectures such as Internet Protocol Television (IPTV) and Over-The-Top (OTT) services. The rapid uptake of multimedia streaming on the plethora of Internet-enabled devices that both architectures encompass has created a need to combine aspects of the two in order to maximise the scope and reach of this multimedia. Content Delivery Networks (CDNs) have been proposed as an effective means to facilitate this unification and to distribute multimedia efficiently, enhancing end-users' Web experience by replicating or copying content to edge-of-network locations in proximity to the end-user. However, CDNs often face resource over-provisioning, performance degradation and Service Level Agreement (SLA) violations, thus incurring high operational costs, hardware under-utilisation and a limited scope and scale of their services. The emergence of Cloud computing as a commercial reality has created an opportunity for Internet Service Providers (ISPs) to leverage their Cloud resources to disseminate multimedia. However, Cloud resource provisioning techniques can still result in over-provisioning and under-utilisation. To move beyond these shortcomings, this thesis sets out to establish the basis for developing advanced and efficient techniques that enable the utilisation of Cloud-based resources in a highly scalable and cost-effective manner, reducing over-provisioning and under-utilisation while minimising latency and therefore maintaining the QoS/QoE expected by end-users for streaming multimedia.
14

Ikken, Sonia. "Efficient placement design and storage cost saving for big data workflow in cloud datacenters". Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0020.

Abstract:
Typical cloud big data systems are workflow-based, including MapReduce, which has emerged as the paradigm of choice for developing large-scale data-intensive applications. Data generated by such systems are huge, valuable and stored at multiple geographical locations for reuse. Indeed, workflow systems, composed of jobs using collaborative task-based models, present new dependency and intermediate data exchange needs. This gives rise to new issues when selecting distributed data and storage resources, so that the execution of tasks or jobs is on time and resource usage is cost-efficient. Furthermore, the performance of task processing is governed by the efficiency of the intermediate data management. In this thesis we tackle the problem of intermediate data management in cloud multi-datacenters by considering the requirements of the workflow applications generating them. To this end, we design and develop models and algorithms for the big data placement problem in the underlying geo-distributed cloud infrastructure so that the data management cost of these applications is minimised. The first problem addressed is the study of the intermediate data access behaviour of tasks running in a MapReduce-Hadoop cluster. Our approach develops and explores a Markov model that uses the spatial locality of intermediate data blocks and analyses spill file sequentiality through a prediction algorithm. Secondly, this thesis deals with storage cost minimisation for intermediate data placement in federated cloud storage. Through a federation mechanism, we propose an exact ILP algorithm to assist multiple cloud datacenters hosting the generated intermediate data dependencies of pairs of files; the proposed algorithm takes into account scientific user requirements, data dependency and data size. Finally, a more generic problem is addressed, involving two variants of the placement problem: splittable and unsplittable intermediate data dependencies. The main goal is to minimise the operational data cost according to inter- and intra-job dependencies.
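To make the flavour of such a placement model concrete, here is a toy assignment-style ILP written with PuLP; the datacentres, storage prices, capacities and file sizes are made-up illustrative data, and the model is far simpler than the dependency-aware formulation developed in the thesis.

```python
# Toy ILP: place each intermediate file in exactly one datacentre so that the total
# storage cost is minimised, subject to per-datacentre capacity (illustration only).
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, value

files = {"f1": 40, "f2": 25, "f3": 60}            # file -> size in GB (assumed)
dcs = {"dc1": (0.023, 80), "dc2": (0.020, 70)}    # dc -> (price per GB, capacity GB)

prob = LpProblem("intermediate_data_placement", LpMinimize)
x = {(f, d): LpVariable(f"x_{f}_{d}", cat=LpBinary) for f in files for d in dcs}

# Objective: total storage cost across all datacentres.
prob += lpSum(files[f] * dcs[d][0] * x[f, d] for f in files for d in dcs)
# Each file is stored exactly once.
for f in files:
    prob += lpSum(x[f, d] for d in dcs) == 1
# Respect each datacentre's capacity.
for d in dcs:
    prob += lpSum(files[f] * x[f, d] for f in files) <= dcs[d][1]

prob.solve()
placement = {f: d for (f, d), var in x.items() if var.value() == 1}
print(placement, "cost =", value(prob.objective))
```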
15

Cunningham-Davis, John Peter. "Is cost-minimisation analysis a scientifically acceptable method for deciding health sector intervention choices? : an observational case study of echocardiography". Thesis, University of Liverpool, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.629946.

Abstract:
Due to limited budgets and tight financial control within the NHS, policy makers are faced with difficult decisions. Recent Government White Papers have turned the NHS full circle, and it has returned to where it started in the 1990s. The primary care health sector is increasingly responsible not only for managing its own budgets but also for providing services that need to be economically evaluated. In order to achieve these goals, sources of evidence stemming from evidence-based medicine, involving various forms of evaluation, are paramount. Policy makers need to base their decisions on sources of evidence that have been subject to economic evaluation. This thesis attempts to illustrate where the best sources of evidence can be obtained, whether from Randomised Controlled Trials or Observational Studies, when making choices involving cost-effective treatments and investigations, and how to ensure that the relevant economic evaluation technique is used appropriately. Heart failure represents a major problem not only for society but also for the NHS. Many treatments have been developed for this condition and have been shown to be beneficial for patients' morbidity and mortality as well as cost-effective for the NHS. Their prescription, however, needs assessment via echocardiography, and a large proportion of patients do not receive optimal treatment and management. Such issues make heart failure one of the best targets for a high-yield, low-cost health care intervention. Given the move towards a primary care led NHS, the viability of a primary care echocardiography service was economically evaluated and compared, via cost-minimisation analysis, with a secondary care echocardiography service. The results demonstrate not only the costs of these patients to the NHS but also how echocardiography could help reduce this burden. Primary care echocardiography was shown to be cost-effective compared with secondary care provision, and further research should consider the expansion of a similar echocardiographic service to a larger patient population across multiple health care centres. Combined with recent advances in technology, this would provide further evidence for funding such an expansion where the biggest impact will be achieved. Issues of equality of provision inherent in this expansion must also be addressed before any introduction.
16

Manan, Zainuddin Abdul. "Process synthesis for waste minimisation with emphasis on the synthesis of cleaner and cost effective distillation sequences for azeotropic mixtures". Thesis, University of Edinburgh, 1998. http://hdl.handle.net/1842/11936.

Abstract:
The main contributions of this thesis are:
1. Reaction-separation interactions: a waste minimisation approach to process design that promotes opportunistic recycling and includes a systematic technique for designing a recycle network in the context of an overall process.
2. Azeotropic separation systems:
(a) A novel geometric approach for the synthesis of cleaner and cost-effective distillation sequences for homogeneous azeotropic mixtures with and without boundary crossing. Important insights include:
• a geometric approach for synthesising and screening the alternative separation sequences, which results in a catalogue pairing the RCMs of the ternary systems with their most promising separation sequences;
• a novel procedure for entrainer minimisation for azeotropic distillation sequences; and
• new evidence linking the type of separation sequence, the azeotropic column feed-stage location and the volatility of an entrainer with the separability of homogeneous azeotropic mixtures. These findings conclusively explain the peculiar dependencies of the separability of homogeneous azeotropic mixtures on the reflux ratio and the number of stages.
(b) A geometric approach for the synthesis of cost-effective distillation sequences for heterogeneous azeotropic mixtures, which enables the graphical prediction of the absolute minimum number of units, the region and point of desirable entrainer flowrate, the optimum decanter tie-line position, and the distillate composition for the entrainer recovery column.
(c) Guidelines for exploiting feed composition flexibility to improve azeotropic separation, based on a novel geometric approach. Important insights include:
• the significance of the binary, ternary and desirable ternary feed compositions, and a procedure to achieve the desirable ternary feed composition;
• the development of a selection catalogue for feed preconcentration based on a novel geometric approach; and
• the use of mixing and recycling for grassroots design and retrofit.
17

Eklund, Arne. "Laparoscopic or Open Inguinal Hernia Repair - Which is Best for the Patient?" Doctoral thesis, Uppsala universitet, Centrum för klinisk forskning, Västerås, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-107630.

Abstract:
Inguinal hernia repair is the most common operation in general surgery. Its main challenge is to achieve low recurrence rates. With the introduction of mesh implants, first in open and later in laparoscopic repair, recurrence rates have decreased substantially. Therefore, the focus has been shifted from clinical outcome, such as recurrence, towards patient-experienced endpoints, such as chronic pain. In order to compare the results of open and laparoscopic hernia repair, a randomised multicentre trial - the Swedish Multicentre trial of Inguinal hernia repair by Laparoscopy (SMIL) - was designed by a study group from 11 hospitals. Between November 1996 and August 2000, 1512 men aged 30-70 years with a primary inguinal hernia were randomised to either laparoscopic (TEP, Totally ExtraPeritoneal) or open (Lichtenstein) repair. The primary endpoint was recurrence at five years. Secondary endpoints were short-term results, frequency of chronic pain and a cost analysis including complications and recurrences up to five years after surgery. In total, 1370 patients, 665 in the TEP and 705 in the Lichtenstein group, underwent operation. With 94% of operated patients available for follow-up after 5.1 years, the recurrence rate was 3.5% in the TEP and 1.2% in the Lichtenstein group. Postoperative pain was lower in the TEP group up to 12 weeks after operation, resulting in five days less sick leave and 11 days shorter time to full recovery. Patients in the TEP group had a slightly increased risk of major complications. Chronic pain was reported by 9-11% of patients in the TEP and 19-25% in the Lichtenstein group at the different follow-up points. Hospital costs for TEP were higher than for Lichtenstein, while community costs were lower due to shorter sick leave. By avoiding disposable laparoscopic equipment, the cost for TEP would be almost equal compared with Lichtenstein. In conclusion, both TEP and Lichtenstein repair have advantages and disadvantages for the patient. Depending on local resources and expertise both methods can be used and recommended for primary inguinal hernia repair.
18

Joubert, Janine Mari. "A cost minimisation analysis of the usage of central nervous system medicines by using a managed care medicine price list / Janine M. Joubert". Thesis, North-West University, 2004. http://hdl.handle.net/10394/607.

Abstract:
Increasing health care costs are an international problem from which South Africa is not excluded. Prescription medication contributes most to these high health care costs, and methods to reduce its cost to society are implemented worldwide. In South Africa, such a method is a managed care reference medicine price list, as introduced by a PBM (pharmacy benefit management) company. This step had cost implications in the private health sector in South Africa, and these implications were investigated in this study. Central nervous system (CNS) medicine items are among the top ten medicine items claimed and represent a substantial amount of the costs of all medicine items claimed during the study period. Antidepressants, a subdivision of the CNS agents, comprise the largest share of CNS agents claimed and of CNS costs, and were therefore investigated more closely. The objective of this study was to analyse the usage patterns and costs of central nervous system medicine items, and more specifically the antidepressants, against the background of the implementation of a managed care reference medicine price list in the private sector of South Africa. This study was conducted as a retrospective, non-experimental quantitative research project. The study population consisted of all medicine items claimed as observed on the database over the two-year study period of May 2001 to April 2002 (pre-MPL) and May 2002 to April 2003 (post-MPL). Data were provided by Medscheme™/Interpharm, and the Statistical Analysis System SAS 8.2® was used to extract the data from the database. The central nervous system agents had a prevalence of 8.10% (N=49098736) and a total cost of R757576976.72 over the two-year study period. The cost per CNS item increased by 5.98% or R11.50 per CNS item in the year after MPL implementation, and the cost per prescription containing CNS medicine items increased by 4.09% or R9.07 per prescription. CNS agents are classified into ten sub-pharmacological groups, according to the MIMSC3 (Snyman, 2003:13a). One of these sub-pharmacological groups, antidepressants, comprised 33.97% of all CNS medicine items claimed (N=3978364) and 45.53% of all costs associated with CNS medicine items (N=R757576976.72) over the study period. The number one antidepressant claimed was amitriptyline, a tricyclic antidepressant. Of the antidepressants with generic substitutes, all, with the exception of clomipramine, were prescribed at generic substitution rates of more than 50%. After the MPL implementation, generic antidepressant products were more frequently prescribed (a 16.48% increase, N=617190), although patient co-payments did not decrease immediately. Some innovator products had price reductions after the implementation of the MPL. This study indicates that cost minimisation analyses and retrospective drug utilisation reviews are valuable tools in the evaluation of managed care medicine price lists.
Thesis (M. Pharm. (Pharmacy Practice))--North-West University, Potchefstroom Campus, 2005.
Style APA, Harvard, Vancouver, ISO itp.
20

Boland, Angela. "The appropriateness of using cost-minimisation analysis as a methodology in the economic evaluation of health care technologies : considerations of evidence from randomised controlled trials". Thesis, University of Liverpool, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.502543.

Pełny tekst źródła
Streszczenie:
In England and Wales the demand for health care far outweighs the ability of the National Health Service (NHS) to supply health care. In such an environment, health care decision-making is increasingly influenced by considerations of clinical and cost effectiveness. The robustness and reliability that can be attached to an economic evaluation depend, to a great extent, on the nature and quality of the underlying clinical evidence. Aims: The aims of this thesis are to explore the following questions: 1. In what circumstances is it appropriate to use cost-minimisation analysis (CMA) as a methodology in the economic evaluation of health care technologies? 2. To what extent can health care professionals rely on published evidence from CMAs to inform their decision-making? 3. What specific steps can be taken to improve the conduct and the quality assessment of CMAs? Methods: Various methods have been employed to address the key issues discussed in this thesis, including reviews of the published clinical and economic literature and analyses of clinical and economic data from a randomised controlled trial.
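To illustrate the appropriateness question posed above, here is a minimal decision sketch reflecting the usual textbook condition for CMA: it only applies once the clinical outcomes of the alternatives are judged equivalent. The `Intervention` class, the equivalence margin and the numbers are illustrative assumptions, not the thesis's framework.

```python
# A minimal decision sketch of when cost-minimisation analysis (CMA) applies, under the
# usual assumption (not a claim about this thesis's conclusions): CMA is only defensible
# once the clinical outcomes of the alternatives have been shown to be equivalent;
# otherwise a cost-effectiveness or cost-utility analysis is needed.

from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    cost: float      # total cost per patient
    effect: float    # point estimate of the clinical effect

def choose_framework(a: Intervention, b: Intervention, equivalence_margin: float) -> str:
    """Return a suggested evaluation framework for comparing a and b."""
    if abs(a.effect - b.effect) <= equivalence_margin:
        cheaper = min((a, b), key=lambda i: i.cost)
        return f"CMA: outcomes treated as equivalent, prefer {cheaper.name} (lower cost)"
    return "CEA/CUA: outcomes differ, compare incremental cost per unit of effect"

print(choose_framework(Intervention("A", 1200.0, 0.80),
                       Intervention("B", 950.0, 0.79),
                       equivalence_margin=0.02))
```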
Style APA, Harvard, Vancouver, ISO itp.
21

Bain, Lynda M., of Western Sydney Nepean University and Faculty of Commerce. "Choice of labour flexibility vehicle within the Australian clothing industry : a case study". THESIS_FCOM_XXX_Bain_L.xml, 1996. http://handle.uws.edu.au:8081/1959.7/508.

Pełny tekst źródła
Streszczenie:
Existing theories and literature seeking to explain small business reticence to engage in enterprise bargaining at times adopt a generalised approach which precludes, or at least limits, their relevance and ability to explain small business choice at the industry and even organisational level. Such explanations cannot be detached from the external contextual framework in which an organisation operates and its own, often unique, strategic corporate response to the environmental influences challenging it. Labour flexibility vehicles, including bargaining, if chosen to facilitate broader corporate strategies, can thereby be regarded as functionally dependent upon and interactive with the corporate orientations and objectives of the organisation, which in turn are environmentally influenced and shaped. The research principally provides a focused description and analysis of the experiences of Clothingco, a small, up-market, vertically integrated clothing manufacturer and retailer, which has undergone various strategic readjustments at the corporate and industrial relations level throughout the 1990s in response to externally driven pressures. The research presents firm evidence to suggest that Clothingco has selected its labour flexibility mechanisms so that they are consistent with and able to accommodate prevailing corporate strategies and orientations. Its strategic corporate readjustments throughout the 1990s, which can be perceived as falling along the continuum from cost minimisation to productivity enhancement, have in particular registered differing choices of labour flexibility vehicle and strategies. In the light of the findings, the research identifies two factors that explain why enterprise bargaining was not adopted as the preferred labour flexibility vehicle at Clothingco: an increasing corporate focus on cost minimisation throughout the 1990s, coupled with an inability by management to countenance union intervention in enterprise bargaining procedures. The interaction of these two factors rendered enterprise bargaining, from the point of view of management, a strategically and industrially inferior labour flexibility vehicle compared with the use of contract labour. The research's strength lies in these areas, which have been highlighted and which can be monitored and tested more comprehensively in future research.
Master of Commerce (Hons)
Style APA, Harvard, Vancouver, ISO itp.
22

Taruvinga, Bridgit Gugulethu. "Market solutions to the low-income housing challenge – a case study of Bulawayo, Zimbabwe". Doctoral thesis, Faculty of Engineering and the Built Environment, 2019. http://hdl.handle.net/11427/31281.

Pełny tekst źródła
Streszczenie:
The provision of decent, affordable and well-located housing for low-income communities has been an intractable problem, especially for developing countries. The empirical puzzle that motivated this study is that, despite the adverse macro environment in Zimbabwe, there appear to be private-sector developers who are successfully developing housing benefiting the low-income group. This is so despite numerous studies claiming that, given the magnitude of the housing challenge, relying on a neoliberal doxa as a solution in a developing-country context is a fallacy. Working on the broad premise that these developments represent a successful adaptation to the structural environment, the main question guiding the study was - what accounts for the success of market-provided low-income housing developments in Zimbabwe despite the environment not being conducive to it? The two sub-questions flowing from this main question were firstly, how does the structural environment enable and/or constrain private sector low-income developments in Zimbabwe? Secondly, what strategies do developers adopt in response to the structural enablers and/or constraints to develop low-income housing in Zimbabwe? From these questions, the study has two hypotheses – the first hypothesis is that despite the adverse environment there exist in Zimbabwe structural enablers that make market solutions to the low-income housing challenge possible. The second hypothesis states that developers have specific discernible strategies that they employ in response to the adverse operating environment to reduce development costs to levels that enable them to provide low-income housing successfully. Using the Structure-Agency model, which is a theoretical framework rooted in institutional economics, a conceptual model to study the development process was developed and used to theorise the impact of structure on agency in the development process. Empirical evidence was gathered using observation, household surveys, and semi-structured interviews. This evidence was obtained from five housing schemes, the local authority, central government, financiers and the developers of the housing schemes, and then processed using NVIVO and SPSS. The study finds that most challenges faced by developers emanate from the institutional environment and access to resources. These challenges are central-local government dynamics fuelled by political undertones, lack of access to land suitable for the target group, a bureaucratic and rigid regulatory framework, as well as a lack of market-provided developer and end-user finance. Enabling factors were mainly the withdrawal of the government from the provision of housing, in line with World Bank neoliberal orthodoxy, and the incapacitation of the local authority, which eliminated alternative sources of housing for the low-income group other than market-provided housing, thus widening the market base for the developers. Strategies used by the developers include developer-provided finance to the target group, preselling developments, sidestepping the local authority through buying land at the periphery of the local authority boundary, sidestepping regulatory barriers through engaging in corruption, backward integration to promote efficient resource allocation, and an innovative approach to risk management that caters for the low-income group. The study concludes that all these strategies have one overriding objective of cost containment.
The findings indicate that there is potential, appetite and scope for more private-sector engagement. On this basis, it is recommended that the key to unlocking this potential lies with the state, as there are several policy implications that flow from these findings if the highlighted constraints are to be addressed. The study makes a number of key contributions to knowledge on market solutions to the low-income housing challenge in the area of theory, methodology, policy and empirical data.
Style APA, Harvard, Vancouver, ISO itp.
23

Khabou, Amal. "Dense matrix computations : communication cost and numerical stability". Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00833356.

Pełny tekst źródła
Streszczenie:
This thesis deals with a linear algebra routine widely used for solving linear systems: the LU factorization. Such a decomposition is usually computed with Gaussian elimination with partial pivoting (GEPP). The numerical stability of GEPP is characterised by a growth factor that remains fairly small in practice. However, the parallel version of this algorithm does not attain the lower bounds that characterise the communication cost of a given algorithm: the factorization of a block of columns constitutes a communication bottleneck. To address this problem, Grigori et al. [60] developed a communication-avoiding LU factorization (CALU) at the price of some redundant computation. In theory the upper bound on the growth factor of CALU is larger than that of GEPP, yet CALU is stable in practice. To improve this upper bound, we study a new pivoting strategy based on strong rank-revealing QR factorization, and we develop a new algorithm for block LU factorization whose growth-factor upper bound is smaller than that of GEPP. This pivoting strategy is then combined with tournament pivoting to produce an LU factorization that minimises communication and is more stable than CALU. On hierarchical systems, several levels of parallelism are available; however, none of the previously cited methods fully exploits these resources. We therefore propose and study two recursive algorithms based on the same principles as CALU but better suited to architectures with several levels of parallelism. To analyse in a precise and realistic way...
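For context, the growth factor mentioned above can be stated in standard notation. The following is a brief note on the classical bounds (Wilkinson's bound for GEPP is standard; the CALU bound is as reported in the communication-avoiding LU literature), not a result of this thesis.

```latex
% Standard definitions and bounds, given only for context (not results from this thesis).
% Growth factor of Gaussian elimination on an n x n matrix A, with a_{ij}^{(k)}
% denoting the entries after k elimination steps:
\[
  \rho_n \;=\; \frac{\max_{i,j,k} \lvert a_{ij}^{(k)} \rvert}{\max_{i,j} \lvert a_{ij} \rvert},
  \qquad
  \rho_n \;\le\; 2^{\,n-1} \quad \text{for GEPP (Wilkinson's classical bound)}.
\]
% For CALU with tournament pivoting over a reduction tree of height H, the bound
% reported in the communication-avoiding LU literature is roughly
\[
  \rho_n \;\le\; 2^{\,n(H+1)-1},
\]
% which is weaker in theory, although the factorization is observed to be stable in practice.
```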
Style APA, Harvard, Vancouver, ISO itp.
24

Lupo, Zoya Sara. "Impact of low carbon technologies on the British wholesale electricity market". Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33080.

Pełny tekst źródła
Streszczenie:
Since the late 1980s, the energy sector in Great Britain has undergone some core changes in its functioning, beginning with privatisation in the early 1990s, followed by increased green ambition and the start of a transition towards a low-carbon economy. As the British energy sector prepares itself for another major overhaul, it also risks being insufficiently prepared for the consequences this transition will have on existing generating capacity, security of supply, and the national electricity market. Upon meeting existing targets, the government of the United Kingdom risks becoming complacent, relegating energy regulation to the back seat and focusing on other regulatory tasks, while introducing cuts for thriving renewable and other low-carbon generating technologies. The government has implemented a variety of directives, initiatives, and policies that have sometimes been criticised for their lack of clarity and the potential overlap between energy and climate change directives. The government has introduced policies that aim to provide stable short-term solutions; however, a concrete way of resolving the energy trilemma, some of the long-term objectives and, more importantly, ways of achieving them are yet to be developed. This work analyses each low-carbon technology individually by assessing its past and current state in the British energy mix. By accounting for the changes and progress each technology underwent on its way to becoming part of the generating capacity in Great Britain, its impact on future wholesale electricity prices is studied. The research covered in this thesis presents an assessment of the existing and incoming low-carbon technologies in Great Britain and their individual and combined impact on the future of British energy economics by studying their implications for the electricity market. The methodological framework presented here uses a cost-minimisation merit-order model to provide useful insights for novel methods of electricity production and conventional thermal energy generation, to help cope with the aftermath of potentially inadequate operational and fiscal flexibility. The thesis covers a variety of scenarios differing in renewable and thermal penetration and examines the impact of interconnection, energy storage, and demand-side management on British wholesale electricity prices. The implications of increasing low-carbon capacity in the British energy mix are examined and compared to similar developments across Europe. The analysis highlights that if the optimistic scenarios for green energy installation are followed, there is sufficient energy supply, and renewable resources help to keep the wholesale price of electricity down. However, if the desired capacity targets are not met, the lack of available supply could push wholesale prices up, especially in the case of a natural gas price increase. Although initially costly, the modernisation of the British grid leads to a long-term decrease in wholesale electricity prices and provides a greater degree of security of supply and flexibility for all market participants.
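The cost-minimisation merit-order model referred to above can be illustrated with a minimal dispatch sketch: plants are stacked in order of increasing marginal cost until demand is met, and the marginal plant sets the wholesale price. This is a generic textbook formulation, not the thesis's model, and the fleet, costs and demand below are invented.

```python
# Minimal merit-order dispatch sketch (a generic model, not the thesis's own):
# plants are dispatched in order of increasing marginal cost until demand is met,
# and the wholesale price is set by the marginal cost of the last unit dispatched.
# Capacities (GW), marginal costs (GBP/MWh) and demand are illustrative only.

def merit_order_price(plants: list[tuple[str, float, float]], demand: float) -> float:
    """plants: (name, capacity, marginal_cost); returns the market-clearing price."""
    remaining = demand
    price = 0.0
    for name, capacity, marginal_cost in sorted(plants, key=lambda p: p[2]):
        if remaining <= 0:
            break
        dispatched = min(capacity, remaining)
        remaining -= dispatched
        price = marginal_cost          # price set by the marginal plant
    if remaining > 0:
        raise ValueError("demand exceeds available capacity")
    return price

fleet = [("wind", 12.0, 0.0), ("nuclear", 8.0, 10.0), ("ccgt", 25.0, 45.0), ("ocgt", 5.0, 90.0)]
print(merit_order_price(fleet, demand=30.0))   # -> 45.0: CCGT is the marginal plant
```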
Style APA, Harvard, Vancouver, ISO itp.
25

Dičkutė, Asta. "Trends in the use of Angiotensin converting enzyme inhibitors and Angiotensin II antagonists in Lithuania on 2005-2007 years". Master's thesis, Lithuanian Academic Libraries Network (LABT), 2008. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20080616_100311-96511.

Pełny tekst źródła
Streszczenie:
Objective: To evaluate the trends in utilisation of Angiotensin converting enzyme inhibitors and Angiotensin II receptor antagonists in Lithuania during 2005-2007. Methods: The MEDLINE database was searched to identify and evaluate literature relating to the pharmacokinetic and pharmacodynamic characteristics of Angiotensin converting enzyme inhibitors and Angiotensin II antagonists. Utilisation data for Angiotensin converting enzyme inhibitors (plain and combinations) and Angiotensin II antagonists (plain and combinations) in Lithuania over the three-year period (2005-2007) were obtained from the SoftDent, JSC database. The retail prices of agents acting on the renin-angiotensin-aldosterone system were selected from the Reimbursed Medical Products Reference Price Lists of 2005, 2006 and 2007. Drugs were classified according to the Anatomic Therapeutic Chemical system and use was quantified in terms of defined daily doses (ATC/DDD). Consumption of Angiotensin converting enzyme inhibitors (plain and combinations) and Angiotensin II antagonists (plain and combinations) was calculated by the DDD methodology and expressed as DDD per 1000 inhabitants per day. Pharmacoeconomic calculations were done according to cost minimization and reference price methodologies. Results: According to the meta-analysis of the included studies (69 publications) that evaluated various treatment and head-to-head efficacy comparisons found in the literature, there were no consistent differential effects of ACEIs versus ARBs on... [to full text]
Tikslas: atlikti Angiotenziną konvertuojančių fermentų inhibitorių ir Angiotenzino II antagonistų suvartojimo tendencijų Lietuvoje analizę 2005 – 2007 metais. Metodai: Duomenys apie Angiotenziną konvertuojančio fermento inhibitorių ir Angiotenzino II antagonistų farmakokinetines ir farmakodinamines savybes buvo surinkti iš MEDLINE elektroninių duomenų šaltinių. Duomenys apie AKF inhibitorių (paprastų ir sudėtinių) ir Angiotenzino II antagonistų (paprastų ir sudėtinių) suvartojimą Lietuvoje per 2005 – 2007 metus gauti iš UAB SoftDent duomenų bazės. Renino-angiotenzino-aldosterono sistemą veikiančių vaistų mažmeninės kainos išrinktos iš Lietuvos kompensuojamų vaistinių preparatų 2005, 2006, 2007 metų kainynų. Vaistai buvo suklasifikuoti pagal anatominę terapinę cheminę (ATC) klasifikaciją. AKFI inhibitorių (paprastų ir sudėtinių) ir Angiotenzino II antagonistų (paprastų ir sudėtinių) suvartojimas buvo vertinamas pagal apibrėžtos dienos dozės (DDD – daily defined dose) metodiką, o duomenys įvertinti pagal DDD skaičių, tenkantį 1000 gyventojų per vieną dieną. AKF inhibitorių (paprastų ir sudėtinių) ir Angiotenzino II antagonistų (paprastų ir sudėtinių) farmakoekonominei analizei atlikti buvo taikytas kainų mažinimo bei standartinės kainos nustatymo metodai. Rezultatai: Vadovaujantis metaanalizių, įvairių klinikinių tyrimų 69 publikacijomis bei išsamia AKF inhibitorių ir Angiotenzino II antagonisų efektyvumo palyginimo analize galima teigti, kad šių vaistų grupių poveikis... [toliau žr. visą tekstą]
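The DDD-based utilisation measure used in this work follows the WHO ATC/DDD convention; a minimal sketch of the calculation, with made-up sales and population figures, is given below.

```python
# Sketch of the standard DDD/1000 inhabitants/day utilisation measure mentioned above
# (WHO ATC/DDD methodology in general terms; the figures are made up).

def ddd_per_1000_per_day(total_amount_mg: float, ddd_mg: float,
                         population: int, days: int) -> float:
    """Defined daily doses consumed per 1000 inhabitants per day."""
    total_ddd = total_amount_mg / ddd_mg
    return total_ddd / (population * days) * 1000

# Example: a year's sales of a hypothetical ACE inhibitor with a DDD of 10 mg
print(ddd_per_1000_per_day(total_amount_mg=3.6e8, ddd_mg=10,
                           population=3_300_000, days=365))
```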
Style APA, Harvard, Vancouver, ISO itp.
26

Cheickh, Obeid Nour Eldin. "Régulation de processus multivariables par optimisation d'un critère fuyant". Besançon, 1987. http://www.theses.fr/1988BESA2018.

Pełny tekst źródła
Streszczenie:
The thesis summarises the behaviour of physical processes driven by a control law obtained by minimising a receding ("fuyant") criterion. The control applied to the process is interpreted in terms of the usual optimisation criteria. An application to a thermal process is presented.
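As a generic illustration of a receding ("fuyant") criterion of the kind referred to above, and not the exact functional used in the thesis, one can write a quadratic cost over a sliding horizon:

```latex
% Generic receding-horizon quadratic criterion, minimised anew at each time t:
\[
  J(t, u) \;=\; \int_{t}^{\,t+T} \Big( x(\tau)^{\top} Q\, x(\tau) + u(\tau)^{\top} R\, u(\tau) \Big)\, d\tau ,
  \qquad Q \succeq 0,\; R \succ 0,
\]
% with only the first portion of the minimising control applied before the horizon
% recedes and the optimisation is repeated.
```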
Style APA, Harvard, Vancouver, ISO itp.
27

Cheaitou, Ali. "Modèles Stochastiques pour La Planification de Production et la Gestion de Stocks : Application aux Produits à Court Cycle de Vie". Phd thesis, Ecole Centrale Paris, 2008. http://tel.archives-ouvertes.fr/tel-00275821.

Pełny tekst źródła
Streszczenie:
Uncertainty, which has many sources, is encountered in many domains and has to be dealt with. It stems essentially from our inability to predict exactly the future behaviour of part or all of a system. Over the last decades, several mathematical techniques have been developed to master this uncertainty, in order to reduce its negative impact and, consequently, the negative impact of our lack of knowledge.
In Supply Chain Management, the main source of uncertainty is future demand. This demand is generally modelled by probability distributions whose parameters are estimated using forecasting techniques. The impact of demand uncertainty on supply chain performance is significant: for example, the worldwide stock-out rate in the retail industry was 8.3% in 2007, while the worldwide rate of unsold products in mass retail was 1% in 2003. These two types of cost, which are essentially due to demand uncertainty, represent significant losses for the various actors of the supply chain.
This thesis develops mathematical models for production planning and inventory management that take this demand uncertainty into account, mainly for short life-cycle products. We propose several production planning models with a short planning horizon that capture different aspects of the problem, such as production capacities, demand forecast updating, capacity reservation options, and payback (return) options for products. We also highlight an aspect, increasingly important because of globalisation, related to the difference in production costs between suppliers. At the end of the thesis we propose a generalised model that could be applied to long life-cycle products and that exploits some of the results obtained for short life-cycle products. All these models are solved analytically or numerically using stochastic dynamic programming.
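As background to the models described above, the classic newsvendor trade-off is the simplest single-period building block for short life-cycle products; the sketch below computes the standard critical-fractile order quantity for normally distributed demand. The thesis models add capacities, forecast updates, reservation and payback options on top of this idea, so this is only an illustrative baseline with invented parameters.

```python
# A minimal single-period (newsvendor) building block for short life-cycle products,
# given only as background to the models described above; parameters are illustrative.

from scipy.stats import norm

def newsvendor_quantity(mu: float, sigma: float, unit_cost: float,
                        price: float, salvage: float) -> float:
    """Order quantity maximising expected profit for normally distributed demand."""
    underage = price - unit_cost          # margin lost per unit of unmet demand
    overage = unit_cost - salvage         # loss per unsold unit
    critical_fractile = underage / (underage + overage)
    return norm.ppf(critical_fractile, loc=mu, scale=sigma)

print(newsvendor_quantity(mu=1000, sigma=200, unit_cost=6.0, price=10.0, salvage=1.0))
```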
Style APA, Harvard, Vancouver, ISO itp.
28

Najem, Abdessamad. "Localisation optimale d'actionneurs pour une classe de systemes paraboliques". Perpignan, 1987. http://www.theses.fr/1987PERP0042.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
29

Nassiri, Nabil. "Le financement des hôpitaux, apports des réseaux de télémédecine, validations empiriques". Brest, 2007. http://www.theses.fr/2007BRES6001.

Pełny tekst źródła
Streszczenie:
Les modes de paiement prospectif ont pour objectif de rétablir l’efficacité économique du système de financement des hôpitaux. Cependant, de nombreux travaux montrent qu’ils génèrent des effets indésirables. Parmi ces effets, nous nous intéressons aux arbitrages en termes de qualité des soins et de sélection des patients qui se traduisent par des transferts de patients. L’objectif principal de cette thèse est de montrer que les réseaux de télémédecine permettent de pallier ces effets et de réduire les transferts. Pour ce faire, nous adoptons plusieurs démarches fondées, d’abord, sur un travail de modélisation théorique et ses résultats de simulation, et ensuite, sur deux validations économétriques et leurs résultats empiriques, Nous montrons que la télémédecine, comme instrument complémentaire de régulation, présente des avantages considérables en permettant de combler les insuffisances de la régulation par la concurrence par comparaison implicite au paiement prospectif. Combien coûte une téléconsultation et est-ce qu’elle présente un avantage comparatif ? Telles sont les questions traitées dans une troisième validation empirique relative à un projet de téléconsultation guyanais. D’après nos résultats, la téléconsultation s’est avérée rentable dans un site mais pas dans les autres en raison des problèmes organisationnels qu’ils ont connus. Ces résultats sont à approfondir compte tenu de la durée faible d’observation. Les bénéfices de la télémédecine ne sont pas tous pris en compte par les méthodes d’évaluation uni-critère, Partant de ce constat, nous suggérons deux approches pour juger son efficacité globale: l’évaluation contingente et l’analyse multicritère
The Prospective Payment System aims to restore the economic efficiency of hospital financing policies. However, many works show that it also generates undesirable effects. Among these effects, we are interested in the trade-offs in terms of health care quality and patient selection that cause patient transfers. The main aim of this thesis is to show that telemedicine networks make it possible to mitigate these effects and to reduce transfer decisions. With this intention, we adopt several approaches based, initially, on a theoretical model and its simulation results, and then on two econometric validations and their empirical results. We show that telemedicine, as a complementary instrument for hospital regulation, has considerable advantages in that it makes it possible to fill the insufficiencies of the yardstick competition implicit in prospective payment. How much does a teleconsultation cost? Does it have a comparative advantage? Such are the questions treated in a third empirical validation concerning a Guyanese teleconsultation project. According to our results, the teleconsultation proved to be profitable on one site but not on the others because of the organisational problems they experienced. These results need to be deepened given the short observation period. Not all benefits of telemedicine are taken into account by single-criterion evaluation methods. We therefore suggest two approaches to assess its overall effectiveness: contingent valuation and multi-criteria analysis.
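The teleconsultation cost question raised above can be illustrated with a simple break-even comparison: in pure cost terms, a teleconsultation pays off when the expected cost of the transfers it avoids exceeds its own cost. This is an illustrative simplification, not the thesis's econometric model, and every figure below is hypothetical.

```python
# Illustrative break-even sketch for the kind of comparison discussed above:
# a teleconsultation saves money when the expected cost of the transfers it avoids
# exceeds its own cost. All figures are hypothetical.

def teleconsultation_saves(cost_teleconsultation: float,
                           cost_transfer: float,
                           p_transfer_avoided: float) -> bool:
    """True if the expected avoided-transfer cost exceeds the teleconsultation cost."""
    return p_transfer_avoided * cost_transfer > cost_teleconsultation

print(teleconsultation_saves(cost_teleconsultation=150.0,
                             cost_transfer=2500.0,
                             p_transfer_avoided=0.10))   # True: 250 > 150
```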
Style APA, Harvard, Vancouver, ISO itp.
30

Anton, Arun V. "Choice of discount rate and agency cost minimisation in capital budgeting: analytical review and modelling approaches". Thesis, 2019. https://vuir.vu.edu.au/40447/.

Pełny tekst źródła
Streszczenie:
Capital budgeting is a crucial business function and most large firms use Discounted Cash Flow (DCF) methods, particularly the Net Present Value (NPV) method, which takes into account the time value of money, for evaluating investment projects. Hence, the discount rate plays a major role in the choice of capital investments, and both the selection and the appropriate use of a suitable discount rate are critical to sound capital budgeting. Extensive evidence from the literature indicates that agency problems exist in capital budgeting decisions, both when choosing and when using a discount rate for this process. Managers as agents can manipulate the choice of the discount rate to maximise their own benefits. This creates an agency problem that has impacts on efficient capital investment decisions. Most firms believe that using project-specific discount rates may open up incentives for managerial opportunistic behaviour and hence prefer a firm-wide single discount rate that might moderate the managerial bias. In other words, most firms use their company-wide Weighted Average Cost of Capital (WACC) to evaluate all of their capital projects. However, a company-wide WACC is not a correct approach, in that it may lead to the selection of high-risk, unprofitable projects and hence to an inefficient allocation of resources. This creates a need for a systematic and verifiable method to establish project-specific discount rates. If possible, the determination of these project-specific discount rates should be tied to outside market forces that are not under the control of the manager. But the selection of suitable project-specific discount rates alone may not completely minimise agency costs, as managers can still manipulate capital budgeting decisions to maximise their benefits. Hence, an appropriate capital budgeting framework that can further minimise agency costs and maximise company value is required. The main aims of the study are to develop a process to select appropriate project-specific discount rates that minimise agency costs and to develop a better capital budgeting framework to further minimise agency costs in capital budgeting. Such a framework should provide management incentives to achieve efficient capital budgeting outcomes, leading to enhanced company value.
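For readers less familiar with the machinery discussed above, the following sketch shows the standard textbook WACC and NPV formulas and, in a comment, the agency concern about discounting risky projects at a firm-wide rate. It is background only, not the framework developed in the thesis, and the inputs are invented.

```python
# Background sketch of the WACC and NPV machinery discussed above (standard textbook
# formulas, not the framework proposed in the thesis). Inputs are illustrative.

def wacc(equity: float, debt: float, cost_equity: float,
         cost_debt: float, tax_rate: float) -> float:
    """Weighted average cost of capital with a debt tax shield."""
    value = equity + debt
    return (equity / value) * cost_equity + (debt / value) * cost_debt * (1 - tax_rate)

def npv(cash_flows: list[float], rate: float) -> float:
    """Net present value of cash flows, with cash_flows[0] at t=0 (typically negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

firm_rate = wacc(equity=600.0, debt=400.0, cost_equity=0.12, cost_debt=0.06, tax_rate=0.30)
project = [-1000.0, 300.0, 400.0, 500.0]

# The agency issue in a nutshell: a risky project discounted at the firm-wide WACC
# instead of a higher project-specific rate can look (wrongly) more attractive.
print(round(firm_rate, 4),
      round(npv(project, firm_rate), 2),          # NPV at the firm-wide rate
      round(npv(project, firm_rate + 0.05), 2))   # NPV at a higher project-specific rate
```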
Style APA, Harvard, Vancouver, ISO itp.
31

Habicher, Alexandra [Author]. "Behavioural cost minimisation and minimal invasive blood-sampling in meerkats (S. Suricatta, Herpestidae) / submitted by Alexandra Habicher". 2009. http://d-nb.info/100975842X/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
32

Gouault, Laliberté Avril. "Impact économique d’un nouveau test diagnostique pour le cancer du poumon". Thèse, 2016. http://hdl.handle.net/1866/19206.

Pełny tekst źródła
Streszczenie:
Au Canada, le cancer du poumon est la cause principale de décès relié au cancer. À l’imagerie médicale, le cancer du poumon peut prendre la forme d’un nodule pulmonaire. La prise en charge menant au diagnostic définitif d’un nodule pulmonaire peut s’avérer complexe. La recherche en oncoprotéomique a permis le développement de nouveaux tests diagnostiques non-invasifs en cancer du poumon. Ceux-ci ont pour objectif d’évaluer le risque de malignité d’un nodule pour guider la prise en charge menant au diagnostic. Toutefois, l’impact économique de tels tests demeure inconnu. L’objectif de ce projet était de mesurer, en milieu de pratique réelle, l’utilisation des ressources en soins de santé pour l’investigation de nodules pulmonaires puis, de développer un modèle générique permettant d’évaluer l’impact économique au Québec des nouveaux tests protéomiques pour l’investigation de ces nodules. Tout d’abord, une revue de dossiers patients a été effectuée dans trois centres hospitaliers du Québec afin de mesurer les ressources en soins de santé et les coûts associés à l’investigation de nodules pulmonaires entre 0,8 et 3,0 cm. Par la suite, une analyse de minimisation de coûts a été effectuée à partir d’un modèle générique développé dans le cadre de ce projet. Ce modèle visait à comparer l’approche courante d’investigation à celle intégrant un test protéomique fictif afin de déterminer l’approche la moins dispendieuse. La revue de dossiers patients a permis de déterminer qu’au Québec, le coût moyen d’investigation d’un nodule pulmonaire est de 7 354$. Selon les résultats de l’analyse, si le coût du test protéomique est fixé en-deçà de 3 228,70$, l’approche intégrant celui-ci serait moins dispendieuse que l’approche courante. La présente analyse suggère que l’utilisation d’un test diagnostique protéomique non-invasif en début d’investigation pour un nodule de 0,8 à 3,0 cm, permettrait d’engendrer des économies pour le système de santé au Québec.
In Canada, lung cancer is the leading cause of death among cancer patients. Imaging technologies, such as computed tomography, allow the detection of potential lung cancers in the form of pulmonary nodules. The clinical pathway leading to the definitive diagnosis of a pulmonary nodule can be complex. Research in oncoproteomics has led to the development of novel noninvasive diagnostic tests in lung cancer. These tests aim to evaluate the risk of malignancy of a nodule in order to guide the clinical pathway leading to a diagnosis. However, the economic impact of such tests remains unknown. The objective of this project was to measure, in a real-life setting, health care resource utilization for the investigation of pulmonary nodules and then to develop a generic model to assess the economic impact in the province of Quebec of new proteomic tests for the investigation of these nodules. Firstly, a medical chart review was performed in three hospitals in Quebec to measure health care resource utilization for the investigation of pulmonary nodules of 0.8 to 3.0 cm. Then, a cost minimization analysis was performed using a generic model developed for this project. This model compared usual care to an approach integrating a hypothetical proteomic test in order to identify the less expensive approach. As per the medical chart review, the average cost for the investigation of a pulmonary nodule was $7,354. According to the results of the analysis, if the cost of the test is below $3,228.70, the approach integrating a proteomic test would be less expensive than the current approach. This study suggests that the use of a noninvasive proteomic diagnostic test at the beginning of the investigation of a pulmonary nodule of 0.8 to 3.0 cm could generate savings for the health care system in Quebec.
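The break-even logic reported above can be illustrated with a simple comparison: the test-guided pathway is preferred when the test price plus the expected downstream investigation cost stays below the cost of usual care. The downstream figure below is merely inferred from the two reported numbers ($7,354 and $3,228.70) for illustration; it is not taken from the thesis model itself.

```python
# A minimal cost-minimisation comparison in the spirit of the analysis above.
# Usual-care cost is the average reported in the chart review; the downstream cost of the
# test-guided pathway is inferred here only for illustration (7354 - 3228.70 = 4125.30),
# not taken from the thesis model itself.

def prefer_test_pathway(test_price: float, downstream_cost_with_test: float,
                        usual_care_cost: float) -> bool:
    return test_price + downstream_cost_with_test < usual_care_cost

USUAL_CARE = 7354.00
DOWNSTREAM_WITH_TEST = 7354.00 - 3228.70   # implied expected downstream cost

print(prefer_test_pathway(3000.00, DOWNSTREAM_WITH_TEST, USUAL_CARE))   # True: below threshold
print(prefer_test_pathway(3500.00, DOWNSTREAM_WITH_TEST, USUAL_CARE))   # False: above threshold
```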
Style APA, Harvard, Vancouver, ISO itp.