Dissertations / Theses on the topic 'Best Effort'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Best Effort.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Qiu, Jun. "Best effort decontamination of networks." Thesis, University of Ottawa (Canada), 2007. http://hdl.handle.net/10393/27908.

Full text
Abstract:
In this thesis we consider the problem of finding the optimal strategy to decontaminate the maximum possible number of nodes in a contaminated network with a fixed number of agents. We are given a team of mobile agents located at a node of a contaminated network, and the number of agents is not sufficient to decontaminate the whole network, that is, to reach a state in which all nodes are simultaneously clean. We want to find the maximum number of nodes that can be decontaminated and how to decontaminate them. We consider meshes (regular, octagonal, and hexagonal) and trees, and give optimal strategies for those topologies. We also analyze the performance of our strategies according to the number of decontaminated nodes, the number of agent movements, and the time required.
APA, Harvard, Vancouver, ISO, and other styles
2

Rojanarowan, Jerapong. "MPLS-Based Best-Effort Traffic Engineering." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7496.

Full text
Abstract:
MPLS-Based Best-Effort Traffic Engineering. Jerapong Rojanarowan. 120 pages. Directed by Dr. Henry L. Owen. The objective of this research is to develop a multipath traffic engineering framework for best-effort traffic in Multiprotocol Label Switching (MPLS) networks so as to deliver more equal shares of bandwidth to best-effort users than the traditional shortest-path algorithm. The proposed framework is static, and the input to the traffic engineering algorithm is restricted to the network topology. Performance evaluation of this framework is conducted by simulation using the ns-2 network simulator. In a multi-service capable network, some portion of the bandwidth is reserved for guaranteed services and the leftover portion is dedicated to best-effort service. This research examines the problem of traffic engineering for the remaining network bandwidth that is utilized by best-effort traffic, where demands are not known a priori. The framework makes the limited bandwidth available to best-effort traffic more equitably shared by the best-effort flows over a wide range of demands. Traditional traffic engineering research has not examined best-effort traffic.
APA, Harvard, Vancouver, ISO, and other styles
3

Miller, Alan Henry David. "Best effort measurement based congestion control." Thesis, Connect to e-thesis, 2001. http://theses.gla.ac.uk/1015/.

Full text
Abstract:
Thesis (Ph.D.) -- University of Glasgow, 2001.
Includes bibliographical references (p. i-xv). Print version also available. Mode of access: World Wide Web. System requirements: Adobe Acrobat Reader required to view the PDF document.
APA, Harvard, Vancouver, ISO, and other styles
4

Dong, Xin. "Providing best-effort services in dataspace systems /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/6902.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Khalfallah, Sofiane. "Algorithmique best-effort pour les réseaux dynamiques." Compiègne, 2010. http://www.theses.fr/2010COMP1889.

Full text
Abstract:
Many problems are open in the design of distributed applications over dynamic networks (mobility, absence of infrastructure, wireless communication, etc.). We focus our work on a specific case study of dynamic networks, vehicular ad hoc networks (VANETs). We first establish a state of the art for this field based on the European projects on VANETs. Second, we model the IEEE 802.11 standard, which has become the de facto wireless technology for communication between mobile nodes. We then present best-effort algorithmics, which complement the concept of self-stabilization in order to handle network dynamics; to this end, we introduce the concept of continuous convergence, a concept close to super-stabilization. We believe that metrics describing dynamic topologies are important (such as the notion of the duration of a continuous round). As an application of best-effort algorithmics, we propose a self-stabilizing group membership algorithm with continuous convergence; it works in dynamic, distributed systems and globally ensures a form of service continuity to applications while the system is still converging, unless a very large number of topology changes occur. We then present our contributions to the Airplug software suite, leading to a complete platform for performance evaluation and fast prototyping of best-effort protocols. We implement the distributed group membership protocol GRP and evaluate its performance in the Airplug-ns mode. Finally, we propose appropriate metrics that describe the stability of groups in order to evaluate the performance of our protocol.
APA, Harvard, Vancouver, ISO, and other styles
6

Ruiz, Sánchez Miguel Ángel. "Optimization of packet forwarding in best-effort routers." Nice, 2003. http://www.theses.fr/2003NICE4029.

Full text
Abstract:
The main task of a router is to forward packets toward their final destination across the different networks along the path. Since each packet is processed individually, the performance of a router depends on the time needed to process each packet. Owing to the growth and diversity of Internet traffic, the processing required to forward packets must be optimized. This thesis proposes algorithms to optimize the packet-forwarding performance of best-effort routers. To forward a packet, a router must first look up the routing information corresponding to that packet. This lookup is based on the packet's destination address and is called address lookup. We first propose two mechanisms for the incremental update of routing tables based on multibit tries. We determine the conditions necessary to support incremental updates in multibit tries and, from these conditions, propose algorithms and data structures to perform such updates. In particular, we propose a data structure that we call the PN (prefix nesting) bit vector, which encodes a set of prefixes together with their nesting relationships, since this information is needed to support incremental updates. We evaluate the performance of our mechanisms, implemented in C, for the search, insertion, and deletion operations, and we report their memory requirements. A second contribution of this thesis is a taxonomy and a reference framework for fast IP address lookup algorithms. Our taxonomy is based on the observation that the difficulty of finding the longest prefix matching the destination address lies in its double dimension: value and length. When presenting and classifying the different mechanisms, we emphasize the kind of transformation each mechanism applies to the set of prefixes. This unifying approach allows us to understand and compare the trade-offs of the different mechanisms. We compare the mechanisms in terms of their time and space complexity, and we also compare their performance by measuring lookup times on the same platform using a real routing table. A third contribution of this thesis is a mechanism that optimizes the use of buffers in routers in order to provide a high degree of flow isolation. We first study the role of buffers in routers and identify their desirable characteristics. We then propose MuxQ, a mechanism that provides a high degree of flow isolation by protecting the multiplexing function of a buffer from its burst-absorption function. We evaluate MuxQ using the ns-2 simulator, studying in particular its ability to isolate different types of flows, and we compare its performance with that of the Drop-Tail, CSFQ, FRED, and DRR mechanisms under various traffic conditions. MuxQ is a simple, deployable mechanism that provides a high degree of flow isolation while keeping only a limited amount of state.
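The lookup problem at the heart of this entry is longest-prefix matching. As a purely illustrative sketch (not the multibit-trie structures or the PN bit vector of the thesis), the following Python snippet shows the longest-prefix-match semantics that such data structures implement efficiently; the forwarding table and next-hop names are hypothetical.

    import ipaddress

    # Hypothetical forwarding table: (prefix, next hop).
    FORWARDING_TABLE = [
        (ipaddress.ip_network("10.0.0.0/8"), "eth0"),
        (ipaddress.ip_network("10.1.0.0/16"), "eth1"),
        (ipaddress.ip_network("10.1.2.0/24"), "eth2"),
        (ipaddress.ip_network("0.0.0.0/0"), "eth3"),  # default route
    ]

    def longest_prefix_match(destination: str) -> str:
        # Return the next hop of the most specific prefix containing the destination.
        addr = ipaddress.ip_address(destination)
        best_len, best_hop = -1, None
        for prefix, next_hop in FORWARDING_TABLE:
            if addr in prefix and prefix.prefixlen > best_len:
                best_len, best_hop = prefix.prefixlen, next_hop
        return best_hop

    print(longest_prefix_match("10.1.2.7"))  # -> eth2 (the /8, /16 and /24 all match; /24 wins)

A real router replaces this linear scan with trie- or table-based structures such as those studied in the thesis, precisely because the scan does not scale to large routing tables.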
APA, Harvard, Vancouver, ISO, and other styles
7

Andersson, Kajsa, and Simon Mårtensson. "ESG investing in the Eurozone : Portfolio performance of best-effort and best-in-class approaches." Thesis, Umeå universitet, Företagsekonomi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-161407.

Full text
Abstract:
The last decades have seen a rapid increase in sustainable investing, also known as ESG (Environmental, Social and Governance) investing. There has also been a growing body of academic literature devoted to whether investors can gain any financial benefits from taking ESG into consideration. Previous literature on portfolio performance in terms of risk-adjusted returns has given much of its attention to best-in-class approaches, a strategy that selects top performers in ESG within a sector or industry. The purpose of this study is foremost to investigate a best-effort approach to ESG investing, a strategy that focuses on the top improvers in ESG. The purpose is further to compare this with a best-in-class approach, since the findings from earlier studies of this strategy are still inconsistent. The region chosen for this study is the Eurozone. Several theories that have implications for portfolio studies and abnormal returns are considered in relation to the study and its findings, including the efficient market hypothesis, the adaptive market hypothesis and modern portfolio theory. The theoretical framework also covers asset-pricing models and the notion of risk-adjusted returns. A quantitative study with a deductive approach is used to form portfolios, with a Eurozone index as the investable universe. Best-effort and best-in-class portfolios, as well as difference portfolios of the two approaches, are created based on ESG data and different cut-off rates for portfolio inclusion. As the risk-adjusted performance measure, the Carhart four-factor model is used. The overall results are mostly insignificant in terms of abnormal returns. However, three best-effort portfolios based on the top ESG improvers show significant positive abnormal returns. These findings are strongest for the environmental and social factors. As for the best-in-class approach, only the governance portfolios provided weakly significant results in terms of abnormal returns. Further, the study is not able to significantly distinguish between a best-effort and a best-in-class approach when it comes to risk-adjusted performance. The exception is the environmental factor based on the top performers in each approach, where the best-effort portfolio outperforms the best-in-class portfolio. Finally, none of the portfolios provided significant negative risk-adjusted returns. This can at least be considered good news for ESG investing, since it indicates that investors do not have to sacrifice risk-adjusted returns in order to invest in a more sustainable way.
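For reference, the Carhart four-factor model mentioned in this abstract is the standard regression below (written here in LaTeX notation); a portfolio's abnormal return corresponds to the intercept \alpha_i:

    R_{i,t} - R_{f,t} = \alpha_i + \beta_i (R_{m,t} - R_{f,t}) + s_i \mathrm{SMB}_t + h_i \mathrm{HML}_t + m_i \mathrm{MOM}_t + \varepsilon_{i,t}

where R_{i,t} is the portfolio return, R_{f,t} the risk-free rate, R_{m,t} the market return, and SMB, HML and MOM are the size, value and momentum factors.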
APA, Harvard, Vancouver, ISO, and other styles
8

Papri, Rowshon Jahan. "Best effort query answering for mediators with union views." Thesis, Wichita State University, 2011. http://hdl.handle.net/10057/5032.

Full text
Abstract:
Consider an SQL query that involves joins of several relations, optionally followed by selections and/or projections. It can be represented by a conjunctive datalog query Q without negation or arithmetic subgoals. We consider the problem of answering such a query Q using a mediator M. For each relation R that corresponds to a subgoal in Q, M contains several sources; each source for R provides some of the tuples in R. The capabilities of each source are described in terms of templates. It might not be possible to obtain all the tuples in the result, Result(Q), using M, due to restrictions imposed by the templates. We therefore consider best-effort query answering: find as many tuples in Result(Q) as possible. We present an algorithm to determine whether Q can be so answered using M.
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering and Computer Science.
APA, Harvard, Vancouver, ISO, and other styles
9

Koehler, Bernd G. "Best-effort traffic engineering in multiprotocol label switched networks." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/14937.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Perera, Bandaralokuge Earl Shehan. "VoIP and best effort service enhancement on fixed WiMAX." Thesis, University of Canterbury. Electrical & Computer Engineering, 2008. http://hdl.handle.net/10092/1575.

Full text
Abstract:
Fixed Broadband Wireless Access (BWA) for the last mile is a promising technology that can offer high-speed voice, video and data service and fill the technology gap between wireless LANs and wide area networks. It is seen as a challenging competitor to conventional wired last-mile access systems such as DSL and cable, even in areas where those technologies are already available. More importantly, the technology can provide a cost-effective broadband access solution in rural areas beyond the reach of DSL or cable and in developing countries with little or no wired last-mile infrastructure. Earlier BWA systems were based on proprietary technologies, which made them costly and impossible to interoperate. The IEEE 802.16 set of standards was developed to level the playing field, and an industry group, the WiMAX Forum, was established to promote interoperability and compliance with this standard. This thesis gives an overview of the IEEE 802.16 WirelessMAN OFDM standard, which is the basis for Fixed WiMAX. An in-depth description of the medium access control (MAC) layer is provided and the functionality of its components explained. We have concentrated our effort on enhancing the performance of Fixed WiMAX for VoIP services and for best-effort traffic, which includes e-mail, web browsing, peer-to-peer traffic, etc. The MAC layer defines four native service classes for differentiated QoS levels from the outset. The unsolicited grant service (UGS) class is designed to support real-time data streams consisting of fixed-size data packets issued at periodic intervals, such as T1/E1 and Voice over IP without silence suppression, while the non-real-time polling service (nrtPS) and best effort (BE) classes are meant for lower-priority traffic. QoS and efficiency are at opposite ends of the scale in most cases, which makes it important to identify the trade-off between these two performance measures of a system. We have analyzed the effect that the packetization interval of a UGS-based VoIP stream has on system performance. The UGS service class has been modified so that the optimal packetization interval for VoIP can be dynamically selected based on PHY OFDM characteristics. This involves cross-layer communication between the PHY, MAC and application layers, and selection of packetization intervals that keep the flow within packet-loss and latency bounds while increasing efficiency. A low-latency retransmission scheme and a new ARQ feedback scheme for UGS have also been introduced, the goal being to guarantee QoS while increasing system efficiency. BE traffic, when serviced by contention-based access, is variable in speed and latency, and low in efficiency. A detailed analysis of the contention-based access scheme is performed using Markov chains. This leads to optimization of system parameters to increase utilization and reduce overheads, while taking into account TCP as the most common transport-layer protocol. nrtPS is considered as a replacement for contention-based access, and several enhancements have been proposed to increase efficiency and facilitate better connection management. The effects of the proposed changes are validated using analytical models in Matlab and verified using simulations. A simulation model was specifically created for IEEE 802.16 WirelessMAN OFDM in the QualNet simulation package. In essence, the aim of this work was to develop means to support the maximum number of users, with the required level of service, using the limited wireless resource.
APA, Harvard, Vancouver, ISO, and other styles
11

Legout, Arnaud. "Contrôle de congestion multipoint pour les réseaux best effort." Nice, 2000. http://www.theses.fr/2000NICE5451.

Full text
Abstract:
One of the keys to improving quality of service in best-effort networks is congestion control. In this thesis we study the problem of congestion control for multicast transmission in best-effort networks. The thesis makes four main contributions. We first study two multicast congestion control protocols, RLM and RLC, and identify pathological behaviors in each. These are extremely difficult to correct in the current context of the Internet, that is, while respecting the TCP-friendly paradigm. We then reconsider the congestion control problem in the more general context of best-effort networks. This leads us to redefine the notion of congestion, to define the properties required of an ideal congestion control protocol, and to define a new paradigm for the design of nearly ideal congestion control protocols: the Fair Scheduler (FS) paradigm. The approach used to define this new paradigm is purely formal. To validate this theoretical approach, we use the FS paradigm to design a new receiver-driven, cumulative-layered multicast congestion control protocol, PLM, which is able to track the available bandwidth without inducing any loss, even in a self-similar and multifractal environment. PLM outperforms RLM and RLC and validates the FS paradigm. Since this paradigm allows the design of both multicast and unicast congestion control protocols, we also define a new bandwidth allocation policy between multicast and unicast flows. This policy, called LogRD, considerably improves the satisfaction of multicast users without harming unicast users.
APA, Harvard, Vancouver, ISO, and other styles
12

Oida, Kazumasa. "Internet Traffic Control for Best-Effort and Guaranteed Services." 京都大学 (Kyoto University), 2002. http://hdl.handle.net/2433/149383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Trang, Si Quoc Viet. "FLOWER, an innovative Fuzzy LOWer-than-best-EffoRt transport protocol." Thesis, Toulouse, ISAE, 2015. http://www.theses.fr/2015ESAE0029/document.

Full text
Abstract:
In this thesis, we look at the possibility of deploying a Lower-than-Best-Effort (LBE) service over long-delay links such as satellite links. The objective is to provide a second priority class dedicated to background or signaling traffic. In the context of long-delay links, an LBE service might also help to optimize the use of the link capacity. In addition, an LBE service can enable low-cost or even free Internet access in remote communities via satellite communication. There exist two possible deployment levels for an LBE approach: either at the MAC layer or at the transport layer. In this thesis, we are interested in an end-to-end approach and thus specifically focus on transport-layer solutions. We first propose to study LEDBAT (Low Extra Delay Background Transport) because of its potential. Indeed, LEDBAT has been standardized by the IETF and is widely deployed within the official BitTorrent client. Unfortunately, the tuning of LEDBAT parameters turns out to depend highly on the network conditions. In the worst-case scenario, LEDBAT flows can starve other traffic, such as commercial traffic carried over a satellite link. LEDBAT also suffers from an intra-unfairness issue, called the latecomer advantage. All these reasons often prevent operators from allowing the use of such a protocol over wireless and long-delay links, since a misconfiguration can overload the link capacity. Therefore, we design FLOWER, a new fuzzy-logic, delay-based transport protocol, as an alternative to LEDBAT. By using a fuzzy controller to modulate the sending rate, FLOWER aims to solve the LEDBAT issues while fulfilling the role of an LBE protocol. Our simulation results show that FLOWER can carry LBE traffic not only in the long-delay context, but in a wide range of network conditions where LEDBAT usually fails.
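For context, the delay-based control that LEDBAT-style protocols perform can be illustrated by the simplified, hypothetical sketch below (loosely following the RFC 6817 window update, not FLOWER's fuzzy controller); the constants are examples only.

    # Illustrative LEDBAT-style congestion-window update (simplified); all constants are examples.
    TARGET = 0.100   # target queuing delay in seconds
    GAIN = 1.0       # window gain per RTT
    MSS = 1500       # segment size in bytes

    def update_cwnd(cwnd, base_delay, current_delay, bytes_acked):
        # Grow the window while queuing delay is below TARGET, shrink it when above.
        queuing_delay = current_delay - base_delay       # one-way delay minus its observed minimum
        off_target = (TARGET - queuing_delay) / TARGET   # positive: below target; negative: above
        cwnd += GAIN * off_target * bytes_acked * MSS / cwnd
        return max(cwnd, MSS)                            # never fall below one segment

    print(update_cwnd(cwnd=30000, base_delay=0.250, current_delay=0.300, bytes_acked=3000))

Because the window shrinks as soon as the measured queuing delay exceeds the target, such a flow yields to competing traffic, which is what makes it lower than best effort.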
APA, Harvard, Vancouver, ISO, and other styles
14

Goichon, François. "Equité d'accès aux ressources dans les systèmes partagés best-effort." Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00921313.

Full text
Abstract:
Over the last decade, the IT service industry has transformed itself in order to meet growing customer demands in terms of availability, performance, and storage capacity of computing systems. To cope with these demands, infrastructure providers have naturally adopted system sharing, in which the workloads of different customers are executed simultaneously. This technique, which pools the resources of a system among its users, allows providers to reduce the maintenance cost of their infrastructures, but raises problems of performance interference and fairness of access to resources. We use the term best-effort shared systems for systems whose resource management is centered on maximizing the use of the available resources while guaranteeing a fair distribution among the different users. In this work, we highlight the possibility for an abusive user to attack the resources of a shared platform in order to significantly reduce the quality of service provided to the other, concurrent users. The lack of metrics generic across the different resources, together with the natural trade-off between fairness and performance optimization, are the main causes of the problems encountered in these systems. We introduce utilization time as a generic resource-consumption metric that adapts to the different resources managed by best-effort shared systems. This leads us to specify generic, transparent, and automated control layers that enforce fairness policies while guaranteeing maximized use of the regulated resources. Our prototype, implemented in the Linux kernel, allows us to evaluate the benefit of our approach for regulating memory-usage overloads. We observe a significant improvement in the performance of applications typical of best-effort shared systems under memory contention. Moreover, our technique bounds the impact of abusive applications on other legitimate concurrent applications, since the uncertainty in execution times is naturally reduced.
APA, Harvard, Vancouver, ISO, and other styles
15

Pu, Song 1968. "MPEG-2 transport over ATM networks with best effort service." Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20972.

Full text
Abstract:
With increasing interest in the transmission of audio-visual applications (e.g. MPEG-2) over ATM best-effort services, such as Available Bit Rate (ABR) and Unspecified Bit Rate (UBR), efficient video-oriented control mechanisms for improving the video quality in the presence of loss have to be designed. In this thesis, we proposed and evaluated a new quality-of-service control framework for use with a modified Unspecified Bit Rate service.
We surveyed a number of issues related to the coding and control of MPEG-2 video data streams transmitted over ATM networks, analyzed the network factors affecting the quality of service of real-time video applications and showed how the proposed video-oriented QoS control framework improves the performance of such services.
The presented framework relies on four components: a dynamic frame-level priority data partition mechanism based on MPEG-2 data structure and feedback from the network; an enhanced ATM Adaptation Layer type 5 (AAL-5) associated with a new slice-based MPEG-2 encapsulation strategy; a forward error correction (FEC) mechanism, which is implemented at the AAL-5 service specific convergence sublayer to provide the error detection and recovery capability, and a video-oriented cell discarding scheme, which adaptively and selectively adjusts discard level according to switch buffer occupancy, video cell payload types and FEC drop tolerance.
This best-effort video delivery framework is evaluated using simulation and real MPEG-2 video data. The overall objective of this proposed framework is twofold. First, ensuring a graceful picture quality degradation by minimizing cell loss probability for critical video data while guaranteeing a bounded cell transfer delay. Second, optimizing the network effective throughput by reducing the transmission of non useful data.
In comparison to previous approaches, the performance evaluation has shown a significant reduction of the bad throughput and minimization of losses of Intra- and Predictive-coded frames at the video slice layer.
APA, Harvard, Vancouver, ISO, and other styles
16

Pu, Song. "MPEG-2 transport over ATM networks with best effort service." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0022/MQ50861.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Luo, Heng. "Best effort QoS support routing in mobile ad hoc networks." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/6255.

Full text
Abstract:
In the past decades, mobile traffic generated by devices such as smartphones, iPhones, laptops and mobile gateways has been growing rapidly. While traditional direct-connection techniques evolve to provide better access to the Internet, a new type of wireless network, the mobile ad hoc network (MANET), has emerged. A MANET differs from a direct-connection network in that it is multi-hopping and self-organizing and is thus able to operate without the help of pre-fixed infrastructure. However, challenges such as dynamic topology, unreliable wireless links and resource constraints impede the wide application of MANETs. Routing in a MANET is complex because it has to react efficiently to unfavourable conditions and support traditional IP services. In addition, Quality of Service (QoS) provision is required to support the rapid growth of video in mobile traffic. As a consequence, tremendous effort has been devoted to the design of QoS routing in MANETs, leading to the emergence of a number of QoS support techniques. However, the application-independent nature of QoS routing protocols results in the absence of a one-for-all solution for MANETs. Meanwhile, the relative importance of QoS metrics in real applications is not considered in many studies. A Best Effort QoS support (BEQoS) routing model, which evaluates and ranks alternative routing protocols by considering the relative importance of multiple QoS metrics, is proposed in this thesis. BEQoS has two algorithms, SAW-AHP and FPP, for different scenarios. The former is suitable for cases where uncertainty factors such as standard deviation can be neglected, while the latter considers the uncertainty of the problem. SAW-AHP is a combination of Simple Additive Weighting and the Analytic Hierarchy Process, in which the decision maker or network operator is first required to express his or her preference among metrics with a specific number according to given rules. The comparison matrices are composed accordingly, the synthetic weights for the alternatives are obtained from them, and the alternative with the highest weight is the optimal protocol. The reliability and efficiency of SAW-AHP are validated through simulations. An integrated architecture using the evaluation results of SAW-AHP is proposed, which incorporates ad hoc technology into the existing WLAN and therefore provides a solution to the last-mile access problem. The costs and gains induced by protocol selection are also discussed. The thesis concludes by describing the potential application areas of the proposed method. SAW-AHP is then extended with fuzzy logic to accommodate the vagueness of the decision maker and the complexity of the problem, such as the standard deviation observed in simulations. Fuzzy triangular numbers are used to substitute for the crisp numbers in the comparison matrices of traditional AHP, and Fuzzy Preference Programming (FPP) is employed to obtain crisp synthetic weights for the alternatives, based on which they are ranked. The reliability and efficiency of SAW-FPP are demonstrated by simulations.
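As a purely illustrative sketch of the Simple Additive Weighting step mentioned in this abstract (not the full SAW-AHP procedure of the thesis), the Python snippet below ranks hypothetical routing protocols from normalized per-metric scores and operator-chosen weights; every name and number is invented.

    # Hypothetical normalized scores per QoS metric (higher is better) and operator weights.
    scores = {
        "Protocol-A": {"delay": 0.6, "delivery_ratio": 0.8, "overhead": 0.5},
        "Protocol-B": {"delay": 0.7, "delivery_ratio": 0.7, "overhead": 0.4},
        "Protocol-C": {"delay": 0.5, "delivery_ratio": 0.9, "overhead": 0.6},
    }
    weights = {"delay": 0.5, "delivery_ratio": 0.3, "overhead": 0.2}  # should sum to 1

    def saw_rank(scores, weights):
        # Simple Additive Weighting: weighted sum of normalized scores per alternative.
        totals = {name: sum(weights[m] * v for m, v in metrics.items())
                  for name, metrics in scores.items()}
        return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

    print(saw_rank(scores, weights))  # best alternative first

In AHP the weights themselves are derived from pairwise comparison matrices rather than chosen directly, which is the part the thesis combines with SAW.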
APA, Harvard, Vancouver, ISO, and other styles
18

Ye, Dan. "Control of real-time multimedia applications in best-effort networks." [College Station, Tex. : Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Garyali, Piyush. "On Best-Effort Utility Accrual Real-Time Scheduling on Multiprocessors." Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/34112.

Full text
Abstract:
We consider the problem of scheduling real-time tasks on a multiprocessor system. Our primary focus is scheduling on multiprocessor systems where the total task utilization demand, U, is greater than m, the number of processors on the multiprocessor system, i.e., the total available processing capacity of the system. When U > m, the system is said to be overloaded; otherwise, the system is said to be underloaded. While significant literature exists on multiprocessor real-time scheduling during underloads, little is known about scheduling during overloads, in particular in the presence of task dependencies, e.g., due to synchronization constraints. We consider real-time tasks that are subject to time/utility function (or TUF) time constraints, which allow task urgency to be expressed independently of task importance, e.g., the most urgent task being the least important. The urgency/importance decoupling allowed by TUFs is especially important during overloads, when not all tasks can be optimally completed. We consider the timeliness optimization objective of maximizing the total accrued utility and the number of deadlines satisfied during overloads, while ensuring task mutual exclusion constraints and freedom from deadlocks. This problem is NP-hard. We develop a class of polynomial-time heuristic algorithms, called the Global Utility Accrual (or GUA) class of algorithms. The algorithms construct a directed acyclic graph representation of the task dependency relationship and build a global multiprocessor schedule of the zero in-degree tasks to heuristically maximize the total accrued utility and ensure mutual exclusion. Potential deadlocks are detected through a cycle-detection algorithm and resolved by aborting a task in the deadlock cycle. The GUA class includes two algorithms, namely the Non-Greedy Global Utility Accrual (or NG-GUA) and Greedy Global Utility Accrual (or G-GUA) algorithms. NG-GUA and G-GUA differ in the way schedules are constructed towards meeting all task deadlines, when it is possible to do so. We establish several properties of the algorithms, including the conditions under which all task deadlines are met, satisfaction of mutual exclusion constraints, and deadlock-freedom. We create a Linux-based real-time kernel called ChronOS for multiprocessors. ChronOS is extended from the PREEMPT_RT real-time Linux patch, which provides optimized interrupt service latencies and real-time locking primitives. ChronOS provides a scheduling framework for the implementation of a broad range of real-time scheduling algorithms, including utility accrual, non-utility accrual, global, and partitioned scheduling algorithms. We implement the GUA class of algorithms and their competitors in ChronOS and conduct experimental studies. The competitors include G-EDF, G-NP-EDF, G-FIFO, gMUA, P-EDF and P-DASA. Our study reveals that the GUA class of algorithms accrues higher utility and satisfies a greater number of deadlines than the deadline-based scheduling algorithms, by as much as 750% and 600%, respectively. In addition, we observe that G-GUA accrues higher utility than NG-GUA during overloads by as much as 25%, while NG-GUA satisfies a greater number of deadlines than G-GUA by as much as 5% during underloads.
Master of Science
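The entry above relies on time/utility functions (TUFs), where the utility gained from a task depends on its completion time. As a hypothetical illustration of utility-accrual scheduling (not the NG-GUA or G-GUA algorithms themselves), the snippet below greedily orders ready tasks on a single processor by potential utility density, i.e., utility earned per unit of execution time; the tasks and TUFs are invented.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Task:
        name: str
        exec_time: float                   # remaining execution time
        tuf: Callable[[float], float]      # maps completion time -> utility

    def greedy_utility_accrual(tasks, now=0.0):
        # Heuristic: run first the task with the highest utility per unit of execution time.
        def density(task):
            utility = task.tuf(now + task.exec_time)   # utility if the task runs immediately
            return utility / task.exec_time
        return sorted(tasks, key=density, reverse=True)

    # Hypothetical step TUFs: full utility up to a deadline, zero afterwards.
    def step(deadline, value):
        return lambda t: value if t <= deadline else 0.0

    tasks = [Task("A", 2.0, step(5.0, 10.0)), Task("B", 1.0, step(2.0, 4.0))]
    print([t.name for t in greedy_utility_accrual(tasks)])  # -> ['A', 'B']

The actual GUA algorithms additionally handle multiple processors, task dependencies and deadlock resolution, which this sketch ignores.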
APA, Harvard, Vancouver, ISO, and other styles
20

Khariwal, Vivek. "Adaptive control of real-time media applications in best-effort networks." Texas A&M University, 2004. http://hdl.handle.net/1969.1/1236.

Full text
Abstract:
Quality of Service (QoS) in real-time media applications can be defined as the ability to guarantee the delivery of packets from source to destination over best-effort networks within some constraints. These constraints, defined as the QoS metrics, are end-to-end packet delay, delay jitter, throughput, and packet losses. Transporting real-time media applications over best-effort networks, e.g. the Internet, is an area of current research. Both the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) have failed to provide the desired QoS. This research aims at developing application-level end-to-end QoS controls to improve the user-perceived quality of real-time media applications over best-effort networks, such as the public Internet. In this research an end-to-end packet-based approach is developed, consisting of a source buffer, the network simulator ns-2, a destination buffer, and a controller. Unconstrained model predictive control (MPC) methods are implemented by the controller at the application layer. The end-to-end packet-based approach uses end-to-end network measurements and predictions as feedback signals. The effectiveness of the developed control methods is examined using Matlab and ns-2. The results demonstrate that sender-based control schemes utilizing UDP at the transport layer are effective in providing QoS for real-time media applications transported over best-effort networks. Significant improvements in providing QoS are visible through the reduction of packet losses and the elimination of disruptions during the playback of real-time media. This is accompanied by either a decrease or an increase in the playback start-time.
APA, Harvard, Vancouver, ISO, and other styles
21

Bhattacharya, Aninda. "Flow control of real-time unicast multimedia applications in best-effort networks." [College Station, Tex. : Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1553.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Doddi, Srikar. "Empirical modeling of end-to-end delay dynamics in best-effort networks." Texas A&M University, 2003. http://hdl.handle.net/1969.1/2244.

Full text
Abstract:
Quality of Service (QoS) is the ability to guarantee that data sent across a network will be received by the destination within some constraints. For many advanced applications, such as real-time multimedia, QoS is determined by four parameters: end-to-end delay, delay jitter, available bandwidth or throughput, and packet drop or loss rate. It is interesting to study and be able to predict the behavior of end-to-end packet delays in a wide area network (WAN) because it directly affects the QoS of real-time distributed applications. In the current work a time-series representation of end-to-end packet delay dynamics transported over standard IP networks has been considered. As it is of interest to model the open-loop delay dynamics of an IP WAN, UDP is used for transport purposes. This research aims at developing models for single-step-ahead and multi-step-ahead prediction of moving-average, one-way end-to-end delays in standard IP WANs. The data used in this research have been obtained from simulations performed using the widely used simulator ns-2. Simulation conditions have been tuned to enable some matching of the end-to-end delay profiles with real traffic data; this has been accomplished through the use of delay autocorrelation profiles. Linear system identification models, Auto-Regressive with eXogenous input (ARX) and Auto-Regressive Moving Average with eXtra/eXternal input (ARMAX), and non-linear models such as the Feedforward Multi-Layer Perceptron (FMLP), have been found to perform accurate single-step-ahead predictions under varying conditions of cross-traffic flow and source send rates. However, as expected, as the multi-step-ahead prediction horizon is increased, the models do not perform as accurately as the single-step-ahead prediction models. Acceptable multi-step-ahead predictions for up to a 500 ms horizon have been obtained.
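As a hypothetical illustration of the single-step-ahead prediction idea described above (not the models or data of the thesis), the snippet below fits an AR(p) predictor to a short synthetic delay series by ordinary least squares and predicts the next sample; numpy is assumed to be available.

    import numpy as np

    def fit_ar(series, p):
        # Fit AR(p) coefficients by least squares: y[t] ~ a1*y[t-1] + ... + ap*y[t-p].
        y = np.asarray(series, dtype=float)
        X = np.column_stack([y[p - k - 1 : len(y) - k - 1] for k in range(p)])
        coeffs, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
        return coeffs

    def predict_next(series, coeffs):
        # One-step-ahead prediction from the last p observed samples.
        p = len(coeffs)
        recent = np.asarray(series[-p:][::-1], dtype=float)  # y[t-1], y[t-2], ..., y[t-p]
        return float(coeffs @ recent)

    delays = [42.0, 43.1, 41.8, 44.0, 43.5, 42.9, 43.7, 44.2]  # synthetic moving-average delays (ms)
    coeffs = fit_ar(delays, p=2)
    print(predict_next(delays, coeffs))

Multi-step-ahead prediction can then be obtained by feeding each prediction back in as the newest sample, which is also why accuracy degrades as the horizon grows.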
APA, Harvard, Vancouver, ISO, and other styles
23

Shukla, Yashkumar Dipakkumar. "Prediction of end-to-end single flow characteristics in best-effort networks." Thesis, Texas A&M University, 2005. http://hdl.handle.net/1969.1/2362.

Full text
Abstract:
The nature of user traffic in coming years will become increasingly multimedia-oriented, with much more stringent Quality of Service (QoS) requirements. The current generation of the public Internet does not provide any strict QoS guarantees, and providing QoS for multimedia applications has been a difficult and challenging problem. Developing predictive models for best-effort networks, like the Internet, would be beneficial for addressing a number of technical issues, such as network bandwidth provisioning and congestion avoidance/control, to name a few. The immediate motivation for creating predictive models is to improve the QoS perceived by end-users in real-time applications, such as audio and video. This research aims at developing models for single-step-ahead and multi-step-ahead prediction of end-to-end single-flow characteristics in best-effort networks. The performance of path-independent predictors has also been studied in this research. Empirical predictors are developed using simulated traffic data obtained from ns-2 as well as actual traffic data collected from PlanetLab. The linear system identification models Auto-Regressive (AR) and Auto-Regressive Moving Average (ARMA) and the non-linear Feed-forward Multi-layer Perceptron (FMLP) model have been used to develop predictive models. In the present research, accumulation is chosen as the signal to model the end-to-end single-flow characteristics. As the raw accumulation signal is extremely noisy, the moving average of the accumulation is used for the prediction. The developed predictors have been found to perform accurate single-step-ahead predictions. However, as the multi-step-ahead prediction horizon is increased, the models do not perform as accurately as in the single-step-ahead case. Acceptable multi-step-ahead predictors for up to a 240 ms prediction horizon have been obtained using actual traffic data.
APA, Harvard, Vancouver, ISO, and other styles
24

Konstantinou, Apostolos. "Flow control techniques for real-time media applications in best-effort networks." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/1085.

Full text
Abstract:
Quality of Service (QoS) in real-time media applications is an area of current interest because of the increasing demand for audio/video, and more generally multimedia, applications over best-effort networks such as the Internet. Media applications are transported using the User Datagram Protocol (UDP) and tend to use a disproportionate amount of network bandwidth, as they do not perform congestion or flow control. Methods for application-level QoS control are desirable to enable users to perceive a consistent media quality. This can be accomplished either by modifying current protocols at the transport layer or by implementing new control algorithms at the application layer, irrespective of the protocol used at the transport layer. The objective of this research is to improve the QoS delivered to end-users in real-time applications transported over best-effort packet-switched networks. This is accomplished using UDP at the transport layer, along with adaptive predictive and reactive control at the application layer. An end-to-end fluid model is used, including the source buffer, the network and the destination buffer. Traditional control techniques, along with more advanced adaptive predictive control methods, are considered in order to provide the desired QoS and make a best-effort network an attractive channel for interactive multimedia applications. The effectiveness of the control methods is examined using a Simulink-based fluid-level simulator in combination with trace files extracted from the well-known network simulator ns-2. The results show that improvement in real-time applications transported over best-effort networks using unreliable transport protocols, such as UDP, is feasible. The improvement in QoS is reflected in the reduction of flow loss at the expense of an increase in flow dead-time, playback disruptions, or both.
APA, Harvard, Vancouver, ISO, and other styles
25

Axell, Erik. "Coexistence of Real Time and Best Effort Services in Enhanced Uplink WCDMA." Thesis, Linköping University, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2725.

Full text
Abstract:

The increasing use of data services and the importance of IP-based services in third generation mobile communication systems (3G) require the transmission from the cell phone to the base station, i.e. the uplink, to support high-speed data rates. In the air interface for 3G in Europe, WCDMA, a concept for enhancing the transmission from the cell phone to the base station, called Enhanced Uplink, is being standardized. The overall goal is to provide high-speed data access for the uplink. One of the requirements is that the enhanced uplink channels must be able to coexist with already existing WCDMA releases. For example, the Enhanced Uplink must not seriously impact real-time services, such as speech, carried on current WCDMA channels.

The purpose of this work is to study how the quality, coverage and capacity of real time services carried on previous WCDMA releases is affected when introducing the Enhanced Uplink in a WCDMA network. The main focus of the study is thus to demonstrate the trade-off between voice and best effort performances.

Theoretical assessments and simulations show that the Enhanced Uplink has many advantages over previous WCDMA releases. For example, the Enhanced Uplink yields a larger system throughput for all voice loads. The noise rise, i.e. the ratio of the total received power to the background noise power, is considered as the resource. It is shown that user traffic carried on the Enhanced Uplink is able to operate under a higher noise rise level as well as to obtain a higher throughput per noise rise. The resource is hence utilized more efficiently.

APA, Harvard, Vancouver, ISO, and other styles
26

Cherbonnier, Jean. "Prévention de la congestion dans le service "best-effort" des réseaux locaux ATM /." [S.l.] : [s.n.], 1995. http://library.epfl.ch/theses/?nr=1299.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Angadi, Raghavendra. "Best effort MPI/RT as an alternative to MPI design and performance comparison /." Master's thesis, Mississippi State : Mississippi State University, 2002. http://library.msstate.edu/etd/show.asp?etd=etd-12032002-162333.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Wang, Jinggang. "Soft Real-Time Switched Ethernet: Best-Effort Packet Scheduling Algorithm, Implementation, and Feasibility Analysis." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/35277.

Full text
Abstract:
In this thesis, we present a MAC-layer packet scheduling algorithm, called the Best-effort Packet Scheduling Algorithm (BPA), for real-time switched Ethernet networks. BPA considers a message model where application messages have trans-node timeliness requirements that are specified using Jensen's benefit functions. The algorithm seeks to maximize aggregate message benefit by allowing message packets to inherit the benefit functions of their parent messages and scheduling packets to maximize aggregate packet-level benefit. Since the packet scheduling problem is NP-hard, BPA heuristically computes schedules with a worst-case cost of O(n^2), faster than the O(n^3) cost of the best known Chen and Muhlethaler's Algorithm (CMA) for the same problem. Our simulation studies show that BPA performs the same as, or significantly better than, CMA. We also construct a real-time switched Ethernet network by prototyping an Ethernet switch using a Personal Computer (PC) and implementing BPA in the network protocol stack of the Linux kernel for packet scheduling. Our actual performance measurements of BPA using the network implementation reveal the effectiveness of the algorithm. Finally, we derive timeliness feasibility conditions for real-time switched Ethernet systems that use the BPA algorithm. The feasibility conditions allow real-time distributed systems to be constructed using BPA, with guaranteed soft timeliness.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
29

Peranginangin, Nathanael 1969. "Integrating best-effort and guaranteed sessions through a two-level generalized processor sharing approach." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/16757.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (leaf 72).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
by Nathanael Peranginangin.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
30

Javadtalab, Abbas. "An End-to-End Solution for High Definition Video Conferencing over Best-Effort Networks." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/31954.

Full text
Abstract:
Video streaming applications over best-effort networks, such as the Internet, have become very popular among Internet users. Watching live sports and news, renting movies, watching clips online, making video calls, and participating in videoconferences are typical video applications that millions of people use daily. One of the most challenging aspects of video communication is the proper transmission of video under varying network bandwidth conditions. Currently, various devices with different processing powers and various connection speeds (2G, 3G, Wi-Fi, and LTE) are used to access video over the Internet, which offers best-effort services only. Skype, ooVoo, Yahoo Messenger, and Zoom are some well-known applications employed on a daily basis by people throughout the world; however, best-effort networks are characterized by dynamic and unpredictable changes in the available bandwidth, which adversely affect the quality of the video. For the average consumer, there is no guarantee of receiving an exact amount of bandwidth for sending or receiving video data. Therefore, the video delivery system must use a bandwidth adaptation mechanism to deliver video content properly; otherwise, bandwidth variations will lead to degradation in video quality or, in the worst case, disrupt the entire service. This is especially problematic for videoconferencing (VC) because of the bulkiness of the video, the stringent bandwidth demands, and the delay constraints. Furthermore, for business-grade VC, which uses high definition videoconferencing (HDVC), user expectations regarding video quality are much higher than they are for ordinary VC. To manage network fluctuations and handle the video traffic, two major components of the system should be improved: the video encoder and the congestion control. The video encoder is responsible for compressing the raw video captured by a camera and generating a bitstream. In addition to the efficiency of the encoder and the compression speed, its output flow is also important. Though the nature of video content may make it impossible to generate a constant bitstream for a long period of time, the encoder must generate a flow around the given bitrate. While the encoder generates the video traffic around the given bitrate, congestion management plays a key role in determining the currently available bandwidth. This can be done by analyzing the statistics of the sent/received packets, applying mathematical models, updating parameters, and informing the encoder. The performance of the whole system depends on the in-line collaboration of the encoder and the congestion management, in which the congestion control system detects and calculates the available bandwidth for a specific period of time, preferably per incoming packet, and informs the rate control (RC) to adapt its bitrate in a reasonable time frame, so that network oscillations do not affect the perceived quality on the decoder side and do not impose adverse effects on the video session. To address these problems, this thesis proposes a collaborative management architecture that monitors the network situation and manages the encoded video rate. The goal of this architecture is twofold. First, it aims to monitor the available network bandwidth, to predict network behavior and to pass that information to the encoder, so that the encoder can encode at a suitable video bitrate. Second, by using a smart rate controller, it aims for an optimal adaptation of the encoder output bitrate to the bitrate determined by congestion control.
Merging RC operations and network congestion management, to provide a reliable infrastructure for HDVC over the Internet, represents a unique approach. The primary motivation behind this project is that by applying videoconference features, which are explained in the rate controller and congestion management chapter, the HDVC application becomes feasible and reliable for the business grade application even in the best-effort networks such as the Internet.
APA, Harvard, Vancouver, ISO, and other styles
31

Beyah, Raheem A. "A deployable framework for providing better than best-effort quality of service for traffic flows." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/15393.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Torres, Isaac, and Marvin Ross. "How can we best achieve contracting unity of effort in the CentCom area of responsibility?" Thesis, Monterey, California: Naval Postgraduate School, 2013. http://hdl.handle.net/10945/39027.

Full text
Abstract:
Approved for public release; distribution is unlimited.
The purpose of this research is to investigate how to better achieve contracting unity of effort in the U.S. Central Command area of operations and the implications for other combatant commands in similar contingency situations. In the U.S. Central Command area of operations, numerous contracting agencies operate in Afghanistan, each with its own contract authority, but these agencies have little synchronization and no common operating picture. In contrast, there is only one overarching operational command authority in this area with a clear chain of command to help accomplish common objectives and achieve operational unity of effort. After completing a literature review of our topic, we conducted in-depth interviews with senior Department of Defense individuals who were knowledgeable and/or experienced with contingency contracting in the U.S. Central Command area of operations. This approach allowed us to gain detailed information and examples from our respondents. After a detailed analysis of selected interview data, we made our final recommendations on improving contracting unity of effort and increasing the effectiveness of operational contract support across the department.
APA, Harvard, Vancouver, ISO, and other styles
33

Wofford, Corey D. "A best effort traffic management solution for server and agent-based active network management (SAAM)." Thesis, Monterey, California. Naval Postgraduate School, 2002. http://hdl.handle.net/10945/6000.

Full text
Abstract:
Approved for public release, distribution unlimited
Server and Agent-based Active Network Management (SAAM) is a promising network management solution for the Internet of tomorrow, the "Next Generation Internet (NGI)." SAAM is a new network architecture that incorporates many of the latest features of Internet technologies. The primary purpose of SAAM is managing network quality of service (QoS) to support the resource-intensive next-generation Internet applications. Best-effort (BE) traffic will continue to exist in the era of the NGI; thus, SAAM must be able to manage such traffic. In this thesis, we propose a solution for the management of BE traffic within SAAM. With SAAM, it is possible to make a "better best effort" in routing BE packets. Currently, routers handle BE traffic based solely on local information or on information obtained by link-state flooding, which may not be reliable. In contrast, SAAM centralizes management at a server where better (more optimal) decisions can be made. SAAM's servers have access to accurate topology and timely traffic-condition information. Additionally, due to their placement on high-end routers or dedicated machines, the servers can better afford computationally intensive routing solutions. It is these characteristics that are exploited by the solution design and implementation of this thesis.
APA, Harvard, Vancouver, ISO, and other styles
34

Safdar, G. A. "Improved energy-saving medium access control schemes for best-effort and QoS-enabled point-controlled wireless networks." Thesis, Queen's University Belfast, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.426578.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Ayvat, Birol. "An evaluation of best effort traffic management of server and agent based active network management (SAAM) architecture." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Mar%5FAyvat.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Targa, Alexandre. "Development of multi-physics and multi-scale Best Effort Modelling of pressurized water reactor under accidental situations." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX032/document.

Full text
Abstract:
L’analyse de sûreté des réacteurs nucléaires nécessite la modélisation fine des phénomènes y survenant et plus spécifiquement ceux permettant d’assurer l’intégrité des barrières de confinement. Les outils de modélisation et codes actuels favorisent une analyse fine du système réacteur par discipline dédiée, et couplée avec des modèles simplifiés. Néanmoins, le développement depuis plusieurs années d’une approche dite « Best Estimate », basée sur des calculs multiphysiques et multi-échelle, est en cours de réalisation. Cette approche permettra d’accéder au suivi et à l’analyse détaillée de problèmes complexes tels que l’étude des Réacteurs nucléaires en situation standard et accidentelle. Dans cette approche, les phénomènes physiques sont simulés aussi précisément que possible (selon la connaissance actuelle) par les modèles couplés. Par exemple, des codes disciplinaires existent et permettent la modélisation précise de la neutronique, de la thermohydraulique du cœur du réacteur ou de la thermohydraulique sur l'ensemble du système, de la thermomécanique du combustible ou des structures. Une approche « Best Estimate » consiste à coupler ces modèles afin de réaliser une modélisation globale et précise du système de réacteur nucléaire. Cette approche nécessite de bien définir les modèles qui sont utilisés afin de préciser exactement leurs limites, et donc préciser les incertitudes des résultats des modèles couplés afin de les assumer et de les optimiser.C’est dans ce contexte de travail que s’inscrit cette thèse. Elle consiste dans le développement d'un couplage multiphysique et multi-échelle « Best Estimate » afin d'obtenir une analyse précise des Réacteurs à Eau Légère en situations normale et accidentelle. Elle a consisté principalement en l’analyse des modèles et de leurs interactions et à la mise en œuvre d'un algorithme de couplage multiphysique entre une neutronique et une thermohydraulique exprimées à l'échelle du réacteur, ainsi qu’avec une thermomécanique fine à l'échelle élémentaire du crayon combustible. En outre, un travail spécifique a été effectué afin de préparer ou d'améliorer l’accés à l'information physique locale nécessaire à la mise en œuvre de modélisations couplées multi-échelles, à l'échelle du combustible
The safety analysis of nuclear power plants requires a deep understanding of the underlying key physical phenomena that determine the integrity of the physical containment barriers. At the present time, cutting-edge models focus on a single aspect (discipline) of the physical system, coupled with rough models of the other aspects needed to simulate the global system. Alternatively, safety analyses can be carried out based on multi-physics and multi-scale modelling. This Best Effort approach would give a full and accurate (high-fidelity) understanding of the reactor core under standard and accidental situations. In this approach, the physical phenomena are simulated as accurately as possible (according to present knowledge) by coupled models in the most efficient way. For example, codes exist that accurately model neutronics, thermal fluid mechanics inside the core, thermal fluid mechanics over the whole system, and the thermomechanics of the fuel pin or of the whole device structure. A Best Estimate approach would couple these models in order to realize a global and accurate modelling of the nuclear reactor. This approach requires the models that are used to be well defined in order to specify their limits exactly and hence to specify the uncertainties of the coupled model results, so that these uncertainties can be accounted for and optimized. It is in this context that this PhD thesis work is undertaken. It consists in the development of a multi-physics and multi-scale Best Estimate modelling in order to obtain an accurate analysis of Pressurized Water Reactors under standard and accidental operating situations. It mainly involves the understanding of each model and its interactions, followed by the implementation of multi-physics algorithms coupling neutronics and thermohydraulics at reactor scale to an accurate thermomechanics at the elementary scale of the fuel pin. In addition, work has been carried out to prepare or improve access to the local physical information needed for the implementation of the multi-scale coupling scheme at the elementary scale of the fuel pin.
APA, Harvard, Vancouver, ISO, and other styles
37

Akcasoy, Alican. "Connectionless Traffic And Variable Packet Size Support In High Speed Network Switches: Improvements For The Delay-limiter Switch." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609582/index.pdf.

Full text
Abstract:
Quality of Service (QoS) support for real-time traffic is a critical issue in high speed networks. The previously proposed Delay-Limiter Switch, working with the Framed-Deadline Scheduler (FDS), is a combined input-output queuing (CIOQ) packet switch that can provide end-to-end bandwidth and delay guarantees for connection-oriented traffic. The Delay-Limiter Switch works with fixed-size packets. It has a scalable architecture and can provide QoS support for connection-oriented real-time traffic in a low-complexity fashion. The Delay-Limiter Switch serves connectionless traffic by using the resources left over from the connection-oriented traffic. In this case, efficient management of the residual resources plays an important role in the performance of the connectionless traffic. This thesis work integrates new methods into the Delay-Limiter Switch that can improve the performance of the connectionless traffic while still serving the connection-oriented traffic with the promised QoS guarantees. A new method that makes it possible for the Delay-Limiter Switch to support variable-sized packets is also proposed.
APA, Harvard, Vancouver, ISO, and other styles
38

Bumbál, Miroslav. "QoS v IP síti." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-217998.

Full text
Abstract:
This master's thesis deals with computer networks, which constitute a global communication structure and play a very important role in today's society. The rapid development of the Internet, the emergence of new multimedia applications and their increasing use call for transmission control mechanisms that are able to guarantee the required parameters. The thesis addresses the issue of quality of service (QoS) in IP networks. It presents the basic characteristics and requirements of these networks for the transmission of data that is sensitive to service quality, gives a definition of QoS, and describes the essential parameters that must be observed to achieve the required quality of service in practical deployments. In addition, it lists the various principles and options for ensuring QoS in computer networks. It also presents the features of the Cisco 1841 router and the options for ensuring quality of service in a network built on these routers. The practical part of the thesis presents two model IP networks, which were designed in order to verify the impact of quality of service in real practice. Of the known methods for ensuring QoS, which include Integrated Services and Differentiated Services, the thesis focuses on Differentiated Services and their implementation in the proposed network models. The last part of the work presents the results obtained on the impact of quality of service on the applications and their assessment.
APA, Harvard, Vancouver, ISO, and other styles
39

Castilho, Taarik de Freitas. "Distinção entre obrigações de meios e obrigações de resultado." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/2/2131/tde-10092012-155344/.

Full text
Abstract:
O objeto da dissertação é o estudo de uma classificação das obrigações entre aquelas de meios e as de resultado, as primeiras obrigando o devedor a uma prestação de diligência, as segundas, à realização de uma vantagem para o credor, sem o que não haveria o devedor de exonerar-se. Um estudo histórico abre o trabalho, pesquisando os antecedentes remotos da distinção, até a sua consagração, no Traité de René Demogue, o que tornou famosa a distinção que passou a gerar profundas discussões doutrinárias na França e no resto do mundo. São também tratados os antecedentes mais recentes, o contexto histórico do surgimento da classificação e sua evolução sucessiva, tanto na França quanto em outros países, para, enumeradas algumas das muitas dificuldades envolvidas no estudo do tema, ainda hoje severamente combatido, pesquisar-se, do ponto de vista da estrutura do vínculo obrigacional, como esta classificação se relaciona com a prestação devida. Assim, acredita-se, seria possível dizer se, uma vez que todo vínculo obrigacional surge tendo em vista um resultado, a prestação obrigacional admite uma distinção entre aquelas que implicam uma atividade do devedor limitada por sua diligência (obrigação de meios) e aquelas que somente conduzem a obrigação a seu termo mediante cumprimento uma vez realizado um resultado, ou seja, desde que produzido um benefício específico para o credor (obrigações de resultado).
The purpose of the dissertation is to study the classification of obligations into the so-called best efforts duties and those referred to as duties to achieve a result, the first imposing on the debtor the duty to act diligently, the second requiring that a certain benefit be attained for the creditor, without which the debtor is never exonerated. A historical study opens the dissertation, researching the remote antecedents of the classification up to its public recognition in the Traité written by René Demogue, which brought it great fame and created deep debates both in France and throughout the world. The recent antecedents of the classification are also pointed out, as well as the historical context of its emergence and its subsequent evolution in France and in other countries, in order to, after listing some of the many difficulties involved in the study of the subject, still strongly criticized nowadays, investigate from the standpoint of the structure of the obligational relationship how the classification relates to the duties imposed on the debtor. In doing so, it is believed to be possible to say, since every obligation arises with a view to a result, whether one may admit a distinction between duties that imply conduct limited by diligence (best efforts duties) and those that are only extinguished by the fulfillment of their purpose, in other words, once a specific result is produced and delivered to the creditor (duties to achieve a result).
APA, Harvard, Vancouver, ISO, and other styles
40

Muratori, Alessia. "Il percorso di evoluzione della Neutralità della Rete." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13292/.

Full text
Abstract:
The goal of this thesis is to examine the evolution of Net Neutrality from both a technical and a legal perspective. On the technical side, Best Effort delivery and the End-to-End principle are considered. On the legal side, an overview is given of the rules issued by authorities such as AGCOM, BEREC and the FCC. Italian, European and United States cases that show violations of neutrality principles are also considered. The final part focuses on understanding whether a solution to the Net Neutrality problem is possible.
APA, Harvard, Vancouver, ISO, and other styles
41

Kumar, Tushar. "Characterizing and controlling program behavior using execution-time variance." Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/55000.

Full text
Abstract:
Immersive applications, such as computer gaming, computer vision and video codecs, are an important emerging class of applications with QoS requirements that are difficult to characterize and control using traditional methods. This thesis proposes new techniques reliant on execution-time variance to both characterize and control program behavior. The proposed techniques are intended to be broadly applicable to a wide variety of immersive applications and are intended to be easy for programmers to apply without needing to gain specialized expertise. First, we create new QoS controllers that programmers can easily apply to their applications to achieve desired application-specific QoS objectives on any platform or application data-set, provided the programmers verify that their applications satisfy some simple domain requirements specific to immersive applications. The controllers adjust programmer-identified knobs every application frame to effect desired values for programmer-identified QoS metrics. The control techniques are novel in that they do not require the user to provide any kind of application behavior models, and are effective for immersive applications that defy the traditional requirements for feedback controller construction. Second, we create new profiling techniques that provide visibility into the behavior of a large complex application, inferring behavior relationships across application components based on the execution-time variance observed at all levels of granularity of the application functionality. Additionally for immersive applications, some of the most important QoS requirements relate to managing the execution-time variance of key application components, for example, the frame-rate. The profiling techniques not only identify and summarize behavior directly relevant to the QoS aspects related to timing, but also indirectly reveal non-timing related properties of behavior, such as the identification of components that are sensitive to data, or those whose behavior changes based on the call-context.
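As a concrete illustration of the per-frame control loop sketched in the abstract above, the following is a minimal, hypothetical proportional controller that nudges a single programmer-identified knob (for example a level-of-detail scale) toward a target frame time; the update rule, the function names and the constants are assumptions made for illustration and are not the model-free controllers developed in the thesis.

# Illustrative sketch (Python): one knob, one QoS metric (frame time).
import time

def run_with_qos(render_frame, knob, target_frame_s=1 / 30, gain=0.5,
                 knob_min=0.1, knob_max=1.0):
    """Adjust `knob` once per frame so the measured frame time tracks the target.

    `render_frame(knob)` is the application's frame function, where `knob`
    scales the work done per frame (e.g. level of detail).
    """
    while True:
        start = time.perf_counter()
        render_frame(knob)
        frame_s = time.perf_counter() - start

        # Relative error between desired and achieved frame time.
        error = (target_frame_s - frame_s) / target_frame_s
        # Proportional update: lower the knob when frames run long,
        # raise it when there is slack, clamped to a safe range.
        knob = min(knob_max, max(knob_min, knob * (1 + gain * error)))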
APA, Harvard, Vancouver, ISO, and other styles
42

MACIEL, JÚNIOR Paulo Ditarso. "Gerenciamento de uma estrutura híbrida de TI dirigido por métricas de negócio." Universidade Federal de Campina Grande, 2013. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1301.

Full text
Abstract:
Com o surgimento do paradigma de computação na nuvem e a busca contínua para reduzir o custo de operar infraestruturas de Tecnologia da Informação (TI), estamos vivenciando nos dias de hoje uma importante mudança na forma como estas infraestruturas estão sendo montadas, configuradas e gerenciadas. Nesta pesquisa consideramos o problema de gerenciar uma infraestrutura híbrida, cujo poder computacional é formado por máquinas locais dedicadas, máquinas virtuais obtidas de provedores de computação na nuvem e máquinas virtuais remotas disponíveis a partir de uma grade peer-to-peer (P2P) best-effort. As aplicações executadas nesta infraestrutura são caracterizadas por uma função de utilidade no tempo, ou seja, a utilidade produzida pela execução completa da aplicação depende do tempo total necessário para sua finalização. Tomamos uma abordagem dirigida a negócios para gerenciar esta infraestrutura, buscando maximizar o lucro total obtido. Aplicações são executadas utilizando poder computacional local e da grade best-effort, quando possível. Qualquer capacidade extra requerida no intuito de melhorar a lucratividade da infraestrutura é adquirida no mercado de computação na nuvem. Também assumimos que esta capacidade extra pode ser reservada para uso futuro através de contratos de curta ou longa duração, negociados sem intervenção humana. Para contratos de curto prazo, o custo por unidade de recurso computacional pode variar significativamente entre contratos, com contratos mais urgentes apresentando, geralmente, custos mais caros. Além disso, devido à incerteza inerente à grade best-effort, podemos não saber exatamente quantos recursos serão necessários do mercado de computação na nuvem com certa antecedência. Superestimar a quantidade de recursos necessários leva a uma reserva maior do que necessária; enquanto subestimar leva à necessidade de negociar contratos adicionais posteriormente. Neste contexto, propomos heurísticas que podem ser usadas por agentes planejadores de contratos no intuito de balancear o custo e a utilidade obtida na execução das aplicações, com o objetivo de alcançar um alto lucro global. Demonstramos que a habilidade de estimar o comportamento da grade é uma importante condição para estabelecer contratos que produzem alta eficiência no uso da infraestrutura híbrida de TI.
With the emergence of the cloud computing paradigm and the continuous search to reduce the cost of running Information Technology (IT) infrastructures, we are currently experiencing an important change in the way these infrastructures are assembled, configured and managed. In this research we consider the problem of managing a hybrid high-performance computing infrastructure whose processing elements are comprised of in-house dedicated machines, virtual machines acquired from cloud computing providers, and remote virtual machines made available by a best-effort peer-to-peer (P2P) grid. The applications that run in this hybrid infrastructure are characterised by a utility function: the utility yielded by the completion of an application depends on the time taken to execute it. We take a business-driven approach to manage this infrastructure, aiming at maximising the total profit achieved. Applications are run using computing power from both in-house resources and the best-effort grid, whenever possible. Any extra capacity required to improve the profitability of the infrastructure is purchased from the cloud computing market. We also assume that this extra capacity is reserved for future use through either short or long term contracts, which are negotiated without human intervention. For short term contracts, the cost per unit of computing resource may vary significantly between contracts, with more urgent contracts normally being more expensive. Furthermore, due to the uncertainty inherent in the best-effort grid, it may not be possible to know in advance exactly how much computing resource will be needed from the cloud computing market. Overestimation of the amount of resources required leads to the reservation of more than is necessary, while underestimation leads to the necessity of negotiating additional contracts later on to acquire the remaining required capacity. In this context, we propose heuristics to be used by a contract planning agent in order to balance the cost of running the applications and the utility that is achieved with their execution, with the aim of producing a high overall profit. We demonstrate that the ability to estimate the grid behaviour is an important condition for making contracts that produce high efficiency in the use of the hybrid IT infrastructure.
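As a toy illustration of the cost-utility balance described above (and not of the thesis's actual heuristics), consider an application whose utility decays with completion time and a planner that compares a few candidate cloud reservations; every number and function shape below is an assumption made purely for illustration.

# Toy sketch (Python) of choosing how much cloud capacity to reserve.
def utility(completion_hours, full_value=100.0, deadline=10.0, decay=8.0):
    # Hypothetical time-dependent utility: full value up to a soft deadline,
    # then a linear decay down to zero.
    if completion_hours <= deadline:
        return full_value
    return max(0.0, full_value - decay * (completion_hours - deadline))

def expected_profit(cloud_machines, work_units=100.0, grid_rate=4.0,
                    cloud_rate=2.0, cloud_price_per_hour=1.5):
    # Hypothetical throughput model: the best-effort grid contributes an
    # (assumed) average rate for free; reserved cloud machines add capacity
    # but are paid for by the hour.
    total_rate = grid_rate + cloud_machines * cloud_rate
    hours = work_units / total_rate
    cost = cloud_machines * cloud_price_per_hour * hours
    return utility(hours) - cost

# Pick the reservation (0 to 10 machines) with the highest expected profit.
best_reservation = max(range(11), key=expected_profit)
print(best_reservation, expected_profit(best_reservation))

Under these made-up numbers the planner reserves just enough cloud capacity to finish before the utility starts to decay and no more, which is exactly the trade-off the proposed heuristics have to strike under uncertainty about the grid's behaviour.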
APA, Harvard, Vancouver, ISO, and other styles
43

Schiller, Martin. "Quantenoszillationsexperimente an quasi-zweidimensionalen organischen Metallen: (BEDT-TTF)4(Ni(dto)2) und Kappa-(BEDT-TTF)2I3." [S.l. : s.n.], 2001. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10067912.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Patki, Sourabh. "Effect of South Carolina primary belt law on safety belt use." Connect to this title online, 2006. http://etd.lib.clemson.edu/documents/1175186270/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Costa, Márcio Henriques da. "A cláusula de melhores esforços (best efforts) na prática jurídica brasileira: uma nova perspectiva." reponame:Repositório Institucional do FGV, 2016. http://hdl.handle.net/10438/16226.

Full text
Abstract:
The use of the best efforts clause is a common practice among Brazilian businessmen and lawyers. A study of sophisticated shareholders' agreements of listed companies in Brazil shows the high incidence of the clause. Such inclusion has strong economic reasons, which justify its recognition and interpretation under Brazilian law. The standard of conduct required by the best efforts clause should be analyzed according to different criteria, with subjective and objective elements, as well as the social environment and related custom and usage, based on well-established private law principles and rules. Brazil's limited case law on the subject, as well as the consolidated case law in the U.S. relating to the clause, contribute to a better understanding of its legal nature and of the level of conduct required, which distinguishes the best efforts obligation from implicit good faith duties. Among the findings, we can mention that the best efforts clause should not be equated to good faith duties or to a mere moral duty. Its legal recognition as a distinct standard of conduct, assessed in each specific situation, should be enforced by the national legal system.
A utilização da cláusula de melhores esforços, ou best efforts, é prática comum do empresariado e advogados nacionais. Este trabalho realiza um levantamento a fim de demonstrar a alta incidência em acordos sofisticados entre acionistas de companhias abertas brasileiras. Tal inclusão tem fortes motivos econômicos, a justificar o reconhecimento e interpretação pelo aplicador do direito nacional. O padrão de conduta dessa obrigação de meio deve ser analisado por critérios distintos, por meio de elementos subjetivos e objetivos, bem como à luz do contexto social e usos e costumes relacionados, baseados em normas e princípios de direito privado amplamente aceitos. A escassa jurisprudência sobre o tema bem como a já consolidada jurisprudência norte-americana contribuem para o melhor entendimento sobre a natureza jurídica e o modelo de interpretação de conduta a ser aplicado, diferenciando a obrigação de melhores esforços dos deveres decorrentes da boa-fé objetiva. Entre as conclusões, pode-se mencionar que a cláusula de melhores esforços não deve ser igualada aos deveres de boa-fé ou a um mero dever moral. Seu reconhecimento legal como padrão de conduta distinto, apurado conforme cada caso, deve ser amparado pelo ordenamento jurídico nacional
APA, Harvard, Vancouver, ISO, and other styles
46

Bylin, Johan. "Best practice of extracting magnetocaloric properties in magnetic simulations." Thesis, Uppsala universitet, Materialteori, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-388356.

Full text
Abstract:
In this thesis, a numerical study of simulating and computing the magnetocaloric properties of magnetic materials is presented. The main objective was to deduce the optimal procedure for obtaining the isothermal change in entropy of magnetic systems, by evaluating two different formulas of entropy extraction, one relying on the magnetization of the material and the other on the magnet's heat capacity. The magnetic systems were simulated using two different Monte Carlo algorithms, the Metropolis and Wang-Landau procedures. The two entropy methods proved to be comparable to one another. Both approaches produced reliable and consistent results, though finite-size effects could occur if the simulated system became too small. Erroneous fluctuations that invalidated the results did not seem to stem from discrepancies between the entropy methods but mainly from the computation of the heat capacity itself. Accurate determination of the heat capacity via an internal energy derivative generated excellent results, while a heat capacity obtained from a variance formula of the internal energy rendered the extracted entropy unusable. The results acquired from the Metropolis algorithm were consistent, accurate and dependable, while all of those produced via the Wang-Landau method exhibited intrinsic fluctuations of varying severity. The Wang-Landau method also proved to be computationally ineffective compared to the Metropolis algorithm, rendering the method unsuitable for magnetic simulations of this type.
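For orientation, the two extraction routes compared above are usually written (in the standard magnetocaloric notation, quoted here as the commonly stated relations rather than as the exact expressions used in the thesis) as a Maxwell-relation integral over magnetization isotherms and as an integral over the difference of in-field and zero-field heat capacities:

% Isothermal entropy change from magnetization data (Maxwell relation)
\Delta S(T, 0 \to H) = \int_{0}^{H} \left( \frac{\partial M(T, H')}{\partial T} \right)_{H'} \mathrm{d}H'

% The same quantity from heat-capacity data measured in and out of field
\Delta S(T, 0 \to H) = \int_{0}^{T} \frac{C(T', H) - C(T', 0)}{T'} \, \mathrm{d}T'

The abstract's observation then amounts to saying that the second route is only as good as the heat capacity fed into it: C obtained by differentiating the internal energy behaved well, while C obtained from the energy-variance estimator did not.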
APA, Harvard, Vancouver, ISO, and other styles
47

Nam, Moon-Sun. "Magnetotransport in BEDT-TTF salts." Thesis, University of Oxford, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342589.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Nádasi, Hajnalka. "Bent-core mesogens - substituent effect and phase behavior." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=973405104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Wilson, Chantel. "Effect of Golf Course Turfgrass Management on Water Quality of Non-tidal Streams in the Chesapeake Bay Watershed." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/51683.

Full text
Abstract:
Turfgrass management activities on golf courses have been identified as a possible source of Chesapeake Bay nutrient pollution. Total Maximum Daily Load goals are in place to reduce nutrient amounts entering the Bay. Dissertation investigations include (1) the role of golf course turfgrass management in nutrient deposition or attenuation in local streams, (2) estimations of total nitrogen (N) discharging to the watershed from stream outlet points as a function of land use and watershed area, and (3) other factors potentially affecting water quality on golf courses, including soil characteristics and use of best management practices (BMPs). Total N, nitrate-N, ammonium-N, phosphate-phosphorus (P), streamwater temperature, specific conductance (SpC), pH and dissolved oxygen (DO) were sampled at 12-14 golf course stream sites in the James River and Roanoke River watersheds during baseflow conditions. Discharge was determined at outflow locations. Unit-area loads (UALs) were calculated from monitoring data. These UALs were then compared to UALs from Chesapeake Bay Watershed Model land use acreages and simulated loads for corresponding watershed segments. Virginia golf course superintendents were also surveyed to determine BMP use. No consistent impairment trends were detected for streamwater temperature, SpC, pH, or DO at any of the sites. Outflow NO3-N was below the 10 mg L-1 EPA drinking water standard. However, some sites may be at increased risk for benthic impairment with total N concentrations >2 mg L-1, as suggested by VADEQ. Significant increases in nitrate-N at OUT locations were measured at four sites, whereas decreases were measured at two sites. Ammonium-N significantly decreased at two sites. Golf course N UALs calculated from baseflow monitoring were lower than or similar to UALs estimated for forested areas in the associated watershed segment at seven out of the 12 sites. Golf course UALs ranged from 1.3-87 kg N ha-1 yr-1. Twenty-one of 32 surveyed BMPs had an adoption rate ≥50% among survey respondents. In most cases, presence of golf courses generally does not appear to significantly degrade baseflow water quality of streams in this study. Management level appears to be an influencing factor on water quality and concerns may be heightened in urban areas.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
50

Brunelle, Caroline. "The role of alcohol-induced cardiac reactivity in addiction : investigations into a positive reinforcement pathway." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=102482.

Full text
Abstract:
Alcohol abuse is the second most prevalent lifetime psychiatric disorder. However, individuals do not face an equal risk of developing problematic alcohol-related behaviors. Alcohol use disorders are heterogeneous conditions whose development may be caused by a variety of factors and vulnerabilities. The identification of markers of risk is necessary in order to identify individuals at higher risk for addiction early on as well as to help develop treatment interventions which target an individual's specific risk factors. The goal of the present dissertation is to increase our understanding of the role that one putative risk factor, an exaggerated cardiac response to alcohol, may play in the development of addictive behaviors. Five studies are reported.
The first study revealed that an exaggerated heart rate response to alcohol is associated with subjective reports of increased alcohol-induced stimulation. In a second study, the relationship between the cardiac response to alcohol and personality characteristics was examined. Individuals who demonstrated the elevated cardiac response to alcohol displayed a distinct personality profile characterized by high sensation-seeking and sensitivity to reward. Two separate studies followed investigating the relationship between this physiological response to alcohol and other addictive behaviours. One study found that individuals with an exaggerated cardiac response to alcohol were more likely to obtain superior scores on a measure of pathological gambling, while the next study found that users of psychostimulants (e.g., cocaine) also displayed heightened alcohol-induced cardiac responses. A final study examined the impact of conditioned cues of reward and non-reward on alcohol-induced cardiac responses. Individuals who had previously displayed elevated cardiac responses to ethanol showed reduced cardiac reactivity when alcohol ingestion occurred in a non-rewarding environment. Overall, these findings suggest that the cardiac response to alcohol is a marker of a pathway that may lead to addictive behaviors through increased sensitivity to incentive reward.
APA, Harvard, Vancouver, ISO, and other styles