
Dissertations / Theses on the topic 'Queue mode'

Listed below are the top 44 dissertations / theses on the topic 'Queue mode.'


1

Jou, Jia-Shiang. "Multifractal internet traffic model and active queue management." College Park, Md. : University of Maryland, 2003. http://hdl.handle.net/1903/53.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2003.
Thesis research directed by: Electrical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
2

Chan, Ming Kit. "Active queue management schemes using a capture-recapture model /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20CHAN.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 58-61). Also available in electronic version. Access restricted to campus users.
3

Horký, Miroslav. "Modely hromadné obsluhy." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-232033.

Full text
Abstract:
The master’s thesis deals with models of queueing systems that possess the Markov property. A queueing system is a system into which objects arrive at random moments and require service. This thesis specifically addresses models of queueing systems in which the intervals between arrivals and the service times are exponentially distributed. In the theoretical part of the master’s thesis I cover stochastic processes, queueing theory, the classification of models and the description of models with the Markov property. In the practical part I describe the implementation and function of a program that simulates a chosen M/M/m model. At the end I compare the results calculated analytically with those obtained by simulation of the M/M/m model.
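The comparison the author describes, analytic results versus an M/M/m simulation, can be sketched as follows. This is a minimal illustration, not the thesis's program; `lam`, `mu` and `m` stand for the arrival rate, service rate and number of servers:

```python
import heapq
import math
import random

def mmm_mean_queue(lam, mu, m):
    """Analytic mean queue length Lq of a stable M/M/m queue (Erlang C)."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / m                       # server utilisation; must be < 1
    p0 = 1.0 / (sum(a**k / math.factorial(k) for k in range(m))
                + a**m / (math.factorial(m) * (1.0 - rho)))
    erlang_c = a**m / math.factorial(m) * p0 / (1.0 - rho)   # P(wait > 0)
    return erlang_c * rho / (1.0 - rho)

def simulate_mmm(lam, mu, m, n=100_000, seed=1):
    """Estimate Lq by simulating n customers and applying Little's law."""
    rng = random.Random(seed)
    free = [0.0] * m                  # time at which each server next becomes free
    heapq.heapify(free)
    t = total_wait = 0.0
    for _ in range(n):
        t += rng.expovariate(lam)             # exponential interarrival time
        start = max(t, heapq.heappop(free))   # wait for the earliest free server
        total_wait += start - t
        heapq.heappush(free, start + rng.expovariate(mu))  # exponential service
    return lam * (total_wait / n)     # Little's law: Lq = lambda * Wq
```

With `lam=1.0`, `mu=2.0`, `m=1` the analytic value is 0.5, and the simulation estimate converges to it as the number of simulated customers grows.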
4

Abdel-Jaber, Hussein F. "Performance Modelling and Evaluation of Active Queue Management Techniques in Communication Networks. The development and performance evaluation of some new active queue management methods for internet congestion control based on fuzzy logic and random early detection using discrete-time queueing analysis and simulation." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4261.

Full text
Abstract:
Since the field of computer networks has rapidly grown in the last two decades, congestion control of traffic loads within networks has become a high priority. Congestion occurs in network routers when the number of incoming packets exceeds the available network resources, such as buffer space and bandwidth allocation. This may result in a poor network performance with reference to average packet queueing delay, packet loss rate and throughput. To enhance the performance when the network becomes congested, several different active queue management (AQM) methods have been proposed and some of these are discussed in this thesis. Specifically, these AQM methods are surveyed in detail and their strengths and limitations are highlighted. A comparison is conducted between five known AQM methods, Random Early Detection (RED), Gentle Random Early Detection (GRED), Adaptive Random Early Detection (ARED), Dynamic Random Early Drop (DRED) and BLUE, based on several performance measures, including mean queue length, throughput, average queueing delay, overflow packet loss probability, packet dropping probability and the total of overflow loss and dropping probabilities for packets, with the aim of identifying which AQM method gives the most satisfactory results of the performance measures. This thesis presents a new AQM approach based on the RED algorithm that determines and controls the congested router buffers in an early stage. This approach is called Dynamic RED (REDD), which stabilises the average queue length between minimum and maximum threshold positions at a certain level called the target level to prevent building up the queues in the router buffers. A comparison is made between the proposed REDD, RED and ARED approaches regarding the above performance measures. Moreover, three methods based on RED and fuzzy logic are proposed to control the congested router buffers incipiently. 
These methods are named REDD1, REDD2, and REDD3 and their performances are also compared with RED using the above performance measures to identify which method achieves the most satisfactory results. Furthermore, a set of discrete-time queue analytical models is developed based on the following approaches: RED, GRED, DRED and BLUE, to detect congestion at router buffers at an early stage. The proposed analytical models use the instantaneous queue length as a congestion measure to capture short-term changes in the input and prevent packet loss due to overflow. The proposed analytical models are experimentally compared with their corresponding AQM simulations with reference to the above performance measures to identify which approach gives the most satisfactory results. The simulations for RED, GRED, ARED, DRED, BLUE, REDD, REDD1, REDD2 and REDD3 are run ten times, each time with a change of seed, and the results of each run are used to obtain mean values, variance, standard deviation and 95% confidence intervals. The performance measures are calculated based on data collected only after the system has reached a steady state. After extensive experimentation, the results show that the proposed REDD, REDD1, REDD2 and REDD3 algorithms and some of the proposed analytical models, such as the DRED-Alpha, RED and GRED models, offer somewhat better mean queue length and average queueing delay than those achieved by RED and its variants when the packet arrival probability is greater than the packet departure probability, i.e. in a congestion situation. This suggests that when traffic is largely of a non-bursty nature, instantaneous queue length might be a better congestion measure to use than the average queue length as in the more traditional models.
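For reference, the core mechanism shared by RED (Floyd and Jacobson) and the variants compared above is compact: an exponentially weighted moving average of the queue length, and a drop probability that ramps linearly between two thresholds. The sketch below is a generic illustration with made-up parameter defaults, not the thesis's REDD algorithms:

```python
def update_avg(avg, q, w=0.002):
    """Exponentially weighted moving average of the instantaneous queue length q."""
    return (1.0 - w) * avg + w * q

def red_drop_prob(avg, min_th=5.0, max_th=15.0, max_p=0.1):
    """Early-drop probability as a function of the average queue length."""
    if avg < min_th:
        return 0.0                    # below the lower threshold: never drop
    if avg >= max_th:
        return 1.0                    # above the upper threshold: drop everything
    # linear ramp from 0 to max_p between the two thresholds
    return max_p * (avg - min_th) / (max_th - min_th)
```

With the defaults above, an average queue of 10 packets, midway between the thresholds, gives a drop probability of max_p/2 = 0.05.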
5

Chen, Zhenyu. "Discrete-time queueing model for responsive network traffic and bottleneck queues." Thesis, Loughborough University, 2016. https://dspace.lboro.ac.uk/2134/21314.

Full text
Abstract:
The Internet has been more and more intensively used in recent years. Although network infrastructure has been regularly upgraded, and the ability to manage heavy traffic greatly increased, especially on the core networks, congestion never ceases to appear, as the amount of traffic that flows on the Internet seems to be increasing at an even faster rate. Thus, congestion control mechanisms play a vital role in the functioning of the Internet. Active Queue Management (AQM) is a popular type of congestion control mechanism that is implemented on gateways (most notably routers), which can predict and avoid congestion before it happens. When properly configured, AQMs can effectively reduce congestion and alleviate problems such as global synchronisation and unfairness to bursty traffic. However, there are still many problems regarding AQMs. Most AQM schemes are quite sensitive to their parameter settings, and these parameters may be heavily dependent on the network traffic profile, which is likely to change over time and of which the administrator may not have detailed knowledge. When poorly configured, many AQMs perform no better than the basic drop-tail queue. There is currently no effective method to compare the performance of these AQM algorithms, owing to the parameter configuration problem. In this research, the aim is to propose a new analytical model, which mainly uses discrete-time queueing theory. A novel transient modification to the conventional equilibrium-based method is proposed, and it is utilised to further develop a dynamic interactive model of responsive traffic and bottleneck queues. Using step-by-step analysis, it represents the bursty traffic and oscillating queue-length behaviour in practical networks more accurately. It also provides an effective way of predicting the behaviour of a TCP-AQM system, allowing easier parameter optimisation for AQM schemes.
Numerical solution using MATLAB and software simulation using NS-2 are used to extensively validate the proposed models, theories and conclusions.
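The transient, step-by-step style of analysis described above can be illustrated by propagating the queue-length distribution of a discrete-time Geo/Geo/1/K queue slot by slot. This is a minimal sketch under one simple slot convention (arrival before departure), not the thesis's TCP-AQM model:

```python
def step(dist, p, s):
    """Advance the queue-length distribution of a Geo/Geo/1/K queue by one slot.

    dist[n] is P(queue length = n); an arrival occurs w.p. p, a departure
    w.p. s, and an arrival to a full buffer (length K) is dropped (drop-tail).
    """
    K = len(dist) - 1
    new = [0.0] * (K + 1)
    for n, prob in enumerate(dist):
        for arr, pa in ((1, p), (0, 1.0 - p)):
            m = min(n + arr, K)                 # overflow packet is lost
            if m == 0:
                new[0] += prob * pa             # empty queue: nothing to serve
            else:
                new[m - 1] += prob * pa * s           # departure this slot
                new[m] += prob * pa * (1.0 - s)       # no departure
    return new

def transient_mean(p, s, K, slots):
    """Mean queue length after a given number of slots, starting empty."""
    dist = [1.0] + [0.0] * K
    for _ in range(slots):
        dist = step(dist, p, s)
    return sum(n * pr for n, pr in enumerate(dist))
```

Starting from an empty queue, repeated calls to `step` trace how the mean queue length builds up towards its steady-state value, which is the kind of transient behaviour an equilibrium-only analysis misses.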
6

Li, Lefei. "Development and Evaluation of Transit Signal Priority Strategies with Physical Queue Models." Diss., The University of Arizona, 2006. http://hdl.handle.net/10150/193823.

Full text
Abstract:
With the rapid growth in modern cities and congestion on major freeways and local streets, public transit services have become more and more important for urban transportation. As an important component of Intelligent Transportation Systems (ITS), Transit Signal Priority (TSP) systems have been extensively studied and widely implemented to improve the quality of transit service by reducing transit delay. The focus of this research is on the development of a platform with the physical queue representation that can be employed to evaluate and/or improve TSP strategies with the consideration of the interaction between transit vehicles and queues at the intersection. This dissertation starts with deterministic analyses of TSP systems based on a physical queue model. A request oriented TSP decision process is then developed which incorporates a set of TSP decision regions defined on a time-space diagram with the physical queue representation. These regions help identify the optimal detector location, select the appropriate priority control strategy, and handle the situations with multiple priority requests. In order to handle uncertainties in TSP systems arising in bus travel time and dwell time estimation, a type-2 fuzzy logic forecasting system is presented and tested with field data. Type-2 fuzzy logic is very powerful in dealing with uncertainty. The use of Type-2 fuzzy logic helps improve the performance of TSP systems. The last component of the dissertation is the development of a Colored Petri Net (CPN) model for TSP systems. With CPN tools, computer simulation can be performed to evaluate various TSP control strategies and the decision process. Examples for demonstrating the process of implementing the green extension strategy and the proposed TSP decision process are presented in the dissertation.
The CPN model can also serve as an interface between the platform developed in this dissertation and the implementation of the control strategies at the controller level.
7

Palekar, Trishul Ajit. "Signal optimization at isolated intersections using pre-signals." Texas A&M University, 2006. http://hdl.handle.net/1969.1/4279.

Full text
Abstract:
This research proposes a new signal operation strategy aimed at efficient utilization of green time by cutting down on start-up and response loss times. The idea is to have a "pre-signal" on each main approach a few hundred feet upstream of the intersection in addition to the main intersection signal, which is coordinated with the pre-signal. The offset between the main and pre-signal ensures that the majority of start-up losses do not occur at the main signal. The benefits of the system under various traffic conditions were evaluated based on analysis of the queue discharge process and a Corridor Simulation (CORSIM) study. The proposed measure should reduce the travel time and total control delay for the signalized network. To attain the objective, the following two studies were undertaken: 1. Development of a queue discharge model to investigate the expected benefits of the system. 2. Simulation of the system: in the second part of the research, the proposed strategy was tested using CORSIM to evaluate its performance vis-à-vis the baseline case. The queue discharge model (QDM) was found to be linear in nature, in contrast to prior expectations. The model was used to quantify the benefits obtained from the pre-signal system. The result of this analysis indicated that the proposed strategy would yield significant travel time savings and reductions in total control delay. In addition to the QDM analysis, CORSIM simulations were used to code various hypothetical scenarios to test the concept under various constraints and limitations. As expected, it was found that the system was beneficial for high demand levels and longer offsets. The upper limit on offsets was determined by visual observation of platoon dispersion, and therefore the maximum offset distance was restricted to 450 feet. 
For scenarios where split phasing was used, the break-even point in terms of demand level was found to be 2500 vph on a three-lane approach, whereas that for a lag-lag type of phasing strategy was found to be 1800 vph, also on a three-lane approach.
8

Fils, Ebba, Clara Harrison, and Mathilda Nilsson. "Swedes only hate queue jumpers they don't know : A description of brand attitudes on Google's SERPs." Thesis, Linnéuniversitetet, Institutionen för marknadsföring (MF), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-75866.

Full text
Abstract:
Background: The Internet has developed the world of advertising by giving advertisers the possibility to track specific patterns among their consumers, showing how consumers click on online advertisements and what translates into sales for the brand. Lately, companies have actively started to make use of search engine marketing (SEM). Paid advertising on search engines is one option for making a brand’s website visible to its consumers. Attitudes towards advertisements have previously been examined in traditional media and in other online settings, but research in the context of search engines is limited. This calls for deeper insight and knowledge into how consumers hold attitudes towards a brand and its paid advertising on search engines such as Google. Purpose: The purpose is to describe how users’ attitudes towards brands are influenced by the fact that brands have paid for advertising on search engine result pages. This is done through the ABC-model of attitudes. The question asked in this study was: How does paid advertising displayed on search engines affect the attitudes held towards a brand? Methodology: This thesis project used a qualitative approach and was of a descriptive nature. The data was gathered through seven unstructured in-depth interviews based on a quota sample considering three criteria: age group (18-29-year-olds), being regular e-commerce buyers, and being users of the search engine Google. The researchers verified data saturation at seven interviews. Conclusion: The main finding of this study is that the level of familiarity influences the participants’ attitudes towards the brand. Previous experience with and knowledge of a brand affected how participants interpreted the brand’s advertising on Google’s search engine result pages. 
Knowledge and a positive experience with a brand generated a more positive attitude towards the brand, whereas an unknown brand generated a neutral or more negative attitude. Related factors that also influenced the study were the clicking pattern, the landing page, and the brands’ choices of wording and intended target groups. The study also presents a range of recommendations for future research, as well as theoretical and managerial implications.
9

Bahr, Hubert. "DATA BANDWIDTH REDUCTION TECHNIQUES FOR DISTRIBUTED EMBEDDED SIMULATION." Doctoral diss., University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2778.

Full text
Abstract:
Maintaining coherence between the independent views of multiple participants at distributed locations is essential in an Embedded Simulation environment. Currently, the Distributed Interactive Simulation (DIS) protocol maintains coherence by broadcasting the entity state streams from each simulation station. In this dissertation, a novel alternative to DIS that replaces the transmitting sources with local sources is developed, validated, and assessed by analytical and experimental means. The proposed Concurrent Model approach reduces the communication burden to transmission of only synchronization and model-update messages. Necessary and sufficient conditions for the correctness of Concurrent Models in a discrete event simulation environment are established by developing Behavioral Congruence and Temporal Congruence functions, B(EL, ER) and T(t, ER). They indicate model discrepancies with respect to the simulation time t, and the local and remote entity state streams EL and ER, respectively. Performance benefits were quantified in terms of the bandwidth reduction ratio BR = N/I obtained from the comparison of the OneSAF Testbed Semi-Automated Forces (OTBSAF) simulator under DIS, requiring a total of N bits, and a testbed modified for the Concurrent Model approach, which required I bits. In the experiments conducted, a range of 100 ≤ BR ≤ 294 was obtained, representing two orders of magnitude reduction in simulation traffic. Investigation showed that the models rely heavily on the priority data structure of the discrete event simulation and that performance of the overall simulation can be enhanced by an additional 6% by improving the queue management. A low run-time overhead, self-adapting storage policy called the Smart Priority Queue (SPQ) was developed and evaluated within the Concurrent Model. The proposed SPQ policies employ a low-complexity linear queue for near-head activities and a rapid-indexing variable bin-width calendar queue for distant events. 
The SPQ configuration is determined by monitoring queue access behavior using cost scoring factors and then applying heuristics to adjust the organization of the underlying data structures. Results indicate that optimizing storage to the spatial distribution of queue access can decrease HOLD operation cost between 25% and 250% over existing algorithms such as calendar queues. Taken together, these techniques provide an entity state generation mechanism capable of overcoming the challenges of Embedded Simulation in harsh mobile communications environments with restricted bandwidth, increased message latency, and extended message drop-outs.
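The general two-tier idea behind the SPQ, a cheap structure for imminent events plus coarse buckets for distant ones, can be sketched as follows. This is an illustrative structure only, not Bahr's SPQ or its self-adapting policy:

```python
import heapq
from collections import defaultdict

class TwoTierQueue:
    """Two-tier pending-event set: a heap for imminent events plus coarse
    time buckets for distant ones (illustrative only)."""

    def __init__(self, horizon=10.0, bin_width=10.0):
        self.near = []                    # heap of (time, event) near current time
        self.far = defaultdict(list)      # distant events, bucketed by time bin
        self.horizon = horizon
        self.bin_width = bin_width
        self.t = 0.0                      # current simulation time

    def push(self, time, event):
        if time < self.t + self.horizon:
            heapq.heappush(self.near, (time, event))
        else:
            self.far[int(time // self.bin_width)].append((time, event))

    def pop(self):
        # Migrate any far bucket that could hold an event earlier than the
        # near heap's head (bucket b covers [b*w, (b+1)*w)).
        while self.far and (not self.near
                            or min(self.far) * self.bin_width <= self.near[0][0]):
            for item in self.far.pop(min(self.far)):
                heapq.heappush(self.near, item)
        time, event = heapq.heappop(self.near)
        self.t = time
        return time, event
```

Events come out in nondecreasing time order regardless of which tier stored them; far buckets are migrated into the near heap only when they could contain the next event, which is what keeps near-term operations cheap.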
Ph.D.
Department of Electrical and Computer Engineering
Engineering and Computer Science
Computer Engineering
10

Nassir, Neema. "Optimal Integrated Dynamic Traffic Assignment and Signal Control for Evacuation of Large Traffic Networks with Varying Threat Levels." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/297042.

Full text
Abstract:
This research contributes to the state of the art and state of the practice in solving a very important and computationally challenging problem in the areas of urban transportation systems, operations research, disaster management, and public policy. Being a very active topic of research during the past few decades, the problem of developing an efficient and practical strategy for evacuation of real-sized urban traffic networks in case of disasters from different causes, quickly enough to be employed in immediate disaster management scenarios, has been identified as one of the most challenging and yet vital problems by many researchers. More specifically, this research develops fast methods to find the optimal integrated strategy for traffic routing and traffic signal control to evacuate real-sized urban networks in the most efficient manner. In this research a solution framework is proposed, developed and tested which is capable of solving these problems in very short computational time. An efficient relaxation-based decomposition method is proposed, implemented for two evacuation integrated routing and signal control model formulations, proven to be optimal for both formulations, and verified to reduce the computational complexity of the optimal integrated routing and signal control problem. The efficiency of the proposed decomposition method is gained by reducing the integrated optimal routing and signal control problem into a relaxed optimal routing problem. This has been achieved through an insight into intersection flows in the optimal routing solution: in at least one of the optimal solutions of the routing problem, each street during each time interval only carries vehicles in at most one direction. This property, being essential to the proposed decomposition method, is called "unidirectionality" in this dissertation. 
The conditions under which this property exists in the optimal evacuation routing solution are identified, and the existence of unidirectionality is proven for: (1) the common Single-Destination System-Optimal Dynamic Traffic Assignment (SD-SODTA) problem, with the objective to minimize the total time spent in the threat area; and (2) the single-destination evacuation problem with varying threat levels, with traffic models that have no spatial queue propagation. The proposed decomposition method has been implemented in compliance with two widely-accepted traffic flow models, the Cell Transmission Model (CTM) and the Point Queue (PQ) model. In each case, the decomposition method finds the optimal solution for the integrated routing and signal control problem. Both traffic models have been coded and applied to a realistic real-size evacuation scenario with promising results. One important feature that is explored is the incorporation of evacuation safety aspects in the optimization model. An index of the threat level is associated with each link that reflects the adverse effects of traveling in a given threat zone on the safety and health of evacuees during the process of evacuation. The optimization problem is then formulated to minimize the total exposure of evacuees to the threat. A hypothetical large-scale chlorine gas spill in a highly populated urban area (downtown Tucson, Arizona) has been modeled for testing the evacuation models where the network has varying threat levels. In addition to the proposed decomposition method, an efficient network-flow solution algorithm is also proposed to find the optimal routing of traffic in networks with several threat zones, where the threat levels may be non-uniform across different zones. The proposed method can be categorized in the class of "negative cycle canceling" algorithms for solving minimum cost flow problems. 
The unique feature in the proposed algorithm is introducing a multi-source shortest path calculation which enables the efficient detection and cancellation of negative cycles. The proposed method is proven to find the optimal solution, and it is also applied to and verified for a mid-size test network scenario.
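For context, negative-cycle-canceling methods repeatedly locate a cycle of negative total cost in the residual network and push flow around it. The detection step can be sketched with a standard Bellman-Ford pass; this is a generic textbook version, not the multi-source variant proposed in the dissertation:

```python
def find_negative_cycle(n, edges):
    """Return the vertices of one negative-cost cycle in a directed graph
    with n vertices and edges given as (u, v, cost), or None if none exists."""
    dist = [0.0] * n          # virtual source at distance 0 from every vertex
    pred = [-1] * n
    for _ in range(n):
        x = -1
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u
                x = v
        if x == -1:
            return None       # a full pass with no relaxation: no negative cycle
    for _ in range(n):        # a vertex relaxed on the n-th pass reaches a
        x = pred[x]           # negative cycle; walk back n steps to land on it
    cycle = [x]
    v = pred[x]
    while v != x:
        cycle.append(v)
        v = pred[v]
    return cycle[::-1]
```

In a canceling loop, each cycle returned would be saturated by pushing flow around it and the search repeated until no negative cycle remains, at which point the flow is of minimum cost.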
11

Alghamdi, Aliaa. "Queued and Pooled Semantics for State Machines in the Umple Model-Oriented Programming Language." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/31961.

Full text
Abstract:
This thesis describes extensions to state machines in the Umple model-oriented programming language to offer queued state machines (QSM), pooled state machines (PSM) and handling of the arrival of unexpected events. These features allow the behavior of a system or protocol to be modeled more accurately in Umple because they enable detecting and fixing common design errors such as unspecified receptions. In addition, they simplify the communication between communicating state machines by allowing asynchronous calls of events and passing of messages between state machines. A pooled state machine (PSM) has been developed to provide a different event-handling policy that avoids unspecified receptions; it has similar semantics to a queued state machine, but differs in the way unspecified receptions are detected, because it helps in handling these errors. Another mechanism allows the keyword ‘unspecified’ to be used in whatever state of a state machine the user wants these errors detected. In this thesis, the test-driven development (TDD) process has been followed to first modify the Umple syntax, adding the ‘queued,’ ‘pooled,’ and ‘unspecified’ keywords to Umple's state machine grammar, and second, to change the Umple semantics in order to implement these extensions. Additional modifications allow Java code generation from these types of state machines. Finally, more test cases have been written to ensure that these models are syntactically and semantically correct. To show the usefulness and usability of the new features, a case study modeled using a queued state machine (QSM) is presented, along with other small test cases.
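The queued semantics described above, in which events are enqueued for asynchronous processing and unexpected events are detected rather than silently lost, can be sketched in a few lines (an illustrative model only; Umple itself generates Java from its state-machine grammar):

```python
from collections import deque

class QueuedStateMachine:
    """Queued state-machine semantics (sketch): events are enqueued and
    processed one at a time; an event with no transition from the current
    state is recorded as an unspecified reception instead of being lost."""

    def __init__(self, start, transitions):
        self.state = start
        self.transitions = transitions    # {(state, event): next_state}
        self.queue = deque()
        self.unspecified = []             # detected (state, event) errors

    def signal(self, event):
        self.queue.append(event)          # asynchronous call: enqueue and return

    def process_all(self):
        while self.queue:
            event = self.queue.popleft()
            nxt = self.transitions.get((self.state, event))
            if nxt is None:
                self.unspecified.append((self.state, event))
            else:
                self.state = nxt
```

A signalled event returns immediately to the caller; `process_all` then consumes the queue in order, recording any unspecified receptions for inspection.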
12

Rueda, Javier Eduardo. "The Ph(t)/Ph(t)/s/c Queueing Model and Approximation." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/9637.

Full text
Abstract:
Time-dependent queueing models are important since most real-life problems are time-dependent. We develop a numerical approximation algorithm for the mean, variance and higher-order moments of the number of entities in the system at time t for the Ph(t)/Ph(t)/s/c queueing model. This model can be thought of as a reparameterization of the G(t)/GI(t)/s. Our approach is to partition the state space into known and identifiable structures, such as the M(t)/M(t)/s/c or M(t)/M(t)/1 queueing models. We then use the Polya-Eggenberger distribution to approximate certain unknown probabilities via a two-moment matching algorithm. We describe the necessary steps to validate the approximation and measure the accuracy of the model.
Master of Science
13

Kumar, Rahul. "Load Balancing Parallel Explicit State Model Checking." Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd455.pdf.

Full text
14

Trönnberg, Filip. "Empirical evaluation of a Markovian model in a limit order market." Thesis, Uppsala universitet, Matematiska institutionen, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-176726.

Full text
Abstract:
A stochastic model for the dynamics of a limit order book is evaluated and tested on empirical data. Arrivals of limit, market and cancellation orders are described in terms of a Markovian queueing system with exponentially distributed occurrences. In this model, several key quantities can be calculated analytically, such as the distribution of times between price moves, price volatility and the probability of an upward price move, all conditional on the state of the order book. We show that the exponential distribution fits the occurrences of order book events poorly, and further show that little resemblance exists between the analytical formulas in this model and the empirical data. The log-normal and Weibull distributions are suggested as replacements, as they appear to fit the empirical data better.
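The style of fitting comparison the thesis performs can be illustrated with maximum-likelihood fits that have closed forms. The sketch below compares exponential and log-normal fits on synthetic data and merely stands in for the empirical tests; the Weibull fit, whose MLE requires iteration, is omitted:

```python
import math
import random

def exp_loglik(xs):
    """Log-likelihood of the data under the MLE-fitted exponential distribution."""
    rate = len(xs) / sum(xs)                        # MLE: rate = 1 / sample mean
    return sum(math.log(rate) - rate * x for x in xs)

def lognorm_loglik(xs):
    """Log-likelihood of the data under the MLE-fitted log-normal distribution."""
    logs = [math.log(x) for x in xs]
    n = len(logs)
    mu = sum(logs) / n                              # MLE mean of log(x)
    sigma2 = sum((l - mu) ** 2 for l in logs) / n   # MLE variance of log(x)
    return sum(-l - 0.5 * math.log(2.0 * math.pi * sigma2)
               - (l - mu) ** 2 / (2.0 * sigma2) for l in logs)
```

A higher log-likelihood on the same data indicates the better fit; since the two families have different parameter counts, an information criterion such as AIC would be the fairer comparison in practice.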
15

QUET, Pierre-Francois D. "A ROBUST CONTROL THEORETIC APPROACH TO FLOW CONTROLLER DESIGNS FOR CONGESTION CONTROL IN COMMUNICATION NETWORKS." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1032194223.

Full text
16

ROCHA, Tamires Taís Bezerra. "Pernambuco’s health sector: analysis of queueing problems and an economic growth model." Universidade Federal de Pernambuco, 2013. https://repositorio.ufpe.br/handle/123456789/19010.

Full text
Abstract:
An overview of the Brazilian health care system is presented, with an emphasis on the case of the state of Pernambuco. One central issue in health systems management manifests itself in the general form of long waiting lines, which are analyzed in this context, including some approaches that have been proposed and implemented in Pernambuco in order to tackle the problem. An optimal economic growth model highlighting the health sector, and then the health and education sectors operating jointly, is proposed. The results of the Pontryagin Maximum Principle applied to this model show the mutual benefits for both sectors and their effects on community welfare. A case study of queueing systems in Hospital da Restauração (an emergency hospital) in Recife, Pernambuco, is presented.
17

Pande, Mani. "Understanding the effects of the technology life cycle model and job and labor queues on employment of women in the Indian software industry /." Search for this dissertation online, 2004. http://wwwlib.umi.com/cr/ksu/main.

Full text
18

Renaud, Matthieu. "Évaluation d'un substitut osseux résorbable porteur de cellules souches : approche cellulaire pour la régénération osseuse in vivo." Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTT081.

Full text
Abstract:
Despite the development of biomaterials in the field of bone grafts and alveolar preservation, the results are no sufficient to made reconstructions ad integrum of bone tissue. Bone engineering techniques seem to be the preferred way to improve our surgical techniques. Porous silicon is a promising material for tissue engineering and especially for bone regeneration. Indeed, its surface allows cell adhesion. And then, it’s a non-toxic and bioresorbable interesting material properties carrying stem cells. Dental pulp stem cells (DPSC) are easily accessible cells in the oral cavity. Their proliferation and differentiation capacities associated with porous silicon appear to be attractive for therapeutic applications in bone regeneration. The results of the in vitro studies have shown the interest for in vivo application. In this thesis, we have tested the combination of porous silicon and dental pulp stem cells in vivo experimentation, using the same characteristics of the in vitro reference study. For this, the material was produced in particle form to be used as bone filling material, associated or not with DPSC. The rat-tail model was developed and tested to reduce the number of animals needed for the study while maintaining the statistical power of the results. Studies have shown the possibility of using this model for bone regeneration defects surgically created. In addition, it seems that this model can also be useful for studies on osseointegration of implantable systems and bone regeneration around these implants. Then, the porous silicon was tested under these conditions, with or without DPSC, in comparison with a positive control and a negative control. This association has emerged as a promising approach for bone regeneration in vivo
APA, Harvard, Vancouver, ISO, and other styles
19

Boucharel, Julien. "Modes de variabilité climatique dans l'océan Pacifique tropical : quantification des non-linéarités et rôle sur les changements de régimes climatiques." Phd thesis, Université Paul Sabatier - Toulouse III, 2010. http://tel.archives-ouvertes.fr/tel-00720706.

Full text
Abstract:
In this thesis, we address the problem of scale interactions from two distinct angles: on the one hand, a global, large-scale view of the climate system, through which we study the low-frequency modulation of ENSO; on the other, a more local approach focused on the dynamics of the eastern tropical Pacific and the Humboldt current system off Peru. The first part was motivated by a relatively recent question in the climate community: whether the crucial low-frequency variability of ENSO can emerge "simply" from the tropical climate system, without external forcing, be it stochastic or linked to higher-latitude variability. In this context, nonlinear mechanisms are invoked to explain how the stability of ENSO can be influenced by climate variability; this served as the working hypothesis for the whole thesis. We investigated the possibility that ENSO is rectified on long (interdecadal) timescales by the modulation of the nonlinearity itself. To do so, we used original mathematical methods that allowed us, first, to detect abrupt (statistically significant) shifts in the mean state of the tropical Pacific and, second, to build a proxy for the nonlinearity embedded in the tropical system. Combining these two approaches, we identified a self-sustained feedback loop on long timescales, driven by nonlinear mechanisms capable of making different timescales interfere and thus of transferring energy from low frequencies (the tropical Pacific mean state) to high frequencies (the Southern Oscillation) and vice versa.
In the second part of this thesis, we focus on climate modelling of the eastern tropical Pacific. Although this region hosts one of the most productive ecosystems on the planet and is therefore central to the scientific community's concerns, its oceanographic and climatic processes remain poorly understood; in particular, global climate models exhibit large biases there in terms of the mean climatological state. Using a high-resolution modelling approach, we tested several possible sources of these biases: the bathymetry of the Galápagos Islands (poorly represented in global models), whose equatorial position can modify the mean regional circulation and hence the thermodynamic budget; and the processes associated with turbulent mixing (and, by extension, nonlinear processes), studied with a regional model. Sensitivity experiments allowed us, on the one hand, to put into perspective the role of the Galápagos archipelago as a source of bias and, on the other, to highlight the role of intraseasonal variability in rectifying the mean state of the eastern tropical Pacific.
APA, Harvard, Vancouver, ISO, and other styles
20

Xu, Bei. "Les approches extrêmes de la contagion sur les marchés financiers." Thesis, Bordeaux 4, 2012. http://www.theses.fr/2012BOR40033.

Full text
Abstract:
The thesis consists of three parts. The first introduces a number of measures of extreme dependence. An application to the stock and bond markets of 49 countries shows that multivariate extreme value theory leads to results that differ from those based on the correlation coefficient, but are relatively close to those obtained from the multivariate conditional Spearman's rho. This part also assesses the risk of large simultaneous losses. The second part examines the determinants of extreme co-movements between 5 core countries and 49 non-core countries. The transmission mechanisms of shocks vary from the earlier to the more recent period, from developed to emerging markets, and from normal to extreme shocks. The third part examines gold's role as a safe haven over the period 1986-2012. Extreme positive gains on gold can be linked to extreme losses on the S&P; however, this relationship is not always valid: it evolves over time and appears to be conditioned by other factors.
APA, Harvard, Vancouver, ISO, and other styles
21

Rodrigues, Joana Filipa Soares Pereira. "Modelos matemáticos para a gestão do serviço de receção no Instituto Português de Reumatologia." Master's thesis, Instituto Superior de Economia e Gestão, 2013. http://hdl.handle.net/10400.5/6550.

Full text
Abstract:
Master's in Economic and Business Decision Making
The purpose of this Master's final project is to study Operational Research models to solve a problem felt at the user reception desk of the Portuguese Institute of Rheumatology (IPR). During the internship at this institution, the difficulty identified was giving a quick response to the large number of users who gather in the waiting room at certain times, causing imbalances in the doctors' schedules and tension in the waiting rooms. Users wait at the reception desk for different reasons: checking in for appointments, scheduling new appointments or exams, paying for exams, obtaining information about how the institute operates, and completing hospital admission processes. To obtain possible solutions to this problem, the methodology proposed in this report was to build two mathematical models: an integer linear programming model and a simulation model. Several proposals for reorganising the institute's reception service were studied using the simulation model. The integer linear programming model was used to solve instances of the problem in order to determine the minimum number of service counters needed for the expected flow of users, using data collected at the IPR and data from the simulation. Based on the analysis of the simulation model indicators, we concluded that the best proposal would be to have the doctors schedule the next appointment. According to the solutions obtained from the various instances of the integer linear programming model, one of the counters is not in operation during most shifts. An Excel model was also built that allows employees to decide, in real time, how many counters to open in each shift, given the number and type of users in the queue.
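The counter-sizing question in the abstract above (the minimum number of open counters for an expected flow of users) can also be illustrated with the classic Erlang-C formula for an M/M/c queue. This is a self-contained sketch, not the thesis's integer linear programming model, and all arrival and service figures are invented for illustration.

```python
import math

def erlang_c(servers: int, offered_load: float) -> float:
    """Probability that an arriving user must wait (M/M/c queue, Erlang C)."""
    if offered_load >= servers:
        return 1.0  # unstable queue: everyone waits
    s = sum(offered_load**k / math.factorial(k) for k in range(servers))
    top = (offered_load**servers / math.factorial(servers)
           * servers / (servers - offered_load))
    return top / (s + top)

def min_counters(arrival_rate: float, service_rate: float,
                 max_wait_prob: float) -> int:
    """Smallest number of counters keeping P(wait) below a target."""
    a = arrival_rate / service_rate  # offered load in Erlangs
    c = max(1, math.ceil(a))
    while erlang_c(c, a) > max_wait_prob:
        c += 1
    return c

# Illustrative figures: 60 users/hour, 20 served per counter per hour,
# at most a 20% chance of having to queue.
print(min_counters(60, 20, 0.20))  # → 6
```

A real-time version of this rule (recompute `min_counters` as the observed arrival rate changes during a shift) is close in spirit to the Excel decision aid the abstract describes.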
APA, Harvard, Vancouver, ISO, and other styles
22

Torri, Niccolò. "Phénomènes de localisation et d’universalité pour des polymères aléatoires." Thesis, Lyon 1, 2015. http://www.theses.fr/2015LYO10114/document.

Full text
Abstract:
The pinning model describes the behaviour of a Markov chain in interaction with a distinguished state. This interaction can attract or repel the Markov chain path, with a force tuned by two parameters, h and β. If β = 0 we obtain the homogeneous pinning model, which is completely solvable. The disordered pinning model, i.e. when β > 0, is the most challenging and mathematically interesting case. Here the interaction depends on an external source of randomness, independent of the Markov chain, called the disorder. The interaction is realised by perturbing the original law of the Markov chain via a Gibbs measure, which defines the pinning model. Our main aim is to understand the structure of a typical Markov chain path under this new probability measure. The first research topic of this thesis is the pinning model in which the disorder is heavy-tailed and the return times of the Markov chain have a sub-exponential distribution. In our second result we consider a pinning model with a light-tailed disorder and return times with a polynomial tail distribution of exponent α > 0. It can be shown that there exists a critical point, h(β). Our goal is to understand the behaviour of the critical point as β → 0. The answer depends on the value of α, and the literature contains precise results only for the cases α < 1/2 and α > 1. We show that for α ∈ (1/2, 1) the behaviour of the pinning model in the weak disorder limit is universal and the critical point, suitably rescaled, converges to the corresponding quantity of a continuum model.
APA, Harvard, Vancouver, ISO, and other styles
23

Jaunâtre, Kévin. "Analyse et modélisation statistique de données de consommation électrique." Thesis, Lorient, 2019. http://www.theses.fr/2019LORIS520.

Full text
Abstract:
In October 2014, the French Environment and Energy Management Agency (ADEME), in cooperation with the ENEDIS company (formerly ERDF, Électricité Réseau Distribution France), started a research project named "smart-grid SOLidarité-ENergie-iNovation" (SOLENN), whose objectives include the study of controlling electricity consumption by supporting households, and securing the electricity supply. This thesis falls within those objectives. The SOLENN project was led by the ADEME and took place in Lorient, France; its main goal is to raise households' awareness of electric energy savings. In this context, we describe a method for estimating extreme quantiles and probabilities of rare events for non-parametric functional data, which is implemented in an R package. We then propose an extension of the famous Cox proportional hazards model that allows the estimation of probabilities of rare events and of extreme quantiles. Finally, we apply some of the statistical models developed in this document to electricity consumption data sets that proved useful for the SOLENN project. A first application is linked to the power-curtailment programme run by ENEDIS in order to secure the electric network: households have their maximal power reduced for a short period of time, and the goal is to study how they behave during this period. A second application concerns the use of the linear regression model to study the effect of several individual visits on electricity consumption.
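Extreme quantile estimation of the kind mentioned in this abstract can be illustrated, in a much simpler univariate setting, by the Hill estimator combined with Weissman extrapolation. This sketch is not the thesis's functional non-parametric method nor its R package; it only shows the general peaks-over-threshold idea on simulated heavy-tailed data.

```python
import math, random

def hill_gamma(sample, k):
    """Hill estimator of the tail index from the k largest observations."""
    xs = sorted(sample, reverse=True)
    xk = xs[k]  # the (k+1)-th largest value plays the role of the threshold
    return sum(math.log(xs[i] / xk) for i in range(k)) / k

def weissman_quantile(sample, k, p):
    """Weissman extrapolation for the extreme quantile of order 1 - p."""
    n = len(sample)
    xs = sorted(sample, reverse=True)
    return xs[k] * (k / (n * p)) ** hill_gamma(sample, k)

random.seed(0)
# Pareto(alpha=2) sample: true tail index gamma = 1/2, and the true
# quantile of order 1 - p equals p**(-1/2), i.e. 100 for p = 1e-4.
data = [random.random() ** (-1 / 2) for _ in range(20000)]
q_hat = weissman_quantile(data, k=500, p=1e-4)
print(q_hat)  # should land in the vicinity of 100
```

The quantile of order 1 − 10⁻⁴ lies far beyond the sample maximum's typical resolution for n = 20000, which is exactly the regime where extrapolation of this type is needed.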
APA, Harvard, Vancouver, ISO, and other styles
24

Golder, Jacques. "Modélisation d'un phénomène pluvieux local et analyse de son transfert vers la nappe phréatique." Phd thesis, Université d'Avignon, 2013. http://tel.archives-ouvertes.fr/tel-01057725.

Full text
Abstract:
In research on water resource quality, the study of mass transfer from the soil to the groundwater table is a key element in understanding groundwater pollution. Soluble pollutants at the surface (products linked to human activity such as fertilisers and pesticides) can migrate to the water table through the porous medium of the soil. This pollution-transfer scenario rests on two phenomena: the rain that generates the mass of water at the surface, and the dispersion of that water through the porous medium. Mass dispersion in a natural porous medium such as soil is a vast and difficult research subject, both experimentally and theoretically. Its modelling is a focus of the EMMAH laboratory, in particular within the Sol Virtuel project, in which a transfer model (the PASTIS model) was developed. Coupling this transfer model with an input model describing the random dynamics of rainfall is one of the objectives of this thesis. The work pursues this objective by drawing on experimental observations on the one hand, and on modelling inspired by the analysis of those observations on the other. The first part of the work is devoted to building a stochastic rainfall model. The choice and nature of the model are based on characteristics obtained from the analysis of rainfall depth data collected over 40 years (1968-2008) at the INRA research centre in Avignon. The cumulative representation of precipitation is treated as a random walk in which the jumps and the waiting times between jumps are, respectively, the random amplitudes of, and random durations between, two successive rain events.
The probability law of the jumps (log-normal) and that of the waiting times between jumps (alpha-stable) are obtained by analysing the distributions of the amplitudes and occurrences of rain events. We then show that this random-walk model converges to a time-subordinated geometric Brownian motion (when the space and time steps of the walk tend to zero simultaneously at a constant ratio), whose probability density is governed by a fractional Fokker-Planck equation (FFPE). Two approaches are then used to implement the model. The first is stochastic and relies on the link between the stochastic process defined by the Itô differential equation and the FFPE. The second uses a direct numerical solution obtained by discretising the FFPE. In line with the main objective of the thesis, the second part of the work analyses the contribution of rainfall to the fluctuations of the water table. This analysis is based on 14 months (February 2005-March 2006) of simultaneous observations of rainfall and water-table levels. A statistical study of the links between the rainfall signal and water-table fluctuations is carried out as follows: the water-table variation data are analysed and processed to isolate the fluctuations coherent with rain events. Furthermore, to account for mass dispersion in the soil, the transport of rainwater through the soil is modelled with a transfer code (the PASTIS model), driven by the measured rainfall data. Among other things, the model results provide an estimate of the soil water state at a given depth (here set to 1.6 m).
A study of the correlation between this water state and the water-table fluctuations is then carried out, complementing the analysis described above, to illustrate the possibility of modelling the impact of rainfall on water-table fluctuations.
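The rainfall random walk described above (log-normal jump amplitudes, alpha-stable waiting times between rain events) can be sketched in a few lines. This is an illustrative toy, not the fitted Avignon model: parameter values are invented, and the one-sided alpha-stable sampler follows Kanter's classical representation.

```python
import math, random

def positive_stable(alpha: float) -> float:
    """Kanter's sampler for a one-sided alpha-stable variable, 0 < alpha < 1."""
    u = random.uniform(0.0, math.pi)
    w = random.expovariate(1.0)
    return (math.sin(alpha * u) / math.sin(u) ** (1 / alpha)
            * (math.sin((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

def rainfall_walk(n_events, alpha=0.7, mu=0.5, sigma=1.0):
    """Cumulative rainfall: log-normal jump heights, alpha-stable waits."""
    t, h = 0.0, 0.0
    path = [(t, h)]
    for _ in range(n_events):
        t += positive_stable(alpha)            # waiting time to the next event
        h += random.lognormvariate(mu, sigma)  # rainfall amount of the event
        path.append((t, h))
    return path

random.seed(1)
path = rainfall_walk(1000)
print(path[-1])  # (total elapsed time, total accumulated rainfall)
```

The heavy-tailed waiting times are what produce the long dry spells, and in the scaling limit they are also the source of the time subordination mentioned in the abstract.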
APA, Harvard, Vancouver, ISO, and other styles
25

Saliba, Pamela. "High-frequency trading : statistical analysis, modelling and regulation." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX044.

Full text
Abstract:
This thesis is made of two related parts. In the first, we study the empirical behaviour of high-frequency traders on European financial markets; we use the results obtained to build, in the second part, new agent-based models for market dynamics. The main purpose of these models is to provide regulators and exchanges with innovative tools allowing them to design suitable rules at the microstructure level and to assess the impact of the various participants on market quality. In the first part, we conduct two empirical studies on unique data sets provided by the French regulator, covering the orders and trades of the CAC 40 securities with microsecond accuracy and labelled with the identities of the market participants involved. We begin by investigating the behaviour of high-frequency traders compared to the rest of the market, notably during periods of stress, in terms of liquidity provision and trading activity. We then deepen our analysis by focusing on liquidity-consuming orders, giving evidence on their impact on the price formation process and their information content according to the different order flow categories: high-frequency traders, agency participants and proprietary participants.
In the second part, we propose three agent-based models. Using a Glosten-Milgrom type approach, the first model enables us to deduce the whole limit order book (bid-ask spread and volume available at each price) from the interactions between three kinds of agents: an informed trader, a noise trader and several market makers. It also allows us to build a spread forecasting methodology in case of a tick size change and to quantify the queue priority value. To work at the level of individual agents, we propose a second approach in which the specific dynamics of market participants are modelled by non-linear, state-dependent Hawkes-type processes. In this setting, we are able to compute several relevant microstructural indicators in terms of the individual flows; it is notably possible to rank market makers according to their own contribution to volatility. Finally, we introduce a model where market makers optimise their best bid and ask quotes according to the profit they can generate and the inventory risk they face. We then establish, theoretically and empirically, a new and important relationship between inventory and volatility.
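The last model mentioned in this abstract, a market maker trading off quoting profit against inventory risk, can be illustrated with a minimal inventory-skewed quoting rule in the spirit of Avellaneda and Stoikov. This is a toy, not the thesis's model: the quoting rule, the fill intensities and every parameter value are invented for illustration.

```python
import math, random

def quotes(mid, inventory, gamma=0.1, sigma=1.0, half_spread=0.5):
    """Bid/ask around a reservation price that penalises inventory."""
    reservation = mid - inventory * gamma * sigma**2
    return reservation - half_spread, reservation + half_spread

def simulate(steps=10_000, k=1.5, seed=7):
    random.seed(seed)
    mid, inv, cash = 100.0, 0, 0.0
    for _ in range(steps):
        bid, ask = quotes(mid, inv)
        # Fill probabilities decay with a quote's distance from the mid,
        # so the inventory skew mean-reverts the position.
        if random.random() < 0.5 * math.exp(-k * (mid - bid)):
            inv += 1; cash -= bid          # our bid is hit
        if random.random() < 0.5 * math.exp(-k * (ask - mid)):
            inv -= 1; cash += ask          # our ask is lifted
        mid += random.gauss(0.0, 0.05)     # mid-price diffusion
    return inv, cash + inv * mid           # inventory, mark-to-market P&L

inv, pnl = simulate()
print(inv, pnl)
```

Even this toy shows the mechanism the abstract points at: a larger inventory penalty (`gamma`) keeps the position tighter but moves quotes further from the mid, trading volume against risk.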
APA, Harvard, Vancouver, ISO, and other styles
26

Chung, Yi-Dar, and 鍾宜達. "Packet Mode Cell-based Combined Input Output Queue Switches." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/60735069402603583005.

Full text
Abstract:
Master's thesis
I-Shou University
Department of Information Engineering
91
With the rapid growth of multimedia applications and the popularity of the Internet, network bandwidth is becoming a bottleneck. To provide Internet access to individual and corporate customers, switch architectures with better performance are urgently needed. A switch architecture called the Combined Input Output Queueing (CIOQ) switch has been proposed to reduce head-of-line (HOL) blocking, provide better Quality of Service (QoS) guarantees, and achieve high scalability. In a CIOQ switch, an input segmentation module and an output reassembly module are needed at the input port and the output port, respectively. The input segmentation module segments an arriving packet into a train of cells; the output reassembly module reassembles the cells into the original packet before it departs. Previous studies of CIOQ packet switches, however, make the same assumption about segmentation and reassembly: packets are segmented into cells as they arrive, sent into the input queue at a rate of one cell per slot, and reassembled back into packets before they depart. This thesis shows that, under this assumption, a cell-based CIOQ packet switch with a speedup of two cannot exactly emulate an output-queued (OQ) packet switch. The thesis then analyses the feasibility of six possible combinations of segmentation and reassembly and obtains two feasible solutions: cut-through segmentation with explicative reassembly, and store-and-cut segmentation with implicative reassembly. Comparing these two feasible solutions, we choose the latter combination for our CIOQ switches' segmentation and reassembly, since it simplifies the matching algorithm. Finally, the thesis proposes a new matching algorithm, called the Packet Mode Cell-based Matching Algorithm (PCMA), with which a CIOQ packet switch can exactly emulate an OQ packet switch. CIOQ packet switches can thus meet the scalability and QoS-guarantee requirements of future high-speed networks.
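The segmentation and reassembly modules described in this abstract can be sketched schematically. This toy is not the PCMA algorithm and ignores the switching fabric entirely; the cell format and payload size are invented, and the only point illustrated is that cells of different packets may interleave while each packet is still reassembled intact.

```python
import random

CELL_PAYLOAD = 4  # bytes of payload per fixed-size cell (illustrative)

def segment(packet_id: int, data: bytes):
    """Cut a packet into fixed-size cells, marking the last one."""
    cells = []
    for i in range(0, len(data), CELL_PAYLOAD):
        chunk = data[i:i + CELL_PAYLOAD]
        last = i + CELL_PAYLOAD >= len(data)
        cells.append((packet_id, i // CELL_PAYLOAD, last, chunk))
    return cells

def reassemble(cells):
    """Collect cells per packet; release a packet once every cell has arrived."""
    buffers, totals, done = {}, {}, {}
    for pid, seq, last, chunk in cells:
        buffers.setdefault(pid, {})[seq] = chunk
        if last:
            totals[pid] = seq + 1  # the last cell reveals the packet length
        if pid in totals and len(buffers[pid]) == totals[pid]:
            parts = buffers.pop(pid)
            done[pid] = b"".join(parts[s] for s in sorted(parts))
    return done

cells = segment(1, b"hello, CIOQ switch") + segment(2, b"next packet")
random.seed(3)
random.shuffle(cells)  # cells interleave (and reorder) inside the fabric
print(reassemble(cells))
```

In a real switch the reassembly buffer sits at the output port, and the matching algorithm decides in which order cells cross the fabric; here the shuffle simply stands in for that interleaving.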
APA, Harvard, Vancouver, ISO, and other styles
27

陳群元. "The Research of Setting the Fittest Queue Model for Bank User." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/16724882730643438818.

Full text
Abstract:
Master's thesis
Soochow University
Department of Business Administration
92
This research uses a multiple-item scale for measuring waiting quality to assess the waiting quality experienced by bank customers, and incorporates the Technology Acceptance Model (TAM) to build a personal waiting model from each person's individual characteristics. The main purpose is to combine the factors affecting service quality with the TAM for each person waiting for bank service. Moreover, according to the different agencies through which customers obtain bank services, the research tries to set the fittest model for each customer in each agency: knowing what customers need, and what affects their willingness, makes it possible to design the service agencies they require. The main contribution of this research is to transfer the TAM from the workplace to customer service: in other words, people's preference for using a channel such as the Internet, the telephone or an ATM shows that they obtain more satisfaction by using it. Keywords: bank, personal character, waiting quality, service agencies, waiting model.
APA, Harvard, Vancouver, ISO, and other styles
28

Chang, Ling-Cheng, and 張凌誠. "Model-based Computing Budget Allocation for G/G Queue System Simulations." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/50633513309799879519.

Full text
Abstract:
Master's
National Taiwan University
Institute of Industrial Engineering
96
Parameter setting to minimize the expected waiting time in G/G queueing systems is an important issue. Regression models are constructed to describe the relationship between the expected waiting time and the parameter setting, in order to search for the optimal setting. In the literature, Cheng and Kleijnen, and Yang, Ankenman and Nelson have proposed procedures to choose the setting levels to be simulated and the number of replications for each level. However, their models consider only one decision variable, i.e., the traffic intensity or the throughput rate. We propose a procedure, referred to as Model-based Computing Budget Allocation (MCBA), which combines queueing theory and optimal design of experiments to solve the budget allocation problem with multiple decision variables. Our approach approximates the expected waiting time with polynomial functions based on formulas from queueing theory, and sequentially decides which parameter settings should be simulated based on the concept of D-optimality. To verify the performance of MCBA, we study two cases. The first is a G/G/1 queueing problem whose optimal parameter setting is difficult to determine. The second has an additional binary decision variable representing two different dispatching rules. Compared with the results of Optimal Computing Budget Allocation (OCBA), the proposed approach achieves a higher probability of correct selection under the same simulation cost.
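The kind of G/G/1 waiting-time response that such budget-allocation procedures sample by simulation can be sketched with Lindley's recursion; the M/M/1 parameters below are illustrative choices, not taken from the thesis.

```python
import random

def gg1_mean_wait(interarrival, service, n=100_000, seed=1):
    """Estimate the mean waiting time of a G/G/1 queue via Lindley's
    recursion, W_{k+1} = max(0, W_k + S_k - A_{k+1})."""
    rng = random.Random(seed)
    w = total = 0.0
    for _ in range(n):
        w = max(0.0, w + service(rng) - interarrival(rng))
        total += w
    return total / n

# Illustrative M/M/1 sanity check: lambda = 0.8, mu = 1.0, so the analytic
# mean wait is rho / (mu - lambda) = 4; one simulation run should be close.
est = gg1_mean_wait(lambda r: r.expovariate(0.8), lambda r: r.expovariate(1.0))
```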
APA, Harvard, Vancouver, ISO, and other styles
29

Chang, Ling-Cheng. "Model-based Computing Budget Allocation for G/G Queue System Simulations." 2008. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2708200816502600.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Chakraborty, Avijit. "Delay Differentiation By Balancing Weighted Queue Lengths." Thesis, 2013. http://etd.iisc.ernet.in/handle/2005/2613.

Full text
Abstract:
Scheduling policies adopted for statistical multiplexing should provide delay differentiation between different traffic classes, where each class represents the aggregate traffic of individual applications having the same target queueing-delay requirements. We propose scheduling to optimally balance weighted mean instantaneous queue lengths, and later weighted mean cumulative queue lengths, as an approach to delay differentiation, where the class weights are set inversely proportional to the respective products of target delays and packet arrival rates. In particular, we assume a discrete-time, two-class, single-server queueing model with unit service time per packet, and provide a mathematical framework throughout our work. For iid Bernoulli packet arrivals, using a step-wise cost-dominance analytical approach based on instantaneous queue lengths alone, for a class of one-stage cost functions not necessarily convex, we find the structure of the total-cost optimal policies for a part of the state space. We then consider two particular one-stage cost functions for finding two scheduling policies that are total-cost optimal for the whole state space. The policy for the absolute weighted difference cost function minimizes the stationary mean, and the policy for the weighted sum-of-squares cost function minimizes the stationary second-order moment, of the absolute value of the weighted difference of queue lengths. For the weighted sum-of-squares cost function, the 'iid Bernoulli arrivals' assumption can be relaxed to either 'iid arrivals with general batch sizes' or 'Markovian zero-one arrivals' for all of the state space except the linear switching curve. We then show that the average cost, starting from any initial state, exists and is finite for every stationary work-conserving policy for our choices of the one-stage cost function. This is shown for an arbitrary number of class queues and for any iid batch arrival process with appropriate finite moments.
We then use cumulative queue length information in the one-step cost function of the optimization formulation and obtain an optimal myopic policy with 3 stages to go for iid arrivals with general batch sizes. We show analytically that this policy achieves the given target delay ratio in the long run under a finite buffer assumption, provided the feasibility conditions are satisfied. We take recourse to numerical value iteration to show the existence of the average cost for this policy. Simulations with varied class weights for Bernoulli arrivals and batch arrivals with Poisson batch sizes show that this policy achieves mean queueing delays closer to the respective target delays than the policy obtained earlier. We also note that the coefficients of variation of the queueing delays of both classes using cumulative queue lengths are of the same order as those using instantaneous queue lengths. Moreover, the short-term behaviour of the optimal myopic policy using cumulative queue lengths is superior to the existing standard policy reported by Coffman and Mitrani by a factor in the range of 3 to 8. Though our policy performs marginally poorer compared with the value-iterated, sampled, and then stationarily employed policy, the latter lacks any closed-form structure. We then modify the definition of the third state variable and look to directly balance weighted mean delays. We come up with another optimal myopic policy with 3 stages to go, following which the error in the ratio of mean delays decreases as the window size, as opposed to the policy mentioned in the last paragraph, wherein the error decreases as the square root of the window size. We perform numerical value iteration to show the existence of the average cost and study the performance by simulation. The performance of our policy is comparable with the value-iterated, sampled, and then stationarily employed policy reported by Mallesh.
We have then studied general inter-arrival time processes and obtained the optimal myopic policy for the Pareto inter-arrival process in particular. We have supported with simulation that our policy fares similarly to the PAD policy reported by Dovrolis et al., which is primarily heuristic in nature. We then model possible packet errors in the multiplexed channel by either a Bernoulli process or a Markov modulated Bernoulli process with two possible channel states. We also consider two possible round-trip-time values for control information, namely zero and one slot. The policies that are next-stage optimal (for zero round-trip time) and two-stage optimal (for one-slot round-trip time) are obtained. Simulations with varied class weights for Bernoulli arrivals and batch arrivals with Poisson batch sizes show that these policies indeed achieve mean queueing delays very close to the respective target delays. We also obtain the structure of optimal policies with N = 2 + ⌈rtt⌉ stages to go for generic values of rtt, which need not be a multiple of the time slot.
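The weighted-balancing idea can be illustrated with a minimal greedy sketch: each slot, serve the class with the larger weighted queue length, and read off mean delays via Little's law. This is a simplified stand-in for the cost-optimal policies derived in the thesis; the weights and arrival probabilities are illustrative.

```python
import random

def simulate(weights, arrival_probs, slots=200_000, seed=0):
    """Discrete-time, two-class, single-server queue with unit service
    time. Each slot, one packet is served from the class with the larger
    weighted queue length -- a greedy stand-in for the optimal balancing
    policies of the thesis."""
    rng = random.Random(seed)
    q = [0, 0]
    qsum = [0.0, 0.0]
    for _ in range(slots):
        for c in (0, 1):                      # iid Bernoulli arrivals
            if rng.random() < arrival_probs[c]:
                q[c] += 1
        if q[0] or q[1]:                      # serve the heavier weighted class
            c = 0 if weights[0] * q[0] >= weights[1] * q[1] else 1
            q[c] -= 1
        qsum[0] += q[0]
        qsum[1] += q[1]
    # Little's law: mean delay (slots) = mean queue length / arrival rate
    return [qsum[c] / slots / arrival_probs[c] for c in (0, 1)]

delays = simulate(weights=(2.0, 1.0), arrival_probs=(0.3, 0.3))
```

With the class-0 weight twice that of class 1, the scheduler favours class 0 and its mean delay comes out lower, as the weighting intends.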
APA, Harvard, Vancouver, ISO, and other styles
31

李涵恕. "Analyzing the Waiting Time Perception by Using Flow Theory and Queue Model." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/73458440884746653454.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Majedi, Mohammad. "A Queueing Model to Study Ambulance Offload Delays." Thesis, 2008. http://hdl.handle.net/10012/4019.

Full text
Abstract:
The ambulance offload delay problem is a well-known result of overcrowding and congestion in emergency departments. Offload delay refers to the situation where area hospitals are unable to accept patients from regional ambulances in a timely manner due to lack of staff and bed capacity. The problem of offload delays is not a simple issue to resolve and has caused severe problems for emergency medical services (EMS) providers, emergency department (ED) staff, and most importantly patients who are transferred to hospitals by ambulance. Except for several reports on the problem, not much research has been done on the subject. Almost all research to date has focused on either EMS or ED planning and operation, and as far as we are aware there are no models which have considered the coordination of these units. We propose an analytical model which will allow us to analyze and explore the ambulance offload delay problem. We use queuing theory to construct a system representing the interaction of EMS and ED, and model the behavior of the system as a continuous time Markov chain. The matrix geometric method will be used to numerically compute various system performance measures under different conditions. We analyze the effect of adding more emergency beds in the ED, adding more ambulances, and reducing the ED patient length of stay, on various system performance measures such as the average number of ambulances in offload delay, average time in offload delay, and ambulance and bed utilization. We will show that adding more beds to the ED or reducing ED patient length of stay will have a positive impact on system performance and in particular will decrease the average number of ambulances experiencing offload delay and the average time in offload delay. Also, it will be shown that increasing the number of ambulances will have a negative impact on offload delays and increase the average number of ambulances in offload delay.
However, other system performance measures are improved by adding more ambulances to the system. Finally, we will show the tradeoffs between adding more emergency beds, adding more ambulances, and reducing ED patient length of stay. We conclude that the hospital is the bottleneck in the system and that, in order to reduce ambulance offload delays, either hospital capacity has to be increased or ED patient length of stay has to be reduced.
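The interaction of ambulance arrivals and ED beds can be sketched with a small continuous-time Markov chain simulation. The rates, bed count, and uncapped ambulance fleet below are illustrative assumptions; the thesis instead solves the chain analytically with the matrix geometric method.

```python
import random

def offload_sim(lam, mu, beds, t_end=100_000.0, seed=3):
    """CTMC sketch: ambulance patients arrive at rate lam (Poisson);
    the ED has `beds` servers, each completing service at rate mu.
    A patient who finds no free bed waits on the ambulance, so the
    number of ambulances in offload delay is max(0, n - beds)."""
    rng = random.Random(seed)
    t, n, area = 0.0, 0, 0.0
    while t < t_end:
        rate = lam + min(n, beds) * mu      # total event rate in state n
        dt = rng.expovariate(rate)
        area += max(0, n - beds) * dt       # accumulate offload-delay time
        t += dt
        if rng.random() < lam / rate:       # next event: arrival vs departure
            n += 1
        else:
            n -= 1
    return area / t   # time-average number of ambulances in offload delay

avg_delayed = offload_sim(0.8, 1.0, beds=2)
```

Consistent with the abstract's findings, adding beds reduces the average number of ambulances in offload delay.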
APA, Harvard, Vancouver, ISO, and other styles
33

Guan, Lin, Mike E. Woodward, and Irfan U. Awan. "A discrete-time performance model for congestion control mechanism using queue thresholds with QOS constraints." 2005. http://hdl.handle.net/10454/473.

Full text
Abstract:
This paper presents a new analytical framework for the congestion control of Internet traffic using a queue threshold scheme. The framework includes two discrete-time analytical models for the performance evaluation of a threshold-based congestion control mechanism and compares performance measurements through typical numerical results. To satisfy low delay along with high throughput, model-I incorporates one threshold to make the arrival rate step down directly from λ1 to λ2 once the number of packets in the system has reached the threshold value L1; the source operates normally otherwise. Model-II incorporates two thresholds to make the arrival rate decrease linearly from λ1 to λ2 with the system contents when the number of packets in the system is between the two thresholds L1 and L2. The source operates normally with arrival rate λ1 before threshold L1, and with arrival rate λ2 after threshold L2. In both performance models, the mean packet delay W, probability of packet loss PL and throughput S have been found as functions of the thresholds and the maximum drop probability. Performance comparisons of the two models have also been made through typical numerical results. The results clearly demonstrate how different load settings can provide different tradeoffs between throughput, loss probability and delay to suit different service requirements.
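The two threshold mechanisms can be written down directly as arrival-rate functions of the queue length. A minimal sketch, with illustrative parameter names:

```python
def arrival_rate_one_threshold(n, lam1, lam2, L1):
    """Model-I: the source steps from lam1 down to lam2 once the queue
    length n reaches the threshold L1; it operates normally otherwise."""
    return lam2 if n >= L1 else lam1

def arrival_rate_two_thresholds(n, lam1, lam2, L1, L2):
    """Model-II: the rate decreases linearly from lam1 to lam2 while
    L1 <= n <= L2, and is constant outside that band."""
    if n < L1:
        return lam1
    if n > L2:
        return lam2
    return lam1 - (lam1 - lam2) * (n - L1) / (L2 - L1)
```

For example, midway between thresholds L1 = 10 and L2 = 20, model-II's rate is halfway between λ1 and λ2.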
APA, Harvard, Vancouver, ISO, and other styles
34

Tung, Kuan-Po, and 董冠伯. "The Design and Implementation of an Integrated Altmetrics Analysis System Based on Task Queue Model." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/13324270275547388775.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Library and Information Science
104
Altmetrics is a new research field that aims to overcome the limits of traditional bibliometrics and informetrics. It integrates benefits and features from webometrics and extends the data sources to the many informal activities of academia, so that all Internet activity, including discussion, courses and code, can be included in the statistics. It can address the problems of traditional informetrics, which spreads results too slowly and only includes data from professional databases. Although many altmetrics tools exist abroad, they are limited in analyzing local information, so a new system is needed to analyze the influence of scholars through their Chinese names. The subject of this thesis is to explore the data sources and data types needed to analyze the influence of local scholars, and to design a Chinese altmetrics analysis system based on the research results, covering the shortcomings of foreign tools in studying local scholars. This system must adapt to many different websites and platforms and fetch the corresponding data; since these data are unstructured, they must be analyzed by different rules. We therefore designed an architecture that adapts to many environments and is easily extended with new data sources. This thesis uses the task queue pattern, executing and assigning tasks from a task list. As a result, we created a system with a well-designed GUI whose automatic analysis differs very little from manual analysis; it has referential value and can replace manual analysis. We expect further research to find more data types and develop more algorithms, so that the contributions of local scholars become known to more people.
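The task-queue pattern the system is built on can be sketched with Python's standard `queue` and `threading` modules. The source names and handler functions below are hypothetical placeholders, not the system's actual data sources.

```python
import queue
import threading

# Hypothetical handlers for two made-up data sources; a real deployment
# would register one parser per website/platform being harvested.
HANDLERS = {
    "blog": lambda text: ("blog", text.lower()),
    "news": lambda text: ("news", text.upper()),
}

def worker(tasks, results):
    """Pull (source, payload) jobs off the shared queue until poisoned."""
    while True:
        item = tasks.get()
        if item is None:          # poison pill: shut this worker down
            tasks.task_done()
            return
        source, payload = item
        results.append(HANDLERS[source](payload))
        tasks.task_done()

tasks, results = queue.Queue(), []
threads = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(2)]
for th in threads:
    th.start()
for job in [("blog", "Altmetrics"), ("news", "Altmetrics")]:
    tasks.put(job)
for _ in threads:
    tasks.put(None)
for th in threads:
    th.join()
```

Supporting a new source then only requires registering a new handler, which is the extensibility property the thesis attributes to the task-queue design.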
APA, Harvard, Vancouver, ISO, and other styles
35

Cho, Kai-Lin, and 卓楷霖. "A model of the effect on main-lane traffic while an off ramp queue spillback." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/29429211837756463243.

Full text
Abstract:
Master's
National Chiao Tung University
Department of Transportation Technology and Management
100
Recurrent congestion occurs during peak hours at some interchanges that intersect busy urban arterials. One such congestion pattern is an off-ramp queue spilling back onto the freeway main lanes. The cause of this phenomenon is that, if off-ramp queues cannot be discharged promptly when demand for the off ramp and the arterial increases at the same time, the queues grow longer. When the storage of the deceleration lane is insufficient for the off-ramp queues, they spill back onto the freeway main lanes and occupy the outermost lane. This slows through traffic and decreases total main-lane throughput. This research uses the Chupei Interchange, located at 91 km of Freeway No. 1, as a case to analyze the effect on main-lane traffic when the aforementioned congestion occurs. Because the real data available for our research were insufficient, a simulation method is applied. We first calibrate the parameters of the simulation model, then use GEH as an index to test whether the simulation outputs match the real data. Once the test results are accepted, the simulation model is considered usable. The simulation outputs are plotted in charts to analyze the effect on main-lane traffic under different off-ramp queue lengths. A model is also constructed to compute the delay time of main-lane traffic.
APA, Harvard, Vancouver, ISO, and other styles
36

Jen, Hsiao-Hsuan, and 任小萱. "Queue, Hurdle, and Coxian Phase-type Model for Time Distributions Related to Early Detection and Hospitalization of Colorectal Cancer." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/90023214285973387246.

Full text
Abstract:
Master's
National Taiwan University
Institute of Epidemiology and Preventive Medicine
103
Background: As the incidence rate of colorectal cancer (CRC) has been increasing in Taiwan, early detection of CRC, through fecal immunochemical test (FIT) screening first and then colonoscopy, and the hospitalization of CRC patients cannot be overemphasized. However, the arrival rate of screenees, the non-compliers with colonoscopy, the waiting time (WT) for colonoscopy, and the length of stay (LOS) of CRC patients have rendered the conventional queue model infeasible. Aims: The objective was to integrate the queue process, the hurdle model, and the Coxian phase-type model into a unifying framework, applied to two empirical datasets: one relating to the WT for colonoscopy from the Taiwanese nationwide screening program, and the other pertaining to the LOS of hospitalized CRC patients enrolled from one medical centre. Methods: The hurdle model was developed as a mixture of a logistic regression model dealing with the non-compliance part and a truncated Poisson regression model pertaining to the WT distribution. The Coxian phase-type model was further developed to identify the optimal number of hidden phases of the WT. To further account for the arrival rate of screenees, we developed the queue hurdle Coxian phase-type model, a combination of the Poisson process, the hurdle model and the Coxian phase-type model. Data on the LOS of 178 CRCs were modelled by the Coxian phase-type model to identify the optimal number of hidden phases. Results, Part I: From 2004 to 2009, the results of the hurdle model indicate that the factors associated with non-compliance with colonoscopy included female gender, older age, eastern Taiwan or offshore islands, rural areas, hospital screening units and prevalent screening rounds, and that the factors associated with shorter WT for colonoscopy included middle Taiwan, main urban areas, public health centre screening units and subsequent screening rounds.
Part II: The queue hurdle 2-phase Coxian phase-type model classified waiting into short and long waiting phases. The arrival rate was 0.00021 per person-day and the probability of non-compliance with colonoscopy was 0.26. Annually, around 15% of subjects were so hesitant to be referred for colonoscopy that they were trapped in the long waiting phase. The mean WT of the short and long waiting phases were 32 days and 169 days, respectively. When the effect of risk score was further considered, the model indicates that the mean WT in the short waiting phase was 36 days for the low-score group and 30 days for the high-score group, and 167 days in the long waiting phase for both groups. Part III: For hospitalization, the LOS of the 178 CRCs was modelled by a 3-phase Coxian phase-type model with short-stay, medium-stay and longer-stay phases. In the short-stay phase the expected LOS was 10 days, whereas in both the medium- and longer-stay phases it was 49 days. When gender was taken into account, the LOS was modelled as a 2-phase Coxian phase-type model with short- and long-stay care; males were discharged or died earlier than females. Regarding age, the elderly were discharged or died earlier than the young. Conclusions: A new queue hurdle Coxian phase-type model was developed to handle the queue process, to address the hurdle issue of non-compliance with referral of screen-positive subjects for confirmatory diagnosis, and to identify hidden phases in the WT for colonoscopy among referrals and in the LOS of hospitalization for the treated CRCs.
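A 2-phase Coxian waiting time of the kind fitted above can be sampled directly. The rates and branching probability below loosely echo the numbers quoted in the abstract (mean short wait near 32 days, about 15% entering a long phase with conditional total mean near 169 days), but they are illustrative choices, not the fitted model parameters.

```python
import random

def coxian2_sample(mu1, mu2, p, rng):
    """One draw from a 2-phase Coxian distribution: an exponential
    'short waiting' phase (rate mu1), then, with probability p, an
    additional exponential 'long waiting' phase (rate mu2)."""
    t = rng.expovariate(mu1)
    if rng.random() < p:
        t += rng.expovariate(mu2)
    return t

# Illustrative parameters: mean short wait ~32 days; ~15% of referrals
# enter a long phase, giving a conditional total mean of ~32 + 137 = 169.
rng = random.Random(7)
samples = [coxian2_sample(1 / 32, 1 / 137, 0.15, rng) for _ in range(100_000)]
mean_wt = sum(samples) / len(samples)   # analytically 32 + 0.15 * 137 = 52.55
```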
APA, Harvard, Vancouver, ISO, and other styles
37

Teo, Chee Chong, Rohit Bhatnagar, and Stephen C. Graves. "An Extension to the Tactical Planning Model for a Job Shop: Continuous-Time Control." 2004. http://hdl.handle.net/1721.1/7447.

Full text
Abstract:
We develop an extension to the tactical planning model (TPM) for a job shop by the third author. The TPM is a discrete-time model in which all transitions occur at the start of each time period. The time period must be defined appropriately in order for the model to be meaningful. Each period must be short enough so that a job is unlikely to travel through more than one station in one period. At the same time, the time period needs to be long enough to justify the assumptions of continuous workflow and Markovian job movements. We build an extension to the TPM that overcomes this restriction of period sizing by permitting production control over shorter time intervals. We achieve this by deriving a continuous-time linear control rule for a single station. We then determine the first two moments of the production level and queue length for the workstation.
Singapore-MIT Alliance (SMA)
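The linear control rule at a single station can be sketched in discrete time: each period the station produces a fixed fraction of its current workload, which is the smoothing behaviour whose first two moments the TPM characterizes. The fraction, the exponential arrival stream, and the moment estimation by simulation are illustrative assumptions, not the paper's derivation.

```python
import random

def tpm_station(alpha, arrivals, periods=200_000, seed=5):
    """Sketch of a linear control rule at a single station: each period
    the station produces a fixed fraction alpha of its current workload,
    P_t = alpha * Q_t, smoothing production against arrival variability.
    Returns the first two moments (mean, variance) of production."""
    rng = random.Random(seed)
    q = 0.0
    p_sum = p_sq = 0.0
    for _ in range(periods):
        q += arrivals(rng)        # work arriving during the period
        p = alpha * q             # production released this period
        q -= p                    # remaining queue carried over
        p_sum += p
        p_sq += p * p
    mean = p_sum / periods
    return mean, p_sq / periods - mean * mean

# Exponential work arrivals with mean 1 per period (illustrative)
mean_p, var_p = tpm_station(0.3, lambda r: r.expovariate(1.0))
```

In steady state the mean production matches the mean arrival rate, while the smoothing parameter alpha controls how much of the arrival variability passes through to production.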
APA, Harvard, Vancouver, ISO, and other styles
38

Hofmann, Jens [Verfasser]. "The BMAP-G-1 queue with level dependent arrivals : an extended queueing model for stations with nonrenewal and state dependent input traffic / vorgelegt von Jens Hofmann." 2004. http://d-nb.info/971720665/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Sukumaran, Vineeth Bala. "On the Tradeoff Of Average Delay, Average Service Cost, and Average Utility for Single Server Queues with Monotone Policies." Thesis, 2013. http://etd.iisc.ernet.in/2005/3434.

Full text
Abstract:
In this thesis, we study the tradeoff of average delay with average service cost and average utility for both continuous time and discrete time single server queueing models without and with admission control. The continuous time and discrete time queueing models that we consider are motivated by cross-layer models for point-to-point links with random packet arrivals and fading at slow and fast time scales. Our studies are motivated by the need to optimally tradeoff the average delay of the packets (a network layer performance measure) with the average service cost of transmitting the packets, e.g. the average power required for transmission (a physical layer performance measure) under a lower bound constraint on the average throughput, in various point-to-point communication scenarios. The tradeoff problems are studied for a class of monotone and stationary scheduling policies and under the assumption that the service cost rate and utility rate are respectively convex and concave functions of the service rate and arrival rate. We also consider the problem of optimally trading off the average delay and average error rate of randomly arriving message symbols which are transmitted over a noisy point-to-point link, in which case the service cost function is non-convex. The solutions to the tradeoff problems that we address in the thesis are asymptotic in nature, and are similar in spirit to the Berry-Gallager asymptotic bounds. It is intuitive that to keep a queue stable under a lower bound constraint on the average utility a minimum number of customers have to be served per unit time. This in turn implies that queue stability requires a minimum average service cost expenditure. 
In the thesis we obtain an asymptotic characterization of the minimum average delay for monotone stationary policies subject to an upper bound constraint on the average service cost and a lower bound constraint on the average utility, in the asymptotic regime where the average service cost constraint is made arbitrarily close to the above minimum average service cost. In the thesis, we obtain asymptotic lower bounds on the minimum average delay for the cases for which lower bounds were previously not known. The asymptotic characterization of the minimum average delay for monotone stationary policies, for both continuous time and discrete time models, is obtained via geometric bounds on the stationary probability of the queue length, in the above asymptotic regime. The restriction to monotone stationary policies enables us to obtain an intuitive explanation for the behaviour of the asymptotic lower bounds using the above geometric bounds on the stationary probability distribution of the queue length. The geometric bounds on the stationary probability of the queue length also lead to a partial asymptotic characterization of the structure of any optimal monotone stationary policy, in the above asymptotic regime, which was not available in previous work. Furthermore, the geometric bounds on the stationary probability can be extended to analyse the tradeoff problem in other scenarios, such as for other continuous time queueing models, multiple user communication models, queueing models with service time control, and queueing models with general holding costs. Usually, queueing models with integer-valued queue evolution are approximated by queueing models with real-valued queue evolution and strictly convex service cost functions for analytical tractability. Using the asymptotic bounds, we show that for some cases the average delay does not grow to infinity in the asymptotic regime, although the approximate model suggests that the average delay does grow to infinity.
In other cases where the average delay does grow to infinity in the asymptotic regime, our results illustrate that the tradeoff behaviour of the approximate model is different from that of the original integer valued queueing model unless the service cost function is modelled as the piecewise linear lower convex envelope of the service cost function for the original model.
APA, Harvard, Vancouver, ISO, and other styles
40

Yang, Yu-Nien, and 楊裕年. "Analysis and Measurement of the Equivalent Model of Serial and Parallel Queues for a Web Cluster with a Low Rejection Rate." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/22146557149707033813.

Full text
Abstract:
Master's
Fu Jen Catholic University
Department of Electronic Engineering
94
Owing to the diverse development of Web services and Internet applications, the volume of service requests reaching Web servers has increased substantially. In recent years, in order to fulfill these requests and maintain the quality of Web services, Web clusters have been used to share the large volume of service requests. In practical operation, however, multiple groups of Web servers of different product types may coexist in one Web cluster. Thus, in this paper, we propose a serial and parallel equivalent equation and its general form, under a low rejection rate, based on the typical M/M/1 queueing model, to analyze the service performance of a Web cluster. To increase the practicability of the equivalent equations, we also propose a method for estimating the system service rate, which computes it using the Minimum Mean Square Error criterion. This makes it possible to adjust the parameters for load balancing using both the equivalent equation and the method for extracting the system service rate. Overall, using QNAT simulation and measurements with the Webserver Stress Tool, we provide a general quantitative relationship between the numbers of servers in the groups of a cluster for expanding its service capacity. The results also show that the two-pass method can provide both load balance and a more reliable system for fulfilling Web requests.
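The MMSE-style estimation of the service rate can be sketched by fitting the M/M/1 response-time formula W = 1/(μ − λ) to measured mean response times. The grid search and synthetic data below are illustrative simplifications of the paper's method, not its actual procedure.

```python
def mm1_response_time(lam, mu):
    """M/M/1 mean response time W = 1 / (mu - lam), valid for lam < mu."""
    return 1.0 / (mu - lam)

def fit_service_rate(samples, mu_grid):
    """Choose the service rate on a grid that minimizes the mean square
    error between measured response times and the M/M/1 prediction --
    a grid-search stand-in for the paper's MMSE-based estimation."""
    feasible = [mu for mu in mu_grid if all(lam < mu for lam, _ in samples)]
    return min(feasible,
               key=lambda mu: sum((w - mm1_response_time(lam, mu)) ** 2
                                  for lam, w in samples))

# Synthetic measurements generated from an assumed true rate mu = 10
truth = 10.0
data = [(lam, mm1_response_time(lam, truth)) for lam in (2, 4, 6, 8)]
best = fit_service_rate(data, [8.5 + 0.1 * k for k in range(40)])
```

On clean synthetic data the grid search recovers the generating service rate, which is the sanity check one would run before applying the fit to noisy measurements.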
APA, Harvard, Vancouver, ISO, and other styles
41

Teo, Chee Chong. "A Study of Moment Recursion Models for Tactical Planning of a Job Shop: Literature Survey and Research Opportunities." 2003. http://hdl.handle.net/1721.1/3920.

Full text
Abstract:
The Moment Recursion (MR) models are a class of models for tactical planning of job shops or other processing networks. The MR model can be used to determine or approximate the first two moments of production quantities and queue lengths at each work station of a job shop. Knowledge of these two moments is sufficient to carry out a variety of performance evaluation, optimization and decision-support applications. This paper presents a literature survey of the Moment-Recursion models. Limitations in the existing research and possible research opportunities are also discussed. Based on the research opportunities discussed, we are in the process of building a model that attempts to fill these research gaps.
Singapore-MIT Alliance (SMA)
APA, Harvard, Vancouver, ISO, and other styles
42

Usman, Muneer. "Performance Analysis of Emerging Solutions to RF Spectrum Scarcity Problem in Wireless Communications." Thesis, 2014. http://hdl.handle.net/1828/5713.

Full text
Abstract:
Wireless communication is facing an increasingly severe spectrum scarcity problem. Hybrid free-space optical (FSO)/millimetre-wave (MMW) radio frequency (RF) systems and cognitive radios are two candidate solutions. Hybrid FSO/RF can achieve high-data-rate transmission for wireless backhaul. Cognitive radio transceivers can opportunistically access the underutilized spectrum of existing systems for new wireless services. In this work we carry out an accurate performance analysis of these two transmission techniques. In particular, we present and analyze a switching-based transmission scheme for a hybrid FSO/RF system: either the FSO or the RF link is active at any time instant, with the FSO link enjoying higher priority. We consider both a single-threshold case and a dual-threshold case for FSO link operation. Analytical expressions are obtained for the outage probability, average bit error rate and ergodic capacity of the resulting system. We also investigate the delay performance of secondary cognitive transmission with an interweave implementation. We first derive the exact statistics of the extended delivery time, which includes both transmission time and waiting time, for a fixed-size secondary packet. Both the work-preserving strategy (interrupted packets resume transmission from where they were interrupted) and the non-work-preserving strategy (interrupted packets are retransmitted) are considered with various sensing schemes. Finally, we consider an M/G/1 queue set-up at the secondary user and derive closed-form expressions for the expected delay with Poisson traffic. The analytical results will greatly facilitate the design of the secondary system for particular target applications.
Graduate
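The single-threshold FSO/RF switching scheme can be sketched with a Monte Carlo outage estimate. The log-normal FSO turbulence model, the Rayleigh RF fading model, and the thresholds below are illustrative assumptions, not the thesis's exact channel models.

```python
import random

def outage_prob(fso_thresh, rf_thresh, n=200_000, seed=11):
    """Monte Carlo sketch of single-threshold switching: the FSO link is
    used whenever its channel gain clears fso_thresh; otherwise the
    system falls back to RF, and an outage occurs only if the RF gain is
    also below rf_thresh."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(n):
        fso_gain = rng.lognormvariate(0.0, 0.5)   # assumed turbulence model
        if fso_gain >= fso_thresh:
            continue                              # FSO link active: no outage
        rf_gain = rng.expovariate(1.0)            # Rayleigh fading -> exp. power
        if rf_gain < rf_thresh:
            outages += 1                          # both links below threshold
    return outages / n

p_out = outage_prob(0.5, 0.1)
```

Because an outage requires both links to fail simultaneously, the hybrid scheme's outage probability is the product of the two individual outage events, which is why it is far below either link alone.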
APA, Harvard, Vancouver, ISO, and other styles
43

Correia, Edgar Vaz. "Mapeamento da cadeia de valor do molde: uma contribuição para reduzir o tempo real de produção." Master's thesis, 2012. http://hdl.handle.net/10071/6539.

Full text
Abstract:
The mold industry, especially for plastic injection molds, is one of the most developed in Portugal and plays an active role in the trade balance, exporting most of its products. In order to secure a position in the international market, the Tooling Edge project emerged: a partnership between industry companies and the scientific community whose main goal is to develop scientific and technological knowledge, and innovative working and organizational methods adapted to the Engineering & Tooling sector, which, through a process of demonstration and dissemination, may improve the overall performance of the industry and the value added in its processes and products. Supporting the Tooling Edge project, this study, in line with lean principles, focuses on identifying and reducing the waste present in the mold production value stream, in particular work-in-progress queues. For this purpose, an internship was conducted at the TJ Group to monitor and carry out value stream mapping of the production of three molds. Through a careful analysis of the queues, which identifies bottlenecks in the process and gives the company under study a basis for improvement actions aimed at increasing some of its competitiveness factors, as well as reducing lead time and mold production cost, it is concluded that in this company the milling section is one of the most critical, and that increasing its capacity would bring practical benefits.
APA, Harvard, Vancouver, ISO, and other styles
44

Das, Sudipta. "Loss Ratios of Different Scheduling Policies for Firm Real-time System : Analysis and Comparisons." Thesis, 2013. http://etd.iisc.ernet.in/handle/2005/2808.

Full text
Abstract:
The firm real-time system with Poisson arrival process, iid exponential service times and iid deadlines until the end of service of a job, operated under the First Come First Served (FCFS) scheduling policy, is well studied. In this thesis, we present an exact theoretical analysis of a similar system (an M/M/1 + G queue) with exact admission control (EAC). We provide an explicit expression for the steady-state workload distribution, and use this solution to derive explicit expressions for the loss ratio and the sojourn time distribution. An exact theoretical analysis of the performance of an M/M/1 + G queue with preemptive deadlines until the end of service, operating under the Earliest Deadline First (EDF) scheduling policy, appears to be difficult, and only approximate formulas for the loss ratio are available in the literature. We present in this thesis similar approximate formulas for the loss ratio in the presence of an exit control mechanism, which discards a job at the epoch of its getting the server if there is no chance of completing it. We refer to this exit control mechanism as the Early job Discarding Technique (EDT). Monte Carlo simulations indicate that the maximum approximation error is reasonably small for a wide range of arrival rates and mean deadlines. Finally, we compare the loss ratios of the First Come First Served and the Earliest Deadline First scheduling policies with and without admission or exit control mechanisms, as well as their counterparts with deterministic deadlines. The results include some formal equalities, inequalities, and some counter-examples establishing the non-existence of an order. A few relations involving loss ratios are posed as conjectures, and simulation results in support of these are reported. These results lead to a complete picture of dominance and non-dominance relations between pairs of scheduling policies in terms of loss ratios.
APA, Harvard, Vancouver, ISO, and other styles
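The loss-ratio simulations described in the abstract above can be illustrated with a short workload recursion. The sketch below is a minimal Monte Carlo estimate for an M/M/1 + G queue under FCFS with deadlines until the end of service, not the thesis's method: exponential deadlines are chosen as one concrete instance of G, a job aborted at its deadline is assumed to occupy the server only until that instant, and the function name and parameters are hypothetical.

```python
import random

def mm1g_fcfs_loss_ratio(lam, mu, theta, n_jobs=200_000, seed=1):
    """Estimate the loss ratio of an M/M/1+G queue under FCFS.

    Arrivals are Poisson with rate lam, service times are exponential
    with rate mu, and each job carries an exponential relative deadline
    with rate theta (an illustrative choice of G). A job is lost if it
    cannot finish service by its deadline; an aborted job is assumed to
    hold the server only until the deadline expires.
    """
    rng = random.Random(seed)
    workload = 0.0  # unfinished work seen by the next arriving job
    lost = 0
    for _ in range(n_jobs):
        service = rng.expovariate(mu)
        deadline = rng.expovariate(theta)  # deadline relative to arrival
        if workload + service <= deadline:
            workload += service            # job completes in time
        else:
            lost += 1                      # job misses its deadline
            # Partial service: the job holds the server from time
            # `workload` after arrival until its deadline (if it ever
            # reaches the server at all).
            workload = max(workload, deadline)
        interarrival = rng.expovariate(lam)
        workload = max(0.0, workload - interarrival)  # server drains
    return lost / n_jobs
```

For example, `mm1g_fcfs_loss_ratio(1.0, 1.0, 0.5)` (a critically loaded queue with mean deadline 2) yields a noticeably higher loss ratio than `mm1g_fcfs_loss_ratio(0.2, 1.0, 0.5)`, which is the qualitative behaviour the approximate formulas in such theses are validated against.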