Dissertations on the topic "Network management models"

To see other types of publications on this topic, follow the link: Network management models.

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles

Choose a source type:

Consult the top 50 dissertations for your research on the topic "Network management models."

Next to each work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication in .pdf format and read its abstract online, provided these are available in the metadata.

Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.

1

Yao, Zhonghui. "ATM network models for traffic management." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq23559.pdf.

2

Frank, Simon James. "Predicting corporate credit ratings using neural network models." Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/913.

Abstract:
Thesis (MBA (Business Management))--University of Stellenbosch, 2009.
ENGLISH ABSTRACT: For many organisations who wish to sell their debt, or investors who are looking to invest in an organisation, company credit ratings are an important surrogate measure for the marketability or risk associated with a particular issue. Credit ratings are issued by a limited number of authorised companies – with the predominant being Standard & Poor’s, Moody’s and Fitch – who have the necessary experience, skills and motive to calculate an objective credit rating. In the wake of some high profile bankruptcies, there has been recent conjecture about the accuracy and reliability of current ratings. Issues relating specifically to the lack of competition in the rating market have been identified as possible causes of the poor timeliness of rating updates. Furthermore, the cost of obtaining (or updating) a rating from one of the predominant agencies has also been identified as a contributing factor. The high costs can lead to a conflict of interest where rating agencies are obliged to issue more favourable ratings to ensure continued patronage. Based on these issues, there is sufficient motive to create more cost effective alternatives to predicting corporate credit ratings. It is not the intention of these alternatives to replace the relevancy of existing rating agencies, but rather to make the information more accessible, increase competition, and hold the agencies more accountable for their ratings through better transparency. The alternative method investigated in this report is the use of a backpropagation artificial neural network to predict corporate credit ratings for companies in the manufacturing sector of the United States of America. Past research has shown that backpropagation neural networks are effective machine learning techniques for predicting credit ratings because no prior subjective or expert knowledge, or assumptions on model structure, are required to create a representative model. For the purposes of this study only public information and data is used to develop a cost effective and accessible model. The basis of the research is the assumption that all information (both quantitative and qualitative) that is required to calculate a credit rating for a company is contained within financial data from income statements, balance sheets and cash flow statements. The premise of the assumption is that any qualitative or subjective assessment about company creditworthiness will ultimately be reflected through financial performance. The results show that a backpropagation neural network, using 10 input variables on a data set of 153 companies, can classify 75% of the ratings accurately. The results also show that including collinear inputs to the model can affect the classification accuracy and prediction variance of the model. It is also shown that latent projection techniques, such as partial least squares, can be used to reduce the dimensionality of the model without making any assumption about data relevancy. The output of these models, however, does not improve the classification accuracy achieved using selected un-correlated inputs.
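
As a rough, self-contained illustration of the kind of model the abstract describes (a backpropagation network fed with about ten financial inputs to classify rating categories), the sketch below trains a small feed-forward classifier on synthetic data with scikit-learn. The data, class count and network size are assumptions for illustration only, not the dissertation's actual variables or results.

```python
# Hypothetical sketch: a small backpropagation classifier for credit-rating
# classes, loosely following the setup in the abstract (10 financial inputs,
# a handful of rating classes). Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# 153 "companies", 10 financial ratios, 4 rating classes (all assumed values).
X, y = make_classification(n_samples=153, n_features=10, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0, stratify=y)

# Standardise inputs, then fit a single-hidden-layer network trained with
# gradient-based backpropagation (scikit-learn's default solver).
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000,
                                    random_state=0))
model.fit(X_train, y_train)
print("hold-out classification accuracy:", model.score(X_test, y_test))
```
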
3

Yang, Xi. "Applying stochastic programming models in financial risk management." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4068.

Abstract:
This research studies two modelling techniques that help seek optimal strategies in financial risk management. Both are based on the stochastic programming methodology. The first technique is concerned with market risk management in portfolio selection problems; the second technique contributes to operational risk management by optimally allocating workforce from a managerial perspective. The first model involves multiperiod decisions (portfolio rebalancing) for an asset and liability management problem and deals with the usual uncertainty of investment returns and future liabilities. Therefore it is well-suited to a stochastic programming approach. A stochastic dominance concept is applied to control the risk of underfunding. A small numerical example and a backtest are provided to demonstrate advantages of this new model which includes stochastic dominance constraints over the basic model. Adding stochastic dominance constraints comes with a price: it complicates the structure of the underlying stochastic program. Indeed, new constraints create a link between variables associated with different scenarios of the same time stage. This destroys the usual tree-structure of the constraint matrix in the stochastic program and prevents the application of standard stochastic programming approaches such as (nested) Benders decomposition and progressive hedging. A structure-exploiting interior point method is applied to this problem. Computational results on medium scale problems with sizes reaching about one million variables demonstrate the efficiency of the specialised solution technique. The second model deals with operational risk from human origin. Unlike market risk that can be handled in a financial manner (e.g. insurances, savings, derivatives), the treatment of operational risks calls for a “managerial approach”. Consequently, we propose a new way of dealing with operational risk, which relies on the well known Aggregate Planning Model. To illustrate this idea, we have adapted this model to the case of a back office of a bank specialising in the trading of derivative products. Our contribution corresponds to several improvements applied to stochastic programming modelling. First, the basic model is transformed into a multistage stochastic program in order to take into account the randomness associated with the volume of transaction demand and with the capacity of work provided by qualified and non-qualified employees over the planning horizon. Second, as advocated by Basel II, we calculate the probability distribution based on a Bayesian Network to circumvent the difficulty of obtaining data which characterises uncertainty in operations. Third, we go a step further by relaxing the traditional assumption in stochastic programming that imposes a strict independence between the decision variables and the random elements. Comparative results show that in general these improved stochastic programming models tend to allocate more human expertise in order to hedge operational risks. The dual solutions of the stochastic programs are exploited to detect periods and nodes that are at risk in terms of expertise availability.
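
The first model above controls underfunding risk through stochastic dominance constraints. As a minimal sketch of the underlying concept only (not of the author's asset and liability model, nor of the structure-exploiting interior point solver), the snippet below checks empirical second-order stochastic dominance between two discrete return samples; the sample data are invented.

```python
# Minimal sketch of an empirical second-order stochastic dominance (SSD) test:
# sample a dominates sample b if, for every threshold t, the expected shortfall
# of a below t is no larger than that of b. All data here are invented.
import numpy as np

def expected_shortfall_below(x, t):
    """Mean of max(t - x, 0): average depth by which outcomes fall short of t."""
    return np.maximum(t - x, 0.0).mean()

def ssd_dominates(a, b):
    """True if sample a second-order stochastically dominates sample b."""
    # Both shortfall curves are piecewise linear with kinks at sample points,
    # so checking every point of the combined support is sufficient.
    thresholds = np.union1d(a, b)
    return all(expected_shortfall_below(a, t) <= expected_shortfall_below(b, t) + 1e-12
               for t in thresholds)

rng = np.random.default_rng(1)
benchmark = rng.normal(0.02, 0.05, size=500)   # hypothetical benchmark returns
portfolio = rng.normal(0.03, 0.05, size=500)   # hypothetical portfolio returns
print("portfolio SSD-dominates benchmark:", ssd_dominates(portfolio, benchmark))
```
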
4

Wolff, Janik. "IT-Security Investment Models." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-6390.

5

Haskose, Ahmed. "Queueing network models for workload control in the make-to-order sector." Thesis, Lancaster University, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.274277.

6

Marufuzzaman, Mohammad. "Models for a carbon constrained, reliable biofuel supply chain network design and management." Thesis, Mississippi State University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3631817.

Abstract:

This dissertation studies two important problems in the field of biomass supply chain networks. In the first part of the dissertation, we study the impact of different carbon regulatory policies such as carbon cap, carbon tax, carbon cap-and-trade and carbon offset mechanisms on the design and management of a biofuel supply chain network under both deterministic and stochastic settings. These mathematical models identify locations and production capacities for biocrude production plants by exploring the trade-offs that exist between transportation costs, facility investment costs and emissions. The model is solved using a modified L-shaped algorithm. We used the state of Mississippi as a testing ground for our model. A number of observations are made about the impact of each policy on the biofuel supply chain network.

In the second part of the dissertation, we study the impact of intermodal hub disruption on a biofuel supply chain network. We present a mathematical model that designs a multimodal transportation network for a biofuel supply chain system, where intermodal hubs are subject to site-dependent probabilistic disruptions. The disruption probabilities of intermodal hubs are estimated by using a probabilistic model which is developed using real world data. We further extend this model to develop a mixed integer nonlinear program that allocates intermodal hubs dynamically to cope with biomass supply fluctuations and to hedge against natural disasters. We developed a rolling horizon based Benders decomposition algorithm to solve this challenging NP-hard problem. Numerical experiments show that this proposed algorithm can solve large scale problem instances to a near optimal solution in a reasonable time. We applied the models to a case study using data from the southeast region of the U.S. Finally, a number of managerial insights are drawn into the impact of intermodal-related risk on the supply chain performance.
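
As a toy illustration of the trade-off studied in the first part (facility investment versus transportation cost versus a carbon price), the sketch below enumerates plant-opening decisions by brute force. It stands in for, and is far simpler than, the stochastic mixed-integer models and the modified L-shaped algorithm used in the dissertation; every number in it is invented.

```python
# Toy carbon-tax facility location: choose which biocrude plants to open so
# that investment + transport cost + carbon-tax cost is minimised.
from itertools import product

plants = ["P1", "P2", "P3"]
open_cost = {"P1": 120.0, "P2": 90.0, "P3": 150.0}            # fixed investment
supply_zones = ["Z1", "Z2"]
demand = {"Z1": 40.0, "Z2": 60.0}                              # biomass to ship
ship_cost = {("P1", "Z1"): 2.0, ("P1", "Z2"): 4.0,
             ("P2", "Z1"): 3.5, ("P2", "Z2"): 2.5,
             ("P3", "Z1"): 1.5, ("P3", "Z2"): 3.0}             # per unit shipped
ship_emis = {k: 0.1 * v for k, v in ship_cost.items()}         # tCO2 per unit
carbon_tax = 5.0                                               # cost per tCO2

best = None
for decision in product([0, 1], repeat=len(plants)):
    opened = [p for p, d in zip(plants, decision) if d]
    if not opened:
        continue
    total = sum(open_cost[p] for p in opened)
    # Each zone ships everything to its cheapest open plant (tax included).
    for z in supply_zones:
        unit = lambda p: ship_cost[p, z] + carbon_tax * ship_emis[p, z]
        cheapest = min(opened, key=unit)
        total += demand[z] * unit(cheapest)
    if best is None or total < best[0]:
        best = (total, opened)

print("open plants:", best[1], "total cost:", round(best[0], 1))
```
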

7

Wang, Shuo. "Optimization Models for Network-Level Transportation Asset Preservation Strategies." University of Toledo / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1416578565.

8

Wilson, Cynthia M. (Cynthia Marie). "Development of operations based long range network capacity planning models." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66039.

Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering; in conjunction with the Leaders for Global Operations Program at MIT, June 2011.
"June 2011." Cataloged from PDF version of thesis.
Includes bibliographical references (p. 77-80).
Planning for vaccine manufacturing capacity is both a complex task requiring many inputs and an important function of manufacturers to ensure the supply of vaccines that prevent life-threatening illnesses. This thesis explores the development of an operations based long range capacity planning model to facilitate the annual strategic capacity planning review at Novartis Vaccines. This model was developed in conjunction with process owners at Novartis Vaccines and utilizes operations principles, non-linear optimization, and process data to efficiently calculate the capacity of the vaccine manufacturing network. The resulting network capacity is then compared to the long range demand for vaccine production to determine capacity deficits and surpluses in the current manufacturing network as well as analyzing options for more efficient capacity usage. Although this model was developed specifically with respect to the Novartis Vaccines manufacturing network, the capacity calculation and gap analysis tools for single and multiproduct facilities as well as batch allocation in multi-product, multi-facility networks are also applicable to other companies and industries that utilize batch processing. The model was validated utilizing process information from a production line that was already operating near capacity and showed a 95% agreement with the data from this line. Additionally, this operations based planning model was able to achieve buy-in from both process owners and the global strategy organization allowing it to be implemented in the planning cycle. Use of this tool enables efficiency and transparency in capacity analysis as well as the tools to examine the impact of a range of scenarios on the manufacturing network.
by Cynthia M. Wilson.
S.M.
M.B.A.
9

PONCANO, VERA M. L. "Estudo de organização em rede na metrologia em química." reponame:Repositório Institucional do IPEN, 2007. http://repositorio.ipen.br:8080/xmlui/handle/123456789/11659.

Abstract:
Thesis (Doctorate)
IPEN/T
Instituto de Pesquisas Energéticas e Nucleares - IPEN/CNEN-SP
10

Bsaybes, Sahar. "Models and algorithms for fleet management of autonomous vehicles." Thesis, Université Clermont Auvergne‎ (2017-2020), 2017. http://www.theses.fr/2017CLFAC114/document.

Abstract:
The VIPAFLEET project aims at developing a framework to manage a fleet of Individual Public Autonomous Vehicles (VIPA). We consider a fleet of cars distributed at specified stations in an industrial area to supply internal transportation, where the cars can be used in different modes of circulation (tram mode, elevator mode, taxi mode). The goal is to develop and implement suitable algorithms for each mode in order to satisfy all the requests, either under an economic aspect or under a quality of service aspect, by varying the studied objective functions. We model the underlying online transportation system as a discrete event based system and propose a corresponding fleet management framework to handle modes, demands and commands. We consider three modes of circulation: tram, elevator and taxi mode. We propose appropriate online algorithms for each mode and evaluate their performance, both in terms of competitive analysis and practical behavior through computational results. In this work we treat the pickup and delivery problem related to the tram mode and the elevator mode, and the pickup and delivery problem with time windows related to the taxi mode, by means of flows in time-expanded networks.
11

Dornas, Guilherme Costa Valle. "The relation of strategic management models and learning networks to performance increase : lessons from a Brazilian learning network of SMEs." Thesis, University of Birmingham, 2014. http://etheses.bham.ac.uk//id/eprint/5207/.

Abstract:
The objective of this thesis is to investigate the relationships between performance change or increase and strategic management or strategy implementation, based on Learning Networks Groups of SMEs in Brazil. The research is based on a group of medium-sized South American companies that participate in the “Learning Network Programme” (or LNC, Learning Networks Companies). The LNC Programme encourages the exchange of experiences while discussing management models, putting management and strategic tools into practice, and training the participating companies’ employees in managerial instruments. A hypothetical Global Performance model based on Strategic Management Elements and also on Learning Network Elements was developed and, subsequently, tested through a field survey with 300 Brazilian SMEs, 150 of them from companies that have taken part in the LNC Programme and 150 from organizations that have never gone through a similar project. In order to test the empirical validity of the model, structural equation modelling was used, with reference to both main and unfolded hypotheses, to analyse the variables of strategic management and learning networks and their possible impact on the companies’ global performance. The proposed model proved able to predict 62% of the variance in the global performance construct for the LNC Group and 5.6% for the Non-LNC Group.
12

Liu, Youfei, and 劉有飛. "Network and temporal effects on strategic bidding in electricity markets." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B36895763.

13

Tillman, Dorothy Hamlin. "Coupling of ecological and water quality models for improved water resource and fish management." [College Station, Tex.] : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2334.

14

Haerian, Laila. "Airline Revenue Management: models for capacity control of a single leg and a network of flights." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1181839192.

15

Chen, Xiaoliang. "Neural network based models for value-at-risk analysis with applications in emerging markets /." access full-text access abstract and table of contents, 2009. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?phd-ms-b23749209f.pdf.

Abstract:
Thesis (Ph.D.)--City University of Hong Kong, 2009.
"Submitted to Department of Management Sciences in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves 94-104)
16

Suharko, Arief Bimantoro. "Tactical Network Flow and Discrete Optimization Models and Algorithms for the Empty Railcar Transportation Problem." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/26405.

Abstract:
Prior to 1980, the practice in multilevel autorack management was to load the railcars at various origin points, ship them to the destination ramps, unload them, and then return each car to the loading point where it originated. Recognizing the inefficiency of such a practice with respect to the fleet size that had to be maintained, and the associated poor utilization due to the excessive empty miles logged, a consolidation of the railcars was initiated and completed by February 1982. Under this pooling program, a central management was established to control the repositioning of three types of empty railcars for eight principal automobile manufacturers. Today, the practice is to consolidate the fleets of all automobile manufacturers for each equipment type, and to solve the distribution problem of repositioning empty multilevel autoracks of each type from points at which they are unloaded to automobile assembly facilities where they need to be reloaded. Each such problem is referred to in the railroad industry as a repositioning scenario. In this dissertation, we present two tactical models to assist in the task of centrally managing the distribution of empty railcars on a day-to-day basis for each repositioning scenario. These models take into account various practical issues such as uncertainties, priorities with respect to time and demand locations, multiple objectives related to minimizing different types of latenesses in delivery, and blocking issues. It is also of great practical interest to the central management team to have the ability to conduct various sensitivity analyses in its operation. Accordingly, the system provides for the capability to investigate various what-if scenarios such as fixing decisions on running a specified block of cars (control orders) along certain routes as dictated by business needs, and handling changes in supplies, demands, priorities, and transit time characteristics. Moreover, the solution methodology provides a flexible decision-making capability by permitting a series of runs based on a sequential decision-fixing process in a real-time operational mode. A turn-around response of about five minutes per scenario (on a Pentium PC or equivalent) is desired in practice. This dissertation begins by developing several progressive formulations that incorporate many practical considerations in the empty railroad car distribution planning system. We investigate the performance of two principal models in this progression to gain more insights into the implementation aspects of our approach. The first model (TDSS1: Tactical Decision Support System-1) considers all the identified features of the problem except for blocking, and results in a network formulation of the problem. This model examines various practical issues such as time and demand location-based priorities as well as uncertainty in data within a multiple objective framework. In the second model (TDSS2: Tactical Decision Support System-2), we add a substantial degree of complexity by addressing blocking considerations. Enforcement of block formation renders the model as a network flow problem with side-constraints and discrete side-variables. We show how the resulting mixed-integer-programming formulation can be enhanced via some partial convex hull constructions using the Reformulation-Linearization Technique (RLT). 
This tightening of the underlying linear programming relaxation is shown to permit the solution of larger problem sizes, and enables the exact solution of certain scenarios having 5,000 - 8,000 arcs. However, in order to accommodate the strict run-time limit requirements imposed in practice for larger scenarios having about 150,000 arcs, various heuristics are developed to solve this problem. In using a combination of proposed strategies, 23 principal heuristics, plus other hybrid variants, are composed for testing. By examining the performance of various exact and heuristic procedures with respect to speed of operation and the quality of solutions produced on a test-bed of real problems, we prescribe recommendations for a production code to be used in practice. Besides providing a tool to aid in the decision-making process, a principal utility of the developed system is that it provides the opportunity to conduct various what-if analyses. The effects of many of the practical considerations that have been incorporated in TDSS2 can be studied via such sensitivity analyses. A special graphical user interface has been implemented that permits railcar distributors to investigate the effects of varying supplies, demands, and routes, retrieving railcars from storage, diverting en-route railcars, and exploring various customer or user-driven fixed dispositions. The user has the flexibility, therefore, to sequentially compose a decision to implement on a daily basis by using business judgment to make suggestions and studying the consequent response prompted by the model. This system is currently in use by the TTX company, Chicago, Illinois, in order to make distribution decisions for the railroad and automobile industries. The dissertation concludes by presenting a system flowchart for the overall implemented approach, a summary of our research, and recommendations for future algorithmic enhancements based on Lagrangian relaxation techniques.
Ph. D.
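
Stripped of priorities, lateness objectives and blocking, the core of the repositioning task described above is a classical transportation problem: move empty cars from ramps with surpluses to plants with demands at minimum cost. The sketch below solves such a toy instance with scipy's linear programming routine; the data are invented and none of the practical features of TDSS1/TDSS2 are modelled.

```python
# Bare-bones empty-railcar repositioning as a classical transportation LP.
import numpy as np
from scipy.optimize import linprog

supply = np.array([30, 20, 25])          # empty cars available at 3 ramps
demand = np.array([15, 35, 25])          # cars required at 3 assembly plants
cost = np.array([[4.0, 6.0, 9.0],        # cost[i, j]: ramp i -> plant j
                 [5.0, 3.0, 7.0],
                 [8.0, 5.0, 4.0]])

m, n = cost.shape
c = cost.ravel()                                    # decision: flow on each arc
A_eq, b_eq = [], []
for i in range(m):                                  # ship out exactly the supply
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                                  # meet each plant's demand
    row = np.zeros(m * n); row[j::n] = 1
    A_eq.append(row); b_eq.append(demand[j])

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * (m * n), method="highs")
print("minimum repositioning cost:", res.fun)
print("flows (ramp x plant):\n", res.x.reshape(m, n).round(1))
```
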
17

Blanchard, Monica R. "Using Network Models to Predict Steelhead Abundance, Middle Fork John Day, OR." DigitalCommons@USU, 2015. https://digitalcommons.usu.edu/etd/4477.

Abstract:
In the management of threatened and endangered species, informed population estimates are essential to gauge whether or not recovery goals are being met. In the case of Pacific salmonids, this evaluation often involves sampling a small subset of the population and scaling up to estimate larger distinct population segments. This is made complicated by the fact that fish populations are not evenly distributed along riverscapes but respond to physical and biological stream properties at varying spatial extents. We used rapid assessment survey methods and the River Styles classification to explore fish-habitat relationships at a continuous network scale. Semi-continuous surveys were conducted across nine streams in the upper Middle Fork John Day River watershed and increased the number of sites surveyed eight-fold over other monitoring methods within the watershed. Using this increased sample size and continuous habitat metrics, we improved watershed-wide steelhead (Oncorhynchus mykiss) abundance models. We first validated the distinctions among River Styles through a classification analysis using physical metrics measured at the rapid assessment sites. Overall classification accuracy, using a combination of reach and landscape scale metrics, was 88.3% and suggested that River Style classification was identifying variations in physical morphology within the watershed that were quantifiable at the reach scale. Leveraging the continuous River Styles classification of physical habitat and a continuous model of primary production improved the prediction of steelhead abundance across the network. Using random forest regressions, a model that included only habitat metrics resulted in R2 = 0.34, while using the continuous variables improved the model accuracy greatly to R2 = 0.65. Random forest allowed for further investigation into the predictor variables through the analysis of the partial dependence plots and identified a gross primary production threshold, below which production might be limiting steelhead populations. This method also identified the rarest River Style surveyed within the watershed, Confined-Valley Step Cascade, as the morphology that had the largest marginal effect on steelhead. The inherent physical properties and boundary conditions unique to each River Style have the potential to inform fish-habitat relationships across riverscapes and improve abundance estimates on a continuous spatial scale.
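
A hedged sketch of the modelling step reported above: a random forest regression of abundance on reach-scale habitat metrics, fitted with and without a continuous covariate standing in for gross primary production. The data are synthetic and the variable names and effect sizes are assumptions, so the R-squared values it prints are illustrative only.

```python
# Random forest regression of fish abundance on habitat metrics, with and
# without a network-continuous covariate (a stand-in for primary production).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
n = 400
gradient = rng.uniform(0.001, 0.05, n)           # channel slope
width = rng.uniform(2, 20, n)                    # wetted width (m)
gpp = rng.uniform(0.1, 5.0, n)                   # primary production proxy
# Hypothetical abundance: depends strongly on GPP with a threshold-like effect.
abundance = (5 + 30 * np.clip(gpp - 1.0, 0, None) + 2 * width
             - 100 * gradient + rng.normal(0, 5, n))

X_habitat = np.column_stack([gradient, width])
X_full = np.column_stack([gradient, width, gpp])

for name, X in [("habitat only", X_habitat), ("habitat + GPP", X_full)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, abundance, random_state=0)
    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    print(name, "R^2 =", round(r2_score(y_te, rf.predict(X_te)), 2))
```
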
18

Carbajal, Orozco Jose Antonio. "Transportation resource management in large-scale freight consolidation networks." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42758.

Abstract:
This dissertation proposes approaches that enable effective planning and control of mobile transportation resources in large-scale consolidation networks. We develop models, algorithms, and methodologies that are applied to fleet sizing and fleet repositioning. Three specific but interrelated problems are studied. The first two relate to the trade-offs between fleet size and repositioning costs in transportation resource management, while the third involves a dynamic empty repositioning problem with explicit consideration of the uncertainty of future requirements that will be revealed over time. Chapter 1 provides an overview of freight trucking, including the consolidation trucking systems that will be the focus of this research. Chapter 2 proposes an optimization modeling approach for analyzing the trade-off between the cost of a larger fleet of tractors and the cost of repositioning tractors for a trucking company operating a consolidation network, such as a less-than-truckload (LTL) company. Specifically, we analyze the value of using extra tractor repositioning moves (in addition to the ones required to balance resources throughout the network) to attain savings in the fixed costs of owning or leasing a tractor fleet during a planning horizon. The primary contributions of the research in this chapter are that (1) we develop the first optimization models that explore the impact of fleet size reductions via repositioning strategies that have regularity and repeatability properties, and (2) we demonstrate that substantial savings in operational costs can be achieved by repositioning tractors in anticipation of regional changes in freight demand. Chapter 3 studies the optimal Pareto frontiers between the fleet size and repositioning costs of resources required to perform a fixed aperiodic or periodic schedule of transportation requests. We model resource schedules in two alternative ways: as flows on event-based, time-expanded networks; and as perfect matchings on bipartite networks. The main contributions from this chapter are that (1) we develop an efficient re-optimization procedure to compute adjacent Pareto points that significantly reduces the time to compute the entire Pareto frontier of fleet size versus repositioning costs in aperiodic networks, (2) we show that the natural extension to compute adjacent Pareto points in periodic networks does not work in general as it may increase the fleet size by more than one unit, and (3) we demonstrate that the perfect matching modeling framework is frequently intractable for large-scale instances. Chapter 4 considers robust models for dynamic empty-trailer repositioning problems in very large-scale consolidation networks. We investigate approaches that deploy two-stage robust optimization models in a rolling horizon framework to address a multistage dynamic empty repositioning problem in which information is revealed over time. Using real data from a national package/parcel express carrier, we develop and use a simulation to evaluate the performance of repositioning plans in terms of unmet loaded requests and execution costs. The main contributions from this chapter are that (1) we develop approaches for embedding two-stage robust optimization models within a rolling horizon framework for dynamic empty repositioning, (2) we demonstrate that such approaches enable the solution of very large-scale instances, and (3) we show that less conservative implementations of robust optimization models are required within rolling horizon frameworks. 
Finally, Chapter 5 summarizes the main conclusions from this dissertation and discusses directions for further research.
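
As a minimal illustration of the fleet-size versus repositioning-cost trade-off studied in Chapters 2 and 3, the snippet below filters a set of candidate plans (invented fleet-size and cost pairs) down to the Pareto non-dominated ones. The dissertation derives these points exactly from time-expanded network models and re-optimization; only the frontier-filtering idea is shown here.

```python
# Keep only the plans that are not dominated in (fleet size, repositioning cost).
def pareto_frontier(points):
    frontier = []
    for fleet, cost in sorted(points):              # ascending fleet size
        if not frontier or cost < frontier[-1][1]:  # strictly cheaper than best so far
            frontier.append((fleet, cost))
    return frontier

candidate_plans = [(12, 980), (13, 910), (13, 1040), (14, 870), (15, 870),
                   (16, 820), (18, 815), (20, 815)]    # invented evaluations
for fleet, cost in pareto_frontier(candidate_plans):
    print(f"fleet size {fleet:2d} -> minimum repositioning cost {cost}")
```
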
19

Ayad, Fady. "How is AI research applied in the field of network fault management." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-20124.

Abstract:
The internet has grown rapidly over the years, and traffic is increasing daily. Managing networks is becoming too complex for humans to handle on their own, and as a result Artificial Intelligence (AI) technologies are increasingly being applied to network fault management. In order to keep up with the development of networks, new solutions need to be implemented. Traditional network fault management depends on system administrators, and too much human error can occur during operations. This is why AI is a valuable tool for future network fault management. There are currently many challenges within network fault management, and this creates an opportunity for AI to be applied. The reviewed studies show that the AI subfield of supervised learning is the most commonly used in network fault management. AI has shown potential to tackle problems such as fault detection and prediction, and to improve the system as a whole.
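
Since the thesis reports that supervised learning is the most commonly applied AI technique in network fault management, here is a deliberately simple, hypothetical example of that idea: a classifier trained on synthetic network counters to flag likely faults. The feature names, thresholds and data are all invented.

```python
# Supervised-learning fault classifier on synthetic "network counter" features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
loss = rng.exponential(0.5, n)            # packet loss %
latency = rng.normal(30, 10, n)           # round-trip time (ms)
cpu = rng.uniform(0, 100, n)              # device CPU load %
# Hypothetical labelling rule used only to generate training labels.
fault = ((loss > 1.5) | (latency > 55) | (cpu > 92)).astype(int)

X = np.column_stack([loss, latency, cpu])
X_tr, X_te, y_tr, y_te = train_test_split(X, fault, random_state=0, stratify=fault)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("hold-out accuracy:", round(clf.score(X_te, y_te), 3))
```
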
20

Pyo, Tae-Hyung. "Three essays on social networks and the diffusion of innovation models." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/1383.

Abstract:
The Bass model has been used extensively and globally to forecast the first purchases of new products. It has been named by INFORMS as one of the top 10 most influential papers published in the 50-year history of Management Science. Most models for the diffusion of innovation are deeply rooted in the work of Bass (1969). His work provides a framework to model the underlying process of innovation adoption among first-time customers. Potential customers may be connected to one another in some sort of network. Prior research has shown that the structure of a network affects adoption patterns (Dover et al. 2012; Hill et al. 2006; Katona and Sarvary 2008; Katona et al. 2011; Newman et al. 2006; Shaikh et al. 2010; Van den Bulte and Joshi 2007). One approach to addressing this issue is to incorporate network information into the original Bass model. The focus of this study is to explore how to incorporate network information and other micro-level data into the Bass model. First, I prove that the Bass Model assumes all potential customers are linked to all other customers. Through simulations of individual adoptions and connections among individuals using a Random Network, I show that the estimate of q in the Bass Model is biased downward in the original Bass model. I find that biases in the Bass Model depend on the structure of the network. I relax the assumption of the fully connected network by proposing a Network-Based Bass model (NBB), which incorporates the network structure into the traditional Bass model. Using the proposed model (NBB), I am able to recover the true parameters. To test the generalizability and to enhance the applicability of my NBB model, I tested my NBB model on various network types with sampled data from the population network. I showed that my NBB model is robust across different types of networks, and it is efficient in terms of sample size. With a small fraction of data from the population, it accurately recovered the true parameters. Therefore, the NBB model can be used when we do not have complete network information. The last essay is the first attempt to incorporate heterogeneous peer influence into the NBB model, based on individuals' preference structures. Besides the significant extension of the NBB (Bass) Model, incorporating high-quality data on individual behavior into the model leads to new findings on individuals' adoption behaviors, and thus expands our knowledge of the diffusion process.
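
For readers unfamiliar with the starting point of these essays, the standard Bass (1969) model assumes the adoption hazard at time t is p + qF(t), where F(t) is the fraction of the market that has already adopted. The sketch below simulates aggregate adoptions from that assumption in discrete time; p, q and the market size are illustrative values, and the network-based (NBB) extension proposed in the essays is not reproduced.

```python
# Discrete-time simulation of the aggregate Bass diffusion model.
def bass_adoptions(p, q, m, periods):
    """New adopters per period for innovation coefficient p, imitation q, market m."""
    cumulative, path = 0.0, []
    for _ in range(periods):
        new = (p + q * cumulative / m) * (m - cumulative)  # hazard x remaining market
        cumulative += new
        path.append(new)
    return path

path = bass_adoptions(p=0.03, q=0.38, m=10_000, periods=15)
for t, a in enumerate(path, start=1):
    print(f"period {t:2d}: {a:7.1f} new adopters")
print("peak adoption period:", max(range(len(path)), key=lambda t: path[t]) + 1)
```
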
21

Bhat, Aniket Anant. "Stochastic Petri Net Models of Service Availability in a PBNM System for Mobile Ad Hoc Networks." Thesis, Virginia Tech, 2004. http://hdl.handle.net/10919/10000.

Abstract:
Policy based network management is a promising approach for provisioning and management of quality of service in mobile ad hoc networks. In this thesis, we focus on performance evaluation of this approach in the context of the amount of service received by certain nodes called policy execution points (PEPs) or policy clients from certain specialized nodes called policy decision points (PDPs) or policy servers. We develop analytical models for the study of the system behavior under two scenarios: a simple Markovian scenario, where we assume that the random variables associated with system processes follow an exponential distribution, and a more complex non-Markovian scenario, where we model the system processes according to general distribution functions as observed through simulation. We illustrate that the simplified Markovian model provides a reasonable indication of the trend of the service availability seen by policy clients and highlight the need for an exact analysis of the system without relying on Poisson assumptions for system processes. In the case of the more exact non-Markovian analysis, we show that our model gives a close approximation to the values obtained via empirical methods. Stochastic Petri Nets are used as performance evaluation tools in development and analysis of these system models.
Master of Science
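
As a much-simplified stand-in for the Markovian scenario described above, the snippet below treats a policy client as alternating between exponentially distributed periods in which a policy server (PDP) is reachable and periods in which it is not, and compares simulated long-run service availability with the analytical value. The rates are illustrative; the thesis's Stochastic Petri Net models are not reproduced.

```python
# Two-state availability model with exponential up and down periods.
import random

random.seed(7)
mean_up, mean_down = 40.0, 10.0        # mean reachable / unreachable durations

def simulated_availability(cycles=50_000):
    up_time = sum(random.expovariate(1 / mean_up) for _ in range(cycles))
    down_time = sum(random.expovariate(1 / mean_down) for _ in range(cycles))
    return up_time / (up_time + down_time)

print("simulated availability :", round(simulated_availability(), 4))
print("analytical availability:", round(mean_up / (mean_up + mean_down), 4))
```
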
22

Skolpadungket, Prisadarng. "Portfolio management using computational intelligence approaches : forecasting and optimising the stock returns and stock volatilities with fuzzy logic, neural network and evolutionary algorithms." Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/6306.

Abstract:
Portfolio optimisation has a number of constraints resulting from some practical matters and regulations. The closed-form mathematical solution of portfolio optimisation problems usually cannot include these constraints. Exhaustive search to reach the exact solution can take a prohibitive amount of computational time. Portfolio optimisation models are also usually impaired by the estimation error problem caused by lack of ability to predict the future accurately. A number of Multi-Objective Genetic Algorithms are proposed to solve the problem with two objectives subject to cardinality constraints, floor constraints and round-lot constraints. Fuzzy logic is incorporated into the Vector Evaluated Genetic Algorithm (VEGA), but its solutions tend to cluster around a few points. Strength Pareto Evolutionary Algorithm 2 (SPEA2) gives portfolio solutions which are evenly distributed along the efficient front, while MOGA is more time efficient. An Evolutionary Artificial Neural Network (EANN) is proposed. It automatically evolves the ANN's initial values and structure (hidden nodes and layers). The EANN gives a better performance in stock return forecasts in comparison with those of Ordinary Least Square Estimation and of Back Propagation and Elman Recurrent ANNs. Adaptation algorithms for selecting a pair of forecasting models, which are based on fuzzy logic-like rules, are proposed to select the best models given an economic scenario. Their predictive performances are better than those of the compared forecasting models. MOGA and SPEA2 are modified to include a third objective to handle model risk and are evaluated and tested for their performances. The result shows that they perform better than those without the third objective.
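
As a small illustration of the constrained two-objective set-up described above (not of VEGA, SPEA2, MOGA or the EANN), the sketch below randomly samples portfolios that respect a cardinality constraint and keeps the risk-return points that are not dominated. All data are synthetic.

```python
# Random-search stand-in for a cardinality-constrained two-objective search.
import numpy as np

rng = np.random.default_rng(3)
n_assets, K = 10, 4
mu = rng.uniform(0.02, 0.12, n_assets)                   # expected returns
A = rng.normal(size=(n_assets, n_assets))
cov = (A @ A.T) / n_assets * 0.01                        # synthetic covariance

candidates = []
for _ in range(2000):
    picks = rng.choice(n_assets, size=K, replace=False)  # cardinality constraint
    w = np.zeros(n_assets)
    raw = rng.random(K)
    w[picks] = raw / raw.sum()                           # weights sum to one
    candidates.append((float(w @ cov @ w), float(mu @ w)))

candidates.sort()                                        # by risk, then return
frontier, best_ret = [], -np.inf
for risk, ret in candidates:
    if ret > best_ret:                                   # not dominated so far
        frontier.append((risk, ret))
        best_ret = ret
print(len(frontier), "non-dominated portfolios; lowest-risk point:", frontier[0])
```
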
23

Jonsson, Josefine. "Change And Version Management Of Transport Network Data Between Different Database Models : A Case Study On The Swedish National Road Database." Thesis, KTH, Geoinformatik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254520.

Abstract:
The Swedish Road Administration wants to compile all the national road database data from The Swedish Mapping, Cadastral and Land Registration Authority using a Geographical Information System compiler in order to increase the efficiency of data flow between their respective databases. The objective of this master’s thesis has been to build a software solution that takes changed private road data from The Swedish Mapping, Cadastral and Land Registration Authority and processes it into the OpenTNF standard format. This would enable automatic processing of private road data into the national road database at the Swedish Road Administration. The work is divided into four parts: 1. Researching standards for databases and version control. 2. Planning the methodology using different resources. 3. Developing a software solution. 4. Analysis. The chosen software is FME by Safe Software. A number of shortcomings, such as a lack of information on the practical input for the future ANDA system, were discovered; therefore some assumptions and simplifications had to be made. Using these assumptions and examples, a functioning solution was created according to the OpenTNF and INSPIRE standards. The examples fill that gap in knowledge and provide a greater understanding of the usage of the INSPIRE and OpenTNF standards for transport networks. An analysis and a discussion of the existing solution, bottlenecks, faults with the existing database, and version management between the databases, in relation to the research found, are presented. Workflows for different examples of the software solution can be seen in the results. The national road database suffers from a low implementation rate, which creates issues for building new applications and for adapting to the ever-changing nature of planning. Creating software for automatic updates of network data is crucial for the Swedish Road Administration in order to implement technologies that depend on frequent updates, such as self-driving vehicles.
24

Nam, Kyungdoo T. "A Heuristic Procedure for Specifying Parameters in Neural Network Models for Shewhart X-bar Control Chart Applications." Thesis, University of North Texas, 1993. https://digital.library.unt.edu/ark:/67531/metadc278815/.

Abstract:
This study develops a heuristic procedure for specifying parameters for a neural network configuration (learning rate, momentum, and the number of neurons in a single hidden layer) in Shewhart X-bar control chart applications. Also, this study examines the replicability of the neural network solution when the neural network is retrained several times with different initial weights.
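
For context, the Shewhart X-bar chart that the network is being configured for places control limits at the grand mean plus or minus A2 times the average subgroup range, with A2 a tabulated constant (0.577 for subgroups of five). The sketch below computes those limits on invented subgroup data and flags out-of-control points; the heuristic for choosing learning rate, momentum and hidden-layer size is not reproduced.

```python
# Shewhart X-bar chart limits: Xbar-bar +/- A2 * R-bar, subgroups of size 5.
import numpy as np

rng = np.random.default_rng(11)
subgroups = rng.normal(loc=10.0, scale=0.2, size=(25, 5))   # 25 subgroups of 5
subgroups[20] += 0.6                                        # inject a mean shift

xbar = subgroups.mean(axis=1)
rbar = (subgroups.max(axis=1) - subgroups.min(axis=1)).mean()
center = xbar.mean()
A2 = 0.577                                                  # tabulated for n = 5
ucl, lcl = center + A2 * rbar, center - A2 * rbar

print(f"centre line {center:.3f}, LCL {lcl:.3f}, UCL {ucl:.3f}")
for i, m in enumerate(xbar):
    if m > ucl or m < lcl:
        print(f"subgroup {i} out of control (mean {m:.3f})")
```
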
25

Guisse, Amadou Wane. "Spatial model development for resource management decision making and strategy formulation : application of neural network (Mounds State Park, Anderson, Indiana)." Virtual Press, 1993. http://liblink.bsu.edu/uhtbin/catkey/864949.

Abstract:
An important requirement of a rational policy for provision of outdoor recreation opportunities is some understanding of natural processes and public concern and/or preferences. Computerized land use suitability mapping is a technique which can help find the best location for a variety of developmental actions given a set of goals and other criteria. Over the past two decades, the methods and techniques of land use planning have been engaged in a revolution on at least two fronts, shifting the basic theories and attitudes on which land use decisions are based. The first of these fronts is the inclusion of environmental concerns, and the second is the application of more systematic methods or models. While these automated capabilities have shed new light on environmental issues, they, unfortunately, have failed to develop sufficient intelligence and adaptation to accurately model the dynamics of ecosystems. The work reported proceeds on the belief that neural network models can be used to assess and develop resource management strategies for Mounds State Park, Anderson, Indiana. The study combines a photographic survey technique with a geographic information system (GIS) and artificial neural networks (NN) to investigate the perceived impact of park management activities on recreation opportunities and experiences. It is unique in that it combines survey data with spatial data and an optimizing technique to develop a model for predicting perceived management values for short and long term recreation management. According to Jeannette Stanley and Evan Bak (1988), a neural network is a massively parallel, dynamic system of highly interconnected, interacting parts based on neurobiological models. The behavior of the network depends heavily on the connection details. The state of the network evolves continually with time. Networks are considered clever and intuitive because they learn by example rather than following simple programming rules. They are defined by a set of rules or patterns based on expertise or perception for better decision making. With experience, networks become sensitive to subtle relationships in the environment which are not obvious to humans. The model was developed as a counter-propagation network with a four-layer learning network consisting of an input layer, a normalized layer, a Kohonen layer, and an output layer. The counter-propagation network is a feed-forward network which combines Kohonen and Widrow-Hoff learning rules for a new type of mapping neural network. The network was trained with patterns derived by mapping five variables (slope, aspect, vegetation, soil, site features) and survey responses from three groups. The responses included, for each viewshed, the preference and management values, and three recreational activities each group associated with a given landscape. Overall the model behaves properly in learning the different rules and generalizing in cases where inputs had not been shown to the network a priori. Maps are provided to illustrate the different responses obtained from each group and simulated by the model. The study is not conclusive as to the capabilities of the combination of GIS techniques and neural networks, but it gives a good flavor of what can be achieved when accurate mapping information is used by an intelligent system for decision making.
Department of Landscape Architecture
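
A minimal sketch of the forward pass of a counter-propagation network of the kind described above: the input vector is normalised, the winning Kohonen unit is the nearest prototype, and the output layer returns the outstar weights attached to that winner. The prototypes, output weights and example input below are invented, and training is not shown.

```python
# Counter-propagation forward pass: normalise, pick winning Kohonen unit,
# return that unit's outstar (output-layer) weights.
import numpy as np

# Five landscape variables per viewshed (slope, aspect, vegetation, soil, site).
kohonen_prototypes = np.array([[0.2, 0.1, 0.8, 0.5, 0.3],
                               [0.7, 0.9, 0.2, 0.4, 0.6],
                               [0.5, 0.5, 0.5, 0.5, 0.5]])
# Hypothetical outstar weights: (preference value, management value) per unit.
outstar_weights = np.array([[0.9, 0.7],
                            [0.3, 0.5],
                            [0.6, 0.6]])

def counterprop_forward(x):
    x = x / np.linalg.norm(x)                                # normalised layer
    protos = kohonen_prototypes / np.linalg.norm(kohonen_prototypes,
                                                 axis=1, keepdims=True)
    winner = int(np.argmin(np.linalg.norm(protos - x, axis=1)))  # Kohonen layer
    return winner, outstar_weights[winner]                   # output layer

viewshed = np.array([0.6, 0.8, 0.3, 0.4, 0.5])
unit, outputs = counterprop_forward(viewshed)
print("winning unit:", unit, "-> predicted (preference, management):", outputs)
```
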
26

Nyamugure, Philimon. "Modification, development, application and computational experiments of some selected network, distribution and resource allocation models in operations research." Thesis, University of Limpopo, 2017. http://hdl.handle.net/10386/1930.

Abstract:
Thesis (Ph.D. (Statistics)) -- University of Limpopo, 2017
Operations Research (OR) is a scientific method for developing quantitatively well-grounded recommendations for decision making. While it is true that it uses a variety of mathematical techniques, OR has a much broader scope. It is in fact a systematic approach to solving problems, which uses one or more analytical tools in the process of analysis. Over the years, OR has evolved through different stages. This study is motivated by new real-world challenges that call for efficiency and innovation, in line with the aims and objectives of OR – the science of better, as described by the OR Society of the United Kingdom. New real-world challenges are encountered on a daily basis from problems arising in the fields of water, energy, agriculture, mining, tourism, IT development, natural phenomena, transport, climate change, economic and other societal requirements. To counter all these challenges, new techniques ought to be developed. The growth of global markets and the resulting increase in competition have highlighted the need for OR techniques to be improved. These developments, among other reasons, are an indication that new techniques are needed to improve the day-to-day running of organisations, regardless of size, type and location. The principal aim of this study is to modify and develop new OR techniques that can be used to solve emerging problems encountered in the areas of linear programming, integer programming, mixed integer programming, network routing and travelling salesman problems. Distribution models, resource allocation models, the travelling salesman problem, general linear mixed integer programming and other network problems that occur in real life have been modelled mathematically in this thesis. Most of these models belong to the NP-hard (non-deterministic polynomial) class of difficult problems. In other words, these types of problems cannot be solved in polynomial time (P). No general purpose algorithm for these problems is known. The thesis is divided into two major areas, namely: (1) network models and (2) resource allocation and distribution models. Under network models, five new techniques have been developed: the minimum weight algorithm for a non-directed network, the maximum reliability route in both non-directed and directed acyclic networks, the minimum spanning tree with index less than two, routing through k specified nodes, and a new heuristic for the travelling salesman problem. Under the resource allocation and distribution models section, four new models have been developed, and these are: a unified approach to solve transportation and assignment problems, a transportation branch and bound algorithm for the generalised assignment problem, a new hybrid search method over the extreme points for solving a large-scale LP model with non-negative coefficients, and a heuristic for a mixed integer program using the characteristic equation approach. In most of the nine approaches developed in the thesis, efforts were made to compare the effectiveness of the new approaches to existing techniques. Improvements in the new techniques in solving problems were noted. However, it was difficult to compare some of the new techniques to the existing ones because computational packages of the new techniques need to be developed first. This aspect will be the subject of future research on developing these techniques further. It was concluded, with strong evidence, that the development of new OR techniques is a must if we are to counter the emerging problems faced by the world today.
Key words: NP-hard problem, Network models, Reliability, Heuristic, Large-scale LP, Characteristic equation, Algorithm.
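
One of the problem classes listed above is the travelling salesman problem. Purely as a generic illustration (the thesis's own heuristic is not reproduced here), the snippet below runs the classic nearest-neighbour construction on a small invented instance.

```python
# Nearest-neighbour construction heuristic for a tiny TSP instance.
import math

cities = {"A": (0, 0), "B": (3, 4), "C": (6, 1), "D": (2, 7), "E": (7, 5)}

def dist(a, b):
    (x1, y1), (x2, y2) = cities[a], cities[b]
    return math.hypot(x1 - x2, y1 - y2)

def nearest_neighbour_tour(start="A"):
    unvisited = set(cities) - {start}
    tour, current = [start], start
    while unvisited:
        current = min(unvisited, key=lambda c: dist(current, c))  # closest unvisited city
        tour.append(current)
        unvisited.remove(current)
    return tour + [start]                                         # return to the start

tour = nearest_neighbour_tour()
length = sum(dist(a, b) for a, b in zip(tour, tour[1:]))
print("tour:", " -> ".join(tour), "length:", round(length, 2))
```
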
27

ACAR, MARIA E. D. "Modelagem sociotécnica de uma organização nuclear: estudo de caso aplicado ao laboratório Nacional de Metrologia das Radiações Ionizantes." reponame:Repositório Institucional do IPEN, 2015. http://repositorio.ipen.br:8080/xmlui/handle/123456789/25358.

Abstract:
Thesis (Doctorate in Nuclear Technology)
IPEN/T
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
28

Lambert, Quentin. "Business Models for an Aggregator : Is an Aggregator economically sustainable on Gotland?" Thesis, KTH, Industriella informations- och styrsystem, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-98482.

Abstract:
Under the determined impulse of the European Union to limit the environmental impact of energy-related services, the electricity sector will face several challenges in the coming years. Integrating renewable energy sources in the distribution networks is certainly one of the most urgent issues to be tackled. The current grid and production structure cannot absorb the high penetration shares anticipated for 2020 without putting the entire system at risk. The innovative concept of the smart grid offers promising solutions and interesting implementation possibilities. The objective of the thesis is to specifically study the technical and economic benefits that the creation of an aggregator on the Swedish island of Gotland would imply. Comparing Gotland's power system characteristics to the broad variety of solutions offered by demand side management, enhancing wind power integration by demand response appeared particularly suited. A business case, specifically oriented towards the minimisation of transmission losses by adapting the electric heat load of private households to the local wind production, was designed. Numerical simulations have been conducted, evaluating the technical and economic outcomes, along with the environmental benefits, under the current conditions on Gotland. Sensitivity analyses were also performed to determine the key parameters for a successful implementation. A prospective scenario for 2020, with the addition of electric vehicles, has finally been simulated to estimate the long term profitability of an aggregator on the island. The simulation results indicate that despite clear technical benefits for the distribution network, the studied service would not be profitable in the current situation on Gotland. This is because the transmission losses through the HVDC cable concern limited amounts of power that are purchased on a market characterized by relatively cheap prices and low volatility. Besides, the high fixed costs the aggregator has to face to install technical equipment in every household constitute another barrier to setting it up.
29

Sozgen, Burak. "Neural Network And Regression Models To Decide Whether Or Not To Bid For A Tender In Offshore Petroleum Platform Fabrication Industry." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610820/index.pdf.

Abstract:
In this thesis, three methods are presented to model the decision process of whether or not to bid for a tender in offshore petroleum platform fabrication. Sample data and the assessments based on these data are gathered from an offshore petroleum platform fabrication company, and this information is analyzed to understand the significant parameters in the industry. The alternative methods, "Regression Analysis", "Neural Network Method" and "Fuzzy Neural Network Method", are used for modeling the bidding decision process. The regression analysis examines the data statistically, whereas the neural network and fuzzy neural network methods are based on artificial intelligence. The models are developed using the bidding data compiled from the offshore petroleum platform fabrication projects. In order to compare the prediction performance of these methods, the "Cross Validation Method" is utilized. The models developed in this study are compared with the bidding decision method used by the company. The results of the analyses show that the regression analysis and the neural network method achieve a prediction performance of 80% and the fuzzy neural network achieves a prediction performance of 77.5%, whereas the method used by the company has a prediction performance of 47.5%. The results reveal that the suggested models achieve significant improvement over the existing method for making the correct bidding decision.
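As an illustration of the comparison described in this abstract, the following sketch pits a regression-style classifier against a small neural network on an invented bid/no-bid data set and scores both with k-fold cross-validation. The feature names, data-generating rule and model settings are assumptions, not the thesis data or models.

```python
# Hypothetical sketch: regression vs. neural network for a bid/no-bid decision,
# compared with 5-fold cross-validation. All data and features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 120
# Invented tender attributes: estimated tonnage, client relationship score,
# current yard workload, expected margin.
X = rng.normal(size=(n, 4))
# Invented rule generating past bid/no-bid decisions, plus noise.
y = (0.8 * X[:, 0] - 0.5 * X[:, 2] + 0.6 * X[:, 3]
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

models = {
    "regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "neural_net": make_pipeline(StandardScaler(),
                                MLPClassifier(hidden_layer_sizes=(8,),
                                              max_iter=2000, random_state=0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```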
30

Fares, Rasha H. A. "Performance modelling and analysis of congestion control mechanisms for communication networks with quality of service constraints. An investigation into new methods of controlling congestion and mean delay in communication networks with both short range dependent and long range dependent traffic." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5435.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Active Queue Management (AQM) schemes are used for ensuring the Quality of Service (QoS) in telecommunication networks. However, they are sensitive to parameter settings and have weaknesses in detecting and controlling congestion under dynamically changing network situations. Another drawback of the AQM algorithms is that they have been applied only to Markovian models, which are considered Short Range Dependent (SRD) traffic models. However, traffic measurements from communication networks have shown that network traffic can exhibit self-similar as well as Long Range Dependent (LRD) properties. Therefore, it is important to design new algorithms not only to control congestion but also to have the ability to predict the onset of congestion within a network. An aim of this research is to devise some new congestion control methods for communication networks that make use of various traffic characteristics, such as LRD, which have not previously been employed in congestion control methods currently used in the Internet. A queueing model with a number of ON/OFF sources has been used and this incorporates a novel congestion prediction algorithm for AQM. The simulation results have shown that applying the algorithm can provide better performance than an equivalent system without the prediction. Modifying the algorithm by the inclusion of a sliding window mechanism has been shown to further improve the performance in terms of controlling the total number of packets within the system and improving the throughput. Also considered is the important problem of maintaining QoS constraints, such as mean delay, which is crucially important in providing satisfactory transmission of real-time services over multi-service networks like the Internet, which were not originally designed for this purpose. An algorithm has been developed to provide a control strategy that operates on a buffer which incorporates a moveable threshold. The algorithm has been developed to control the mean delay by dynamically adjusting the threshold, which, in turn, controls the effective arrival rate by randomly dropping packets. This work has been carried out using a mixture of computer simulation and analytical modelling. The performance of the new methods that have
Ministry of Higher Education in Egypt and the Egyptian Cultural Centre and Educational Bureau in London
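As a rough sketch of the moveable-threshold idea summarised above (not the thesis model), the following toy simulation adjusts a buffer threshold so that random packet dropping steers the measured mean delay towards a target. The arrival and service rates, the delay estimate and the adaptation rule are all invented for illustration.

```python
# Toy discrete-time queue: an adjustable threshold drives random dropping so
# the running mean delay tracks a target value. All parameters are invented.
import random

random.seed(1)
target_delay = 8.0      # target mean delay in time slots (assumed)
threshold = 20.0        # moveable buffer threshold, in packets (assumed)
queue = 0
delays = []
arrival_p, service_p = 0.9, 0.8   # per-slot arrival and service probabilities

for t in range(50_000):
    if random.random() < arrival_p:
        # Drop probability grows once the queue exceeds the threshold.
        drop_p = max(0.0, min(1.0, (queue - threshold) / threshold))
        if random.random() >= drop_p:
            queue += 1
    if queue > 0 and random.random() < service_p:
        delays.append(queue / service_p)  # crude waiting-time proxy, illustrative only
        queue -= 1
    if t % 100 == 0 and delays:
        recent = delays[-1000:]
        mean_delay = sum(recent) / len(recent)
        # Lower the threshold when delay is too high, raise it when too low.
        threshold = max(1.0, threshold + 0.05 * (target_delay - mean_delay))

print(f"final threshold {threshold:.1f}, "
      f"recent mean delay {sum(delays[-1000:]) / len(delays[-1000:]):.1f} slots")
```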
31

Moura, Giovane Cesar Moreira. "Uma proposta para medição de complexidade e estimação de custos de segurança em procedimentos de tecnologia da informação." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/13651.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Segurança de TI tornou-se nos últimos anos uma grande preocupação para empresas em geral. Entretanto, não é possível atingir níveis satisfatórios de segurança sem que estes venham acompanhados tanto de grandes investimentos para adquirir ferramentas que satisfaçam os requisitos de segurança quanto de procedimentos, em geral, complexos para instalar e manter a infra-estrutura protegida. A comunidade científica propôs, no passado recente, modelos e técnicas para medir a complexidade de procedimentos de configuração de TI, cientes de que eles são responsáveis por uma parcela significativa do custo operacional, freqüentemente dominando o total cost of ownership. No entanto, apesar do papel central de segurança neste contexto, ela não foi objeto de investigação até então. Para abordar este problema, neste trabalho aplica-se um modelo de complexidade proposto na literatura para mensurar o impacto de segurança na complexidade de procedimentos de TI. A proposta deste trabalho foi materializada através da implementação de um protótipo para análise de complexidade chamado Security Complexity Analyzer (SCA). Como prova de conceito e viabilidade de nossa proposta, o SCA foi utilizado para avaliar a complexidade de cenários reais de segurança. Além disso, foi conduzido um estudo para investigar a relação entre as métricas propostas no modelo de complexidade e o tempo gasto pelo administrador durante a execução dos procedimentos de segurança, através de um modelo quantitativo baseado em regressão linear, com o objetivo de prever custos associados à segurança.
IT security has become a major concern for organizations over recent years. However, it does not come without large investments, both in the acquisition of tools to satisfy particular security requirements and in complex procedures to deploy and maintain a protected infrastructure. The scientific community has proposed in the recent past models and techniques to estimate the complexity of configuration procedures, aware that they represent a significant operational cost, often dominating the total cost of ownership. However, despite the central role played by security within this context, it has not been subject to any investigation to date. To address this issue, we apply a model of configuration complexity proposed in the literature in order to estimate the impact of security on the complexity of IT procedures. Our proposal has been materialized through a prototypical implementation of a complexity scorer system called Security Complexity Analyzer (SCA). To prove the concept and technical feasibility of our proposal, we have used the SCA to evaluate real-life security scenarios. In addition, we have conducted a study to investigate the relation between the metrics proposed in the model and the time spent by the administrator while executing security procedures, using a quantitative model built with multiple regression analysis, in order to predict the costs associated with security.
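A minimal sketch of the kind of quantitative model mentioned above, relating complexity metrics to administrator execution time via multiple linear regression; the metrics, figures and the new-procedure query are invented, not data from the SCA study.

```python
# Hypothetical multiple linear regression: predict execution time (minutes)
# from procedure complexity metrics. All numbers are invented for illustration.
import numpy as np

# Invented metrics per procedure: number of actions, context switches,
# and security-related parameters the administrator must supply.
X = np.array([
    [5, 1, 2], [9, 2, 4], [12, 3, 6], [20, 5, 9],
    [7, 1, 3], [15, 4, 7], [25, 6, 12], [10, 2, 5],
], dtype=float)
minutes = np.array([12, 21, 30, 52, 17, 38, 66, 26], dtype=float)

# Fit time = b0 + b1*actions + b2*switches + b3*security_params.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, minutes, rcond=None)
print("coefficients:", np.round(coef, 2))
print("predicted minutes for a new procedure [8, 2, 3]:",
      round(float(np.array([1, 8, 2, 3]) @ coef), 1))
```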
32

Nsoh, Stephen Atambire. "Resource allocation in WiMAX mesh networks." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Mathematics and Computer Science, c2012, 2012. http://hdl.handle.net/10133/3371.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The IEEE 802.16 standard, popularly known as WiMAX, is at the forefront of the technological drive. Achieving high system throughput in these networks is challenging due to interference, which limits concurrent transmissions. In this thesis, we study routing and link scheduling in WiMAX mesh networks. We present simple joint routing and link scheduling algorithms that have outperformed most of the existing proposals in our experiments. Our session-based routing and link scheduling produced results at approximately 90% of a trivial lower bound. We also study the problem of quality of service (QoS) provisioning in WiMAX mesh networks. QoS has become an attractive area of study, driven by the increasing demand for multimedia content delivered wirelessly. To accommodate the different applications, the IEEE 802.16 standard defines four classes of service. In this dissertation, we propose a comprehensive scheme consisting of routing, link scheduling, call admission control (CAC) and channel assignment that considers all classes of service. Much of the work in the literature considers each of these problems in isolation. Our routing schemes use a metric that combines interference and traffic load to compute routes for requests, while our link scheduling ensures that the QoS requirements of admitted requests are strictly met. Results from our simulations indicate that our routing and link scheduling schemes significantly improve network performance when the network is congested.
ix, 77 leaves : ill. ; 29 cm
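As an illustration of a routing metric that combines interference and traffic load, the sketch below runs Dijkstra over a toy topology with a weighted sum of the two quantities. The graph, the 0.5/0.5 weighting and the use of plain Dijkstra are assumptions, not the scheme evaluated in the thesis.

```python
# Toy shortest-path routing with a composite interference-plus-load metric.
import heapq

# link -> (interference level, current load), both normalised to [0, 1]; invented.
links = {
    ("BS", "A"): (0.2, 0.1), ("BS", "B"): (0.1, 0.6),
    ("A", "C"): (0.3, 0.2), ("B", "C"): (0.2, 0.1), ("C", "D"): (0.1, 0.3),
}

def weight(interference, load, alpha=0.5, beta=0.5):
    # Composite metric: weighted sum of interference and load (weights assumed).
    return alpha * interference + beta * load

graph = {}
for (u, v), (i, l) in links.items():
    graph.setdefault(u, []).append((v, weight(i, l)))
    graph.setdefault(v, []).append((u, weight(i, l)))

def dijkstra(src, dst):
    best = {src: 0.0}
    heap = [(0.0, src, [src])]
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        for nxt, w in graph.get(node, []):
            if nxt not in best or cost + w < best[nxt]:
                best[nxt] = cost + w
                heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(dijkstra("BS", "D"))  # least-cost route for a new request
```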
33

Nimmatoori, Praneeth. "Comparison of Several Project Level Pavement Condition Prediction Models." University of Toledo / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1578491583921183.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Fares, Rasha Hamed Abdel Moaty. "Performance modelling and analysis of congestion control mechanisms for communication networks with quality of service constraints : an investigation into new methods of controlling congestion and mean delay in communication networks with both short range dependent and long range dependent traffic." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5435.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Active Queue Management (AQM) schemes are used for ensuring the Quality of Service (QoS) in telecommunication networks. However, they are sensitive to parameter settings and have weaknesses in detecting and controlling congestion under dynamically changing network situations. Another drawback of the AQM algorithms is that they have been applied only to Markovian models, which are considered Short Range Dependent (SRD) traffic models. However, traffic measurements from communication networks have shown that network traffic can exhibit self-similar as well as Long Range Dependent (LRD) properties. Therefore, it is important to design new algorithms not only to control congestion but also to have the ability to predict the onset of congestion within a network. An aim of this research is to devise some new congestion control methods for communication networks that make use of various traffic characteristics, such as LRD, which have not previously been employed in congestion control methods currently used in the Internet. A queueing model with a number of ON/OFF sources has been used and this incorporates a novel congestion prediction algorithm for AQM. The simulation results have shown that applying the algorithm can provide better performance than an equivalent system without the prediction. Modifying the algorithm by the inclusion of a sliding window mechanism has been shown to further improve the performance in terms of controlling the total number of packets within the system and improving the throughput. Also considered is the important problem of maintaining QoS constraints, such as mean delay, which is crucially important in providing satisfactory transmission of real-time services over multi-service networks like the Internet, which were not originally designed for this purpose. An algorithm has been developed to provide a control strategy that operates on a buffer which incorporates a moveable threshold. The algorithm has been developed to control the mean delay by dynamically adjusting the threshold, which, in turn, controls the effective arrival rate by randomly dropping packets. This work has been carried out using a mixture of computer simulation and analytical modelling. The performance of the new methods that have.
35

Bigaton, Ademir Durrer. "Diversidade de bactérias e arquéias em solos cultivados com cana-de-açúcar: um enfoque biogeográfico." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/11/11138/tde-10042015-111904/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
A cana-de-açúcar é atualmente a cultura de maior importância agrícola do Estado de São Paulo e tem papel de destaque entre as principais culturas do Brasil. Dentro de um contexto de maior produtividade unida a sustentabilidade, o papel da comunidade microbiana presente nos solos pode ter fundamental importância, auxiliando no melhor desenvolvimento da planta, suprindo a mesma com nutrientes ou diminuindo a ocorrência de doenças e pragas. Contudo, pouco se sabe sobre a comunidade microbiana existente nos solos cultivados com cana-de-açúcar, sendo que um conhecimento da distribuição espacial desta comunidade pode auxiliar para uma melhor compreensão dos processos aos quais estes microrganismos estão envolvidos. Dessa forma, este trabalho teve como objetivo estudar, em um enfoque biogeográfico, a diversidade de bactérias e arquéias existente em solos de cana-de-açúcar do Estado de São Paulo, focando nos grupos de arquéias e bactérias. Uma análise de 285 amostras de solos, obtidas em 10 regiões produtoras distintas, foi realizada utilizando técnicas independentes de cultivo como: quantificação da abundância total por meio da aplicação de PCR em tempo real (qPCR), análises da estrutura da comunidade por polimorfismo de comprimento de fragmentos de restrição terminal (T-RFLP), e determinação da sua afiliação filogenética por sequenciamento em larga escala de genes ribossomais. Os resultados obtidos demonstraram que o principal modulador destas comunidades foram as características física e química do solo (pH, granulometria, matéria orgânica). Além disso, a comunidade de arquéias demonstrou ser influenciada por práticas de manejo (colheita mecanizada e adição de vinhaça e torta de filtro). Adicionalmente, foi observada uma relação inesperada da estruturação destas comunidades com a distribuição geográfica das amostras analisadas. Os resultados demonstram a complexidade da comunidade de bactérias e arquéias ao longo de um gradiente espacial, sugerindo que estudos posteriores devem considerar uma amostragem mais ampla em distintas regiões. Este trabalho é embasador de estudos futuros que visem desenvolver práticas agrícolas baseadas na exploração da funcionalidade dos microbiomas dos solos.
Sugarcane is currently the most important crop of the State of São Paulo and has a prominent role among the main crops in Brazil. In the context of higher productivity combined with greater sustainability, the microbial community present in the soil could be of great importance, aiding better plant development, supplying the plant with nutrients or reducing the occurrence of diseases and pests. However, little is known about the microbial community existing in soils cultivated with sugarcane, and knowledge of the spatial distribution of this community could help provide a better understanding of the processes in which these organisms are involved. This project aimed to study, using a biogeographic approach, the bacterial and archaeal diversity in sugarcane soils of São Paulo State, focusing on the groups of archaea and bacteria. Analyses of a total of 285 soil samples, obtained in 10 distinct producing regions, were performed using cultivation-independent techniques such as quantification of total abundance by quantitative PCR (qPCR), analysis of the community structure by terminal restriction fragment length polymorphism (T-RFLP) and determination of its phylogenetic affiliation by high-throughput sequencing of 16S ribosomal genes. The results showed that the main drivers of these communities were the physical and chemical characteristics of the soil (pH, granulometry and organic matter). In addition, the archaeal community was shown to be influenced by management practices (mechanised harvesting and the addition of vinasse and filter cake). Additionally, an unexpected relationship between the structure of these communities and the geographic distribution of the samples was observed. The results demonstrate the complexity of the bacterial and archaeal communities along a spatial gradient, suggesting that future studies should consider a broader sampling of the distinct regions. This work supports upcoming studies that aim at developing agricultural practices exploring the functionality of soil microbiomes.
36

Naidoo, Vaughn. "Policy Based Network management of legacy network elements in next generation networks for Voice Services." Thesis, University of the Western Cape, 2002. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_5830_1370595582.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Massana, i. Raurich Joaquim. "Data-driven models for building energy efficiency monitoring." Doctoral thesis, Universitat de Girona, 2018. http://hdl.handle.net/10803/482148.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Nowadays, energy is absolutely necessary all over the world. Taking into account the advantages it presents for transport and the needs of homes and industry, energy is transformed into electricity. Bearing in mind the expansion of electricity, initiatives like Horizon 2020 pursue the objective of a more sustainable future: reducing carbon emissions and electricity consumption and increasing the use of renewable energies. As an answer to the shortcomings of the traditional electrical network, such as large distances to the point of consumption, low levels of flexibility, low sustainability, low quality of energy, the difficulties of storing electricity, etc., Smart Grids (SG), a natural evolution of the classical network, have appeared. One of the main components that will allow the SG to improve on the traditional grid is the Energy Management System (EMS). The EMS is necessary to carry out the management of the power network, and one of its main needs is a prediction system: that is, knowing the electricity consumption in advance. Besides, the utilities will also require predictions to manage generation, maintenance and their investments. Therefore, electricity consumption prediction systems are needed that, based on the available data, forecast the consumption of the coming hours, days or months as accurately as possible. It is in this field that the present research is placed since, due to the proliferation of sensor networks and more powerful computers, more precise prediction systems have been developed. That said, a complete study of the state of the art on load forecasting was carried out in the first work. On the basis of the acquired knowledge, the installation of sensor networks, the collection of consumption data and modelling using Autoregressive (AR) models were performed in the second work. Once this model was defined, a further step was taken in the third work, collecting new data, such as building occupancy, meteorology and indoor ambience, testing several paradigmatic models, such as Multiple Linear Regression (MLR), Artificial Neural Network (ANN) and Support Vector Regression (SVR), and establishing which exogenous data improve the prediction accuracy of the models. Having reached this point, and having corroborated that the use of occupancy data improves the prediction, it became necessary to generate techniques and methodologies to obtain occupancy data in advance. Therefore, several artificial occupancy attributes were designed in the fourth work in order to perform long-term hourly consumption predictions.
A dia d’avui l’energia és un bé completament necessari arreu del món. Degut als avantatges que presenta en el transport i a les necessitats de les llars i la indústria, l’energia és transformada en energia elèctrica. Tenint en compte la total expansió i domini de l’electricitat, iniciatives com Horitzó 2020, tenen per objectiu un futur més sostenible: reduint les emissions de carboni i el consum i incrementant l’ús de renovables. Partint dels defectes de la xarxa elèctrica clàssica, com són gran distància al punt de consum, poca flexibilitat, baixa sostenibilitat, baixa qualitat de l’energia, dificultats per a emmagatzemar energia, etc. apareixen les Smart Grid (SG), una evolució natural de la xarxa clàssica. Un dels principals elements que permetrà a les SG millorar les xarxes clàssiques és l’Energy Management System (EMS). Així doncs, per a que l’EMS pugui dur a terme la gestió dels diversos elements, una de les necessitats bàsiques dels EMS serà un sistema de predicció, o sigui, saber per endavant quin consum hi haurà en un entorn determinat. A més, les empreses subministradores d’electricitat també requeriran de prediccions per a gestionar la generació, el manteniment i fins i tot les inversions a llarg termini. Així doncs ens calen sistemes de predicció del consum elèctric que, partint de les dades disponibles, ens subministrin el consum que hi haurà d’aquí a unes hores, uns dies o uns mesos, de la manera més aproximada possible. És dins d’aquest camp on s’ubica la recerca que presentem. Degut a la proliferació de xarxes de sensors i computadors més potents, s’han pogut desenvolupar sistemes de predicció més precisos. A tall de resum, en el primer treball, i tenint en compte que s’havia de conèixer en profunditat l’estat de la qüestió en relació a la predicció del consum elèctric, es va fer una anàlisi completa de l’estat de l’art. Un cop fet això, i partint del coneixement adquirit, en el segon treball es va dur a terme la instal•lació de les xarxes de sensors, la recollida de dades de consum i el modelatge amb models lineals d’auto-regressió (AR). En el tercer treball, un cop fets els models es va anar un pas més enllà recollint dades d’ocupació, de meteorologia i ambient interior, provant diferents models paradigmàtics com Multiple Linear Regression (MLR), Artificial Neural Network (ANN) i Support Vector Regression (SVR) i establint quines dades exògenes milloren la predicció dels models. Arribat a aquest punt, i havent corroborat que l’ús de dades d’ocupació millora la predicció, es van generar tècniques per tal de disposar de les dades d’ocupació per endavant, o sigui a hores vista. D’aquesta manera es van dissenyar diferents atributs d’ocupació artificials, permetent-nos fer prediccions horàries de consum a llarg termini. Aquests conceptes s’expliquen en profunditat al quart treball.
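The following sketch, on synthetic hourly data, illustrates the kind of comparison described in the abstracts above: MLR, ANN and SVR load models fitted with and without an exogenous occupancy feature. The data-generating rule and all model settings are assumptions, not the thesis data.

```python
# Synthetic comparison of MLR, ANN and SVR load models, with and without an
# occupancy input. All data and hyperparameters are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(42)
hours = np.arange(24 * 200) % 24
# Invented drivers: office-hours occupancy and a daily temperature cycle.
occupancy = ((hours >= 8) & (hours <= 18)) * rng.uniform(0.5, 1.0, hours.size)
temperature = 15 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
load = 20 + 30 * occupancy + 0.8 * temperature + rng.normal(0, 2, hours.size)

X_with_occ = np.column_stack([hours, temperature, occupancy])
X_without = X_with_occ[:, :2]
for label, X in [("without occupancy", X_without), ("with occupancy", X_with_occ)]:
    Xtr, Xte, ytr, yte = train_test_split(X, load, test_size=0.25, random_state=0)
    models = [
        ("MLR", LinearRegression()),
        ("ANN", make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16,),
                                           max_iter=3000, random_state=0))),
        ("SVR", make_pipeline(StandardScaler(), SVR(C=10.0))),
    ]
    for name, model in models:
        model.fit(Xtr, ytr)
        mae = mean_absolute_error(yte, model.predict(Xte))
        print(f"{name} {label}: MAE {mae:.2f}")
```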
38

Faria, Thiago Tortorelli de. "Multilink para determinação da taxa de ocupação em ambientes internos utilizando uma rede de sensores sem fio." Pontifícia Universidade Católica de Campinas, 2015. http://tede.bibliotecadigital.puc-campinas.edu.br:8080/jspui/handle/tede/557.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This work aims to obtain a mathematical model to determine the occupancy rate of an indoor environment based on the signal strength of a wireless sensor network (WSN) at 915 MHz using four links in the environment. A network with one sink and four sensor nodes was set up in a laboratory in order to collect the RSSI (Received Signal Strength Indication) of each individual link for every different occupation of the space by each group of people. Based on the data collected, the mean, standard deviation and variance of each individual link, and the overall average of the links, were calculated for each group of people. The data were used as input to mathematical models to determine the occupancy rate of the environment. Three mathematical models were proposed to estimate the occupancy rate, and all of them proved capable of estimating the number of people using the overall average of the links. The results showed that the RSSI tends to decrease and the standard deviation to increase as the number of people in the environment grows. Some links analysed individually showed a large variation and did not entirely follow this tendency. Nevertheless, the overall average of the links does follow it, showing that although one link may show a great variation, the other links tend to compensate for it, and with the overall average of the links it is possible to obtain a small error between the real and the estimated number of people. To choose the best model, the MAE (Mean Absolute Error) was used. The second-order model was the best, with an MAE slightly below half a person. The results obtained using multilink were compared with a work that used a single link to predict the number of people in an indoor environment. Multilink had a smaller error than single-link, which obtained an error of about two people.
Este trabalho tem por objetivo a obtenção de um modelo matemático para a determinação da taxa de ocupação de ambientes internos baseado na intensidade de sinal de uma Rede de Sensores sem Fio em 915 MHz utilizando quatro links rádio no ambiente. Foi montada uma rede com uma base e quatro nós sensores em um laboratório com o intuito de coletar a RSSI (Received Signal Strength Indication) de cada link para cada diferente ocupação do espaço para cada grupo de pessoas. Com base nos dados coletados, foram calculados a média, desvio padrão e variância de cada link e cada grupo de pessoas. Esses dados foram utilizados como entrada em modelos matemáticos para a determinação da taxa de ocupação do ambiente. Foram propostos três modelos matemáticos para tal estimação. Os três modelos se mostraram aptos a estimar o número de pessoas utilizando a média geral dos links. Os resultados iniciais mostraram que a tendência é de diminuição da RSSI e o aumento do desvio padrão quanto maior o número de pessoas no ambiente. Alguns links analisados de forma individual se mostraram com uma variação grande e não seguindo inteiramente a tendência mencionada, mas apesar disso a média geral dos links segue essa tendência, ou seja, apesar de um link demonstrar uma grande variação, os outros links tenderam a compensar essa variação. Com a média geral é possível chegar a um erro pequeno entre o número real de pessoas e o número estimado de pessoas. Para a escolha do melhor modelo foi utilizado o RAM (Resíduo Absoluto Médio) e para a média geral dos links o modelo de segunda ordem foi o que se mostrou melhor com um resíduo abaixo de meia pessoa. Por fim os resultados obtidos com multilink foram comparados com resultados obtidos em um trabalho em que foi utilizado single-link para a obtenção da taxa de ocupação em um ambiente interno. Multilink, com um RAM de aproximadamente 0,5 pessoas, se mostrou com um erro menor comparado com o single-link, que obteve um erro de aproximadamente duas pessoas.
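A minimal sketch of the second-order model idea reported above: fitting a quadratic from the overall mean RSSI of the links to the number of occupants and scoring it with the mean absolute error. The RSSI values below are invented, not the measurements from the study.

```python
# Quadratic (second-order) fit from mean RSSI to occupant count, scored by MAE.
# The RSSI values and occupant counts are invented for illustration.
import numpy as np

people = np.array([0, 2, 4, 6, 8, 10, 12])
mean_rssi_dbm = np.array([-52.0, -53.1, -54.5, -55.4, -56.8, -57.5, -58.9])  # assumed

coeffs = np.polyfit(mean_rssi_dbm, people, deg=2)   # second-order model
estimate = np.polyval(coeffs, mean_rssi_dbm)
mae = np.mean(np.abs(estimate - people))
print("estimated occupants:", np.round(estimate, 1))
print(f"mean absolute error: {mae:.2f} people")
```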
39

Arulselvan, Ashwin. "Network model for disaster management." [Gainesville, Fla.] : University of Florida, 2009. http://purl.fcla.edu/fcla/etd/UFE0024855.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Draai, Kevin. "A model for assessing and reporting network performance measurement in SANReN." Thesis, Nelson Mandela Metropolitan University, 2017. http://hdl.handle.net/10948/16131.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The performance measurement of a service provider network is an important activity. It is required for the smooth operation of the network as well as for reporting and planning. SANReN is a service provider tasked with serving the research and education network of South Africa. It currently has no structure or process for determining network performance metrics to measure the performance of its network. The objective of this study is to determine, through a process or structure, which metrics are best suited to the SANReN environment. This study is conducted in three phases in order to discover and verify the solution to this problem: "Contextualisation", "Design" and "Verification". The "Contextualisation" phase includes the literature review, which provides the context for the problem area but also serves as a search function for the solution. This study adopts the design science research paradigm, which requires the creation of an artefact. The "Design" phase involves the creation of the conceptual network performance measurement model. This is the artefact and a generalised model for determining the network performance metrics for an NREN. To prove the utility of the model, it is implemented in the SANReN environment in the "Verification" phase. The network performance measurement model proposes a process to determine network performance metrics. This process includes gathering the NREN's requirements and goals, defining the NREN's network design goals from these requirements, deriving network performance metrics from these goals, evaluating the NREN's monitoring capability, and measuring what is possible. This model provides a starting point for NRENs to determine network performance metrics tailored to their environments, and it is applied in the SANReN environment as a proof of concept. The utility of the model is shown through this implementation, and thus it can be said that the model is generic. Network performance data are retrieved from the tools that monitor the performance of the SANReN network. Results were retrieved by understanding the requirements, determining the network design goals and performance metrics, and identifying the gaps. These results are analysed and finally aggregated to provide information that feeds into SANReN reporting and planning processes. A template is provided to perform the aggregation of metric results. This template provides the structure to enable the aggregation of metric results but leaves the categories or labels for the reporting and planning sections blank, since these categories are specific to each NREN. At this point SANReN has the aggregated information to use for planning and reporting. The model is verified and thus the study's main research objective is satisfied.
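As a small sketch of the aggregation template described above, the snippet below collects metric results into a report section whose category label is deliberately left blank for the NREN to fill in. The class names and sample figures are invented, not artefacts from the study.

```python
# Invented reporting-template sketch: aggregate metric results per section,
# leaving the section's category label blank for the NREN to assign.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class MetricResult:
    name: str          # e.g. "round-trip time"
    unit: str          # e.g. "ms"
    samples: list      # raw measurements retrieved from the monitoring tools

@dataclass
class ReportSection:
    category: str = ""                       # intentionally blank: NREN-specific
    metrics: list = field(default_factory=list)

    def summarise(self):
        # Aggregate each metric to its mean value for reporting and planning.
        return {m.name: (round(mean(m.samples), 2), m.unit) for m in self.metrics}

section = ReportSection(metrics=[
    MetricResult("round-trip time", "ms", [12.1, 13.4, 11.9]),
    MetricResult("throughput", "Mbit/s", [940, 910, 955]),
])
print(section.summarise())
```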
41

Немировский, М. В., та M. V. Nemirovsky. "Развитие сетевого взаимодействия в системе среднего образования: анализ муниципальных практик и технологии совершенствования : магистерская диссертация". Master's thesis, б. и, 2020. http://hdl.handle.net/10995/93318.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Магистерская диссертация посвящена изучению муниципальных практик сетевого взаимодействия в общеобразовательных организациях г. Екатеринбурга и разработке технологий их развития. В работе рассмотрены концептуальные и нормативно-правовые основы развития сетевого взаимодействия в системе среднего образования в Российской Федерации. В ней представлены результаты эмпирического исследования, посвященного практикам сетевого взаимодействия в среднем общем образовании г. Екатеринбурга. Автор диссертации предлагает рекомендации и проект по совершенствованию организации сетевого взаимодействия в среднем общем образовании г. Екатеринбурга.
The master's dissertation is devoted to the study of municipal practices of network interaction in educational organizations in Yekaterinburg and to the development of technologies for improving them. The dissertation considers the conceptual and regulatory framework for the development of network interaction in the secondary education system in the Russian Federation. It presents the results of an empirical study on the practices of networking in secondary education in Yekaterinburg. The author of the dissertation offers recommendations and a project to improve the organization of network interaction in secondary education in Yekaterinburg.
42

Singh, Aameek. "Secure Management of Networked Storage Services: Models and Techniques." Diss., Available online, Georgia Institute of Technology, 2007, 2007. http://etd.gatech.edu/theses/available/etd-04092007-004039/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Thesis (Ph. D.)--Computing, Georgia Institute of Technology, 2008.
Liu, Ling, Committee Chair ; Aberer, Karl, Committee Member ; Ahamad, Mustaque, Committee Member ; Blough, Douglas, Committee Member ; Pu, Calton, Committee Member ; Voruganti, Kaladhar, Committee Member.
43

Li, Hailong. "Analytical Model for Energy Management in Wireless Sensor Networks." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1367936881.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
44

San, Martín Ramas Mauro Adolfo. "A model for social networks data management." Tesis, Universidad de Chile, 2012. http://www.repositorio.uchile.cl/handle/2250/111467.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Doctor en Ciencias, Mención Computación
In the context of social network data management, this thesis addresses the data-manipulation needs of social networks by proposing a data model based on an exhaustive set of use cases taken from the social network (SN) domain, and on existing theoretical work on data models, databases, and query languages. A model for managing social network data must allow SN data to be shared, reused and integrated, with support for flexible schemas and metadata appropriate for graph-structured data. The desired query language must provide adequate expressiveness within feasible complexity bounds, while remaining accessible and attractive to users. A requirement frequently found in SN use cases is the need to restructure a network, for example by creating new nodes from existing groups or from attribute values. Traditional query languages that can create values or objects are usually able to express all computable queries, so query evaluation becomes computationally expensive. To address these requirements, a data model (SNDM) and a query language (SNQL) are introduced. The data structure used is semistructured and based on a triple model. SNQL has been designed along the lines of widely known query languages, taking as a starting point a version of Datalog with an extension that facilitates the computation of new values and identifiers according to the requirements of SN manipulation. This extension is based on second-order tuple-generating dependencies, originally proposed in the context of data exchange to capture the composition of schema mappings. The language so defined satisfies, with efficient computational complexity, the requirements of typical social network analysis use cases. Indeed, its expressive power covers all relevant SN operations, and its evaluation remains in NLOGSPACE. The features of the language are shown to satisfy these goals by proving its formal properties and through prototypical implementations of the model, as well as translations to and from other models.
45

Syed, Mofazzal. "Data driven modelling for environmental water management." Thesis, Cardiff University, 2007. http://orca.cf.ac.uk/54592/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Management of water quality is generally based on physically-based equations or hypotheses describing the behaviour of water bodies. In recent years, models built on the basis of larger amounts of collected data have been gaining popularity. This modelling approach can be called data driven modelling. Observational data represent specific knowledge, whereas a hypothesis represents a generalization of this knowledge that implies and characterizes all such observational data. Traditionally, deterministic numerical models have been used for predicting flow and water quality processes in inland and coastal basins. These models generally take a long time to run and cannot be used as on-line decision support tools, which would enable imminent threats to public health, flooding, etc. to be predicted. In contrast, data driven models are data intensive and there are some limitations to this approach. The extrapolation capability of data driven methods is a matter of conjecture. Furthermore, the extensive data required for building a data driven model can be time and resource consuming to obtain, and in the case of predicting the impact of a future development the data are unlikely to exist. The main objective of the study was to develop an integrated approach for rapid prediction of bathing water quality in estuarine and coastal waters. Faecal Coliforms (FC) were used as a water quality indicator, and two of the most popular data mining techniques, namely Genetic Programming (GP) and Artificial Neural Networks (ANNs), were used to predict the FC levels in a pilot basin. In order to provide enough data for training and testing the neural networks, a calibrated hydrodynamic and water quality model was used to generate input data for the neural networks. A novel non-linear data analysis technique, called the Gamma Test, was used to determine the data noise level and the number of data points required for developing smooth neural network models. Details are given of the data driven models, numerical models and the Gamma Test. Details are also given of a series of experiments undertaken to test data driven model performance for different numbers of input parameters and time lags. The response time of the receiving water quality to the input boundary conditions obtained from the hydrodynamic model has been shown to be useful knowledge for developing accurate and efficient neural networks. It is known that a natural phenomenon like bacterial decay is affected by a whole host of parameters which cannot be captured accurately using deterministic models alone. Therefore, the data-driven approach has been investigated using field survey data collected in Cardiff Bay to investigate the relationship between bacterial decay and other parameters. Both the GP and ANN models gave similar, if not better, predictions of the field data in comparison with the deterministic model, with the added benefit of almost instant prediction of the bacterial levels for this recreational water body. The models have also been investigated using idealised and controlled laboratory data for the velocity distributions along compound channel reaches with idealised rods located on the floodplain to replicate large vegetation (such as mangrove trees).
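The following sketch illustrates, on synthetic data, the general data-driven pattern described above: building time-lagged inputs from boundary conditions and fitting a neural network to predict a water-quality indicator. The variables, the three-hour lag and the synthetic response are assumptions, not the thesis data or model.

```python
# Sketch only: lagged inputs plus a neural network regressor for a synthetic
# faecal-coliform series. All variables and parameters are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n, lag = 2000, 3
inflow = rng.uniform(5, 50, n)          # upstream discharge (assumed units)
irradiance = rng.uniform(0, 800, n)     # solar irradiance (assumed units)
# Synthetic response: decays with irradiance, follows inflow with a 3-hour delay.
fc = (1000 * np.roll(inflow, lag) / 50 * np.exp(-irradiance / 400)
      + rng.normal(0, 20, n))

X = np.column_stack([np.roll(inflow, lag), irradiance])[lag:]
y = fc[lag:]
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                   random_state=0))
model.fit(X[:1500], y[:1500])
print("held-out R^2:", round(model.score(X[1500:], y[1500:]), 3))
```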
46

Chiang, Nhan Tu. "Mesh network model for urban area." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44698.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Thesis (S.M.)--Massachusetts Institute of Technology, System Design and Management Program, 2008.
Includes bibliographical references (p. 52, 2-7 (2nd group)).
Decreasing population, high crime rates, and limited economic opportunities are all symptoms of urban decline. These characteristics are, unfortunately, evident in major cities and small towns alike. Local municipalities in these cities and towns, with the aid of state and federal government, have attempted to reverse urban decline through the traditional approach of urban renewal. Their idea was to create low-cost housing to attract people back to urban areas. This approach has shown mixed results, with most attempts having no effect on the deterioration. The goal of this thesis is to propose a higher-level system approach to address urban decline through the application of a new technology: wireless mesh networks. A wireless mesh network can provide improved security, public safety, new economic opportunities, and a bridge across the digital divide. Married to the appropriate applications, a wireless mesh network creates a business model that is both favorable and sustainable. More importantly, the business model brings about the human capital necessary for urban revitalization.
by Nhan Tu Chiang.
S.M.
47

Jelínek, Tomáš. "Model znalostního managementu." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2008. http://www.nusl.cz/ntk/nusl-221782.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
My master's thesis is focused on knowledge management. It addresses one of the most important topics today: knowledge and its management. The introduction of the thesis provides basic definitions from the field of knowledge management, namely data, information and knowledge, and defines the most important factors that influence knowledge management. The thesis also gives a survey of current developments in information systems and products supporting knowledge management.
48

Paul, Daniel. "Decision models for on-line adaptive resource management." Thesis, Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/13559.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
49

Li, Zhi. "Autoregression Models for Trust Management in Wireless Ad Hoc Networks." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/20288.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
In this thesis, we propose a novel trust management scheme for improving routing reliability in wireless ad hoc networks. It is grounded in two classic autoregression models, namely the Autoregressive (AR) model and the Autoregressive with exogenous inputs (ARX) model. According to this scheme, a node periodically measures the packet forwarding ratio of each of its neighbours as the trust observation about that neighbour. These measurements constitute a time series of data, and the node maintains such a time series for each neighbour. By applying an autoregression model to these time series, it predicts the neighbours' future packet forwarding ratios as their trust estimates, which in turn enable it to make intelligent routing decisions. With an AR model, the node uses only its own observations for prediction; with an ARX model, it also takes into account recommendations from other neighbours. We evaluate the performance of the scheme when an AR, ARX or Bayesian model is used. Simulation results indicate that the ARX model is the best choice in terms of accuracy.
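A minimal sketch of the AR idea summarised above: a node fits an autoregressive model to the time series of a neighbour's measured packet forwarding ratios and takes the one-step-ahead prediction as its trust estimate. The AR order of 3 and the simulated series are assumptions, not the thesis settings.

```python
# AR(p) fit on a simulated forwarding-ratio series; the next predicted ratio
# serves as the trust estimate. Series and AR order are invented.
import numpy as np

rng = np.random.default_rng(3)
# Simulated periodic measurements of one neighbour's packet forwarding ratio.
ratios = np.clip(0.9 + np.cumsum(rng.normal(0, 0.02, 60)), 0.0, 1.0)

p = 3  # AR order (assumed)
# Design matrix: ratios[t] ~ c + a1*r[t-1] + ... + ap*r[t-p]
rows = [ratios[t - p:t][::-1] for t in range(p, len(ratios))]
X = np.hstack([np.ones((len(rows), 1)), np.array(rows)])
y = ratios[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

last_lags = np.hstack([1.0, ratios[-1:-p - 1:-1]])   # most recent lags first
trust_estimate = float(last_lags @ coef)
print(f"predicted next forwarding ratio (trust): {trust_estimate:.3f}")
```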
50

Scanlan, James Patrick. "A network model for the management of complex design projects." Thesis, University of the West of England, Bristol, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.300917.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
A review of techniques that support Concurrent Engineering or Simultaneous Engineering (CE/SE) is presented. It is shown that the management of projects consistent with the principles of CE/SE is hampered by the lack of a suitable activity network modelling tool. The limitations of existing methods, such as the Critical Path Method (CPM) and the related Program Evaluation and Review Technique (PERT), for the management of complex design projects are demonstrated. Recent enhancements and alternatives to CPM/PERT are reviewed. A network model is proposed that supports CE/SE and is capable of representing the uncertain task outcomes, partial dependencies and task iterations characteristic of complex design projects. Discrete-event simulation is used to evaluate the network and show the effect of resource constraints, communications efficiency and activity control logic on project completion timescales and product quality. The proposed model is designed so that the activity network can be derived from and directly related to a Quality Function Deployment (QFD) matrix. This allows project completion to be expressed in terms of customer requirements and priorities. The network model is illustrated by showing how it can be applied to an aerospace design project.
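As an illustration of Monte Carlo evaluation of an activity network with uncertain outcomes and iteration, the sketch below simulates a tiny design process in which a failed review forces rework. The durations, the 30% rework probability and the network structure are invented, not the thesis model.

```python
# Toy Monte Carlo simulation of a design activity network with iteration.
# All durations and probabilities are invented for illustration.
import random

random.seed(0)

def simulate_once():
    t = 0.0
    t += random.triangular(5, 15, 8)        # concept design
    # Detail design followed by review; a failed review forces an iteration.
    while True:
        t += random.triangular(10, 30, 18)  # detail design
        t += random.triangular(1, 3, 2)     # design review
        if random.random() > 0.30:          # 30% chance of rework (assumed)
            break
    t += random.triangular(4, 10, 6)        # release / sign-off
    return t

runs = sorted(simulate_once() for _ in range(10_000))
print(f"mean completion: {sum(runs) / len(runs):.1f} days, "
      f"90th percentile: {runs[int(0.9 * len(runs))]:.1f} days")
```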

До бібліографії