Dissertations on the topic "Model of intermediate complexity"

To view other types of publications on this topic, follow the link: Model of intermediate complexity.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Explore the top 50 dissertations for research on the topic "Model of intermediate complexity".

Next to each work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Hosoe, Taro. "Stability of the global thermohaline circulation in an intermediate complexity ocean model." Thesis, University of Southampton, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.401832.

2

Tolwinski-Ward, Susan E. "Inference on Tree-Ring Width and Paleoclimate Using a Proxy Model of Intermediate Complexity." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/241975.

Abstract:
Forward and inverse modeling studies of the relationship between tree ring width and bivariate climate are performed using a model called VS-Lite. The monthly time-step model incorporates two simple but realistic nonlinearities in its description of the transformation of climate variability into ring width index. These features ground VS-Lite in scientific principles and make it more complex than the empirically derived statistical models commonly used to simulate tree ring width. At the same time, VS-Lite is vastly simpler and more efficient than pre-existing numerical models that simulate detailed biological aspects of tree growth. A forward modeling validation study shows that VS-Lite simulates a set of observed chronologies across the continental United States with comparable or better skill than simulations derived from a standard, linear-regression-based approach. This extra skill derives from VS-Lite's basis in mechanistic principles, which makes it more robust to climatic nonstationarity than the statistical methodology. A Bayesian parameterization approach is also developed that incorporates scientific information into the choice of locally optimal VS-Lite parameters. The parameters derived using the scheme are found to be interpretable in terms of the climate controls on growth, and so provide a means to guide applications of the model across varying climatologies. The first reconstructions of paleoclimate that assimilate scientific understanding of the ring width formation process are performed using VS-Lite to link the proxy data to potential climate histories. Bayesian statistical methods invert VS-Lite conditional on a given dendrochronology to produce probabilistic estimates of local bivariate climate. Using VS-Lite in this manner produces skillful estimates, but does not present advantages compared with another set of probabilistic reconstructions that invert a simpler, linear, empirical forward model. This result suggests that future data-assimilation-based reconstructions will need to integrate as many data sources as possible, both across space and proxy types, in order to benefit from information provided by mechanistic models of proxy formation.
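For readers unfamiliar with this class of proxy models, the short Python sketch below illustrates the kind of threshold-plus-minimum nonlinearity that a VS-Lite-style forward model applies to monthly climate; the ramp thresholds, parameter values and synthetic data are illustrative assumptions, not the dissertation's calibrated model.

import numpy as np

def ramp(x, lo, hi):
    """Piecewise-linear growth response: 0 below lo, 1 above hi, linear in between."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def ring_width_index(T, M, T1=4.0, T2=17.0, M1=0.02, M2=0.25):
    """Annual ring-width proxy from monthly temperature T (deg C) and soil moisture M (v/v).

    T and M have shape (n_years, 12). Monthly growth is limited by the more stressful
    of the two responses (a Liebig-style minimum), then integrated over the year.
    The thresholds are illustrative, not calibrated values.
    """
    gT = ramp(T, T1, T2)                 # nonlinearity 1: threshold growth response
    gM = ramp(M, M1, M2)
    g = np.minimum(gT, gM)               # nonlinearity 2: limiting-factor principle
    annual = g.sum(axis=1)
    return (annual - annual.mean()) / annual.std()   # standardise to an index

# Toy usage: 50 years of synthetic monthly climate.
rng = np.random.default_rng(0)
T = 10 + 8 * np.sin(np.linspace(0, 2 * np.pi, 12)) + rng.normal(0, 2, (50, 12))
M = np.clip(0.15 + rng.normal(0, 0.05, (50, 12)), 0, 0.5)
print(ring_width_index(T, M)[:5])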
3

Angeloni, Michela <1993>. "Climate variability in an Earth system Model of Intermediate Complexity: from interannual to centennial timescales." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10152/1/plasim.pdf.

Abstract:
This thesis explores the climate mean state and climate variability reproduced by atmosphere-ocean coupled configurations of the Planet Simulator (PlaSim), an Earth-system Model of Intermediate Complexity (EMIC). In particular, the sensitivity to variations in oceanic parameters is explored in three atmosphere-ocean coupled configurations: using a simple mixed-layer (ML) ocean at two horizontal resolutions (T21 - 600 km and T42 - 300 km) or a more complex dynamical ocean, the Large Scale Geostrophic (LSG) ocean, at T21 atmospheric horizontal resolution. Sensitivity experiments allow us to identify a reference oceanic diffusion coefficient in the ML ocean and a vertical oceanic diffusion profile in LSG, which ensure a simulated climate in good agreement with the present climate. For each model configuration, the Equilibrium Climate Sensitivity (ECS) is estimated from simulations with an increased CO2 concentration compared to pre-industrial simulations. The resulting ECS values are higher than values estimated in other EMICs or in models of the Coupled Model Intercomparison Project Phase 5 (CMIP5) and Phase 6 (CMIP6), especially in the PlaSim-ML configurations. The climate variability of the model is then explored on different timescales, from the centennial to the interannual.
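As a point of reference for the ECS values discussed above, here is a minimal sketch of how an equilibrium climate sensitivity is commonly backed out of a pair of equilibrated runs; the logarithmic rescaling to a CO2 doubling and the toy numbers are assumptions, not PlaSim output.

import numpy as np

def equilibrium_climate_sensitivity(t_control, t_perturbed, co2_control, co2_perturbed):
    """Estimate ECS (warming per CO2 doubling) from two equilibrated simulations.

    t_control and t_perturbed are global-mean surface temperature series taken from
    the ends of the runs, assumed close to steady state. Because CO2 forcing is
    approximately logarithmic in concentration, the warming is rescaled to one doubling.
    """
    delta_t = np.mean(t_perturbed) - np.mean(t_control)
    doublings = np.log(co2_perturbed / co2_control) / np.log(2.0)
    return delta_t / doublings

# Toy usage: pre-industrial control vs. a 2xCO2-like run (numbers are made up).
rng = np.random.default_rng(1)
t_pi = 14.0 + rng.normal(0, 0.1, 100)   # last century of the control run
t_2x = 18.5 + rng.normal(0, 0.1, 100)   # last century of the perturbed run
print(f"ECS ~ {equilibrium_climate_sensitivity(t_pi, t_2x, 280.0, 560.0):.2f} K")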
4

Biro, Daniel. "Towards intermediate complexity systems biology models of bacterial growth and evolution." Thesis, Yeshiva University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10798623.

Abstract:

Modern biological research is currently canalized into two main modes of research: detailed, mechanistic descriptions, or big data collection and statistical descriptions. The former has the advantage of being conceptually tractable and fitting into an existing scientific paradigm. However, these detailed descriptions can suffer from an inability to be understood in the larger context of biological phenomena. On the other hand, the big data approaches, while closer to being able to capture the full depth of biological complexity, are limited in their ability to impart conceptual understanding to researchers. We put forward examples of an intermediate approach. The goal of this approach is to develop models which can be understood as abstractions of biological phenomena, while simultaneously being conducive to modeling and computational approaches. Firstly, we attempt to examine the phenomenon of modularity. Modularity is a ubiquitous phenomenon in biological systems, but its etiology is poorly understood. It has been previously shown that organisms that evolved in environments with lower levels of stability tend to display more modular organization of their gene regulatory networks, although theoretical predictions have failed to account for this. We put forward a neutral evolutionary model, in which we posit that the process of genome expansion through gene duplications acts as a driver for the evolution of modularity. This process occurs through the duplication of regulatory elements alongside the duplication of a gene, causing sub-networks to be generated which are more tightly coupled internally than externally, which gives rise to a modular architecture. Finally, we also generate an experimental system by which we can verify our model of the evolution of modularity. Using a long-term experimental evolution setup, we evolve E. coli under fluctuating temperature environments for 600 generations in order to test if there is a measurable increase in the modularity of the gene regulatory networks of the organisms. This data will also be used in the future to test other hypotheses related to evolution under fluctuating environments. The second such model is a computational model of the properties of bacterial growth as a function of temperature. We describe a model composed of a chain of enzyme-like actions, where the output of each enzyme in the chain becomes the substrate of the following enzyme. Using well-known temperature dependence curves for enzyme activity and no further assumptions, we are then able to replicate the salient properties of bacterial growth curves at varying temperatures, including lag time, carrying capacity, and growth rate. Lastly, we extend these models to attempt to describe the ability of cancer cells to alter their phenotypes in ways that would be impossible for normal cells. We term this model the phenotypically pliant cells model and show that it can encapsulate important aspects of cancer cell behavior.
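The enzyme-chain growth model lends itself to a compact illustration. The Python sketch below is a toy reading of that idea (a linear chain of temperature-dependent enzyme-like steps feeding a logistic growth term); the activity curve, chain length and rate constants are invented for illustration and are not the model developed in the thesis.

import numpy as np

def enzyme_activity(T, T_opt, width=8.0, T_denat=45.0, steep=1.5):
    """Toy temperature-activity curve: Gaussian activation times a denaturation cutoff."""
    activation = np.exp(-((T - T_opt) / width) ** 2)
    denaturation = 1.0 / (1.0 + np.exp((T - T_denat) / steep))
    return activation * denaturation

def grow(T, n_enzymes=5, hours=24.0, dt=0.01, K_cap=1.0, b0=1e-3):
    """Biomass growth limited by flux through a chain of enzyme-like steps.

    Each step converts its substrate pool at a rate set by its temperature-dependent
    activity; the output of step i is the substrate of step i+1, and the final
    product fuels logistic biomass growth.
    """
    T_opts = np.linspace(25.0, 40.0, n_enzymes)    # staggered optima (assumption)
    acts = enzyme_activity(T, T_opts)
    pools = np.zeros(n_enzymes)
    biomass, trajectory = b0, []
    for _ in range(int(hours / dt)):
        rates = acts * pools                       # first-order toy kinetics
        pools[0] += (1.0 - rates[0]) * dt          # constant nutrient supply feeds step 1
        pools[1:] += (rates[:-1] - rates[1:]) * dt
        biomass += rates[-1] * biomass * (1 - biomass / K_cap) * dt
        trajectory.append(biomass)
    return np.array(trajectory)

# Toy usage: final biomass after 24 h at three temperatures (lag and rate differ with T).
for temp in (20.0, 30.0, 42.0):
    print(temp, round(float(grow(temp)[-1]), 4))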

5

Grancini, Carlo. "Initial validation of an agile coupled atmosphere-ocean general circulation model." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25439/.

Abstract:
Mathematical models based on physics, chemistry and biology principles are one of the main tools to understand climate interactions, variability and sensitivity to forcings. Model performance must be validated by checking that results are consistent with the actual, observed climate. This work describes the initial validation of a new intermediate-complexity coupled climate model based on a set of existing atmosphere, ocean and sea-ice models. The model, developed and made available by the International Centre for Theoretical Physics (ICTP), is based on the widely used SPEEDY atmospheric model. Limited literature is available for the version coupled to the NEMO ocean model, referred to as SPEEDY-NEMO. The focus of this study is on the adaptation and validation of this model. A long-term spin-up run with constant present-day forcing has been performed to achieve a steady-state climate. The simulated climate has then been compared with observations and reanalyses of the recent past. The initial validation has shown that simulations spanning a thousand years can be easily run. The model does not require many hardware resources, and therefore samples of significant size can be generated if needed. Our results show that long-timescale, stable simulations are feasible. The model reproduces the main features of Earth's mean climate and variability, despite the use of a fairly limited resolution grid, simple parameterizations and a limited range of physical processes. Ocean model outputs have not been assessed; however, a clear El Niño signal in the simulated Sea Surface Temperature (SST) data and the Arctic sea ice extent show that the ocean model behaviour is close to observations. According to the results, the model is a promising tool for climate studies. However, to understand its full potential, the validation should be improved and extended with an analysis of ocean variables and with targeted simulations under modified conditions to evaluate model behaviour in different settings.
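To make the El Niño check concrete, here is a minimal sketch of the standard kind of diagnostic: a Niño3.4-style area average of monthly SST, anomalies relative to the mean seasonal cycle, and a short running mean. The grid, region handling and synthetic data are assumptions about generic model output, not the SPEEDY-NEMO configuration.

import numpy as np

def nino34_index(sst, lats, lons):
    """Niño3.4-style index from monthly SST of shape (time, lat, lon), in deg C.

    Area-averages SST over 5S-5N, 170W-120W, removes the mean seasonal cycle
    (assuming whole years of data), and applies a 5-month running mean.
    """
    lat_mask = (lats >= -5) & (lats <= 5)
    lon_mask = (lons >= 190) & (lons <= 240)        # 170W-120W on a 0-360 grid
    box = sst[:, lat_mask][:, :, lon_mask].mean(axis=(1, 2))
    climatology = box.reshape(-1, 12).mean(axis=0)
    anomalies = box - np.tile(climatology, box.size // 12)
    return np.convolve(anomalies, np.ones(5) / 5, mode="same")

# Toy usage on synthetic data: 30 years of monthly fields on a 2-degree grid.
rng = np.random.default_rng(2)
lats = np.arange(-88.0, 90.0, 2.0)
lons = np.arange(0.0, 360.0, 2.0)
sst = 20 + rng.normal(0, 0.5, (30 * 12, lats.size, lons.size))
print(nino34_index(sst, lats, lons)[:6].round(2))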
6

Simmons, Christopher. "An investigation of carbon cycle dynamics since the last glacial maximum using a climate model of intermediate complexity." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=121260.

Abstract:
The University of Victoria Earth System Climate Model (UVic ESCM) v. 2.9 is used in this thesis to investigate two important topics in paleoclimate research: the glacial-to-interglacial rise in CO2 and the Holocene carbon cycle. The UVic ESCM belongs to a class of models known as Earth system Models of Intermediate Complexity (EMICs) (Claussen et al. 2002) and provides a simplified yet comprehensive representation of the climate system and carbon cycle dynamics, including a three-dimensional ocean model, a dynamic-thermodynamic sea ice model, a dynamic global vegetation model, ocean sediments, and a fully interactive inorganic and organic carbon cycle in the ocean. First, a suite of transient simulations was conducted to cover the period from the Last Glacial Maximum (LGM) to the present (2000 A.D.). Simulations including only prescribed orbital forcing and continental ice sheet changes failed to produce an increase in atmospheric CO2 for the simulation period, although they demonstrated significant long-term sensitivity (10-15 ppm) to small (1.9 Tmol yr-1) variations in the weathering rate. Modelling experiments incorporating the full CO2 radiative forcing effect since the Last Glacial Maximum, however, resulted in much higher CO2 concentrations (a 20 ppm increase over those without CO2 radiative forcing) due to a greater ventilation of deep-ocean DIC and decreased oceanic CO2 uptake, related in part to a larger decrease in southern hemisphere sea ice extent. The more thorough ventilation of the deep ocean in simulations with CO2 radiative forcing also caused a larger net alkalinity decrease during the late deglacial and interglacial, allowing atmospheric CO2 to increase by an additional 10 ppm in the simulations presented here. The inclusion of a high latitude terrestrial carbon reservoir provided a net release of carbon to the atmosphere, mostly during the early deglacial, increasing atmospheric CO2 levels to 240-250 ppm. This terrestrial release also provided better agreement with observed changes in carbonate concentrations in the deep ocean since the LGM (Yu et al. 2010). The addition of freshwater fluxes from ice sheet melting in North America added emphasis to the importance of a lower weathering rate during the LGM and early deglacial, and indicated that deep water in the North Pacific may become more positively buoyant during freshwater fluxes in the Atlantic due to greater diffusion of heat to the deep ocean by enhanced Pacific intermediate water formation. Second, our results for the Holocene carbon cycle indicate that atmospheric CO2 should decrease between 6000 B.C. and 2000 A.D. without some kind of external forcing not represented in the model. However, the amount of the decrease (8-15 ppm) varied for different ocean circulation states. Furthermore, our simulations demonstrated significant sensitivity to Antarctic marine ice shelves, and these results indicate that more extensive marine ice shelves during the Holocene (relative to previous interglacials) may increase atmospheric CO2 levels by ~5 ppm (from purely physical mechanisms) and by as much as 10 ppm when different ocean circulation states or alkalinity changes are included. The addition of various anthropogenic land use scenarios to the Holocene carbon cycle was unable to explain the CO2 trend, accounting for only a third of the ice core CO2 increase by 1 A.D. in our most extreme scenario.
However, the results imply that external mechanisms leading to a decrease in alkalinity during the Holocene (such as declining weathering rates, more extensive marine ice shelves, terrestrial uptake, more calcifiers, coral reef expansion, etc.) may prevent the ocean from absorbing more of the anthropogenic terrestrial release, allowing the deforestation flux to balance a greater fraction of the Holocene peatland uptake (not modelled) and permitting CO2 to increase from oceanic processes that are normally overwhelmed by northern peatlands.
7

Hoar, Mark Robert. "Statistical downscaling from an earth system model of intermediate complexity to reconstruct past climate gradients across the British Isles." Thesis, University of East Anglia, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.396707.

8

Gomes, Hélène. "Gestion écosystémique et durabilité des pêcheries artisanales tropicales face aux changements globaux." Electronic Thesis or Diss., Guyane, 2022. http://www.theses.fr/2022YANE0004.

Abstract:
Global changes induce high pressure on marine ecosystems, biodiversity and fisheries. In that regard, many scientists advocate the use of ecosystem-based fisheries management (EBFM). However, the operationalization of such an ecosystem-based approach remains challenging. This thesis gives insight into the operationalization of EBFM for tropical coastal fisheries. To achieve that, we propose a multi-species, multi-fleet and multi-criteria model of intermediate complexity (MICE), taking into account the impacts of global changes. The model is calibrated for the Guyanese small-scale coastal fishery. At the local scale, global warming, population growth and variations in mangrove surface area are considered the main drivers of global change. From the calibrated model, several fishing management strategies and environmental scenarios are compared in the long run. In this context, the first published results (chapter 3) show the detrimental impact of climate change on both marine biodiversity and fishery production. This paper also highlights the major role of ecological competition between species. Then, in chapter 4, by comparing the bio-economic results obtained under each fishing management strategy, this research demonstrates the interest of eco-viability strategies in terms of sustainability and ecologico-economic reconciliation. The last results presented in this thesis, in chapter 5, underline the positive impact of the mangrove on the ecologico-economic sustainability of the coastal fishery, even if it is insufficient to balance the negative impact of warming. Beyond these results, this thesis brings a series of important transverse contributions. First, methodologically, this research shows the benefits of MICE for operationalizing EBFM. Then, by highlighting the major ecological factors of the ecosystem, namely the competitive interaction on the one hand and the environmental filters on the other, the work sheds light on the ecological complexities necessary for EBFM. Finally, by evaluating and comparing the ecologico-economic performance of several fishing strategies, this research outlines policy recommendations for moving towards the sustainability of the Guyanese coastal fishery and towards EBFM in the face of global changes.
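As a stylised illustration of the eco-viability logic compared in this work, the sketch below simulates a small, harvested, competing community and checks whether biomass and profit constraints hold over the whole horizon; the species interactions, parameter values and thresholds are invented for illustration and are not the calibrated Guyanese MICE.

import numpy as np

def viable(harvest_rates, years=50, b_min=0.2, profit_min=0.05):
    """Check eco-viability of constant harvest rates on a two-species competing stock.

    Dynamics: discrete logistic growth with interspecific competition and proportional
    harvesting. Viability requires every year's biomasses to stay above b_min and the
    fleet's aggregate profit to stay above profit_min.
    """
    r = np.array([0.6, 0.4])                    # intrinsic growth rates (assumed)
    alpha = np.array([[1.0, 0.5], [0.6, 1.0]])  # competition matrix (assumed)
    price, cost = np.array([1.0, 1.5]), 0.3
    b = np.array([0.5, 0.5])                    # biomass relative to carrying capacity
    for _ in range(years):
        catch = harvest_rates * b
        profit = np.sum(price * catch) - cost * np.sum(harvest_rates)
        b = b + r * b * (1 - alpha @ b) - catch
        if np.any(b < b_min) or profit < profit_min:
            return False
    return True

# Toy usage: screen a grid of harvest-rate pairs for viable strategies.
grid = np.linspace(0.0, 0.4, 9)
viable_pairs = [(h1, h2) for h1 in grid for h2 in grid if viable(np.array([h1, h2]))]
print(len(viable_pairs), "viable harvest-rate combinations out of", grid.size ** 2)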
9

Schuster, Swetlana. "Lexical gaps and morphological complexity : the role of intermediate derivational steps." Thesis, University of Oxford, 2018. http://ora.ox.ac.uk/objects/uuid:41346813-951f-4284-9fe1-39bc2231999b.

Abstract:
In this thesis, we present a multi-method investigation of how lexical gaps, defined here as morphologically and phonologically viable formations, are processed in derivational chains. Due to a focus on the processing of single-affixed words in the experimental literature, little is known about the role of intermediate steps of derivation during morphological decomposition. In a series of four behavioural experiments, we show that while all morphologically well-formed items activate a base word that is two derivational steps away, speakers are sensitive to the internal composition of visually matched novel forms. Items like *Spitzung (spitz > spitzen > *Spitzung) primed their stem more than pseudowords containing two lexical gaps in their derivational chain such as hübsch > *hübschen > *Hübschung. Similar patterns emerged in an ERP (Event-related potentials) experiment using cross-modal priming: novel forms in the *Spitzung set displayed significantly stronger attenuation of the N400 response to the target spitz than items for which the intermediate position in the derivational chain is a lexical gap such as *Hübschung, thereby demonstrating a stronger link between pairs without a lexical gap in the intermediate position. Building on previous neuroimaging research on the processing of derivational depth in morphological complexity (cf. Meinzer et al., 2009; Pliatsikas et al., 2014), we subsequently turned to a functional magnetic resonance imaging investigation of the neural correlates of morphological complexity processing with lexical gaps. Both sets of pseudowords showed greater activation in the left inferior frontal gyrus relative to existing complex words as an index of prolonged lexical search. A direct comparison between the two sets of novel forms revealed stronger activation in the right superior parietal lobule and precuneus for pseudowords with lexical gaps in the intermediate position. These findings lend support to the idea that morphological decomposition involves the inspection of intermediate levels of morphological composition as a stepwise procedure that is informed by the structural rules of the language.
10

Laurence, Harold A. IV. "An exploratory study of cognitive complexity at a military intermediate service school." Diss., Kansas State University, 2015. http://hdl.handle.net/2097/20515.

Abstract:
Doctor of Philosophy
Educational Leadership
Sarah Jane Fishback
The military devotes significant resources and time to the development of officers through education. Recently, a great deal of emphasis has been placed on military Intermediate Service Schools (ISSs) to enhance the ability of graduates to think with greater cognitive complexity in order to solve the kinds of problems they may face after graduation. The military environment in which these mid-career officer students will serve is highly complex and requires a significant ability to generate solutions to unique and complex problems. One hallmark of a developmental adult educational experience is the advancement of the student to higher levels of cognitive complexity. The purpose of this research was to determine whether there was a relationship between the cognitive complexity of faculty, students, and expectations for student graduates at a military Intermediate Service School. Along with the simultaneous measure of cognitive complexity via a survey administration of the LEP instrument, the researcher also developed a technique for translating learning objectives from Bloom's taxonomy into a corresponding Perry position. This translation method was used to translate the college's learning objectives into an expected Perry position for graduates of the college. The study also included demographic data to look for significant results regarding a number of independent variables. For faculty only, these included teaching department, years of teaching experience, age, and military status. For both populations the variables studied included education level, gender, combat experience and combat trauma, branch of service, commissioning source, and years of active duty service. The study found that the mean cognitive complexity of entering students (CCI = 360) was lower than the cognitive complexity required of graduates (CCI = 407). However, the faculty mean cognitive complexity (CCI = 398) was not significantly different from that of a student graduate. The faculty results indicated that there were no statistically significant relationships between the independent variables studied and measured cognitive complexity. For students, there was a statistically significant relationship between measured cognitive complexity and gender.
11

Hawker, Craig Jon. "Model studies on the spiro intermediate." Thesis, University of Cambridge, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.293833.

12

Gomaa, Walid. "Model theory and complexity theory." College Park, Md. : University of Maryland, 2007. http://hdl.handle.net/1903/7227.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
13

Addy, Robert. "Cost of complexity : mitigating transition complexity in mixed-model assembly lines." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/126942.

Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, in conjunction with the Leaders for Global Operations Program at MIT, May, 2020
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, in conjunction with the Leaders for Global Operations Program at MIT, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (page 72).
The Nissan Smyrna automotive assembly plant is a mixed-model production facility which currently produces six different vehicle models. This mixed-model assembly strategy enables the production levels of different vehicles to be adjusted to match changing market demand, but it necessitates a trained workforce familiar with the different parts and processes required for each vehicle. Currently, the mixed-model production process is not batched; assembly line technicians might switch between assembling different vehicles several times every hour. When a switch, or 'transition', occurs between different models, variations in the defect rate can occur as technicians must familiarize themselves with a different set of parts and processes. This thesis identifies this confusion as the consequence of 'transition' complexity, which results not only from variety but also from familiarity: how quickly a new situation can be recognized, and how quickly associates can remember what to do and recover the skills needed to succeed. Recommendations follow to mitigate the impact of transition complexity on associate performance, thereby improving vehicle production quality. Transition complexity is an important factor in determining the performance of the assembly system (with respect to defect rates) and could supplement existing models of complexity measurement in assembly systems. Several mitigation measures at the assembly plant level are recommended to limit the impact of transition complexity on system performance. These measures include improvements to the offline kitting system to reduce errors, such as reconfiguring the physical layout and implementing a visual error detection system. Additionally, we recommend altering the production scheduling system to ensure low-volume models are produced at more regular intervals and with consistently low sequence gaps.
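The last recommendation points towards level scheduling. The sketch below is a generic goal-chasing-style heuristic that spaces each model's units as evenly as possible across the production sequence, so that low-volume models recur at regular intervals rather than in rare bursts; it illustrates the idea and is not Nissan's scheduling system.

from collections import Counter

def level_schedule(demand):
    """Build a mixed-model sequence keeping each model close to its ideal cumulative share.

    demand: dict model -> units required. At every slot, pick the model whose actual
    production lags furthest behind its ideal (demand-proportional) cumulative output.
    """
    total = sum(demand.values())
    produced = Counter()
    sequence = []
    for slot in range(1, total + 1):
        lag = {m: slot * q / total - produced[m]
               for m, q in demand.items() if produced[m] < q}
        pick = max(lag, key=lag.get)
        produced[pick] += 1
        sequence.append(pick)
    return sequence

# Toy usage: one low-volume model (C) mixed with two high-volume models.
seq = level_schedule({"A": 10, "B": 6, "C": 2})
print("".join(seq))
c_slots = [i for i, m in enumerate(seq) if m == "C"]
print("sequence gap between C units:", [j - i for i, j in zip(c_slots, c_slots[1:])])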
14

Villegas, Miguel E. "A quality management system complexity model." Thesis, Birmingham City University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433968.

15

Adamu-Fika, Fatimah. "LnCm fault model : complexity and validation." Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/92013/.

Abstract:
Computer systems are ubiquitous in most aspects of our daily lives, and as such the reliance of end users upon their correct and timely functioning is on the rise. With technology advancement, the functionality of these systems is increasingly being defined in software. On the other hand, feature sizes have drastically decreased, while feature density has increased. These hardware trends will continue as technology advances. Consequently, power supply voltage is ever decreasing while clock frequency and temperature hotspots are increasing. This steady reduction of integration scales is increasing the sensitivity of computer systems to different kinds of hardware faults. In particular, the likelihood of a single high-energy ion causing double bit upsets (DBUs, due to its energy) or multiple bit upsets (MBUs, due to the incident angle) instead of single bit upsets (SBUs) is increasing. Furthermore, the likelihood of perturbations occurring in the logic circuits is also increasing. Owing to these hardware trends, it has been projected that computer systems will expose such hardware faults to the software level, and the software is accordingly expected to tolerate such perturbations to maintain correct operation, i.e., the software needs to be dependable. Thus, defining and understanding the potential impact of such faults is required in order to propose the right mechanisms to tolerate their occurrence. To ascertain that software is dependable, it is important to validate the software system. This is achieved through the emulation of the type of faults that are likely to occur in the field during execution of the system, and through studying the effects of these faults on the system. Often, this validation process uses a technique called fault injection, which artificially perturbs the execution of the system through the emulation of hardware faults. Traditionally, the single bit-flip (SBF) model is used for emulating single event upsets (SEUs) and single event transients (SETs) in dependability validation. The model assumes that only one SEU or SET occurs during a single execution of the system. However, with MBUs becoming more prominent, the accuracy of the SBF model is limited; hence the need to include MBUs in software system dependability validation. MBUs may occur as multiple bit errors (MBEs) in a single location (memory word or register) or as single bit errors (SBEs) in several locations. Likewise, they may occur as MBEs in several locations. In the context of software-implemented fault injection (SWIFI), the injection of MBUs into all variables is infeasible due to the exponential size of the fault space, making it necessary to carefully select the fault injection points that maximise the probability of causing a failure. A fault space is the set of all possible faults under a given fault model. Consequently, research has started looking at a more tractable model, double bit upsets (DBUs) in the form of double bit-flips within a single location, L1C2. However, with evidence that corruption can occur chip-wide, the applicability and accuracy of L1C2 are restricted. This research therefore focuses on MBUs occurring across multiple locations, whilst seeking to address the exponential fault space problem associated with multiple fault injections. In general, the thesis analyses the complexity of selecting efficient fault-injection locations for injecting multiple MBUs.
In particular, it formalises the problem of multiple bit-flip injections and shows that the problem is NP-complete. There are various ways of addressing this complexity: (i) look for specific cases, (ii) look for heuristics, and/or (iii) weaken the problem specification. The thesis then presents one approach for each of these means of addressing complexity:
- For the specific-cases approach, the thesis presents a novel DBU fault model that manifests as two single bit-flips across two locations. In particular, the research examines the relevance of the L2C1 fault model for system validation. It is found that the L2C1 fault model induces failure profiles that differ from the profiles induced by existing fault models.
- For the heuristic approach, the thesis uses a dependency-aware fault injection strategy to extend the L2C1 fault model and the existing L1C2 fault model into the LnCm (multiple location, multiple corruption) fault model, where n is the number of locations to target and m the maximum number of corruptions to inject in a given location. It proposes two heuristics to achieve this (first select the set of potential locations, then select the subset of variables within these locations) and examines the applicability of the proposed framework.
- For the weakened-problem-specification approach, the thesis further refines the fault space and proposes a data mining approach to reduce the cost of multiple fault injection campaigns (in terms of the number of multiple fault injection experiments performed). It presents an approach to refining the multiple fault injection points by identifying a subset of these points such that injection into this subset alone is as efficient as injection into the entire set.
These contributions are instrumental in advancing multiple fault injection and making it an effective and practical approach for software system validation.
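To make the fault model concrete, the sketch below shows what an LnCm-style software-implemented injection amounts to: pick n target locations and flip up to m bits in each. The byte-buffer target and the random selection policy are illustrative only; the thesis's contribution is precisely about choosing such injection points non-randomly.

import random

def inject_lncm(memory: bytearray, n_locations: int, m_corruptions: int, seed=None):
    """Emulate an LnCm multiple-bit upset: n distinct locations, up to m bit-flips each.

    memory is mutated in place; the return value lists (offset, flipped_bit_positions)
    so the injected fault can be related to any observed failure later on.
    """
    rng = random.Random(seed)
    offsets = rng.sample(range(len(memory)), n_locations)
    fault = []
    for off in offsets:
        bits = rng.sample(range(8), rng.randint(1, m_corruptions))
        for b in bits:
            memory[off] ^= 1 << b          # the actual upset: XOR flips the chosen bit
        fault.append((off, sorted(bits)))
    return fault

# Toy usage: a 64-byte "memory" image, L2C1-style fault (2 locations, 1 bit-flip each).
image = bytearray(range(64))
print(inject_lncm(image, n_locations=2, m_corruptions=1, seed=42))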
16

Goodrich, David Charles. "Basin Scale and Runoff Model Complexity." Department of Hydrology and Water Resources, University of Arizona (Tucson, AZ), 1990. http://hdl.handle.net/10150/614028.

Abstract:
Distributed rainfall-runoff models are gaining widespread acceptance; yet a fundamental issue that must be addressed by all users of these models is the definition of an acceptable level of watershed discretization (geometric model complexity). The level of geometric model complexity is a function of basin and climatic scales as well as the availability of input and verification data. Equilibrium discharge storage is employed to develop a quantitative methodology to define a level of geometric model complexity commensurate with a specified level of model performance. Equilibrium storage ratios are used to define the transition from overland to channel-dominated flow response. The methodology is tested on four subcatchments in the USDA-ARS Walnut Gulch Experimental Watershed in southeastern Arizona. The catchments cover a range of basin scales of over three orders of magnitude. This enabled a unique assessment of watershed response behavior as a function of basin scale. High-quality, distributed rainfall-runoff data were used to verify the model (KINEROSR). Excellent calibration and verification results provided confidence in subsequent model interpretations regarding watershed response behavior. An average elementary channel support area of roughly 15% of the total basin area is shown to provide a watershed discretization level that maintains model performance for basins ranging in size from 1.5 to 631 hectares. A detailed examination of infiltration, including the role and impacts of incorporating small-scale infiltration variability, in a distributional sense, into KINEROSR over a range of soil and climatic scales, was also carried out. The impacts of infiltration and channel losses on runoff response increase with increasing watershed scale as the relative influence of storms is diminished in a semiarid environment such as Walnut Gulch. In this semiarid environment, characterized by ephemeral streams, watershed runoff response does not become more linear with increasing watershed scale but appears to become more nonlinear.
17

Browning, Alexander P. "Model complexity in biology and bioengineering." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/227787/1/Alexander_Browning_Thesis.pdf.

Abstract:
In biology and bioengineering, mathematical and statistical analysis provides an understanding of biological systems that enables their control and manipulation. Tailoring mathematical and experimental complexity to the biological question of interest is crucial to avoid issues relating to parameter identifiability. We develop models and tools to bring new data-based insights to a range of contemporary problems in biology and bioengineering. These include data-focused stochastic models that describe complex cell interactions and decision making, incorporating biological systems into new engineered materials, and new tools to diagnose parameter identifiability and guide model complexity for stochastic differential equation models.
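One common diagnostic behind the identifiability tools mentioned above is the profile likelihood. The sketch below applies it to a deliberately non-identifiable toy model in which only the product of two parameters enters the data, so the profile comes out essentially flat; the model, noise level and grids are assumptions made purely for illustration.

import numpy as np

def neg_log_likelihood(a, b, x, y, sigma=0.1):
    """Gaussian negative log-likelihood for the toy model y = a*b*x + noise."""
    residuals = y - a * b * x
    return 0.5 * np.sum((residuals / sigma) ** 2)

def profile_a(a_grid, x, y, b_grid=np.linspace(0.1, 10, 400)):
    """Profile likelihood of a: for each fixed a, minimise over the nuisance parameter b."""
    return np.array([min(neg_log_likelihood(a, b, x, y) for b in b_grid) for a in a_grid])

# Toy data generated with a*b = 2.0; only the product is constrained by the data.
rng = np.random.default_rng(3)
x = np.linspace(0, 1, 50)
y = 2.0 * x + rng.normal(0, 0.1, x.size)

a_grid = np.linspace(0.5, 4.0, 30)
profile = profile_a(a_grid, x, y)
print("spread of profiled negative log-likelihood:", round(float(profile.max() - profile.min()), 3))
# A near-zero spread means the data cannot distinguish values of a: a is practically non-identifiable.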
18

Chen, Yijia. "Model-checking problems, machines and parameterized complexity." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=972341285.

19

Stein, Teia N. "Border security: a conceptual model of complexity." Thesis, Monterey, California: Naval Postgraduate School, 2013. http://hdl.handle.net/10945/39015.

Abstract:
CHDS State/Local
This research applies complexity and system dynamics theory to the idea of border security, culminating in the development of a conceptual model that can be used to expand exploration of unconventional leverage points, better understand the holistic implications of border policies, and improve sense-making for homeland security. How can border security be characterized in order to better understand what it is, and why are so many divergent opinions being voiced on whether it can be achieved? By demonstrating the border as a complex adaptive system (CAS) through the use of graphic system dynamics models, exploring by way of example the influences surrounding the movement of trade and transnational terrorists across borders, four policy-centric pillars became evident: 1) institutional capacity, 2) criminal capacity, 3) the ability to move people and goods across borders rapidly, and 4) operational capacity. Culture, identity, adversarial adaptation, enforcement, and moral values influence, and are influenced by, perceptions of what are seen as threats. This research illustrates the value of thinking in systems (instead of missions or programs), challenges assumptions about what borders and border security are thought to be, and intends to inspire creativity in thinking about 21st century borders: what they represent and the challenges they pose.
20

Tierno, Jose Andres Martin Alain J. "An energy-complexity model for VLSI computations /." Diss., Pasadena, Calif. : California Institute of Technology, 1995. http://resolver.caltech.edu/CaltechETD:etd-10252007-094408.

21

Fan, Yun. "ENSO prediction and predictability in an intermediate coupled model." Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390461.

22

Mena, Carlos H. "Complexity in organisations : a conceptual model : executive summary." Thesis, University of Warwick, 2003. http://wrap.warwick.ac.uk/1257/.

Abstract:
Industrial organisations face uncertainty created by consumers, suppliers, competitors and other environmental factors. To deal with this uncertainty, managers have to coordinate the resources of the organisation to produce a variety of behaviours that can cope with environmental change. An organisation that does not have sufficient internal complexity to adapt to the environment cannot survive, while, an organisation with excessive complexity would waste resources and might lose its ability to react to the environment. The main objective of the research was to create a model for dealing with complexity and uncertainty in organisations. The initial ideas for the model originated from the literature, particularly in the fields of systems and complexity theory. These initial ideas were developed through a series of five case studies with four companies, namely British Airways, British Midlands International (BMI), HS Marston and the Ford Motor Company. Each case study contributed to the development of the model, as well as providing immediate benefits for the organisations involved. The first three case studies were used in the development of the model, by analysing the way managers made decisions in situations of complexity and uncertainty. For the final two case studies, the model was already developed and it was possible to apply it, using these cases as a means of validation. A summary of the case studies is presented here, highlighting their contributions to the creation and testing of the model. The main innovation of the research was the creation and application of the Complexity-Uncertainty model, a descriptive framework that classifies generic strategies for dealing with complexity and uncertainty in organisations. The model considers five generic strategies: automation, simplification, planning, control and self-organisation, and indicates when each of these strategies can be more effective according to the complexity and uncertainty of the situation. This model can be used as a learning tool to help managers in industry to conceptualise the nature of complexity in their organisation, in relation to the uncertainty in the environment. The model shows managers the range of strategic options that are available under a particular situation, and highlights the benefits and limitations of each of these strategic options. This is intended to help managers make better decisions based on a more holistic understanding of the organisation, its environment and the strategies available.
23

Ciucanu, Radu. "Cross-model queries and schemas : complexity and learning." Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10056/document.

Abstract:
Specifying a database query using a formal query language is typically a challenging task for non-expert users. In the context of big data, this problem becomes even harder because it requires the users to deal with database instances of large size that are hence difficult to visualize. Such instances usually lack a schema to help the users specify their queries, or have an incomplete schema as they come from disparate data sources. In this thesis, we address the problem of query specification for non-expert users. We identify two possible approaches for tackling this problem: learning queries from examples and translating the data into a format that the user finds easier to query. Our contributions are aligned with these two complementary directions and span three of the most popular data models: XML, relational, and graph. This thesis consists of two parts, dedicated to (i) schema definition and translation, and (ii) learning schemas and queries. In the first part, we define schema formalisms for unordered XML and analyze their computational properties; we also study the complexity of the data exchange problem in the setting of a relational source and a graph target database. In the second part, we investigate the problem of learning from examples the schemas for unordered XML proposed in the first part, as well as relational join queries and path queries on graph databases. The interactive scenario that we propose for these two classes of queries is immediately applicable to assisting non-expert users in the process of query specification.
24

Haase, Christoph. "On the complexity of model checking counter automata." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:f43bf043-de93-4b5c-826f-88f1bd4c191d.

Abstract:
Theoretical and practical aspects of the verification of infinite-state systems have attracted a lot of interest in the verification community throughout the last 30 years. One goal is to identify classes of infinite-state systems that admit decidable decision problems on the one hand, and which are sufficiently general to model systems, programs or protocols with unbounded data or recursion depth on the other hand. The first part of this thesis is concerned with the computational complexity of verifying counter automata, which are a fundamental and widely studied class of infinite-state systems. Counter automata consist of a finite-state controller manipulating a finite number of counters ranging over the naturals. A classic result by Minsky states that reachability in counter automata is undecidable already for two counters. One restriction that makes reachability decidable and that this thesis primarily focuses on is the restriction to one counter. A main result of this thesis is to show that reachability in one-counter automata with counter updates encoded in binary is NP-complete, which solves a problem left open by Rosier and Yen in 1986. We also consider parametric one-counter automata, in which counter updates can be parameters ranging over the naturals. Reachability for this class asks whether there are values of the parameters such that a target configuration can be reached from an initial configuration. This problem is also shown to be NP-complete. Subsequently, we establish decidability and complexity results of model checking problems for one-counter automata with and without parameters for specifications written in EF, CTL and LTL. The second part of this thesis is about the verification of programs with pointers and linked lists in the framework of separation logic. We consider the fragment of separation logic introduced by Berdine, Calcagno and O'Hearn in 2004 and the problem of deciding entailment between formulae of this fragment. We improve the known coNP upper bound and show that this problem can actually be solved in polynomial time. This result is based on a novel approach in which we represent separation logic formulae as graphs and decide entailment between them by checking for the existence of a graph homomorphism. We complement this result by considering various natural extensions of this fragment which make entailment coNP-hard.
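For intuition about the objects studied, here is a small sketch of reachability in a one-counter automaton by explicit search. This naive breadth-first search is only workable when counter values stay small; the NP-completeness result above concerns updates encoded in binary, where reachable counter values can be exponentially large and explicit enumeration breaks down.

from collections import deque

def reachable(transitions, start, target, counter_bound=10_000):
    """Explicit-state reachability in a one-counter automaton.

    transitions: dict state -> list of (update, next_state). A transition may only be
    taken if the counter stays non-negative. start and target are (state, counter)
    configurations; counter_bound caps the search, since the true space is infinite.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        state, counter = queue.popleft()
        if (state, counter) == target:
            return True
        for update, nxt in transitions.get(state, []):
            c = counter + update
            if 0 <= c <= counter_bound and (nxt, c) not in seen:
                seen.add((nxt, c))
                queue.append((nxt, c))
    return False

# Toy automaton: q0 can add 3 repeatedly, then hand over to q1, which subtracts 5.
toy = {"q0": [(+3, "q0"), (0, "q1")], "q1": [(-5, "q1")]}
print(reachable(toy, ("q0", 0), ("q1", 1)))   # True: pump to 6 in q0, then 6 - 5 = 1 in q1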
25

Damasiotis, Vyron. "Modelling software project management complexity : an assessment model." Thesis, Staffordshire University, 2018. http://eprints.staffs.ac.uk/4834/.

Abstract:
In recent years, more and more businesses have adopted projectised organisation as an organisational structure to tackle the complex problems involved in implementing their strategic objectives. A significant number of these projects were, or are, challenged or even failed to meet their initial requirements in terms of cost, time and quality. This phenomenon is more intense in software projects due to their special characteristics, stemming from the dynamic and continuously changing environment in which they operate and from the nature of software itself. Most of these failures were attributed to complexity, which exists in various forms and at various levels in all projects. Many studies have attempted to identify the sources of project complexity and define an appropriate complexity typology for capturing it. However, most of these studies are theoretical and only a limited number propose models capable of evaluating or measuring project complexity. This research acknowledges the endogenous character of complexity in projects but, instead of trying to identify the dimensions of this complexity, focuses on the complexity in the interfaces between project processes, project management processes and project managers, which constitutes the critical point for successful project execution. The proposed framework can be used to highlight the most significant complexity areas, either organisation-specific or project-specific, providing in that way the necessary awareness for better, more efficient and more effective project management. The approach followed in the framework design identifies the variation in the perception of complexity between different organisations, allows organisations to evaluate the complexity of projects, provides them with important information to assist the project selection process, and identifies the significance of people's knowledge and experience, and more generally the maturity and capabilities of an organisation's management, in handling complexity, as revealed through the findings of this research. Furthermore, it considers complexity as a variable that can be measured and proposes a model for doing so. To implement this framework, an extended literature review was initially performed to identify the complexity factors stemming from project management aspects. Subsequently, statistical methods for processing and refining the identified factors were used, resulting in the final set of measures used in the framework. Finally, the proposed model was validated through the application of a case study methodology.
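The prioritisation step mentioned above relies on the Relative Importance Index, conventionally computed for Likert-style ratings as RII = sum of ratings / (A x N), where A is the highest possible rating and N the number of respondents. The sketch below applies that formula to invented ratings and dimension names; it does not reproduce the thesis's survey data or its 23 dimensions.

import numpy as np

def relative_importance_index(ratings, max_rating):
    """RII = sum of ratings / (highest possible rating * number of respondents), in (0, 1]."""
    ratings = np.asarray(ratings, dtype=float)
    return ratings.sum() / (max_rating * ratings.size)

# Toy usage: three complexity dimensions rated by 8 practitioners on a 1-5 scale.
survey = {
    "stakeholder interdependence": [5, 4, 5, 4, 5, 3, 4, 5],
    "requirements volatility":     [4, 4, 3, 5, 4, 4, 3, 4],
    "geographic distribution":     [2, 3, 2, 3, 2, 3, 2, 2],
}
ranked = sorted(((relative_importance_index(r, 5), d) for d, r in survey.items()), reverse=True)
for rii, dim in ranked:
    print(f"{dim:30s} RII = {rii:.3f}")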
26

Marchut, Alexander Joseph. "Simulation of Polyglutamine Aggregation With An Intermediate Resolution Protein Model." NCSU, 2006. http://www.lib.ncsu.edu/theses/available/etd-01062006-142134/.

Abstract:
The pathological manifestation of nine hereditary neurodegenerative diseases including Huntington's disease is the presence within the brain of aggregates of disease-specific proteins that contain polyglutamine tracts longer than a critical length. The molecular level mechanisms by which these proteins aggregate are still unclear. In an effort to shed light on this important phenomenon, we are investigating the aggregation of model fibril-forming peptides using molecular-level computer simulation. A simplified model of polyglutamine, the protein that is known to form fibrils (ordered aggregates of proteins in beta-sheet conformations) in the brains of victims of Huntington's disease, has been developed. This model accounts for the most important types of intra- and inter-molecular interactions - hydrogen bonding and hydrophobic interactions - while allowing the folding process to be simulated in a reasonable time frame. The model utilizes discontinuous potentials such as hard spheres and square wells in order to take advantage of discontinuous molecular dynamics (DMD), a fast simulation technique that is very computationally efficient. DMD is used to examine the folding and aggregation of systems of model polyglutamine peptides ranging in size from isolated peptides to 96 peptides. In our simulations we observe the spontaneous formation of aggregates and annular structures that are made up of beta sheets starting from random configurations of random coils. The effect of chain length on the behavior of our model peptides was examined by simulating the folding of isolated polyglutamine peptides 16, 32, and 48 residues long and the folding and aggregation of systems of twenty-four model polyglutamine peptides 16, 32, 36, 40, and 48 residues long. In our multi-peptide simulations we observed that the optimal temperature for the formation of beta sheets increases with chain length up to 36 glutamine residues but not beyond. Our finding of this critical chain length of 36 glutamine residues is interesting because a critical chain length of 37 glutamine residues has been observed experimentally.
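For context on the simulation technique, the sketch below shows the kernel of discontinuous molecular dynamics with hard-sphere potentials: particles move ballistically between events, and the next collision time is obtained analytically rather than by integrating forces. The two-particle setup is generic DMD material, not the thesis's protein model.

import numpy as np

def hard_sphere_collision_time(r1, r2, v1, v2, sigma):
    """Time until two hard spheres with contact distance sigma collide, or None.

    Solves |r12 + v12*t| = sigma for the smallest positive t, where r12 = r1 - r2 and
    v12 = v1 - v2, since motion between events is ballistic.
    """
    r12 = np.asarray(r1, dtype=float) - np.asarray(r2, dtype=float)
    v12 = np.asarray(v1, dtype=float) - np.asarray(v2, dtype=float)
    b = np.dot(r12, v12)
    if b >= 0:                         # moving apart or tangentially: no collision
        return None
    vv, rr = np.dot(v12, v12), np.dot(r12, r12)
    disc = b * b - vv * (rr - sigma ** 2)
    if disc < 0:                       # the spheres miss each other
        return None
    return (-b - np.sqrt(disc)) / vv

# Toy usage: two beads approaching head-on.
t = hard_sphere_collision_time([0, 0, 0], [3, 0, 0], [1, 0, 0], [-1, 0, 0], sigma=1.0)
print(t)   # centres start 3 apart and close at relative speed 2, so contact at t = (3 - 1) / 2 = 1.0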
27

Budin, Garry R. "An intermediate model of the tropical oceans and the atmosphere." Thesis, University of Oxford, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.276560.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Herszon, Leon. "The complexity of projects : an adaptive model to incorporate complexity dimensions into the cost estimation process." Thesis, University of Huddersfield, 2017. http://eprints.hud.ac.uk/id/eprint/33747/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Most projects fail to deliver the required product, on time, or within budget, and complex projects face additional challenges due to the impact of complexity factors, henceforth called dimensions. Cost overruns are common occurrences on projects, especially complex ones, which points to the need for a better understanding of the cost-estimation process. Accordingly, it is important to identify the factors affecting project complexity and their impact on the cost-estimation process. Although project complexities and cost-estimation practices have been discussed in the literature, there is a clear gap in the existing body of knowledge regarding how complexity dimensions are linked with cost estimation in project-based industries and how to give due consideration to such complexity dimensions in cost-estimation practices. The dynamic nature of complexity calls for a model that considers these dimensions and supports practitioners in the cost-estimation process, including guidelines for dealing with such complexities. This research aims to develop a model that incorporates complexity dimensions into the cost-estimation process for complex projects. For that to happen, there is a need to explore the concept of complexity, the dimensions of complexity, and the context in which these should be considered in the cost-estimation process. An investigation of how these complexity dimensions impact the cost-estimation process precedes the development of the proposed model. Philosophically, this research is positioned in the middle of the ontological, epistemological, and axiological spectra, leaning towards idealism, interpretivism, and subjectivism respectively. Considering the use of survey and case studies as research strategies, the research mode is better positioned as inductive, with the research choice based on a mixed method of quantitative and qualitative analysis. Empirical data have been collected from a database of complex projects through documentary analysis, and from a survey and interviews that have been used to develop and enhance the proposed model. An analysis of the existing literature on project complexity, along with a documentary analysis of 27 complex projects in a database, provided a list of 23 dimensions that are relevant to project complexity. Based on this list, a survey of 54 practitioners was conducted to gather expert views about the complexity dimensions and their impact on project cost estimation. The 23 dimensions were then prioritized using the Relative Importance Index, which revealed that different industries have distinct views on some dimensions and are aligned on others. The survey was followed by a series of 10 in-depth interviews with subject experts. A final analysis of the survey and interview results helped to eliminate dimensions, reducing the list of complexity dimensions to 15. Once the list of 15 dimensions was established, the model was drafted and divided into an assessment table in which practitioners assess each dimension on a scale of 1 to 4, a mapping of these results onto a radar graph for better visualization, and a list of guidelines for cost estimators on how to deal with these complexities. The contribution to knowledge and society is that such a model could support practitioners in creating awareness of complexity dimensions, which would generate more accurate and reliable cost estimates for complex projects.
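The Relative Importance Index mentioned in this abstract is a standard way to rank survey items rated on a Likert scale: RII equals the sum of the ratings divided by the highest possible rating times the number of respondents. The snippet below is a generic illustration of that calculation, not the author's instrument; the dimension names and ratings are invented.

def relative_importance_index(ratings, max_rating=4):
    """RII = sum(ratings) / (max_rating * number_of_respondents), in (0, 1]."""
    return sum(ratings) / (max_rating * len(ratings))

# Hypothetical survey responses (scale 1-4) for three complexity dimensions
survey = {
    "stakeholder diversity": [4, 3, 4, 2, 4],
    "technological novelty": [3, 3, 2, 2, 3],
    "schedule pressure":     [4, 4, 4, 3, 4],
}

ranked = sorted(survey, key=lambda d: relative_importance_index(survey[d]), reverse=True)
for dim in ranked:
    print(f"{dim}: RII = {relative_importance_index(survey[dim]):.2f}")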
29

Hunter, Christine M. "Demography of Procellariids: model complexity, chick quality, and harvesting." University of Otago. Department of Zoology, 2001. http://adt.otago.ac.nz./public/adt-NZDU20070518.110942.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Many challenges still exist in the empirical measurement of population size or density of burrow-nesting procellariiforms. Although reasonable precision of burrow occupancy estimates can be achieved with 10-15 transects (20 entrances per transect) per site, unknown levels of bias in burrow occupancy estimates currently prevent reliable estimation of burrow-nesting procellariiform abundance or harvest rates. Because it is unlikely that biases in burrow occupancy are similar among colonies, valid comparisons among sites may require estimates of absolute abundance rather than relative measures of burrow occupancy. The reliability and precision of matrix models for procellariids will depend primarily on the reliability and precision of adult survival estimates. Sensitivities, elasticities and uncertainties of population growth rate to demographic parameters for models with differing structures and parameterisations showed an overwhelming importance of adult survival in determining population growth rate and the results of perturbation analyses. Estimates of adult survival should be a primary focus of any procellariid research program involving assessment of population status or questions of population response to perturbations. Juvenile survival, pre-breeder survival and emigration rates were also shown to be relatively important in determining population growth rate and perturbation analyses. The sensitivity and elasticity of population growth rate to survival rates for all immature stages combined were similar in magnitude to the sensitivity and elasticity of population growth rate to survival rates for fecund birds. Estimation of survival rates for immature birds should also be given high priority in procellariid research programs. The variability in these parameters among populations needs to be assessed if results are to be generalised beyond the specific colonies from which parameters are estimated. There is evidence that selective harvest of heavier Titi chicks occurs on at least some islands. However, analyses of a demographic model incorporating different quality chicks showed that even extremely high degrees of selective harvest had little influence on population growth rate or perturbation analyses unless overall harvest levels were very high. Comparison of population growth rate and perturbation analyses of models differing in the level of detail in parameterisation or in the number of stages included in the model showed negligible differences in results. This suggests that simple models, even if based on only sparse data, are adequate to set research priorities and evaluate population responses to perturbations, such as for the assessment of conservation management options, evaluation of possible causes of population change and assessment of the effects of harvest.
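The perturbation analyses described here rest on standard matrix population model quantities: the population growth rate is the dominant eigenvalue of the projection matrix, and elasticities are proportional sensitivities computed from the left and right eigenvectors. The sketch below shows that calculation for a made-up three-stage, procellariid-like matrix; the vital rates are illustrative placeholders, not estimates from the thesis.

import numpy as np

def growth_rate_and_elasticities(A):
    """Dominant eigenvalue (lambda) and elasticity matrix of a projection matrix A.

    Elasticity: e_ij = (a_ij / lambda) * v_i * w_j / (v . w),
    where w and v are the right and left eigenvectors for lambda.
    """
    A = np.asarray(A, dtype=float)
    eigvals, right = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    lam = eigvals.real[k]
    w = np.abs(right[:, k].real)                  # stable stage distribution
    lv, left = np.linalg.eig(A.T)
    kk = np.argmax(lv.real)
    v = np.abs(left[:, kk].real)                  # reproductive values
    sens = np.outer(v, w) / np.dot(v, w)          # sensitivities d(lambda)/d(a_ij)
    elas = (A / lam) * sens                       # elasticities sum to 1
    return lam, elas

# Illustrative 3-stage matrix: chick -> pre-breeder -> breeder
A = [[0.0, 0.0, 0.35],    # fecundity of breeders
     [0.55, 0.70, 0.0],   # chick survival; pre-breeders remaining pre-breeders
     [0.0, 0.20, 0.93]]   # recruitment; adult survival
lam, elas = growth_rate_and_elasticities(A)
print(round(lam, 3))
print(np.round(elas, 3))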
30

Potechin, Aaron H. "Analyzing monotone space complexity via the switching network model." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99066.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 177-179).
Space complexity is the study of how much space/memory it takes to solve problems. Unfortunately, proving general lower bounds on space complexity is notoriously hard. Thus, we instead consider the restricted case of monotone algorithms, which only make deductions based on what is in the input and not what is missing. In this thesis, we develop techniques for analyzing monotone space complexity via a model called the monotone switching network model. Using these techniques, we prove tight bounds on the minimal size of monotone switching networks solving the directed connectivity, generation, and k-clique problems. These results separate monotone analogues of L and NL and provide an alternative proof of the separation of the monotone NC hierarchy first proved by Raz and McKenzie. We then further develop these techniques for the directed connectivity problem in order to analyze the monotone space complexity of solving directed connectivity on particular input graphs.
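A monotone switching network for directed connectivity, as studied in this thesis, is an undirected graph with designated start and accept vertices whose wires are labelled by edge queries; an input graph is accepted when the start and accept vertices are connected using only wires whose labelled edge is present in the input. The sketch below is a toy evaluator of that acceptance condition, not a construction or lower-bound argument from the thesis, and the example network is invented.

from collections import deque

def accepts(wires, start, accept, input_edges):
    """Evaluate a monotone switching network for directed connectivity.

    wires: list of (u, v, (a, b)) - an undirected wire between network
           vertices u and v, labelled by the query "is edge a->b in the input?".
    The network accepts iff start and accept are connected using only wires
    whose labelled edge is present in input_edges.
    """
    live = {}
    for u, v, edge in wires:
        if edge in input_edges:
            live.setdefault(u, []).append(v)
            live.setdefault(v, []).append(u)
    seen, queue = {start}, deque([start])
    while queue:
        x = queue.popleft()
        if x == accept:
            return True
        for y in live.get(x, []):
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return False

# Toy network deciding whether vertex 1 reaches vertex 3
wires = [("s", "m", (1, 2)), ("m", "t", (2, 3)), ("s", "t", (1, 3))]
print(accepts(wires, "s", "t", {(1, 2), (2, 3)}))  # True
print(accepts(wires, "s", "t", {(1, 2)}))          # False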
by Aaron H. Potechin.
Ph. D.
31

Amaechi, Austin Oguejiofor. "A conceptual system design and managerial complexity competency model." Thesis, Brunel University, 2013. http://bura.brunel.ac.uk/handle/2438/8555.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Complex adaptive systems are usually difficult to design and control. There are several particular methods for coping with complexity, but there is no general approach to building complex adaptive systems. The challenges of designing complex adaptive systems in a highly dynamic world drive the need for anticipatory capacity within engineering organizations, with the goal of enabling the design of systems that can cope with an unpredictable environment. This thesis explores the question of enhancing anticipatory capacity through the study of a complex adaptive system design methodology and complexity management competencies. A general introduction to the challenges and issues in complex adaptive systems design is given, since a good understanding of the industrial context is considered necessary in order to avoid oversimplifying the problem, neglecting certain important factors and remaining unaware of important influences and relationships. In addition, a general introduction to complex thinking is given, since designing complex adaptive systems requires non-classical thought, and practical notions of complexity theory and design are put forward. Building on these, the research proposes a Complex Systems Life-Cycle Understanding and Design (CXLUD) methodology to aid system architects and engineers in the design and control of complex adaptive systems. Starting from a creative anticipation construct - a loosening mechanism that allows more options to be considered - the methodology proposes a conceptual framework and a series of stages to follow to find proper mechanisms that will promote elements towards desired solutions through their active interaction with one another. To illustrate the methodology, a case study on the development of a systems architecture for a financial systemic risk infrastructure is presented. The final part of this thesis develops a conceptual managerial complexity competency model from a qualitative phenomenological study perspective. The model developed in this research is called the Understanding-Perception-Action (UPA) managerial complexity competency model. The results of this competency model can be used to help ease project managers' transition into complex adaptive projects, as well as serve as a foundation to launch qualitative and quantitative research into this area of project complexity management.
32

TU, SHANSHAN. "Case Influence and Model Complexity in Regression and Classification." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1563324139376977.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Drummond, Anne. "New educationists in Quebec Protestant model and intermediate schools, 1881-1926." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/10120.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This study of Quebec Protestant superior and secondary education in the late-nineteenth, early-twentieth century focuses on the professionalization of the principalship of model schools which were subsidized from the Protestant share of the Quebec Superior Education Fund. The dissertation tries to make a conceptual and historical link between a regulation which prohibited principals from providing the official academy grade curriculum to pupils enrolled in model schools and a series of school consolidation campaigns which the Protestant Committee of the Council of Public Instruction planned and implemented between 1906 and 1926. The dissertation proposes that the "educationists" of the Protestant Committee and the Provincial Association of Protestant Teachers of Quebec created the pre-conditions for the late nineteenth century Protestant rural school problem and subsequently conceptualized school consolidation and pupil transportation as solutions to this problem. The thesis argues that teacher professionalization regulations forced pupils at early ages out of one-room schools into graded, secondary, and graded, secondary, consolidated schools. Those school boards, principals, and pupils who were left out of the network of Protestant graded schools faced the loss of their Superior Education fund grants, their jobs, and their access to school leaving examinations respectively. The nineteenth century model school--a relatively inexpensive and flexible provider of secondary education--was transformed by Protestant Committee initiatives to classify pupils by age-grade, consolidate rural schools, and obtain enabling pupil transportation legislation for the boards of Protestant school municipalities. Professionally certified men teachers developed a graded elementary and secondary system in the context of Protestant minority education rights obtained in 1867 with the British North America Act and the British Canadian nationalism and domestic ideology of Montreal's elites. They used Protestant Committee regulation to reshape the right of the school commissioner to become a dissentient trustee into the right of the board of commissioners to create the separate Protestant school municipality. They did not believe that incumbent men and women principals of turn-of-the-century model schools were qualified to defend a Protestant school system, and saw the depletion of Protestant school municipality tax revenues as a consequence of the growth of the Catholic school municipality tax base. However, with their devaluation of the model schools, they limited the possibilities of secondary school provision for principals, teachers, and pupils.
34

Andina, Elisa. "Complexity and Conservatism in Linear Robust Adaptive Model Predictive Control." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Знайти повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This thesis presents a robust adaptive control scheme based on the advanced control technique of model predictive control (MPC) for linear systems subject to additive disturbances and to constant and time-varying parametric uncertainties. The proposed approach provides a computationally efficient control scheme with online parameter estimation, yielding improved performance and a progressive reduction of conservatism. The parameter set is estimated using a moving-window identification technique in order to obtain a set of bounded complexity. Robust constraint satisfaction is achieved through tube-based robust MPC, while L2 stability of the closed-loop scheme is guaranteed by using a parameter estimate obtained with the least mean squares (LMS) algorithm in the cost function. Finally, an example is used to study the trade-off between complexity and conservatism of this computationally efficient control scheme.
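The least mean squares (LMS) estimator referred to in this abstract updates a parameter vector by a small step along the instantaneous prediction-error gradient. The fragment below is a generic LMS identification loop for a scalar-output linear regression model, included only as an illustration of the estimator; it is not taken from the thesis, and the plant parameters and step size are invented.

import numpy as np

def lms_identify(Phi, y, mu=0.05):
    """Estimate theta in y_k ~ phi_k . theta with the LMS recursion
    theta <- theta + mu * phi_k * (y_k - phi_k . theta)."""
    theta = np.zeros(Phi.shape[1])
    for phi_k, y_k in zip(Phi, y):
        theta += mu * phi_k * (y_k - phi_k @ theta)
    return theta

rng = np.random.default_rng(0)
true_theta = np.array([1.8, -0.9])                       # "unknown" plant parameters
Phi = rng.uniform(-1, 1, size=(500, 2))                  # regressors, e.g. past inputs/outputs
y = Phi @ true_theta + 0.01 * rng.standard_normal(500)   # noisy measurements
print(np.round(lms_identify(Phi, y), 2))                 # approaches [1.8, -0.9]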
35

Sorensen, Michael Elliott. "Functional Consequences of Model Complexity in Hybrid Neural-Microelectronic Systems." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6908.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Hybrid neural-microelectronic systems, systems composed of biological neural networks and neuronal models, have great potential for the treatment of neural injury and disease. The utility of such systems will ultimately be determined by the ability of the engineered component to correctly replicate the function of biological neural networks. These models can take the form of mechanistic models, which reproduce neural function by describing the physiologic mechanisms that produce neural activity, and empirical models, which reproduce neural function through more simplified mathematical expressions. We present our research into the role of model complexity in creating robust and flexible behaviors in hybrid systems. Beginning with a complex mechanistic model of a leech heartbeat interneuron, we create a series of three systematically reduced models that incorporate both mechanistic and empirical components. We then evaluate the robustness of these models to parameter variation and assess the flexibility of the models' activities. The modeling studies are validated by incorporating both mechanistic and semi-empirical models in hybrid systems with a living leech heartbeat interneuron. Our results indicate that model complexity serves to increase both the robustness of the system and the ability of the system to produce flexible outputs.
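As a concrete example of the empirical end of the model spectrum discussed in this abstract, the sketch below implements a leaky integrate-and-fire neuron, one of the simplest phenomenological spiking models. It is illustrative only, not one of the reduced leech interneuron models developed in the thesis, and all parameter values are invented.

import numpy as np

def leaky_integrate_and_fire(i_ext, dt=0.1, tau=10.0, v_rest=-65.0,
                             v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Simulate a leaky integrate-and-fire neuron.

    Membrane equation: tau * dV/dt = -(V - v_rest) + r_m * I_ext(t).
    A spike is recorded and V reset whenever V crosses v_thresh.
    """
    v = v_rest
    trace, spikes = [], []
    for k, i_k in enumerate(i_ext):
        v += dt / tau * (-(v - v_rest) + r_m * i_k)
        if v >= v_thresh:
            spikes.append(k * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

current = np.concatenate([np.zeros(200), 2.0 * np.ones(800)])  # current step at t = 20 ms
v_trace, spike_times = leaky_integrate_and_fire(current)
print(len(spike_times), "spikes, first at", spike_times[:1], "ms")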
36

Vollmer, Sascha. "Development of a Complexity Management Model for Strategic Business Units." Thesis, KTH, Industriell produktion, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-216158.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Knudstrup, Timothy A. "A model for minimizing numeric function generator complexity and delay." Thesis, Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion-image.exe/07Dec%5FKnudstrup.pdf.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, December 2007.
Thesis Advisor(s): Butler, Jon T. ; Frenzen, Chris L. "December 2007." Description based on title screen as viewed on January 22, 2008. Includes bibliographical references (p. 211-213). Also available in print.
38

Chersoni, Emmanuele. "Explaining complexity in human language processing : a distributional semantic model." Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0189/document.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The present work deals with the problem of semantic complexity in natural language, proposing a hypothesis based on features of natural language sentences that determine their difficulty for human understanding. We aim to introduce a general framework for sentence semantic complexity in which processing difficulty depends on the interaction between two components: a Memory component, responsible for the storage of corpus-extracted event representations, and a Unification component, responsible for combining the units stored in Memory into more complex structures. We propose that semantic complexity depends on the difficulty of building a semantic representation of the event or situation conveyed by a sentence, which can either be retrieved directly from semantic memory or built dynamically by satisfying the constraints included in the stored representations. In order to test our intuitions, we built a Distributional Semantic Model to compute a compositional cost for the sentence unification process. Our tests on several psycholinguistic datasets showed that the model is able to account for semantic phenomena such as the context-sensitive update of argument expectations and logical metonymies.
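The compositional cost idea in this abstract can be pictured with ordinary distributional vectors: the harder it is to unify a new argument with the event representation cued by the verb and its other arguments, the lower their similarity. The sketch below computes such a cost as one minus the cosine similarity between an argument vector and the centroid of the context vectors; it is a deliberately minimal stand-in, not the thesis model, and the toy vectors are invented.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def composition_cost(context_vectors, argument_vector):
    """Cost of unifying an argument with an event context: 1 - cos(centroid, argument)."""
    centroid = np.mean(context_vectors, axis=0)
    return 1.0 - cosine(centroid, argument_vector)

# Invented 4-dimensional "embeddings" for a typicality contrast
vec = {
    "journalist": np.array([0.9, 0.1, 0.3, 0.0]),
    "check":      np.array([0.8, 0.2, 0.4, 0.1]),
    "spelling":   np.array([0.7, 0.1, 0.5, 0.0]),
    "lawn":       np.array([0.0, 0.9, 0.1, 0.8]),
}
typical = composition_cost([vec["journalist"], vec["check"]], vec["spelling"])
atypical = composition_cost([vec["journalist"], vec["check"]], vec["lawn"])
print(round(typical, 2), "<", round(atypical, 2))  # lower cost for the typical argument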
39

Zetterlund, Olof. "Optimization of Vehicle Powertrain Model Complexity for Different Driving Tasks." Thesis, Linköpings universitet, Fordonssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-122682.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This master's thesis has examined how an understanding of different driving tasks can be used to develop a suitable powertrain model for the Sim III simulator at VTI. Studies performed in the simulator have been statistically analyzed using parameters commonly used to describe driving patterns in drive cycles. It has been shown that the studies can be divided into three driving tasks: "High constant velocity", "High velocity with evasive maneuver", and "Mixed driving". Furthermore, a powertrain model from an earlier master's thesis has been further developed. The new model utilizes a 3D torque map that takes engine speed, accelerator pedal position and gear as input. Using measurements from the chassis dynamometer laboratory at LiU that resemble the derived driving tasks, it has been shown that the performance of the new model has increased significantly for high-velocity driving and during maximum acceleration. However, when using the clutch at low speeds and in low gears the model still performs poorly and needs further development.
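A 3D torque map of the kind described here is typically stored as one 2D table per gear, indexed by engine speed and accelerator pedal position, and evaluated by interpolation. The snippet below is a generic bilinear-interpolation lookup over an invented table; it is not the thesis model and the numbers are placeholders.

import numpy as np

# Invented torque table for one gear: rows = engine speed [rpm], cols = pedal [%]
speed_grid = np.array([1000.0, 2000.0, 3000.0, 4000.0])
pedal_grid = np.array([0.0, 50.0, 100.0])
torque_map = np.array([[10.0,  90.0, 150.0],
                       [15.0, 110.0, 190.0],
                       [12.0, 105.0, 185.0],
                       [ 8.0,  95.0, 170.0]])   # N*m

def torque_lookup(speed, pedal):
    """Bilinear interpolation of engine torque from the (speed, pedal) map."""
    i = np.clip(np.searchsorted(speed_grid, speed) - 1, 0, len(speed_grid) - 2)
    j = np.clip(np.searchsorted(pedal_grid, pedal) - 1, 0, len(pedal_grid) - 2)
    ts = (speed - speed_grid[i]) / (speed_grid[i + 1] - speed_grid[i])
    tp = (pedal - pedal_grid[j]) / (pedal_grid[j + 1] - pedal_grid[j])
    ts, tp = np.clip(ts, 0.0, 1.0), np.clip(tp, 0.0, 1.0)   # clamp outside the map
    t00, t01 = torque_map[i, j], torque_map[i, j + 1]
    t10, t11 = torque_map[i + 1, j], torque_map[i + 1, j + 1]
    return (1 - ts) * ((1 - tp) * t00 + tp * t01) + ts * ((1 - tp) * t10 + tp * t11)

print(torque_lookup(2500.0, 75.0))  # interpolated torque between the grid points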
40

Lehtinen, Maria Karoliina. "Syntactic complexity in the modal μ calculus". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/29520.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This thesis studies how to eliminate syntactic complexity in Lμ, the modal μ-calculus. Lμ is a verification logic in which a least fixpoint operator μ, and its dual ν, add recursion to a simple modal logic. The number of alternations between μ and ν is a measure of complexity called the formula's index: the lower the index, the easier a formula is to model-check. The central question of this thesis is a long-standing one, the Lμ index problem: given a formula, what is the least index of any equivalent formula, that is to say, its semantic index? I take a syntactic approach, focused on simplifying formulas. The core decidability results are (i) alternative, syntax-focused decidability proofs for ML and Πμ1, the low-complexity classes of Lμ; and (ii) a proof that Σμ2, the fragment of Lμ with one alternation, is decidable for formulas in the dual class Πμ2. Beyond its algorithmic contributions, this thesis aims to deepen our understanding of the index problem and the tools at our disposal. I study disjunctive form and related syntactic restrictions, and how they affect the index problem. The main technical results are that the transformation into disjunctive form preserves Πμ2-indices but not Σμ2-indices, and that some properties of binary trees are expressible with a lower index using disjunctive formulas than non-deterministic automata. The latter is part of a thorough account of how the Lμ index problem and the Rabin–Mostowski index problem for parity automata are related. In the final part of the thesis, I revisit the relationship between the index problem and parity games. The syntactic index of a formula is an upper bound on the descriptive complexity of its model-checking parity games. I show that the semantic index of a formula Ψ is bounded above by the descriptive complexity of the model-checking games for Ψ. I then study whether this bound is strict: if a formula Ψ is equivalent to a formula in an alternation class C, does a formula of C suffice to describe the winning regions of the model-checking games of Ψ? I prove that this is the case for ML, Πμ1, Σμ2, and the disjunctive fragment of any alternation class. I discuss the practical implications of these results and propose a uniform approach to the index problem, which subsumes the previously described decision procedures for low alternation classes. In brief, this thesis can be read as a guide on how to approach a seemingly complex Lμ formula. Along the way it studies what makes this such a difficult problem and proposes novel approaches to both simplifying individual formulas and deciding further fragments of the alternation hierarchy.
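The index discussed in this abstract counts alternations between least (μ) and greatest (ν) fixpoint operators. The sketch below computes a naive syntactic alternation count over a small formula AST by counting switches between μ and ν along each branch; this simplification ignores the variable dependencies used in the standard alternation depth and is purely illustrative, not an algorithm from the thesis.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Node:
    kind: str                           # 'mu', 'nu', 'and', 'or', 'box', 'dia', 'var', 'prop'
    children: Tuple['Node', ...] = ()
    name: Optional[str] = None

def alternation_count(node: Node, last: Optional[str] = None) -> int:
    """Longest chain of syntactic mu/nu blocks (switching between the two) on any branch."""
    bump = 0
    if node.kind in ('mu', 'nu'):
        bump = 0 if node.kind == last else 1
        last = node.kind
    return bump + max((alternation_count(c, last) for c in node.children), default=0)

# nu X. mu Y. ((p and <>Y) or []X): one nu/mu alternation, so count 2
phi = Node('nu', (Node('mu', (Node('or', (
          Node('and', (Node('prop', name='p'), Node('dia', (Node('var', name='Y'),)))),
          Node('box', (Node('var', name='X'),)))),), name='Y'),), name='X')
print(alternation_count(phi))  # 2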
41

Al-Khalili, Jameel Sadik. "Intermediate energy deuteron elastic scattering from nuclei in a three-body model." Thesis, University of Surrey, 1989. http://epubs.surrey.ac.uk/842863/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
A study is made of polarized deuteron elastic scattering from 58Ni and 40Ca at the intermediate energies of 400 and 700 MeV. A three-body formalism, based on the Single Folding Model, is used for two sets of Dirac nucleon optical potential parameters. Both potentials are designed to fit the proton elastic scattering observables at half the incident deuteron energy. The two potentials give different predictions for the deuteron scattering observables when used in the Schrodinger equation with relativistic kinematics. Good qualitative agreement with the experimental observables is obtained in both cases for the deuteron elastic cross-section, vector (Ay) and tensor (Ayy) analyzing power data of the Saclay group. Quantitative discrepancies between theory and data, particularly in Ayy, suggest mechanisms missing from the simple three-body model. To this end, two sources of spin-dependent effects, Pauli blocking and breakup of the deuteron to spin-singlet intermediate states, are studied. The role of the spin dependence associated with Pauli blocking is studied quantitatively for the d-58Ni system. The magnitude of the momentum-dependent Tp tensor interaction is shown to pass through a local maximum in the region of 400 MeV incident deuteron energy. Comparison of numerical calculations with the available experimental data at this energy shows the Pauli mechanism not to be responsible for the outstanding discrepancies between theory and data. Breakup effects on the elastic amplitude are studied within a two-step calculation, using two separate high-energy methods. The first neglects distortion in the initial, final and intermediate states. Use is made of the adiabatic approximation, which allows closure over the intermediate breakup states. The effects on the elastic amplitude due to breakup to both triplet and singlet intermediate spin states are calculated. The inclusion of spin-singlet breakup in the model has a very large effect on Ayy, compared with that of spin-triplet breakup. This is attributed to a large contribution from a TL-like tensor interaction in the case of singlet breakup, which is negligibly small in the triplet case. Second-order breakup effects are also calculated in Glauber theory, using central potentials. Continuum-continuum coupling effects are found to be negligible at intermediate energies, and thus the two-step calculation is adequate. Glauber theory shows, however, that distortion effects are important at these energies, and suggests the need for a more accurate treatment of spin-singlet breakup effects in future calculations.
42

Lowndes, Erik M. "Development of an Intermediate DOF Vehicle Dynamics Model for Optimal Design Studies." NCSU, 1998. http://www.lib.ncsu.edu/theses/available/etd-19981022-201805.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:

The demands imposed by the optimal design process form a unique set of criteria for the development of a computational model for vehicle simulation. Due to the large number of simulations that must be performed to obtain an optimized design, the model must be computationally efficient. A competing criterion is that the computational model must realistically model the vehicle. Current trends in vehicle simulation codes have tackled the problem of realism by constructing elaborate full vehicle models containing dozens if not hundreds of distinct bodies. Each body in a model of this type is associated with six degrees of freedom. Numerous constraint equations are applied to the bodies to represent the physical connections. While the formulation of the equations is not particularly difficult, and in fact has been automated in several software packages, the resulting model requires a considerable amount of computational time to run. This makes the model unsuitable for the application of computational optimal design techniques. Past research in the field of vehicle dynamics has produced numerous computational models which are small enough and fast enough to satisfy the speed demands of the optimal design process. These models typically use less than a dozen degrees of freedom to model the vehicle. They do a good job of predicting the general motion of the vehicle and they are useful as design tools, but they lack the accuracy required for optimal design. A model that bridges the gap between these two existing classes of models and is suitable for performing optimal design was developed. The model possesses twenty-eight degrees of freedom and consists of eight bodies which represent the sprung mass, the rear suspension, the left front spindle, the right front spindle, and the four wheels. A driver control algorithm was developed which is capable of driving the car near its handling limits. The NCSU Legends race car was modeled and an attempt was made to optimize the vehicle setup for the Kenley, NC race track.

43

Zeileis, Achim, Torsten Hothorn, and Kurt Hornik. "Evaluating Model-based Trees in Practice." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2006. http://epub.wu.ac.at/1484/1/document.pdf.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
A recently suggested algorithm for recursive partitioning of statistical models (Zeileis, Hothorn and Hornik, 2005), such as models estimated by maximum likelihood or least squares, is evaluated in practice. The general algorithm is applied to linear regression, logistic regression and survival regression, and to economic and medical regression problems. Furthermore, its performance with respect to prediction quality and model complexity is compared in a benchmark study with a large collection of other tree-based algorithms, showing that the algorithm yields interpretable trees that are competitive with previously suggested approaches.
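Model-based recursive partitioning of the kind evaluated here fits a parametric model in each node and splits the data on the partitioning variable that most improves the fit, then recurses. The toy sketch below does this for per-node linear regressions using a residual-sum-of-squares criterion with scikit-learn; it is a heavily simplified illustration (the published algorithm uses parameter-instability tests rather than raw SSE), and all data are synthetic.

import numpy as np
from sklearn.linear_model import LinearRegression

def sse(X, y):
    """Residual sum of squares of a linear model fitted on (X, y)."""
    model = LinearRegression().fit(X, y)
    return float(np.sum((y - model.predict(X)) ** 2))

def fit_tree(X, y, Z, depth=0, max_depth=2, min_leaf=20):
    """Recursively partition on columns of Z, fitting y ~ X in each node."""
    best = None
    for j in range(Z.shape[1]):
        for cut in np.quantile(Z[:, j], [0.25, 0.5, 0.75]):
            left = Z[:, j] <= cut
            if left.sum() < min_leaf or (~left).sum() < min_leaf:
                continue
            gain = sse(X, y) - sse(X[left], y[left]) - sse(X[~left], y[~left])
            if best is None or gain > best[0]:
                best = (gain, j, cut, left)
    if depth >= max_depth or best is None or best[0] <= 0:
        return {"model": LinearRegression().fit(X, y), "n": len(y)}
    gain, j, cut, left = best
    return {"split": (j, cut),
            "left": fit_tree(X[left], y[left], Z[left], depth + 1, max_depth, min_leaf),
            "right": fit_tree(X[~left], y[~left], Z[~left], depth + 1, max_depth, min_leaf)}

# Synthetic data: the slope of y ~ x changes with the partitioning variable z
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 400).reshape(-1, 1)
z = rng.uniform(0, 1, 400).reshape(-1, 1)
slope = np.where(z[:, 0] < 0.5, 2.0, -3.0)
y = slope * x[:, 0] + 0.1 * rng.standard_normal(400)
tree = fit_tree(x, y, z)
print(tree["split"])  # expected split close to (0, 0.5)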
Series: Research Report Series / Department of Statistics and Mathematics
44

Jin, Xin. "Coal Electrolysis to Produce Hydrogen at Intermediate Temperatures." Ohio University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1250785769.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Zotkiewicz, Mateusz. "Robust routing optimization in resilient networks : Polyhedral model and complexity issues." PhD thesis, Institut National des Télécommunications, 2011. http://tel.archives-ouvertes.fr/tel-00997659.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
In the thesis, robust routing design problems in resilient networks are considered. In the first part, the computational complexity of such problems is discussed. The following cases are considered: path protection and path restoration; failure-dependent and failure-independent restoration; cases with and without stub-release; single-link failures and multiple-link failures (shared risk link groups); and non-bifurcated (unsplittable) and bifurcated flows. For each of the related optimization cases a mixed-integer programming formulation (in the non-bifurcated cases) or a linear programming formulation (in all bifurcated cases) is presented, and their computational complexity is investigated. For the NP-hard cases original NP-hardness proofs are provided, while for the polynomial cases compact linear programming formulations (which prove the polynomiality in question) are discussed. Moreover, pricing problems related to each of the considered NP-hard problems are discussed. The second part of the thesis deals with various routing strategies in networks where the uncertainty issues are modeled using the polyhedral model. In such networks two extremes are possible. The simplest in terms of implementation, and simultaneously the least effective strategy, is robust stable routing. On the other hand, the most effective strategy, i.e., dynamic routing, is virtually impossible to implement in real-world networks. Therefore, the major aim of this part of the thesis is to present novel routing strategies that merge the simplicity of robust stable routing with the efficiency of dynamic routing.
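The linear programming formulations mentioned in this abstract route flow on arcs subject to conservation and capacity constraints. The snippet below sets up a tiny single-commodity min-cost routing LP with scipy's linprog as a generic illustration of that kind of formulation; the network, demand and costs are invented, and no resilience or robustness constraints are included.

import numpy as np
from scipy.optimize import linprog

# Arcs of a small directed network: (tail, head, unit cost, capacity)
arcs = [("s", "a", 1.0, 8.0), ("s", "b", 2.0, 10.0), ("a", "b", 1.0, 5.0),
        ("a", "t", 3.0, 6.0), ("b", "t", 1.0, 10.0)]
nodes = ["s", "a", "b", "t"]
demand = 10.0                      # units of flow to route from s to t

# Flow conservation: for every node, outflow - inflow = supply (demand at s, -demand at t)
A_eq = np.zeros((len(nodes), len(arcs)))
for k, (u, v, _, _) in enumerate(arcs):
    A_eq[nodes.index(u), k] += 1.0
    A_eq[nodes.index(v), k] -= 1.0
b_eq = np.array([demand, 0.0, 0.0, -demand])

cost = [c for (_, _, c, _) in arcs]
bounds = [(0.0, cap) for (_, _, _, cap) in arcs]
res = linprog(c=cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.status, dict(zip([f"{u}->{v}" for u, v, _, _ in arcs], np.round(res.x, 1))))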
46

Zotkiewicz, Mateusz. "Robust routing optimization in resilient networks : Polyhedral model and complexity issues." Thesis, Evry, Institut national des télécommunications, 2011. http://www.theses.fr/2011TELE0001/document.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
In large transport networks, certain network elements can be responsible for handling significant volumes of traffic, which makes these networks vulnerable to failures such as cable cuts. Appropriate traffic recovery mechanisms must therefore be put in place to avoid service disruptions. One of the best techniques for protecting transport networks is to provide restoration mechanisms at the transport layer itself, so that each transport operator can secure its own network and offer a reliable transport service to other actors such as IP operators. Other protection mechanisms can then be deployed at higher layers without interfering with restoration at the transport layer. Besides failures affecting its components, a network must also cope with uncertainty in the traffic matrix to be routed through the network. This uncertainty is a consequence of the multiplication of applications and services relying on the network; user mobility and network failures also contribute to it. The thesis is therefore divided into two parts. In the first part, we study the complexity of the various network protection mechanisms. In the second part, we address the uncertainty of the traffic matrix, and in particular the polyhedral model.
In the thesis, robust routing design problems in resilient networks are considered. In the first part, the computational complexity of such problems is discussed. The following cases are considered: path protection and path restoration; failure-dependent and failure-independent restoration; cases with and without stub-release; single-link failures and multiple-link failures (shared risk link groups); and non-bifurcated (unsplittable) and bifurcated flows. For each of the related optimization cases a mixed-integer programming formulation (in the non-bifurcated cases) or a linear programming formulation (in all bifurcated cases) is presented, and their computational complexity is investigated. For the NP-hard cases original NP-hardness proofs are provided, while for the polynomial cases compact linear programming formulations (which prove the polynomiality in question) are discussed. Moreover, pricing problems related to each of the considered NP-hard problems are discussed. The second part of the thesis deals with various routing strategies in networks where the uncertainty issues are modeled using the polyhedral model. In such networks two extremes are possible. The simplest in terms of implementation, and simultaneously the least effective strategy, is robust stable routing. On the other hand, the most effective strategy, i.e., dynamic routing, is virtually impossible to implement in real-world networks. Therefore, the major aim of this part of the thesis is to present novel routing strategies that merge the simplicity of robust stable routing with the efficiency of dynamic routing.
47

Attaallah, Abdulaziz Ahmad. "A Structural Metric Model to Predict the Complexity of Web Interfaces." Diss., North Dakota State University, 2017. http://hdl.handle.net/10365/25918.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The complexity of web pages has been widely investigated. Many experimental studies have used several metrics to measure certain aspects of the users, tasks or GUIs. In this research, we focus on the visual structure of web pages and how different users view them in terms of complexity. Several important measures and design elements have rarely been addressed together to study the complex nature of the visual structure. Therefore, we proposed a metric model to clarify this issue by conducting several experiments on groups of participants using several websites from different genres. The goal is to form a metric model that can assist developers in measuring more precisely the complexity of web interfaces under development. From the first experiment, we could draw the guidelines for the major entities in the metric model, and the focus was on the two most important aspects of web interfaces: the structural factors and elements. Four factors and three elements were most representative of the concept of complexity. The four factors are size, density, grouping and alignment, and the three elements are text, graphics and links. Based on them we developed a structural metric model that relates these factors and elements together, and the results of the metric model are compared to the web interface users' ratings using statistical analysis to predict the overall complexity of web interfaces. The results of that study are very promising, showing that our metric model is capable of predicting the complex nature of web interfaces with high confidence.
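A metric model of this shape can be expressed as a weighted combination of scores for the structural factors (size, density, grouping, alignment) computed over the element counts (text, graphics, links). The snippet below is a generic weighted-sum stand-in for such a model; the weights, normalisations and example page are invented and do not come from the dissertation.

# Hypothetical weights for the four structural factors (sum to 1 for a 0-1 score)
WEIGHTS = {"size": 0.30, "density": 0.30, "grouping": 0.20, "alignment": 0.20}

def structural_complexity(page):
    """Weighted sum of normalised factor scores; higher means a more complex interface."""
    elements = page["text_blocks"] + page["graphics"] + page["links"]
    factors = {
        "size": min(elements / 200.0, 1.0),                   # how much content is on the page
        "density": min(elements / page["area_in_screens"] / 100.0, 1.0),
        "grouping": min(page["visual_groups"] / 40.0, 1.0),
        "alignment": 1.0 - page["share_aligned_to_grid"],     # misalignment adds complexity
    }
    return sum(WEIGHTS[k] * factors[k] for k in WEIGHTS), factors

example_page = {"text_blocks": 60, "graphics": 25, "links": 55,
                "area_in_screens": 3.0, "visual_groups": 18,
                "share_aligned_to_grid": 0.7}
score, breakdown = structural_complexity(example_page)
print(round(score, 2), breakdown)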
48

Wallace, Jack C. "The control and transformation metric: a basis for measuring model complexity." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/53089.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The purpose of this report is to develop a complexity metric suitable for discrete event simulation model representations. Current software metrics, based upon graphical analysis or static program characteristics, do not capture the influence on complexity stemming from the inherent dynamics of a model. A study of extant software metrics provides a basis for identifying desirable properties for model application. Various approaches are examined, and a set of characteristics for a model complexity metric is defined. A metric evolves from the recognition of the two types of complexity: transformation and control, both of which appear prominently in model representations. Experimental data are presented to verify that the Control and Transformation (CAT) metric reflects the desired behavior. Experimental evaluation supports the claim that the CAT metric is an improvement over existing software metrics for measuring the complexity of model representations.
Master of Science
49

Cheng, Yuqing. "A Mathematical Model to Predict Fracture Complexity Development and Fracture Length." Thesis, University of Louisiana at Lafayette, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10246182.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:

Hydraulic fracturing is a commonly used practice in stimulation treatment, especially in low-permeability formations. Fracture complexity usually arises from the interaction between fractures and natural rock fabrics. Despite many studies of production simulation, diagnostic methods, and mathematical models of fracture complexity, research on the development of local complexity is still needed for optimized stimulation design. Aiming to predict local complexity development and stimulation performance, a hierarchy model is designed to make the problem more tractable, and a corresponding mathematical model is developed for numerical simulation. A case study is provided, and the comparison with the results of micro-seismic mapping indicates a noticeable discrepancy between field data and simulated results. Considering the many limitations of the model, the discrepancy is tolerable and acceptable. According to the sensitivity analysis, a high injection rate can serve to increase fracture complexity while reducing the maximum length of fractures. The sensitivity analyses regarding bottom-hole net pressure show a weak relationship between fracture complexity and bottom-hole net pressure, but a high injection pressure or low in-situ stress can enhance stimulation performance by increasing the maximum length of fractures. Sensitivity analyses for fluid properties indicate that using a high-viscosity fracturing fluid can add to the local complexity of fractures and reduce the maximum length of fractures, while fluid density has little effect on fracture complexity and stimulation performance.

50

Ninka, Eniel. "Complexity in economics: a multi-sectoral model with heterogeneous interacting agents." Doctoral thesis, Università Politecnica delle Marche, 2008. http://hdl.handle.net/11566/242433.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.

До бібліографії