
Dissertations / Theses on the topic '3315k'



Consult the top 50 dissertations / theses for your research on the topic '3315k.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

La Rocca, Antonino. "Thermal analysis of a high speed electrical machine." Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/33156/.

Abstract:
This work has analysed, designed, commissioned and validated the performance of a novel cooling system for an innovative high-speed, three-phase synchronous permanent magnet machine designed for an aero-engine starter/generator with a power rating of 45 kW and a maximum speed of 32,000 rpm. The cooling system consisted of inserting a 1 mm non-electrically-conductive stator sleeve in the machine airgap; this separates the rotor region from the stationary components, letting the rotor run dry at all times, while the stator region can then be flooded with oil. Oil enters from one side of the machine through radial openings, impinging directly on the end-winding; it then flows through two rows of equally sized axial ducts located along the inner and outer diameter of the stator to give an even distribution of the coolant, and finally flows over the surface of the rear end-winding and leaves the machine. The thermal modelling was carried out by the joint use of Computational Fluid Dynamics (CFD) and Lumped Parameter Thermal Networks (LPTN); this allowed the investigation of heat transfer phenomena and the optimisation of the cooling design. CFD was primarily employed to investigate the fluid flow and to perform conjugate heat transfer analyses; these allowed the determination of heat transfer coefficients and the prediction of the temperature distribution inside the machine. Thermal networks were developed to investigate the heat flow through machine components, to perform the design optimisation and to maximise overall machine performance. A thermal network was also developed by the author to investigate the heat transfer phenomena inside the bearing chambers. An experimental apparatus was designed and commissioned in order to experimentally validate the thermal models developed. Temperatures, pressures and torque up to 20,000 rpm were recorded throughout the tests, and the data collected were compared with quantities predicted analytically and numerically. Maximum winding temperatures measured during a short-circuit test agree well with analytical and numerical predictions, with a maximum difference of 10%; mechanical losses measured during a no-load test agree well at speeds over 10,000 rpm, with differences between 2 and 12%. Pressure drops across the machine were monitored throughout the tests, and agreement with predictions to within 13% was achieved. Design improvements are also proposed to further enhance the cooling of the stator slots and rotor components.
2

Donoho, Emily S. "Appeasing the saint in the loch and the physician in the asylum : the historical geography of insanity in the Scottish Highlands and Islands, from the early modern to Victorian eras." Thesis, University of Glasgow, 2012. http://theses.gla.ac.uk/3315/.

Abstract:
This thesis examines the historical geography of lunacy in the Scottish Highlands and Islands. Using a wide variety of sources, the objective is to construct an expansive picture of the manner in which those labelled as “mad” were treated and managed in this peripheral region of mainland Britain, from the Medieval Period to the late-Victorian period. The scope includes Medieval Celtic manuscripts, nineteenth-century folklore collections, Lunacy Commissioners’ reports, Sheriff Court records, asylum case notes and various other documents besides. These sources open windows on a variety of vocabularies, writings, stories and proclamations through which madness was socially constructed, and then substantively treated, in this remotest of regions. In effect, the thesis sets regional folklore, as a way of accessing the “traditional” worlds of Highland madness from the “bottom-up”, in counterpoint to the likes of Lunacy Commissioners’ reports, as an instance of the “modernising” of these worlds through medical-institutional means from the “top-down”. The interlocking binaries here are to an extent then scrambled by exploring different dimensions of this interaction between “bottom-up” and “top-down”, charting continuities as well as breaks in attitudes and practices, and thereby constructing a tangled picture of how the Highlands have come to tackle this most challenging of human conditions. The account that follows is thoroughly informed by the historical, social and spatial context of the Highlands, always recognising that madness and its responses must be seen as indelibly placed, contextually shaped and ‘read’ through the region. While the historiography of madness and psychiatry has already considered the Scottish Lowlands experience from various angles, the Highlands have remained all but untouched and their archives unopened. This thesis begins the task of addressing this serious lacuna.
3

Bolton, Jacqueline Louise. "Demarcating dramaturgy : mapping theory onto practice." Thesis, University of Leeds, 2011. http://etheses.whiterose.ac.uk/3315/.

Abstract:
'Dramaturgy' and the 'dramaturg' have entered the discourse of English theatre practitioners over the past two decades. For individuals working within subsidized building-based producing theatres, understandings and applications of dramaturgical practice have been significantly shaped by the structures and objectives of literary management - a role, established within the industry since the 1990s, dedicated to the development of new plays and playwrights. In Germany, the dramaturgical profession dates back to the latter half of the eighteenth century and, since the twentieth century, has held a remit inclined more towards the programming and production of theatre works than the developing and commissioning of new theatre writing. In Germany and across mainland Europe, dramaturgs hold a recognized position at the heart of producing structures; in England, the role and status of the dramaturg are less defined. Despite a decade or so of concerted explanation and exploration, the concept of dramaturgy continues to be met with indifference, principally associated with practices of literary management which, this thesis shall argue, risk eliding the critical and creative scope of dramaturgy as it is practised on the continent. Through an assessment of the cultural, philosophical and economic contexts which inform processes of theatre-making, this thesis seeks to articulate and analyse these contrasting practices of dramaturgy. Chapters One and Two focus upon contemporary definitions of dramaturgy in England, addressing the role of the dramaturg within new play development and analysing the impact that distinctions between 'script-led' and 'non-script-led' approaches to theatre have had upon the reception of dramaturgical practice. Chapters Three and Four then compare those aspects of German and English theatre practice which I believe critically determine the agency of a dramaturg within production processes. 
These aspects may be summarized, respectively, as the relationship between text and performance on a micro-level, and the relationship between theatre and society on a macro-level. This thesis regards dramaturgy as a creative practice defined in relation to a shared set of attitudes towards the production and reception of theatre, and argues that a specifically dramaturgical contribution to theatre-making rests in this analysis of the dynamic between performance and spectator.
4

Wintcher, Amanda. "Post-palaeolithic rock art of northeast Murcia, Spain : an analysis of landscape and motif distribution." Thesis, Durham University, 2011. http://etheses.dur.ac.uk/3315/.

Abstract:
Multiple studies demonstrate a connection between landscape and the distribution of rock art in Mediterranean Spain. Looking beyond styles as the primary analytical dimension, and instead focusing on similarities across style boundaries, can deepen our understanding of this connection. While previous studies of the relationship between post-Palaeolithic rock art and landscape have considered different classes of image, including humans, animals, and geometric shapes, they have maintained the primary split into the main styles defined in the Mediterranean region. This is problematic because each style has considerable variability, distinct distributions within the Iberian Peninsula, and a different history of development. Different styles frequently occur together, occasionally superimposed or showing multiple painting episodes. The styles were therefore at least partially contemporary, and did not correspond to distinct territories. Style may have been deliberately used to carry meaning, suggesting that the use of specific types of image was more closely related to landscape than the overall styles. A typology of motifs which transcends styles was created, and the frequency with which these motif types appear in specific landscape contexts, together with the combinations in which they appear on panels, was evaluated. The results suggest that there are indeed patterns beyond style, which may indicate different functions or meanings behind both image and place.
5

Collins, Jacqueline A. "From lavender menace to lesbian heroic : representations of lesbian identities in contemporary Spanish fiction and film." Thesis, University of Sunderland, 2011. http://sure.sunderland.ac.uk/3315/.

6

Shi, Xinxiang. "Diplomatic immunities ratione materiae under the Vienna Convention on Diplomatic Relations : towards a coherent interpretation." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33152.

Abstract:
Rules of diplomatic immunity, which nowadays are enshrined in the Vienna Convention on Diplomatic Relations, play an important role in interstate diplomacy because they ensure the efficient performance of diplomatic functions. This thesis investigates a particular form of diplomatic immunity - diplomatic immunity ratione materiae. Unlike diplomatic immunity ratione personae, which pertains to the personal status of a diplomatic agent, diplomatic immunity ratione materiae depends in essence on the official nature of a particular act. In practice, however, the determination of diplomatic immunity ratione materiae may meet with many conceptual and practical difficulties. For one, it is not always easy to distinguish the official acts of a diplomatic agent, who represents the sending State in the receiving State, from his or her private acts. In case of disagreement between the two States, questions may also arise as to who has the authority to make a final determination. The Vienna Convention does not offer much guidance on these issues; on the contrary, the Convention complicates them by employing, without adequate explanation, distinct formulas for different kinds of diplomatic immunity ratione materiae. This thesis examines these formulas in detail. On a general level, it is submitted that diplomatic immunity ratione materiae for certain types of activity constitutes not only a procedural bar to court proceedings but also a substantive exemption from individual responsibility. More specifically, it is argued that each formula must be understood in the light of the rationale behind immunity, the type of immunity concerned, and the specific functions or duties performed. In case of controversy, weight should be given to the opinion of the sending State, although the authority to make a decision lies ultimately with the court of the receiving State.
7

Barber, Jacob. "Disciplinarity, epistemic friction, and the 'Anthropocene'." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33153.

Abstract:
This thesis explores the scientific controversy over the 'Anthropocene', a putative new epoch of geological time conceived in 2000 by atmospheric chemist and earth system scientist Paul Crutzen. I trace the conception of the Anthropocene and explore its spread through a range of disciplines from the earth sciences to the humanities. Particular attention is paid to the Anthropocene Working Group (AWG) of the International Commission on Stratigraphy. This group was tasked with considering whether or not the Anthropocene should be subject to stratigraphic formalisation and be made 'real' insofar as the discipline of stratigraphy was concerned. The group's efforts, and the wide-ranging response to them, reveal the challenge of making sense of knowledge as it moves across different disciplines, settings, and contexts. While the AWG was tasked with producing a specifically stratigraphic response to the rising prominence of the Anthropocene, in performing their investigation the group took on board wide-ranging multidisciplinary expertise. As well as raising questions about the appropriate criteria for the group's investigation, the response to the group's efforts from a diverse range of disciplines illustrates the disunity of interdisciplinary work. The movement of the controversy from scholarly journals into an increasingly public sphere reveals further questions about the relationship between scientific authority and society as a whole. While different communities disagreed about the scientific value of the Anthropocene, many shared in their recognition of the role this scientific framing could play in fomenting a political response to anthropogenic global change. This thesis argues that scholarly debates about the Anthropocene illustrate questions about authority, epistemic privilege, and the relationship between disciplines that have ramifications beyond the controversy itself.
8

Józsa, Tamás István. "Drag reduction by passive in-plane wall motions in turbulent wall-bounded flows." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33155.

Abstract:
Losses associated with turbulent flows dissipate a significant amount of generated energy. Such losses originate from the drag force, which is often described as the sum of the pressure drag and the friction drag. This thesis sets out to explore the hypothesis that passive wall motions driven by fluid mechanical forces are able to reduce the friction drag in fully developed turbulent boundary layers. Firstly, the streamwise and spanwise opposition controls proposed by Choi et al. (1994, Journal of Fluid Mechanics) are revisited to identify beneficial wall motions. Near-wall streamwise or spanwise velocity fluctuations are measured along a detection plane parallel to the wall (sensing). For streamwise control, the wall velocities are set to be equivalent to the measured streamwise velocity fluctuations, whereas for spanwise control, the wall velocities are set to have the same magnitude but opposite direction as the measured spanwise velocity fluctuations (actuation). Direct numerical simulations of canonical turbulent channel flows are carried out at low (Reτ ≈ 180) and intermediate (Reτ ≈ 1000) Reynolds numbers to quantify the effect of the distance between the wall and the detection plane. The investigation reveals the primary differences between the mechanisms underlying the two active in-plane controls. The modified flow features and turbulence statistics show that the streamwise control amplifies the most energetic streamwise velocity fluctuations and damps the near-wall vorticity fluctuations. In comparison, the spanwise control induces near-wall vorticity in order to counteract the quasi-streamwise vortices of the near-wall cycle and suppress turbulence production. Although the working principles of the active controls are fundamentally different, both achieve drag reduction by mitigating momentum transfer between the velocity components. 
Secondly, two theoretical passive compliant wall models are proposed, the aim being to sustain beneficial wall motions identified by the active flow control simulations. In the proposed models, streamwise or spanwise in-plane wall motions are governed by an array of independent one-degree-of-freedom damped harmonic oscillators. Unidirectional wall motions are driven by local streamwise or spanwise wall shear stresses. A weak coupling scheme is implemented to investigate the interaction between the compliant surface models and the turbulent flow in the channel by means of direct numerical simulations. A linear analytical solution of the coupled differential equation system is derived for laminar pulsatile channel flows, allowing verification and validation of the numerical model. The obtained analytical solution is utilised to map the parameter space of the passive controls and estimate the effect of the wall motions. It is shown that, depending on the control parameters, the proposed compliant walls decrease or increase the vorticity fluctuations at the wall similarly to the active controls. This is confirmed by direct numerical simulations. On the one hand, when the control parameters are chosen appropriately, the passive streamwise control damps the near-wall vorticity fluctuations and sustains the same drag reduction mechanism as the active streamwise control. This leads to modest drag reductions of 3.7% and 2.3% at low and intermediate Reynolds numbers, respectively. On the other hand, the spanwise passive control is not capable of increasing the near-wall vorticity fluctuations as dictated by the active spanwise control. For this reason, passive spanwise wall motions can increase the friction drag by more than 50%. The results emphasise the necessity of anisotropy for a practical compliant wall design. The present work demonstrates for the first time that passive wall motions can decrease friction drag in fully turbulent wall-bounded flows. 
The thesis sheds light on the working principle of an active streamwise control, and proposes a passive streamwise control exploiting the same drag reduction mechanism. An analytical model is developed to give a ready prediction of the statistical behaviour of passive in-plane wall motions. Whereas streamwise passive wall motions are found to be beneficial when the control parameters are chosen appropriately, purely spanwise passive wall motions lead to a drag penalty.
9

Araya, Jose Manuel. "Emotion and predictive processing : emotions as perceptions?" Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33156.

Abstract:
In this thesis, I systematize, clarify, and expand the current theory of emotion based on the principles of predictive processing, the interoceptive inference view of emotion, so as to show the following: (1) as it stands, this view is problematic; (2) once expanded, the view in question can deal with its more pressing problems, and it compares favourably to competing accounts. Thus, the interoceptive inference view of emotion stands out as a plausible theory of emotion. According to the predictive processing (PP) framework, all the brain does, in all its functions, is minimize its precision-weighted prediction error (PE) (Clark, 2013, 2016; Hohwy, 2013). Roughly, PE consists of the difference between the sensory signals expected (and generated) from the top down and the actual, incoming sensory signals. Now, in the PP framework, visual percepts are formed by minimizing visual PE in a specific manner: via visual perceptual inference. That is, the brain forms visual percepts in a top-down fashion by predicting its incoming lower-level sensory signals from higher-level models of the likely (hidden) causes of those visual signals. Such models can be seen as putting forward content-specifying hypotheses about the object or event responsible for triggering incoming sensory activity. A contentful percept is formed once a certain hypothesis succeeds in matching, and thus suppressing, current lower-level sensory signals. In the interoceptive inference approach to interoception (Seth, 2013, 2015), the principles of PP have been extended to account for interoception, i.e., the perception of our homeostatic, physiological condition. Just as perception in the visual domain arises via visual perceptual inference, the interoceptive inference approach holds that perception of the inner, physiological milieu arises via interoceptive perceptual inference. 
Now, what might be called the interoceptive inference theory of valence (ITV) holds that the interoceptive inference approach can be used to account for subjective feeling states in general, i.e., mental states that feel good or bad (valenced mental states). According to ITV, affective valence arises by way of interoceptive perceptual inference. On the other hand, what might be called the interoceptive inference view of emotion (IIE) holds that the interoceptive inference approach can be used to account for emotions per se (e.g., fear, anger, joy). More precisely, IIE holds that, in direct analogy to the way in which visual percepts are formed, emotions arise from interoceptive predictions of the causes of current interoceptive afferents. In other words, emotions per se amount to interoceptive percepts formed via higher-level, content-specifying emotion hypotheses. In this thesis, I aim to systematize, clarify, and expand the interoceptive inference approach to interoception, in order to show that: (1) contrary to non-sensory theories of affective valence, valence is indeed constituted by interoceptive perceptions, and interoceptive percepts do arise via interoceptive perceptual inference; therefore, ITV holds. (2) Considering that IIE exhibits problematic assumptions, it should be amended. In this respect, I will argue that emotions do not arise via interoceptive perceptual inference (as IIE claims), since this assumes that there must be regularities pertaining to emotion in the physiological domain. I will suggest that emotions arise instead by minimizing interoceptive PE in another fashion: via external interoceptive active inference, that is, by sampling and modifying the external environment in order to change an already formed interoceptive percept (which has been formed via interoceptive perceptual inference). In this sense, emotions are specific strategies for regulating affective valence. 
More precisely, I will defend the view that a certain emotion E amounts to a specific strategy for minimizing interoceptive PE by way of a specific set of stored knowledge of the counterfactual relations that obtain between (possible) actions and its prospective interoceptive, sensory consequences ("if I act in this manner, interoceptive signals should evolve in such-and-such way"). An emotion arises when such knowledge is applied in order to regulate valence.
10

Ruppert, Jan Gustav. "Functional analysis of heterochromatin protein 1-driven localisation and activity of the chromosomal passenger complex." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33158.

Abstract:
The ultimate goal of mitosis is the equal distribution of chromosomes between the two daughter cells. One of the key players that ensures faithful chromosome segregation is the chromosomal passenger complex (CPC). CPC localisation to mitotic centromeres is complex, involving interactions with Shugoshin and binding to phosphorylated histone H3T3. It was recently reported that Heterochromatin Protein 1 (HP1) has a positive impact on CPC function during mitosis. The interaction between HP1 and the CPC appears to be perturbed in cancer-derived cell lines, resulting in decreased HP1 levels at mitotic centromeres, and may be a potential cause of increased chromosome mis-segregation rates. In this study, I tethered HP1α to centromeres via the DNA-binding domain of CENP-B. However, instead of improving the rate of chromosome mis-segregation, HP1α tethering resulted in activation of the spindle assembly checkpoint and destabilisation of kinetochore-microtubule attachments, most likely caused by the robust recruitment of the CPC. Tethered HP1α even traps the CPC at centromeres during mitotic exit, resulting in a catalytically active CPC throughout interphase. However, it was not clear whether endogenous HP1 contributes to CPC localisation and function prior to mitosis. Here I also describe a substantial interaction between endogenous HP1 and the CPC during the G2 stage of the cell cycle. The two isoforms HP1α and HP1γ contribute to the clustering of the CPC into active foci in G2 cells, a process that is independent of CDK1 kinase activity. Furthermore, H3S10ph focus formation in the G2 phase appears to be independent of H3T3ph and H2AT120ph, the two histone marks that determine CPC localisation in early mitosis. Together, my results indicate that HP1 contributes to CPC concentration and activation at pericentromeric heterochromatin in G2. This novel mode of CPC localisation occurs before the Aurora B-driven methyl/phos switch releases HP1 from chromatin, which possibly enables the H3T3ph- and H2AT120ph-driven localisation of the CPC during mitosis.
11

Xia, Changyou. "Geological risk and reservoir quality in hydrocarbon exploration." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33159.

Abstract:
In the next 20 years, the global demand for oil is forecast to grow by 0.7% per year, and the demand for natural gas by 1.6% annually. As we continue to produce oil and gas, the resources of current oilfields are depleting, so to meet rising global energy demand it is essential that we keep discovering more petroleum resources. The primary aim of this PhD project is to deepen our understanding of hydrocarbon reservoirs and enhance our ability to explore. The first study examined the geological risks in hydrocarbon exploration, reviewing and statistically analysing data from 382 unsuccessful boreholes in the UK offshore area. The results suggest that the most significant risk for an exploration well is encountering a thin or absent target reservoir, which affected 27 ± 4% of past unsuccessful wells. The next most common risks are low-porosity reservoirs (22 ± 4% of all cases) and the lack of a closed trap (23 ± 4%); the probability of a target reservoir having a leaky caprock is 5 ± 2%. The study calculated the probability of occurrence of all the geological risks in exploration, and these risk data can be applied to predict potential geological risks in future exploration. One challenge in developing saline aquifers as CO2 storage reservoirs is the lack of subsurface data unless a well has been drilled. Drawing on the experience of hydrocarbon exploration, a potential CO2 storage site identified on seismic profiles will be subject to many uncertainties, such as thin or low-porosity reservoirs and leaky seals, which are analogous to the geological risks of an undrilled hydrocarbon prospect. Since the workflow for locating CO2 storage reservoirs is similar to that for hydrocarbon exploration, the risk data from hydrocarbon exploration wells can be applied to infer the geological risks of exploration wells for CO2 storage reservoirs. 
Based on this assumption, the study in Chapter 3 estimated that the probability of a borehole encountering a reservoir suitable for CO2 storage is c. 41-57% (90% confidence interval). For reservoirs with stratigraphic traps within the UKCS, the probability of success is slightly lower, at 39 ± 10% (90% confidence). Chapter 4 studies the porosity and diagenetic processes of the Middle Jurassic Pentland Formation in the North Sea. The analysis data come from 21 wells that drilled and cored the Pentland Formation. Petrographic data suggest the content of detrital illite is the most important factor affecting the porosity of the Pentland Sandstone: the porosities of sandstones with more than 15% illite (determined by point-count) are invariably low (< 10%). Quartz cement grows at an average rate of 2.3%/km below a depth of 2 km, and it is the main porosity-occluding phase in the deep Pentland Sandstone. Petrographic data show that the clean, fine-grained sandstones contain the highest amount of quartz cement. Only 1-2% of the K-feldspar appears to have dissolved in the deep Pentland Sandstone (> 2 km), and petrographic data suggest that K-feldspar dissolution does not have any substantial influence on sandstone porosity. There is no geochemical evidence for mass transfer between the sandstones and shales of the Pentland Formation. Chapter 5 investigates the high porosity of the Pentland Sandstone in the Kessog Field, Central North Sea. The upper part of the Kessog reservoir displays an anomalously high porosity (c. 25%, helium porosity) that is 10% higher than the porosity of other Pentland sandstones at the same depth (c. 15%, 4.1-4.4 km). Petrographic data show these high porosities are predominantly primary porosity. The effects of sedimentary facies, grain coats, secondary porosity and overpressure on the formation of the high porosity are considered to be negligible in this case. Early hydrocarbon emplacement is the only explanation for the high porosity. 
In addition to less quartz cement, the high-porosity sandstones also contain more K-feldspar and less kaolin than the medium-porosity sandstones of the same field. This indicates that early hydrocarbon emplacement also inhibited the replacement of K-feldspar. The last chapter studies the potential mass transfer of silica, aluminium, potassium, iron, magnesium and calcium at sandstone-shale contacts. The study samples include 18 groups of sandstones and shales collected from five oilfields in the North Sea. The spacing between the samples of each group varies from centimetres to metres. The research aim is to find evidence of mass transfer by studying the samples' variation in mineralogy and chemistry as a function of the distance to the nearest sandstone-shale contact. The sandstones are mostly turbidite sandstones, and the shales are Kimmeridge Clay shales. Petrographic, mineralogical and chemical data do not provide firm evidence for mass transfer within any group of samples. The result indicates that the scale of mobility of silica, aluminium, potassium, iron, magnesium and calcium in the subsurface may be below the scale of detection of the study method, i.e. < 5 cm.
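The "27 ± 4%" style figures quoted in this abstract are consistent with a standard normal-approximation confidence interval for a binomial proportion. A minimal sketch of that calculation follows; the well count of 103 is a hypothetical figure back-calculated from the quoted 27% of 382 wells, not a number taken from the thesis itself:

```python
import math

def binomial_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for a proportion.

    Returns the point estimate and the half-width of the interval;
    z = 1.96 corresponds to a 95% confidence level.
    """
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, half_width

# Hypothetical count: ~103 of 382 unsuccessful wells with a thin/absent reservoir
p, hw = binomial_ci(103, 382)
print(f"{p:.0%} ± {hw:.0%}")  # → 27% ± 4%
```

With p ≈ 0.27 and n = 382 the half-width works out to about 0.045, matching the ± 4% quoted for the most common risk.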
12

Sim, Alisia Mara. "Detection of calcification in atherosclerotic plaques using optical imaging." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33151.

Abstract:
PET imaging, using the bone tracer Na18F, allows the non-invasive location of atherosclerotic plaques that are at risk of rupture. However, the spatial resolution of PET is only 4-5 mm, limiting the mechanistic information this technique can provide. In this thesis, the use of fluorescence and Raman imaging to elucidate the mechanism of micro-calcification within atherosclerotic plaques has been investigated. A number of fluorescent probes to detect fluoride and calcium have been synthesised. One of the fluoride probes has been shown to be selective for fluoride; however, the concentration of fluoride required to activate the probe is orders of magnitude higher than the amount of Na18F used for PET imaging, making it problematic to use in future studies. On the other hand, a calcium probe has been shown to: selectively bind to hydroxyapatite (HAP); permit visualisation and quantification of HAP in both vascular and bone cell models; and effectively stain cultured aortic sections and whole mouse aorta for OPT imaging. Building on these preliminary data, fluorescence imaging and immunohistochemistry (IHC) imaging of both healthy and atherosclerotic tissue previously subjected to PET imaging were successfully carried out, showing the ability of the probe to detect HAP in human vascular tissue. IHC staining for osteoprotegerin (OPG) and osteopontin (OPN), two bone proteins recently detected in vascular tissue, showed the co-localization of OPG with the probe. Conversely, OPN was shown to localize in areas surrounding high OPG and probe signal. To determine the exact composition of vascular calcification, Raman spectroscopy was also used. It is believed that the biosynthetic pathway to HAP passes through a series of transitional states; each of these has different structural characteristics which can be studied using Raman spectroscopy. In particular, HAP has a strong characteristic Raman peak at 960 cm-1. 
An increase in HAP concentration has been detected by Raman in both calcified cell models and aortic sections. When human vascular tissue was analysed, an additional peak at 973 cm-1 was present suggesting the presence of whitlockite (WTK) in this tissue as well as HAP.
APA, Harvard, Vancouver, ISO, and other styles
13

Thompson, Bethan. "Date labelling and the waste of dairy products by consumers." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33150.

Full text
Abstract:
The objective of this thesis is to advance our understanding of how consumers use date labels and the implications of date-label use for household dairy product waste. It does this by investigating the effect of psychological, social, and contextual factors on date-label use and willingness to consume dairy products in relation to the expiry date. These effects are tested using structural equation models and survey data gathered from 548 Scottish consumers. The results of this study make two contributions to the literature on date-labelling and food waste. The first contribution is primarily theoretical. By improving our understanding of how consumers use date labels and the implications of date-label use for household dairy product waste, it supports the contention that food waste is best understood, not as a behaviour, but as the outcome of multiple behaviours. It argues that in order to understand why food waste is created, it is important to identify the factors that affect the individual behaviours that lead to it, such as date-label use, and how these behaviours relate to one another. These results also have implications for communications and campaigning around food waste reduction. The second contribution has policy relevance. It provides evidence of the likely limited effect of increasing the number of dairy products labelled with a best-before date rather than a use-by date on food waste. This is an approach recently proposed to reduce household food waste. It finds that better knowledge of the best-before date is associated with a higher willingness to consume products after the best-before date has passed. However, perceived risks about consuming products beyond their best-before date, including not just safety but quality, freshness, and social acceptability, appear to interact with date-label knowledge and dampen its influence. 
It argues that to be effective, any changes in date-labelling should be accompanied by communication that goes beyond improving date-label knowledge, and addresses the multifaceted nature of related risk perceptions and conceptions of date-label trust.
APA, Harvard, Vancouver, ISO, and other styles
14

Esu, Alberto. "Divided power and deliberation : decision-making procedures in the Greek City-States (434-150 B.C.)." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33154.

Full text
Abstract:
This thesis examines the institutional design and the procedures regulating decree-making in the poleis of the Classical and Hellenistic periods. The main contention of this thesis is that Greek decree-making is to be conceived as the result of a multi-layered system of interaction and delegation of deliberative authority among different institutions: councils, officials, assemblies and lawcourts. My thesis argues, therefore, that decree-making procedures were specifically designed to implement the concept of 'divided power', a value shared by both democracies and non-democratic regimes, and to shape the collective behaviour of the citizens when acting as decision-makers within the institutions. By adopting models from the political sciences, my thesis bridges the gap between institutional approaches to political decision-making and more recent approaches that have stressed the role of values and ideology as key factors in understanding ancient Greek politics. Chapter 1 lays out the methodology of the thesis, informed by the New Historical Institutionalism. Chapter 2 analyses the practice of delegation of power from the Athenian Assembly to the Athenian Council in order to enact additional measures. The careful study of the delegation-clauses sheds light on the administrative power of the Council by demonstrating that the Council played a proper policy-making role through the enactment of a decree, which was the product of the Council's expertise in defined matters, such as religious affairs, foreign policy and the navy. Chapter 3 builds on the findings of the previous chapter and shows the workings and development of delegation-clauses to the Council in two examples from outside Athens, Mytilene and Megalopolis, over the longue durée. Chapter 4 deals with the deliberative procedures of Hellenistic Sparta. The Spartan 'divided power' envisaged that the Gerousia shared probouleutic power with the ephors, who could independently submit a bill to the Assembly.
The Gerousia, however, held the power of nomophylakia and could veto the final decree. This chapter shows that divided power and the need for legal stability were addressed by Spartan institutions, but with different results because of the wider powers of officials in the decree-making. This chapter introduces the important issue of the balance between the people's deliberation and the stability of the legal order, which forms an important focus of chapters 5 and 6. Chapter 5 discusses the role played by the legal procedure of adeia in fifth-century deliberative decision-making in the Assembly. This chapter provides a new comprehensive account of this legal institution. Adeia instituted a pre-nomothetic procedure, according to which the Assembly could change an entrenched piece of legislation or decree without clashing with the nomothetic ideology. Chapter 6 examines the relationship between deliberation and judicial review in the Greek poleis. The first section discusses the Athenian graphe paranomon, the public charge against an illegal decree. A thorough analysis of the legal procedure and of the institutional design shows that deliberative decisions were made within the framework of the rule of law, and the graphe paranomon enforced this principle. This did not imply an institutional prominence of the lawcourts in Athenian decision-making. The lawcourts performed an important role in the deliberative process by providing a safeguard of legal consistency, adding the legal expertise of the judges to the general rationale of the decree-making. The second part of the chapter is dedicated to the discussion of evidence of judicial review from outside Athens and the multifaceted role of the Hellenistic practice of appointing foreign judges to adjudicate public lawsuits, especially in the judicial review of decrees.
APA, Harvard, Vancouver, ISO, and other styles
15

McGlanaghy, Edel. "Evidenced based psychological interventions : informing best practice and considering adverse effects : Part 1. Adverse effects of psychological therapy: creation of APTMOS outcome measure based on consensus; and, Part 2. A network meta-analysis of psychological interventions for schizophrenia and psychosis." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33157.

Full text
Abstract:
Clinical decision-making about psychological interventions is best supported by robust evidence and informed patient choice. Randomised controlled trials (RCTs) are the current gold standard for evaluating intervention effectiveness and identifying harm. At present, RCTs of psychological intervention are unlikely to include measurement of adverse effects, in part because of a lack of consensus about this topic. A Delphi study was conducted with a panel of both professionals and people with personal experience of face-to-face psychotherapy across the spectrum of mental health difficulties to seek consensus on what to include in a measure of adverse effects. Fifty-four items, derived from an initial list of 147 items generated by the panel, are included in the APTMOS outcome measure, which, in its preliminary form, now requires validation before use in RCTs. To date, the evidence for psychological interventions for psychosis and schizophrenia has not been synthesised, which is important to inform patient choice and decision-making. Network meta-analysis compares multiple interventions using direct evidence from RCTs and indirect evidence from the network. A systematic review of the literature identified 91 RCTs across 23 different intervention/control group categories. Psychological interventions were more effective at reducing total symptoms of psychosis than control groups. One intervention with a low risk of bias, mindfulness-based psychoeducation, was consistently identified as most effective, with large effect sizes. Subgroup analyses identified differential effectiveness in different settings and for different subgroups. Further high-quality RCT evidence of the highest-ranked interventions is required to inform updates to clinical guidelines on psychological interventions for psychosis.
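The indirect-evidence logic that network meta-analysis rests on can be illustrated in a few lines. This is a hedged sketch with invented effect sizes, not data from the review: if trials compare interventions A and B each against a common control, an indirect estimate of A versus B is the difference of the two direct effects, with standard errors combining in quadrature.

```python
import math

def indirect_effect(d_a_ctrl, se_a, d_b_ctrl, se_b):
    """Indirect effect of A vs B via a common comparator, with its standard error."""
    d_ab = d_a_ctrl - d_b_ctrl                 # difference of direct effects
    se_ab = math.sqrt(se_a**2 + se_b**2)       # variances add for independent trials
    return d_ab, se_ab

# Hypothetical standardised mean differences vs control (negative = symptom reduction)
d, se = indirect_effect(-0.8, 0.2, -0.3, 0.25)
# d = -0.5: A looks more effective than B, but with wider uncertainty than either trial
```

Full network meta-analysis pools such indirect estimates with any direct A-vs-B evidence, which is what lets every pair of the 23 categories be compared even when no head-to-head trial exists.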
APA, Harvard, Vancouver, ISO, and other styles
16

Ferreira, Miguel C. (Miguel Cacela Rosa Lopes Ferreira). "Compression and query execution within column oriented databases." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33150.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 65-66).
Compression is a known technique used by many database management systems ("DBMS") to increase performance [4, 5, 14]. However, not much research has been done on how compression can be used within column-oriented architectures. Storing data in columns increases the similarity between adjacent records, thus increasing the compressibility of the data. In addition, compression schemes not traditionally used in row-oriented DBMSs can be applied to column-oriented systems. This thesis presents a column-oriented query executor designed to operate directly on compressed data. We show that operating directly on compressed data can improve query performance. Additionally, the choice of compression scheme depends on the expected query workload, suggesting that for ad-hoc queries we may wish to store a column redundantly under different coding schemes. Furthermore, the executor is designed to be extensible so that the addition of new compression schemes does not impact operator implementation. The executor is part of a larger database system, known as CStore [10].
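The idea of operating directly on compressed data can be sketched with run-length encoding, one of the schemes that favours columnar layouts. This is an illustrative toy, not the thesis's executor: an aggregate such as SUM can be computed from (value, run length) pairs without ever decompressing the column.

```python
from itertools import groupby

def rle_encode(column):
    """Run-length encode a column: consecutive equal values become (value, count)."""
    return [(v, len(list(g))) for v, g in groupby(column)]

def sum_compressed(runs):
    """SUM over the column computed per run, i.e. without decompression."""
    return sum(v * n for v, n in runs)

col = [3, 3, 3, 7, 7, 9]        # sorted columns collapse into few runs
runs = rle_encode(col)          # [(3, 3), (7, 2), (9, 1)]
assert sum_compressed(runs) == sum(col)
```

The per-run aggregate touches three tuples instead of six values; on long sorted columns the saving is proportional to the average run length.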
by Miguel C. Ferreira.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
17

Gözüm, Özge Nadia 1979. "Decision tools for electricity transmission service and pricing : a dynamic programming approach." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/33158.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (leaves 104-107).
For a deregulated electricity industry, we consider a general electricity market structure with both long-term bilateral agreements and a short-term spot market, such that system users can hedge the volatility of the real-time market. From a Transmission Service Provider's point of view, optimal transmission resource allocation between these two markets poses a very interesting decision-making problem for defined performance criteria under uncertainty. In this thesis, the decision-making is posed as a stochastic dynamic programming problem, and the strength of this method is demonstrated through simulations. This resource allocation problem is first posed as a centrally coordinated dynamic programming problem, computed by one entity at a system-wide level. This problem is shown to be, under certain assumptions, solvable in a deterministic setup. However, implementation for a large transmission system requires the algorithm to handle stochastic inputs and stochastic cost functions. It is observed that the curse of dimensionality makes this centralized optimization infeasible. The thesis offers certain remedies for the computational issues, but motivates a partially distributed setup and related optimization functions for better decision-making in large networks where intelligent system users drive the use of network resources. Formulations are introduced to reflect mathematical and policy constraints that are crucial to distributed network operations in power systems.
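The deterministic core of such a resource-allocation problem can be sketched as a small dynamic program. The revenue tables below are invented for illustration; the thesis treats the stochastic, system-wide version, where the curse of dimensionality arises as the state space grows.

```python
def allocate(revenues, capacity):
    """DP over markets: revenues is a list of dicts mapping units committed
    to that market -> revenue; returns the best total revenue within capacity."""
    best = {0: 0.0}                              # capacity used -> best revenue so far
    for market in revenues:                      # one DP stage per market
        nxt = {}
        for used, val in best.items():
            for units, r in market.items():
                if used + units <= capacity:
                    k = used + units
                    nxt[k] = max(nxt.get(k, float("-inf")), val + r)
        best = nxt
    return max(best.values())

# Hypothetical diminishing-returns revenue curves for two markets
bilateral = {0: 0, 1: 5, 2: 9, 3: 12}
spot      = {0: 0, 1: 6, 2: 8, 3: 9}
assert allocate([bilateral, spot], 3) == 15      # 2 units bilateral + 1 unit spot
```

Each extra market multiplies the work only by the size of its decision set, but once prices and demands become stochastic the state must also carry their realisations, which is where the centralized formulation becomes infeasible.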
by Özge Nadia Gözüm.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
18

Duran, Randall E. (Randall Eugene). "Reengineering using a data abstraction based specification language." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/33155.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1992.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (leaves 86-88).
by Randall E. Duran.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
19

Ames, Nicoli M. (Nicoli Margret) 1978. "Design and use of a fixed-end low-load material testing machine." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/33156.

Full text
Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2000.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (leaf 25).
The purpose of this low-load material testing machine is to provide students an opportunity to perform basic material tests on their own instead of watching a lab technician, thus improving the students' lab experience. The machine proposed is small, low-cost, and easy to manufacture, assemble, and operate. Its design is based on a compound flexure mechanism that provides rectilinear motion for uniaxial tension and compression tests. It is actuated by a voice coil and displacement is measured using strain gauges. This thesis outlines some of the basic theory involved in the design and use of this low-load machine. Then it details calibration routines and tension testing procedures. Next, it analyzes results from tension tests. Then it discusses a possible source of error found in the tension tests: a lack of rigidity in the apparatus. Finally, it provides a reasonable solution to the rigidity issue and suggests further testing of the new apparatus before it is available for student use.
by Nicoli M. Ames.
S.B.
APA, Harvard, Vancouver, ISO, and other styles
20

Parikh, Salil (Salil Arvind) 1971. "On the use of erasure codes in unreliable data networks." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/33159.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 62-64).
Modern data networks are approaching the state where a large number of independent and heterogeneous paths are available between a source node and a destination node. In this work, we explore the case where each path has an independent level of reliability characterized by a probability of path failure. Instead of simply repeating the message across all the paths, we use the path diversity to achieve reliable transmission of messages by using a coding technique known as an erasure correcting code. We develop a model of the network and present an analysis of the system that invokes the Central Limit Theorem to approximate the total number of bits received from all the paths. We then optimize the number of bits to send over each path in order to maximize the probability of receiving a sufficient number of bits at the destination to reconstruct the message using the erasure correcting code. Three cases are investigated: when the paths are very reliable, when the paths are very unreliable, and when the paths have a probability of failure within an interval surrounding 0.5. We present an overview of the mechanics of an erasure coding process applicable to packet-based transactions. Finally, as avenues for further research, we discuss several applications of erasure coding in networks that have only a single path between source and destination: for latency reduction in interactive web sessions; as a transport layer for critical messaging; and as an application-layer protocol for high-bandwidth networks.
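The erasure-correction principle can be illustrated minimally with a single XOR parity block, so that any one of k+1 paths may fail and the message still be rebuilt. This is a toy sketch of the idea only; the thesis considers general erasure codes and optimises how many bits to send per path.

```python
def xor_bytes(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blocks):
    """Append one parity block: XOR of all data blocks."""
    parity = blocks[0]
    for blk in blocks[1:]:
        parity = xor_bytes(parity, blk)
    return blocks + [parity]

def recover(received):
    """XOR of all surviving blocks equals the single missing block,
    because the XOR of all k+1 coded blocks is zero."""
    out = received[0]
    for blk in received[1:]:
        out = xor_bytes(out, blk)
    return out

data = [b"abcd", b"efgh", b"ijkl"]          # k = 3 blocks sent over 4 paths
coded = encode(data)
survivors = [coded[0], coded[2], coded[3]]  # the second path failed
assert recover(survivors) == data[1]        # the lost block is reconstructed
```

Stronger codes (e.g. Reed-Solomon) generalise this so that any k of n blocks suffice, which is what makes it worthwhile to tune the bits assigned to each path against its failure probability.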
by Salil Parikh.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
21

Fried, Limor. "Social defense mechanisms : tools for reclaiming our personal space." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33151.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (leaf 67).
In contemporary Western society, electronic devices are becoming so prevalent that many people find themselves surrounded by technologies they find frustrating or annoying. The electronics industry has little incentive to address this complaint; I designed two counter-technologies to help people defend their personal space from unwanted electronic intrusion. Both devices were designed and prototyped with reference to the culture-jamming "Design Noir" philosophy. The first is a pair of glasses that darken whenever a television is in view. The second is a low-power RF jammer capable of preventing cell phones or similarly intrusive wireless devices from operating within a user's personal space. By building functional prototypes that reflect equal consideration of technical and social issues, I identify three attributes of Noir products: personal empowerment, participation in a critical discourse, and subversion.
by Limor Fried.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
22

Duncan, Joseph 1981. "A global maximum power point tracking DC-DC converter." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33152.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 79-80).
This thesis describes the design and validation of a maximum power point tracking DC-DC converter capable of following the true global maximum power point in the presence of other local maxima. It does this without the use of costly components such as analog-to-digital converters and microprocessors. It substantially increases the efficiency of solar power conversion by allowing solar cells to operate at their ideal operating point regardless of changes in load and illumination. The converter switches between a dithering algorithm, which tracks the local maximum, and a global search algorithm for ensuring that the converter is operating at the true global maximum.
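The two-mode idea can be sketched in software on a synthetic power-voltage curve with two peaks. This is an illustration of the concept only (the converter implements it in analog hardware, without a microprocessor): a local dither climbs the nearest peak and can get stuck, while a coarse global sweep followed by dithering finds the true maximum.

```python
def power(v):
    """Synthetic P-V curve: a local peak near v = 2 and the global one near v = 7."""
    return max(0.0, 4 - (v - 2) ** 2) + max(0.0, 9 - (v - 7) ** 2)

def dither(v, step=0.05, iters=500):
    """Perturb-and-observe: move in whichever direction increases power."""
    for _ in range(iters):
        if power(v + step) > power(v):
            v += step
        elif power(v - step) > power(v):
            v -= step
    return v

def global_search(lo=0.0, hi=10.0, coarse=0.5):
    """Coarse sweep of the whole range, then dither from the best coarse point."""
    points = [lo + i * coarse for i in range(int((hi - lo) / coarse) + 1)]
    return dither(max(points, key=power))

local_v = dither(1.0)          # started near the smaller peak: converges to v ≈ 2
global_v = global_search()     # escapes the local maximum: converges to v ≈ 7
assert power(global_v) > power(local_v)
```

The trade-off mirrors the converter's: the dither is cheap and fast but local, while the sweep is slower and only needs to run occasionally to guard against illumination changes that move the global peak.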
by Joseph Duncan.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
23

Richards, John A. (John Alfred). "Target model generation from multiple synthetic aperture radar images." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/33157.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 215-223).
by John A. Richards.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
24

Emery, Kevin E. (Kevin Eric). "Eventing architecture : RFID and sensors in supply chain." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33153.

Full text
Abstract:
Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Unpaged.
Includes bibliographical references (leaf [61]).
We propose data structures to describe and query streaming RFID and sensor data. Furthermore, we propose an architecture built atop these data structures to build arbitrary real-time applications. To understand the nature of these applications, we decompose such systems into four layers: Physical, Data, Filtering, and Application. We describe each layer in terms of our presented data structures, and we discuss architecture optimizations in terms of Bandwidth, Computational Capacity, and Subsystem Transparency. We provide an implementation of Track and Trace and Cold-Chain model applications to demonstrate our architecture.
by Kevin E. Emery.
M.Eng. and S.B.
APA, Harvard, Vancouver, ISO, and other styles
25

Weli, Sardar Mohammed. "Regulation of L-type Ca2+ channels by nitric oxide signalling in guinea pig ventricular myocytes." Thesis, University of Leicester, 2015. http://hdl.handle.net/2381/33158.

Full text
Abstract:
Nitric oxide (NO) is constitutively generated by cardiac myocytes and has important roles in cardiac function, including modifying L-type Ca2+ currents (ICa,L). The precise nature of this modification remains elusive, with NO reported to increase, reduce or have biphasic effects on ICa,L. Here I explored the effects of NO signalling on ICa,L in both active-period and resting-period guinea pig ventricular myocytes, using the perforated whole-cell switched voltage-clamp technique to maintain intracellular signalling pathways. Both cGMP-dependent and S-nitrosylation pathways were investigated. Isoprenaline (100 nM) significantly increased peak ICa,L by about two-fold. Subsequent application of NO, using the NO donor SNAP, significantly decreased this enhanced ICa,L but had little effect on basal ICa,L. In contrast to these results obtained from active-period myocytes, NO did not inhibit isoprenaline-enhanced ICa,L in resting-period myocytes. In active-period myocytes, NO inhibition of isoprenaline-enhanced ICa,L was maintained in the presence of ODQ (1H-[1,2,4]Oxadiazolo[4,3-a]quinoxalin-1-one), a soluble guanylyl cyclase (sGC) inhibitor. Direct activation of sGC or peripheral guanylyl cyclase independently of NO, by BAY 60-2770 or ANP respectively, however, gave results similar to those observed with NO, although in some cells BAY 60-2770 did not reduce isoprenaline-enhanced ICa,L. Thus direct activation of sGC mimics the effect of NO, yet inhibiting sGC did not abolish NO reduction of isoprenaline-enhanced ICa,L. These results suggest that NO modulates ICa,L through more than one mechanism. To investigate the S-nitrosylation pathway, denitrosylation was inhibited using N6022, a blocker of S-nitrosoglutathione reductase, an enzyme involved in denitrosylation. This treatment either completely abolished or significantly slowed the rate of development of isoprenaline enhancement of ICa,L in cells previously exposed to NO.
In conclusion, NO inhibition of isoprenaline enhanced ICa,L involves at least two signalling pathways; a cGMP-dependent and the S-nitrosylation pathway in active period myocytes.
APA, Harvard, Vancouver, ISO, and other styles
26

Ashwin, Andrew Kenneth. "Exploring the problematic nature of GCSE examining in Economics and Business : assessing troublesome knowledge, threshold concepts and learning." Thesis, University of Leicester, 2015. http://hdl.handle.net/2381/33153.

Full text
Abstract:
This thesis focuses upon assessment of learning at General Certificate of Secondary Education (GCSE) level. Approaches to learning, lecturers' conceptions of teaching, students' conceptions of learning, threshold concepts and troublesome knowledge have all been the focus of research in higher education, but there has been limited work on the relevance of these fields to learning prior to higher education. This thesis surveys the research in higher education and applies some of the concepts to assessment, teaching and learning at lower levels of the education hierarchy. It looks at the extent to which students at GCSE level might be expected to begin the journey of thinking in the subject in the fields of economics and business. Teachers are key influencers of assessment outcomes at GCSE level, but their approach to teaching and their conception of learning may be influenced by the assessment framework in which they are operating. Analysis of student responses to examination questions, the extent to which teachers at this level can agree on evidence of learning and on what an assessment is designed to achieve, and teachers' conceptions of learning are studied at GCSE level. The results of this research suggest that a reconceptualisation of the assessment objectives, which frame the specifications at this level and provide a focus for curriculum development, could influence the way students are taught and the way in which teaching and learning programmes are put together. Such a change could help to reduce the asymmetry between students and teachers and encourage teaching and learning which helps students to 'think in the subject' and champion deep approaches to learning.
APA, Harvard, Vancouver, ISO, and other styles
27

Singh, Harvinder Pal. "Wrist partial arthrodesis or other motion preserving surgery for degenerative wrist disease : prospective comparative assessment of grip strength, range of motion, function and disability." Thesis, University of Leicester, 2015. http://hdl.handle.net/2381/33156.

Full text
Abstract:
Traumatic osteoarthritis of the wrist is a disabling disease that affects middle-aged active adults in the prime of their working life. I set out to assess wrist function and disability in patients with traumatic wrist osteoarthritis before and after surgery. I measured wrist range of motion with a flexible electrogoniometer, grip strength with force-time curves using a dynamometer, hand function with the timed Sollerman hand function test, and patient-reported outcome. I first developed these techniques in normal volunteers and then extended them to patients with wrist osteoarthritis before surgery and after four-corner fusion, three-corner fusion, total wrist fusion, and proximal row carpectomy. I used flexible electrogoniometry to generate circumduction curves measuring the range, rate and rhythm of circumduction of the wrist. It showed that there was no difference in range of motion parameters between patients with wrist osteoarthritis before surgery and after four-corner fusion and three-corner fusion. Proximal row carpectomy provides better flexion-extension but poorer radio-ulnar deviation than four-corner fusion. Three-corner fusion allows better rate and rhythm of movements in flexion and ulnar deviation compared to four-corner fusion. Grip strength was measured with a dynamometer to generate force-time curves measuring sustainability of grip. There was no difference between our groups with wrist osteoarthritis before surgery and after wrist fusion, four-corner fusion or three-corner fusion. I developed the timed Sollerman hand function test by measuring the time taken to complete each of the tasks without summarisation into a 5-point scale. It showed that volunteers completed the tasks quicker with the dominant hand than with the non-dominant hand. Women in the 30-40 years age group took less time to complete the tasks than women in the 20-30 years age group and beyond 40 years.
The patients with proximal row carpectomy (PRC) completed the different activities of daily living quicker than the four-corner fusion (4CF) patients, except for activities requiring wrist torque strength.
APA, Harvard, Vancouver, ISO, and other styles
28

Schulz, Gustavo Justo. "Avaliaçao espectroscópica de pacientes com encefalopatia antes e após transplante hepático." reponame:Repositório Institucional da UFPR, 2013. http://hdl.handle.net/1884/33159.

Full text
Abstract:
Abstract: INTRODUCTION - Hepatic encephalopathy is a reversible neuropsychiatric dysfunction that frequently occurs in patients with severe liver disease; its early diagnosis is essential to preserve brain function. Liver transplantation appears to be the only treatment capable of reversing the metabolic alterations of encephalopathy, preventing future neurological dysfunction. OBJECTIVES - To determine the levels of the metabolites myo-inositol [MI], choline [Cho], glutamine [Glx], creatine [Cr] and N-acetylaspartate [NAA] by magnetic resonance spectroscopy in patients with chronic liver disease, before and after liver transplantation, correlating them with clinical assessment. PATIENTS AND METHODS - Twenty-five patients with chronic liver disease from the Liver Transplantation Service of the Hospital de Clínicas/Universidade Federal do Paraná and the Hospital Nossa Senhora das Graças, Curitiba - PR, were studied prospectively by clinical assessment (neurological examination and neuropsychometric tests [TNPS]) and spectroscopy, with the region of interest ("voxel") located in the interoccipital region (white and grey matter). Thirty healthy volunteers formed the control group and underwent the same assessments. Sixteen of the 25 patients were also evaluated after transplantation. RESULTS - Before liver transplantation, significant reductions (p < 0.05) in the MI/Cr and Cho/Cr ratios and a significant increase (p < 0.05) in the Glx/Cr ratio were observed in patients with hepatic encephalopathy compared with the control group. In the group of patients without encephalopathy, only the Glx/Cr ratio showed no statistical difference from controls.
Ross's quantitative criteria for the spectroscopic diagnosis of hepatic encephalopathy (MI/Cr and Cho/Cr < mean + 2 standard deviations of the control group) showed a sensitivity of 61.54%, specificity of 91.67%, positive predictive value of 88.89%, negative predictive value of 68.75% and accuracy of 76%, with Cho/Cr being the best single parameter, reaching an accuracy of 80%. Spectroscopy after transplantation showed changes in the metabolic ratios compared with the pre-transplant status. In patients who already had encephalopathy, the MI/Cr and Cho/Cr ratios showed an early increase (30 days), while the Glx/Cr ratio decreased later (90 days). In those without encephalopathy, only the MI/Cr ratio showed a significant improvement (p < 0.05). Reversal of hepatic encephalopathy was also best demonstrated by the improvement in the MI/Cr ratio. CONCLUSION - Spectroscopy allows an accurate diagnosis of clinical and subclinical hepatic encephalopathy. The improvement in metabolite levels after liver transplantation, accompanied by improvement in the neuropsychometric tests, suggests an important role for MI and Cho in the development of hepatic encephalopathy.
APA, Harvard, Vancouver, ISO, and other styles
29

Izzudin, Mohd Ali Mohd Yussuf. "Critical investigation of joint ventures by UK contractors with other European partners." Thesis, Loughborough University, 1994. https://dspace.lboro.ac.uk/2134/33156.

Full text
Abstract:
This study investigates the interactions between the partners in joint ventures between UK contractors and other European (EC) partners. The dynamics of interaction were examined at three levels of the joint venture: structure, organisation and team. The variables of interaction at these levels were tested for their relationships with the pattern of success. The general study of joint ventures has been concerned with macro inter-firm relationships; this study, however, seeks a pattern of success of the EC JVs at the internal micro level of the JV organisation, i.e. the partners' interactions. The pattern of success for the JVs studied was measured against ten goals. The expectations and the outcomes of achieving these goals were used to identify the pattern of success of the JVs. Eight cases were available as the sample, and the data were collected by structured interviews as well as by telephone. Only the UK contractors' perceptions were taken for this study. Spearman correlation and non-parametric statistics were used to test the various relationships between the variables and against the pattern of high and low JV success. The interactions of the partners at the structure and team levels were strongly correlated with the pattern of success. The organisation level had a strong correlation with the decision-making process, indicating that greater problems in decision-making are associated with high success. Trust did not have a statistically significant correlation with any of the variables of interaction, but all cases had a high level of trust. The study found strong relationships between the pattern of success of the JVs and the structuring of the interaction based on the sharing of expertise and resources, as well as the leadership personality and members' characteristics of the JV teams. Further study into deeper areas of these interaction dynamics is greatly recommended.
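The Spearman correlation used above can be computed directly from ranks with the standard untied-data formula ρ = 1 − 6Σd²/(n(n² − 1)). The scores below are invented purely to illustrate the calculation, not drawn from the study's eight cases:

```python
def rank(xs):
    """Rank positions (1 = smallest) for data without ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for pos, i in enumerate(order, start=1):
        ranks[i] = pos
    return ranks

def spearman(x, y):
    """Spearman rho for untied data: 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical interaction scores vs success scores for five joint ventures
x = [3, 1, 4, 2, 5]
y = [2, 1, 4, 3, 5]
rho = spearman(x, y)   # 0.9: a strong positive rank association in this toy data
```

Because it works on ranks rather than raw values, the statistic suits small, ordinal samples of the kind a case study yields; with ties, the tie-corrected form (or a library routine) should be used instead.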
APA, Harvard, Vancouver, ISO, and other styles
30

Hounsell, Marcelo da Silva. "Feature-based validation reasoning for intent-driven engineering design." Thesis, Loughborough University, 1998. https://dspace.lboro.ac.uk/2134/33152.

Full text
Abstract:
Feature based modelling represents the future of CAD systems. However, operations such as modelling and editing can corrupt the validity of a feature-based model representation. Feature interactions are a consequence of feature operations and the existence of a number of features in the same model. Feature interaction affects not only the solid representation of the part, but also the functional intentions embedded within features. A technique is thus required to assess the integrity of a feature-based model from various perspectives, including the functional intentional one, and this technique must take into account the problems brought about by feature interactions and operations. The understanding, reasoning and resolution of invalid feature-based models requires an understanding of the feature interaction phenomena, as well as the characterisation of these functional intentions. A system capable of such assessment is called a feature-based representation validation system. This research studies feature interaction phenomena and feature-based designer's intents as a medium to achieve a feature-based representation validation system.
APA, Harvard, Vancouver, ISO, and other styles
31

Iliadis, Georgios P. "The effectiveness of argumentative approaches to the design of software." Thesis, Loughborough University, 1999. https://dspace.lboro.ac.uk/2134/33159.

Full text
Abstract:
This thesis investigates the potential of design rationale to support software designers working on error-prone tasks. Throughout the thesis, I pursue both theoretical aspects of the design process and issues of support environments. Initially, the literature is reviewed for weak aspects of the design process, i.e. parts not supported by standard software tools. The focus then turns to 'breakdowns', which correspond to cognitive difficulties faced by designers. Interestingly, another research strand emphasizes recovery from such difficulties, seeing it as facilitating problem re-framing and the generation of ideas. It is suggested that, in order to facilitate that type of recovery transition, the decision-making process should be assisted through the sharing of expertise among design stakeholders.
APA, Harvard, Vancouver, ISO, and other styles
32

Burkett, Theodore Howard. "An investigation into the use of Word Lists in university foundation programs in the United Arab Emirates." Thesis, University of Exeter, 2017. http://hdl.handle.net/10871/33151.

Full text
Abstract:
There has been increasing interest in research on creating word lists in the past decade with more than 60 separate lists being published along with Nation’s (2016) timely Making and Using Word Lists for Language Learning and Testing. However, this focus on word lists has primarily been on creating them and has not necessarily extended to looking at how they are actually used. In order to help answer the question of how these lists are utilized in practice, this exploratory, interpretive study based on interviews with teachers and assessment/curriculum developers looks at how word lists are used at five tertiary English foundation programs in the United Arab Emirates. The main findings include the following. Insufficient vocabulary knowledge was deemed one of the most significant problems that students faced. Additionally, word lists played a role in all five of the institutions represented in the study, and the Common European Framework (CEFR) was used in conjunction with vocabulary frequency lists to help set expected vocabulary learning in some programs. Furthermore, teacher intuition was used to modify lists in three of the five programs and online applications were used in all five programs. The thesis explores a number of areas in depth including: how vocabulary lists are being used in the programs, the use of the AWL in this context and potential problems related to this, the role of teacher intuition in the customization of lists, the role of CEFR related frameworks in these programs, the use of computer applications to assist with list vocabulary acquisition, what the selected vocabulary acquisition activities tell us about beliefs about vocabulary teaching and learning, and some final comments about utilizing a list. One of the key findings was the development of a novel framework for categorizing the use of word lists into four general areas: course planning, teaching and learning, assessment and materials development with sub-categories for each. 
This framework and the related examples could be utilized to evaluate the suitability of specific lists and to help set developmental targets for the process of adopting a new list and transforming it into something that could be used to direct and support vocabulary teaching and learning. It could also be developed further as more examples of practice emerge in different contexts and hopefully set the stage for more development about how vocabulary lists are used.
APA, Harvard, Vancouver, ISO, and other styles
33

Hollinger, Keith H. "Trade Liberalization and the Environment: A Study of NAFTA's Impact in El Paso, Texas and Juarez, Mexico." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/33159.

Full text
Abstract:
This thesis seeks to promote a clearer understanding of relationships between trade liberalization and environmental quality in a free trade zone along an international border, between countries unevenly matched in development and infrastructure. Specifically, it examines whether theories of environmental degradation provide appropriate models for explaining the impact of NAFTA on the environment in the Paso del Norte. The relationship between trade liberalization and environmental quality is examined through an analysis of environmental indicators in the decade preceding and following NAFTA. Finally, the role of environmental governance is addressed, especially the intricacies involved in multi-jurisdictional governance of the environment. The research indicates that trade liberalization is not necessarily environmentally harmful. The data suggest that NAFTA had little to no direct negative impact on the region's environmental condition, but they also do not provide evidence that NAFTA improved the environment. One factor that could have helped to limit its effects may be local, interstate, and international initiatives that improved the health of the ecosystem along the border before NAFTA was even conceived. Another factor is the environmental governance in place before and after NAFTA. Thus, it may be beneficial for trade liberalization agreements to address environmental concerns as integral parts of the negotiations, and to set requirements for meeting infrastructure demands, as the agreements are implemented. Furthermore, it is important that international environmental institutions established to monitor environmental cooperation be more closely associated with the trade cooperation organizations and be given the authority needed to complete their directives more effectively.
Master of Arts
APA, Harvard, Vancouver, ISO, and other styles
34

Botha, Deona. "An assessment of the health status of late 19th and early 20th century Khoesan." Diss., University of Pretoria, 2013. http://hdl.handle.net/2263/33153.

Full text
Abstract:
Since the arrival of the Dutch colonists in the Cape, Khoesan populations were subjected to severe political and economic marginalization and often fell prey to racial conflict and genocide. These circumstances persisted until the early 20th century, during which an astonishing number of Khoesan skeletons were transported from South Africa to various locations in Europe, as at the time different institutions competed to obtain these valuable remains. Due to the above-mentioned circumstances, Southern African Khoesan groups suffered from nutritional stress as well as substandard living conditions. Such living conditions probably did not allow for health care and medical benefits at the time. It is therefore of interest to evaluate the health status of this group through palaeopathological assessment. Skeletal remains housed in two different European institutions were studied. The sample comprises 140 specimens from the Rudolf Pöch Skeletal Collection in Vienna, Austria and 15 specimens from the Musée de l’Homme in Paris, France. These individuals represent both sexes and were aged between newborn and 75 years, with 54 individuals being younger than 20 years of age and 101 being adults. The aim was to analyse all skeletal lesions. Results indicated high levels of typical disease conditions associated with groups under stress, such as periostitis, cribra orbitalia and porotic hyperostosis. Treponemal disease, rickets, osteoarthritis and trauma were also encountered amongst other more specific indicators of health and disease. This study provided additional knowledge on the health status and lives of the Khoesan people at the turn of the 20th century, and focused new awareness on a group of severely mistreated individuals.
Dissertation (MSc)--University of Pretoria, 2013.
Anatomy
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
35

Murovhi, Phathutshedzo. "Low temperature thermal properties of HTR nuclear fuel composite graphite." Diss., University of Pretoria, 2013. http://hdl.handle.net/2263/33156.

Full text
Abstract:
Graphite and graphite composite materials are of great importance in various applications, and have been widely used in nuclear applications, primarily as a moderator, where the aim is to slow fast neutrons down to thermal energies. The composite graphite (HTR-10) has potential applications as a moderator as well as in other fields, including aerospace. Structurally the composite shows the stable hexagonal form of graphite and no traces of the unstable rhombohedral pattern. Thermal conductivity follows the same trends observed and known for nuclear-grade graphite. The composite was made as a mixture of 64 wt% natural graphite and 16 wt% synthetic graphite, bound together by 20 wt% phenolic resin. The resinated graphite powder was uni-axially pressed at 19.5 MPa to form a disc-shaped specimen. The disc was then cut and annealed at 1800 °C. The composite was further cut in two directions (parallel and perpendicular) relative to the pressing direction. For characterization the samples were cut to 2.5 x 2.5 x 10 mm3. They were exposed to proton irradiation for 3 and 4.5 hrs respectively and characterized both structurally and thermally. The study observed that as the composite is exposed to proton irradiation there is a structural improvement: the D peak in the Raman spectrum decreased substantially in the irradiated samples. XRD indicated that there is no unstable rhombohedral phase pattern in either the pristine or the irradiated samples. This was further confirmed by the thermal conductivity, which also increases with irradiation exposure. This is anomalous for irradiated graphite, in which defects are supposedly induced. Looking at the electrical resistivity, we noted that the pristine samples have higher resistivity than the irradiated samples. The Seebeck coefficient indicates some form of structural perfection, and the samples show a phonon-drag dip at the known graphite temperature of 35 K. This shows that no impurities were induced by irradiation of the samples.
Dissertation (MSc)--University of Pretoria, 2013.
gm2014
Physics
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
36

Muchecheti, Fiona. "Utilization of multipurpose tree prunings as a source of nitrogen for the production of rape (Brassica napus L.) and spinach (Spinacea olearacea L.)." Diss., University of Pretoria, 2013. http://hdl.handle.net/2263/33152.

Full text
Abstract:
The production of rape and other leaf vegetables for local and export markets by smallholder farmers in sub-Saharan Africa has been constrained by soil fertility depletion associated with continuous cropping with inadequate addition of major nutrients like nitrogen (N), phosphorus (P) and potassium (K). Biomass transfer of multipurpose tree prunings (usually legumes) to croplands has been shown to significantly increase the availability of soil N. Nitrogen mineralization of the leguminous biomass provides a major pathway through which the fixed N becomes available for use by other plants. The extent to which a specific type of plant residue influences soil fertility, crop growth and N recovery is in part determined by its biochemical qualities, decomposition patterns and the concurrent timing of nutrient release and crop nutrient demand. Consequently, the main challenge with the use of biomass from leguminous trees is to ensure that the release of N from mineralization is synchronised with the crop's demand. The utilization of multipurpose tree prunings as a source of nitrogen for the production of rape (Brassica napus L.) and spinach (Spinacea olearacea L.) was studied in a series of experiments. Prunings of four leguminous tree species commonly found in agroforestry systems, namely Leucaena leucocephala, Calliandra calothyrsus, Acacia angustissima and Acacia karoo, were used. The objectives of the study were: i) to determine the effect of chemical composition of the various leguminous tree prunings on their decomposition and N release patterns and ii) to evaluate the short-term nutrient supply of the various leguminous tree prunings with or without supplemental inorganic nitrogen on the growth and yield responses of rape and spinach, respectively. Results indicated that rates of decomposition and N release decreased in the order: L. leucocephala > A. angustissima > C. calothyrsus > A. karoo.
The ratios of lignin-to-N (r = 0.85) and soluble condensed tannins-to-N (r = 0.89) were negatively correlated with N release. The rates of decomposition and nitrogen mineralization of the prunings used as soil ameliorants were best predicted by their (lignin+soluble condensed tannin)-to-N ratios (r = 0.91). Soil amelioration with the various leguminous prunings significantly increased yields (P < 0.05) relative to the yields of plots that did not receive any amelioration. Total biomass, leaf number, area and size as well as saleable leaf yields increased linearly for all treatments. The quality of the prunings used as soil ameliorants significantly affected (P < 0.05) the efficiency of N recovery. Prunings of L. leucocephala which were the most labile had higher nutrient recovery rates and increased yields compared to the other leguminous amendments. Soil amendment with prunings of A. karoo on the other hand, which were the most recalcitrant, resulted in relatively lower N recovery rates. Supplementation of pruning-N with inorganic fertilizer further increased yields over the 0N treatment, indicating improved N recovery by the leafy vegetables. Crop growth and rates of nitrogen recovery of the leafy vegetables were corroborated by the short term nutrient supply capabilities of the leguminous prunings. Leguminous tree prunings can be used as a source of N for vegetable production as evidenced by the higher yields realized from amending the soil with the various prunings relative to the unfertilized plants. However, the rate and amount of N mineralized from the prunings and hence the net benefit obtained by the crop determines their suitability for vegetable production.
Dissertation (MSc Agric)--University of Pretoria, 2013.
gm2014
Plant Production and Soil Science
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
37

Leaver, Elizabeth Bridget. "The Priceless treasure at the bottom of the well : rereading Anne Brontë." Thesis, University of Pretoria, 2013. http://hdl.handle.net/2263/33158.

Full text
Abstract:
Anne Brontë died in 1848, having written two novels, Agnes Grey (1847) and The Tenant of Wildfell Hall (1848). Although these novels, especially The Tenant of Wildfell Hall, initially received a favourable critical response, the unsympathetic remarks of Charlotte Brontë and Elizabeth Gaskell initiated a dismissive attitude towards Anne Brontë’s work. For over a hundred years, she was marginalized and silenced by a critical world that admired and respected the work of her two sisters, Charlotte and Emily, but that refused to acknowledge the substantial merits of her own fiction. However, in 1959 revisionist scholars such as Derek Stanford, Ada Harrison and Winifred Gérin offered important, more enlightened readings that helped to liberate Brontë scholarship from the old conservatisms and to steer it in new directions. Since then, her fiction has been the focus of a robust, but still incomplete, revisionist critical scholarship. My work too is revisionist in orientation, and seeks to position itself within this revisionist approach. It has a double focus that appraises both Brontë’s social commentary and her narratology. It thus integrates two principal areas of enquiry: firstly, an investigation into how Brontë interrogates the position of middle-class women in their society, and secondly, an examination of how that interrogation is conveyed by her creative deployment of narrative techniques, especially by her awareness of the rich potential of the first-person narrative voice. Chapter 1 looks at the critical response to Brontë’s fiction from 1847 to the present, and shows how the revisionist readings of 1959 were pivotal in re-invigorating the critical approach to her work. Chapter 2 contextualizes the key legal, social, and economic consequences of Victorian patriarchy that so angered and frustrated feminist thinkers and writers such as Brontë.
The chapter also demonstrates the extent to which a number of her core concerns relating to Victorian society and the status of women are reflected in her work. In Chapter 3 I discuss three important biographical influences on Brontë: her family, her painful experiences as a governess, and her reading history. Chapter 4 contains a detailed analysis of Agnes Grey, which includes an exploration of the narrative devices that help to reinforce its core concerns. Chapter 5 focuses on The Tenant of Wildfell Hall, showing how the novel offers a richer and more sophisticated analysis of feminist concerns than those that are explored in Agnes Grey. These are broadened to include an investigation of the lives of married women, particularly those trapped in abusive marriages. The chapter also stresses Brontë’s skilful deployment of an intricate and layered narrative technique. The conclusion points to the ways in which my study participates in and extends the current revisionist trend and suggests some aspects of Brontë’s work that would reward further critical attention.
Thesis (DLitt)--University of Pretoria, 2013.
gm2014
English
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
38

Mosoa, Moleboheng Wilhelmina. "Assessment of approaches to determine the water quality status of South African catchments." Diss., University of Pretoria, 2013. http://hdl.handle.net/2263/33159.

Full text
Abstract:
The paradigm shift in water quality management of South African water resources was based on current international trends. This significant move was from a previous emphasis on source management to a focus on finding a balance between water resource protection and water use. The current approach requires that water quality and quantity be maintained for the sustainable functioning of both the natural aquatic environment and socio-economic development. This approach has made the assessment of water quality status a key decision tool in water quality management. Various assessment tools have been used to quantify the quality of South African water resources. In this study we assessed the compatibility of some of the methodologies that have been used in the Department of Water Affairs to determine and report on the water quality status of the resource. The assessment also addressed the context and manner in which these methodologies can be used in water quality management. The compliance evaluation and fitness-for-use categorization methodologies are both used to describe the water quality threshold of potential concern when dealing with the resource. The compliance evaluation methodology uses a pass-or-fail assessment, while the fitness-for-use categorization methodology uses a scaled approach allowing for the assessment of gradual change in the system. The outputs of these two methodologies, the Resource Water Quality Objectives and the fitness-for-use categories/classes, have both been used in the department as benchmarks to describe the current water quality status. The assessment of the two methodologies indicated that there are similarities in the approaches and the principles behind the two processes. The results, however, indicated differences in the presentation of the outputs, the interpretation of the outcome, and the way in which the water quality management measures that need to be implemented are linked. Both methodologies are easy to apply when conducting water quality status assessments. However, the two methodologies are not sufficient on their own when making decisions on water quality management. It was concluded that although the compliance evaluation methodology can play a pivotal role when setting end-of-pipe standards, the process needs to consider the gradual changes of water quality in the river system in order to enable the instigation of different water quality management measures at appropriate levels. It was further recommended that, with some modification, the two approaches can be applied to assess water quality to support adequate water quality management decisions at various levels.
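The contrast drawn above between a pass/fail compliance test and a scaled fitness-for-use categorization can be illustrated with a minimal sketch. The category labels and threshold values below are hypothetical, not taken from Department of Water Affairs guidelines:

```python
# Hypothetical category names for the fitness-for-use scale.
FFU_LABELS = ("Ideal", "Acceptable", "Tolerable", "Unacceptable")

def compliance_pass(value, objective):
    # Compliance evaluation: a single pass/fail test against a
    # Resource Water Quality Objective (RWQO).
    return value <= objective

def fitness_category(value, boundaries):
    # Fitness-for-use categorization: `boundaries` holds ascending
    # thresholds separating the categories, so a gradual change in
    # concentration moves the sample between classes rather than
    # flipping a single pass/fail switch.
    for label, bound in zip(FFU_LABELS, boundaries):
        if value <= bound:
            return label
    return FFU_LABELS[-1]

# Invented thresholds (e.g. mg/L of some constituent):
print(compliance_pass(1.5, 1.0))              # fails a single objective
print(fitness_category(1.5, (0.5, 1.0, 2.0)))  # but is only "Tolerable"
```

The sketch shows why the two outputs can disagree in emphasis: the same sample that simply "fails" under compliance evaluation is placed on a graded scale under categorization, which is what allows management measures to be triggered at different levels of concern.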
Dissertation (MSc)--University of Pretoria, 2013.
gm2014
Animal and Wildlife Sciences
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
39

Montabert, Cyril. "Supporting Requirements Reuse in a User-centric Design Framework through Task Modeling and Critical Parameters." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/33152.

Full text
Abstract:
Many software systems fail as a direct consequence of errors in requirements analysis. Establishing formal metrics early in the design process, using attributes like critical parameters, enables designers to properly assess software success. While critical parameters alone do not have the potential to drive design, establishing requirements tied to critical parameters helps designers capture design objectives. For the design of interactive systems, the use of scenario-based approaches offers natural user centricity and facilitates knowledge reuse through the generation of claims. Unfortunately, the requirements-analysis phase of scenario-based design does not offer sufficient built-in and explicit techniques needed for capturing the critical-parameter requirements of a system. Because success depends heavily on user involvement and proper requirements, there is a crucial need for a requirements-analysis technique that bridges the gap between scenarios and critical parameters.

Better establishing requirements will benefit design. By adapting task-modeling techniques to support critical parameters within the requirements-analysis phase of scenario-based design, we are able to provide designers with a systematic technique for capturing requirements in a reusable form that enables and encourages knowledge transfer early in the development process. The research work presented concentrates on the domain of notification systems, as previous research efforts led to the identification of three critical parameters.

Contributions of this work include the establishment of a structured process for capturing critical-parameter requirements within a user-centric design framework and the introduction of knowledge reuse at the requirements phase. On one hand, adapting task models to capture requirements bridges the gap between scenarios and critical parameters, allowing design to benefit from user involvement and accurate requirements. On the other hand, using task models as a reusable component leverages requirements reuse, which benefits design by increasing quality while reducing development costs and time to market.
Master of Science

APA, Harvard, Vancouver, ISO, and other styles
40

Vincent, Amelia A. "Evaluation of Phosphorus Transport and Transformations in GLEAMS 3.0." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/33156.

Full text
Abstract:

The overall goal of this research was to improve simulation of soil phosphorus (P) transport and transformations in GLEAMS 3.0, a non-point source model that simulates edge-of-field and bottom-of-root-zone loadings of nutrients from climate-soil-management interactions to assess management alternatives. The objectives of this research were to identify the state of the science for P transport and transformations, determine appropriate relationships for inclusion in GLEAMS, and determine if modifications to GLEAMS improved predictions of P loss in runoff, sediment, and leachate.

The state of the science review revealed numerous equations available to predict dissolved P loss in runoff and leachate from a soil's nutrient status. These equations use a single variable to predict P loss and were developed for site-specific conditions based on empirical data. Use of these equations in GLEAMS is not reasonable as transport factors must also be considered when predicting P loss.

Results from the sensitivity analysis showed that GLEAMS predictions of leached P were extremely sensitive to changes in the P partitioning coefficient (CPKD). Runoff PO4-P output was slightly to moderately sensitive, sediment PO4-P was moderately sensitive to sensitive, and sediment organic P was moderately sensitive to changes in CPKD, whereas plant uptake of P was insensitive to slightly sensitive. The weakness of GLEAMS in estimating CPKD has been documented. Upon further investigation, it was determined that CPKD was highly over-estimated in GLEAMS as compared to measured values found during the literature review. Furthermore, this over-estimation caused under-estimation of the P extraction coefficient (BETA P); the value of BETA P remained constant at 0.10 and did not vary over the simulation period.

Expressions for CPKD and BETA P were modified in GLEAMS. Data from three published studies (Belle Mina, Gilbert Farm, and Watkinsville) were used in the analyses of three modifications to GLEAMS: GLEAMS BETA P, GLEAMS CPKD, and GLEAMS BETA P+CPKD. GLEAMS BETA P investigated the change in BETA P as a function of soil clay content, GLEAMS CPKD attempted to improve GLEAMS' estimation of CPKD, and GLEAMS BETA P+CPKD assessed the combined effects of changes to BETA P and CPKD.

Over the respective study periods, GLEAMS over-predicted runoff PO4-P for Belle Mina by 193 to 238% while under-predicting runoff PO4-P at Gilbert Farm by 41% and Watkinsville by 81%. Sediment P was over-predicted by GLEAMS for Belle Mina by 225 to 233% and Gilbert Farm by 560%, while sediment P was under-predicted by 62% at Watkinsville. Leached PO4-P was both over- and under-predicted by GLEAMS; Belle Mina was the only data set with observed leached P values.

Simulation results from the model changes were inconclusive. There was no clear evidence supporting use of one model over another. Modifications increased predicted dissolved P in runoff and leachate, while decreasing predicted sediment-bound P in runoff. The original GLEAMS model best predicted runoff and leached PO4-P at the Belle Mina sites. GLEAMS CPKD was the best predictor of runoff PO4-P and sediment P at Gilbert Farm. GLEAMS BETA P+CPKD best predicted runoff PO4-P at Watkinsville. Overall, the proposed improvements to GLEAMS did not improve GLEAMS predictions.

In conclusion, GLEAMS should not be used for quantitative estimates of hydrology, sediment, and nutrient loss for specific management practices. As recommended by the GLEAMS model developers, GLEAMS should only be used to predict relative differences in alternative management systems. It is recommended that future research focus on developing a better correlation between CPKD, clay mineralogy and content, and organic matter content, as CPKD has been identified as a vital component of the GLEAMS P sub-model that requires further examination.
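The over- and under-prediction figures reported in this abstract are percent errors of predicted against observed loads over a study period. A minimal sketch of that calculation, with invented numbers (this is not the GLEAMS code):

```python
def percent_error(predicted, observed):
    # Percent error of total predicted load relative to total observed
    # load over the study period: positive values indicate
    # over-prediction, negative values under-prediction.
    return 100.0 * (sum(predicted) - sum(observed)) / sum(observed)

# Invented event loads (e.g. kg/ha of runoff PO4-P):
pred = [0.8, 1.2, 0.5]
obs = [0.4, 0.5, 0.35]
print(f"{percent_error(pred, obs):.0f}% over-prediction")
```

A signed aggregate like this can mask compensating errors across events, which is one reason model developers recommend using GLEAMS only for relative comparisons between management alternatives rather than for quantitative load estimates.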


Master of Science
APA, Harvard, Vancouver, ISO, and other styles
41

Wolfe, Brian Paul. "Floodplains and the Proximate Principle: A Case for Floodplain Linear Parks in Roanoke, Virginia." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/33151.

Full text
Abstract:
The intention of this paper is to argue a position for the use of floodplain linear parks as a means of urban flood mitigation. Current approaches often focus on protecting existing and future structures via the use of costly engineered solutions such as dams and floodwalls. My argument is that the same money can be used to restore the floodplain by removing such structures and establishing a park system that will serve as a valuable public amenity, while allowing flooding to occur with minimal damage produced. In the long run, such a park will provide a greater return on the investment than other potential solutions. A discussion of the 'Proximate Principle' will describe how this works. From an environmental perspective, the importance of such a park will be discussed by placing it in the context of the green infrastructure concept, which is essentially an umbrella term for ongoing efforts to better integrate human and natural systems. Three case studies are presented that demonstrate examples of such park systems and the effects they had on local economies and communities. These studies begin demonstrating the social connotations for such a project as well. Throughout this paper, ties are made to the city of Roanoke, Virginia (where the project portion of this thesis takes place) to demonstrate the relevance of floodplain linear parks to the city. All arguments made are supported by a conceptual floodplain park plan for the city of Roanoke.
Master of Landscape Architecture
APA, Harvard, Vancouver, ISO, and other styles
42

Holody, Kyle J. "Framing Death: The Use of Frames in Newspaper Coverage of and Press Releases about Death with Dignity." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/33154.

Full text
Abstract:
Since passing its Death with Dignity Act into law in 1997, Oregon remains the only state in America to make physician-assisted suicide an explicit legal right. Currently, the legality of physician-assisted suicide falls under the jurisdiction of each individual state. Had the United States Supreme Court ruled differently in a recent case, however, the issue would have transferred to federal jurisdiction. The Death with Dignity National Center (DDNC) takes responsibility for developing the original Death with Dignity Act and has since moved on to proposing similar legislation in other states. It also champions states' rights, fearing that placing physician-assisted suicide under federal jurisdiction would severely hinder its goals. The DDNC has led the legal movement for making physician-assisted suicide an end-of-life choice available in each state, as well as for keeping that decision at the state level. Utilizing a content analysis, this study coded for frames used by the DDNC in its press releases and frames used in newspaper coverage of death with dignity across the same period of time. It was found that press releases about and newspaper coverage of the death with dignity social movement shared significant correlations in terms of the frames each used, as well as the level of substance given to these frames. Few significant correlations were found, however, for frame valence. It seems as though discussion of this social movement utilizes the same substantive or ambiguous frames, but cannot decide whether these frames are positive, neutral, or negative.
Master of Arts
43

Bhardwaj, Yogita. "Reverse Engineering End-user Developed Web Applications into a Model-based Framework." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/33150.

Full text
Abstract:
The main goal of this research is to facilitate end-user and expert developer collaboration in the creation of a web application. This research created a reverse engineering toolset and integrated it with Click (Component-based Lightweight Internet-application Construction Kit), an end-user web development tool. The toolset generates artifacts to facilitate collaboration between end-users and expert web developers when the end-users need to go beyond the limited capabilities of Click. By supporting a smooth transition of the workflow to expert web developers, we can help them implement advanced functionality in end-user developed web applications. The four artifacts generated include a sitemap, text documentation, a task model, and a canonical representation of the user interface. The sitemap is automatically generated to support the workflow of web developers. The text documentation of a web application is generated to document data representation and business logic. A task model, expressed using ConcurTaskTrees notation, covers the whole interaction specified by the end-user. A presentation and dialog model, represented in User Interface Markup Language (UIML), describe the user interface in a declarative language. The task model and UIML representation are created to support development of multi-platform user interfaces from an end-user web application. A formative evaluation of the usability of these models and representations with experienced web developers revealed that these representations were useful and easy to understand.
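The sitemap artifact amounts to recovering the page graph of the end-user application and presenting it as an outline. A sketch of one way to do this (the page names and link table are hypothetical; Click's internal project representation will differ):

```python
from collections import deque

# Hypothetical page-link table extracted from an end-user web project:
# page -> pages it links to.
links = {
    "home":    ["search", "browse"],
    "search":  ["results"],
    "browse":  ["detail"],
    "results": ["detail"],
    "detail":  [],
}

def sitemap(links, root):
    """Breadth-first traversal from the root page, recording the depth
    at which each page is first reachable: a flat sitemap outline."""
    depth = {root: 0}
    queue = deque([root])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depth:
                depth[target] = depth[page] + 1
                queue.append(target)
    return depth

outline = sitemap(links, "home")
```

Indenting each page by its depth yields the outline a developer would read.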
Master of Science
44

Gaertner, Nathaniel Allen. "Special Cases of Density Theorems in Algebraic Number Theory." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/33153.

Full text
Abstract:
This paper discusses the concepts in algebraic and analytic number theory used in the proofs of Dirichlet's and Cheboterev's density theorems. It presents special cases of results due to the latter theorem for which greatly simplified proofs exist.
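For orientation, the two theorems the abstract refers to have the following standard statements (general background, not specific to this thesis). Dirichlet's theorem gives the density of primes in an arithmetic progression, and Chebotarev's theorem generalizes it to Frobenius classes in a Galois group:

```latex
% Dirichlet: for gcd(a,n)=1, primes congruent to a mod n have density 1/phi(n)
\delta\bigl(\{\, p \ \text{prime} : p \equiv a \pmod{n} \,\}\bigr)
  \;=\; \frac{1}{\varphi(n)}, \qquad \gcd(a,n) = 1.

% Chebotarev: for a Galois extension L/K with group G and a conjugacy
% class C of G, the unramified primes of K with Frobenius class C have
% density |C|/|G|
\delta\bigl(\{\, \mathfrak{p} \ \text{unramified in } L :
  \mathrm{Frob}_{\mathfrak{p}} = C \,\}\bigr) \;=\; \frac{|C|}{|G|}.
```

Dirichlet's theorem is recovered as the special case $L = \mathbb{Q}(\zeta_n)$, $K = \mathbb{Q}$, where $G \cong (\mathbb{Z}/n\mathbb{Z})^{\times}$ is abelian and every conjugacy class is a single element.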
Master of Science
45

Bliss, Michael A. "Procedures to Perform Dam Rehabilitation Analysis in Aging Dams." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/33157.

Full text
Abstract:
There are hundreds of dams within the State of Virginia, and thousands more across the United States. A large portion of these dams do not meet the current safety standard of passing the Probable Maximum Flood. Likewise, many of the dams have reached or surpassed their original design lives and are in need of rehabilitation. A standard protocol will assist dam owners in completing a dam rehabilitation analysis. The protocol provides the methods to complete the hydrologic, hydraulic, and economic analyses. Additionally, alternative augmentation techniques are discussed, including the integration of GIS applications and linear programming optimization techniques. The standard protocol and alternative techniques are applied to a case study. The case study includes a set of flood control dams located in the headwaters of the South River watershed in Augusta County, VA. The downstream impacts of the flood control dams on the city of Waynesboro are demonstrated through the hydrologic and hydraulic analysis.
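The optimization step can be illustrated in miniature: given candidate rehabilitation measures, a budget, and an estimate of the flood damage each measure avoids, pick the best-value combination. The thesis applies linear programming; the toy version below simply enumerates subsets, and every name and figure is hypothetical:

```python
from itertools import combinations

# Hypothetical rehabilitation alternatives:
# (name, cost in $M, expected avoided flood damage in $M)
ALTERNATIVES = [
    ("raise_dam_A_crest",    4.0,  9.0),
    ("auxiliary_spillway_B", 6.0, 14.0),
    ("decommission_C",       2.5,  3.0),
    ("armor_spillway_D",     3.5,  7.5),
]

def best_plan(alternatives, budget):
    """Exhaustively pick the subset of measures that maximizes avoided
    damage without exceeding the budget (fine for small sets; a real
    study would pose this as a linear/integer program)."""
    best, best_value = (), 0.0
    for r in range(len(alternatives) + 1):
        for subset in combinations(alternatives, r):
            cost = sum(a[1] for a in subset)
            value = sum(a[2] for a in subset)
            if cost <= budget and value > best_value:
                best, best_value = subset, value
    return [a[0] for a in best], best_value

plan, value = best_plan(ALTERNATIVES, budget=10.0)
```

The LP formulation scales the same trade-off to dozens of dams where enumeration becomes infeasible.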
Master of Science
46

Mueller, Nathan. "Michael Walzer on the Moral Legitimacy of States and the Morality of Killing in War." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/33155.

Full text
Abstract:
This thesis is divided into two chapters. In the first chapter, I analyze Michael Walzer's account of the moral legitimacy of states. In the second chapter, I analyze his account of the morality of killing in war. I begin the first chapter by contrasting Walzer's account of state legitimacy and humanitarian intervention with that of David Luban. Next, I develop a Rawlsian account of state legitimacy and humanitarian intervention and argue that this account is more plausible than both Walzer's and Luban's accounts. The second chapter is divided into two parts. In the first part, I argue that Walzer's account of the distinction between combatants and noncombatants is misleading because it gives the impression that all and only infantry soldiers are combatants and that all and only civilians are noncombatants. In the second part of the second chapter, I describe an account of the morality of killing in war developed by Jeff McMahan that is based on an analogy with the morality of killing in domestic society and argue that this account is more plausible than Walzer's account of the morality of killing in war. I also suggest a way that McMahan's account could be improved.
Master of Arts
47

Saleh, Mohamed Ibrahim. "Using Ears for Human Identification." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/33158.

Full text
Abstract:
Biometrics includes the study of automatic methods for distinguishing human beings based on physical or behavioral traits. The problem of finding good biometric features and recognition methods has been researched extensively in recent years. Our research considers the use of ears as a biometric for human recognition. Researchers have not considered this biometric as much as others, such as fingerprints, irises, and faces. This thesis presents a novel approach to recognizing individuals based on their outer ear images through spatial segmentation, an approach that also handles occlusions well. The study presents several feature extraction techniques based on spatial segmentation of the ear image, along with a method for classifier fusion. Principal components analysis (PCA) is used in this research for feature extraction and dimensionality reduction. For classification, nearest neighbor classifiers are used. The research also investigates the use of ear images as a supplement to face images in a multimodal biometric system. Our base eigen-ear experiment results in an 84% rank one recognition rate, and the segmentation method yielded improvements up to 94%. Face recognition by itself, using the same approach, gave a 63% rank one recognition rate, but when complemented with ear images in a multimodal system this improved to a 94% rank one recognition rate.
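The eigen-ear pipeline the abstract describes, PCA for dimensionality reduction followed by nearest-neighbor matching, can be sketched on synthetic data; the "images", subjects, and resulting recognition rate below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for flattened ear images: 6 samples from 2 subjects,
# each a 50-dimensional "pixel" vector (synthetic data).
X = rng.normal(size=(6, 50))
X[:3] += 2.0                      # subject 0's samples share an offset
labels = np.array([0, 0, 0, 1, 1, 1])

# PCA: centre the data and project onto the top-k principal axes.
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 2
features = (X - mean) @ Vt[:k].T  # reduced "eigen-ear" coordinates

def nearest_neighbor(probe, gallery, gallery_labels):
    """Rank-1 identification: label of the closest gallery feature."""
    d = np.linalg.norm(gallery - probe, axis=1)
    return gallery_labels[np.argmin(d)]

# Leave-one-out rank-1 recognition rate on the toy set.
hits = sum(
    nearest_neighbor(features[i],
                     np.delete(features, i, axis=0),
                     np.delete(labels, i)) == labels[i]
    for i in range(len(labels))
)
rate = hits / len(labels)
```

The thesis's segmentation and fusion steps would run this matching per ear segment and combine the per-segment decisions.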
Master of Science
48

Ramírez, Triana Nélida Yaneth. "ESTUDIO DE LA RELACIÓN DISEÑO Y BIENESTAR HUMANO Una propuesta para favorecer a personas en condición de pobreza en Colombia." Doctoral thesis, Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/33151.

Full text
Abstract:
Since the 1980s there has been growing development of projects, products and strategies that link design with human well-being, and since 2000, with the Declaration of the Millennium Development Goals, there has been a significant increase in the number of organizations working through design with populations made vulnerable by poverty. It is in this emerging context that this research project was developed; at a general level it is delimited by the relationship between design and human well-being, and at a particular level by the study of poverty-centred design, for which a proposal for Colombia is established. The study of the evolution of the relationship between design and human well-being made it possible to identify the methodologies, the fundamental lines of action, and the principal authors, countries and schools at the forefront of the subject. For poverty-centred design, the framework of Weis (2010) was applied, which has three components: (a) 'design for capacity development', (b) 'design for social enterprise' and (c) 'design for development aid'. The capacity-development component is the one explored in depth in this research, reporting relevant findings on the interaction between designers and populations made vulnerable by poverty; the profile of the designers and the composition of the design team; the processes, products and projects; and the benefit to the community. In the proposal for Colombia, the findings converge through the categories of the historical model proposed by Bonsiepe (2009).
The research is exploratory and descriptive, using quantitative and qualitative methods with both primary and secondary data. Primary data were collected through a multiple case study analysing five cases: three in Spain (Free Design Bank, Diseño para el Desarrollo and Nanimarquina) and two in Colombia (Artesanías de Colombia and Jorge Montaña Cuellar). Primary data were also collected through a consultation with 28 experts in design applied to human well-being, from 13 countries, working as university lecturers and researchers or as directors of organizations. Among the important findings: first, design is a factor of innovation not only for the development of products but also for the development of strategies and models of action; second, participatory approaches lead to better results because they involve the population in the different stages of project development; a further important finding is the recognition that interactions between design teams and populations made vulnerable by poverty are shaped by the characteristics of the territory, which gives each population its specific nuances. It is also worth noting that ten publications were produced during the study in order to contrast and disseminate the research findings with the academic community. This work fulfils its stated objectives and contributes reflections for the development of design practices with populations made vulnerable by poverty. Keywords: Design, Human Well-being, Poverty, Social Design.
Ramírez Triana, NY. (2013). ESTUDIO DE LA RELACIÓN DISEÑO Y BIENESTAR HUMANO Una propuesta para favorecer a personas en condición de pobreza en Colombia [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/33151
TESIS
49

Bartolín, Ayala Hugo José. "Confección de modelos de redes de distribución de agua desde un Sig y desarrollo de herramientas de apoyo a la toma de decisiones." Doctoral thesis, Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/33152.

Full text
Abstract:
Advances in information technology in the past two decades have seen innovations in the field of domestic and industrial computing that led to a paradigm shift in the management and operation of urban water systems by water utility companies. The traditional public management policy, which focused on ensuring a minimum quality of service regardless of the costs associated with the processes of catchment, treatment and distribution of water (costs that in many cases were not even known), has evolved towards more efficient, cost-sensitive models. These new wholly or partly publicly funded management systems not only improve the quality of service offered to users, but also optimize resources by reducing costs and causing the minimum environmental impact. The new challenges raised by the European Water Framework Directive, by imposing cost recovery to improve water efficiency and environmental sustainability, have led to a significant change at all levels of water management. Consequently, new priorities have been established in terms of infrastructure management that require the reduction of water losses and the improvement of water efficiency in urban networks for human consumption. Likewise, in a broader context which includes the water-energy binomial, it is also desirable to improve the energy efficiency and carbon emissions of these systems. Today, network sectoring is the most commonly used strategy to improve management and increase network performance. It basically consists of dividing the network into several smaller hydraulic sectors, where water inlets and outlets are perfectly controlled. This simplifies the task of carrying out periodic water balances in each of the sectors, and allows water loss volume to be assessed for a given period of time. As configuring network sectors is not a trivial task, it is therefore important to have appropriate tools to perform the task efficiently and effectively.
Mathematical models can play an important role as decision support tools to help water managers assess the performance of water network distribution systems. This thesis aims to address the current problems of managing urban water networks by combining new information-processing technologies with innovative network modelling techniques. It intends to facilitate the system diagnosis and extend the use of models on the decision-making process to provide better solutions to the management of urban water networks. For this purpose a software extension that works on a geographic information system (GIS) has been developed. It integrates: the hydraulic and water quality simulation program EPANET 2, innovative tools for model analysis and diagnostic, automatic tools for sectoring and computing tools to conduct water balances in the sectors using actual measurements. The work demonstrates the compatibility and complementarity of GIS and hydraulic models as technologies that can be used to support the assessment and diagnosis of water distribution networks. Considering that the majority of information linked to the network system has some geographic reference, it is not surprising that GIS has become a popular tool for dealing with such information. At the same time, the integration of mathematical modelling and simulation tools, offers the GIS a new dimension in the realm of hydraulic study of water networks. Furthermore, if this specific integration is provided with new features aimed not only to facilitate the model building, but also to assist the user in decision-making using powerful algorithms based on the application of the graph theory, the result is a powerful up-to-date analytical tool, which opens up new possibilities in the field of management and efficient operation of urban water supply systems.
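Two of the ideas above, graph-based sectoring and periodic water balances, reduce to simple operations on the network graph. A minimal sketch with a hypothetical six-node network (the EPANET 2 and GIS integration the thesis builds is omitted):

```python
# Toy water-distribution graph: nodes are junctions, edges are pipes.
pipes = [("src", "A"), ("A", "B"), ("B", "C"), ("A", "D"),
         ("D", "E"), ("E", "C"), ("B", "E")]

# A hypothetical sectorisation (two hydraulic sectors) of the network.
sector = {"src": 0, "A": 0, "B": 0, "C": 1, "D": 1, "E": 1}

def boundary_pipes(pipes, sector):
    """Pipes crossing sector boundaries: each needs a flow meter
    (or a closed valve) for the sector water balance to close."""
    return [p for p in pipes if sector[p[0]] != sector[p[1]]]

def water_balance(inflow, outflow, billed):
    """Volume unaccounted for in a sector over the balance period
    (metered inflow minus metered outflow minus billed consumption)."""
    return inflow - outflow - billed

meters = boundary_pipes(pipes, sector)
losses = water_balance(inflow=1200.0, outflow=150.0, billed=980.0)
```

Choosing the partition so that few pipes cross boundaries is where graph-theory algorithms, as used in the thesis's sectoring tools, come in.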
Bartolín Ayala, HJ. (2013). Confección de modelos de redes de distribución de agua desde un Sig y desarrollo de herramientas de apoyo a la toma de decisiones [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/33152
TESIS
50

Karim, Karzan Khowaraham. "Investigating the effects of curcumin and resveratrol on pancreatic cancer stem cells." Thesis, University of Leicester, 2015. http://hdl.handle.net/2381/33155.

Full text
Abstract:
Anti-proliferative and cancer stem-cell targeting abilities of curcumin and resveratrol individually have been shown in different cancers. This project aimed to assess the activity of these compounds, alone and in combination in pancreatic cancer cell lines (PCCLs) and stellate cells. Anti-proliferation assays were performed for curcumin and resveratrol alone and in combination, combined with end point markers of activity including apoptosis and cell cycle arrest. Pancreatic cancer stem cell populations were defined using the cell surface markers CD44, CD24, ESA, CD133, ALDH-1 activity or sphere forming ability, and finally Nanog expression was assessed. The intracellular uptake of curcumin and its metabolites was analysed by HPLC. The PCCLs were more sensitive to curcumin than resveratrol, and combinations of these compounds showed anti-proliferative efficacy through apoptosis and cell cycle arrest at low, clinically achievable concentrations (CACs) in 2 out of 4 cell lines. Capan-1 cells exhibited the highest sensitivity to curcumin, which was able to enhance the effectiveness of resveratrol treatments in targeting cancer stem-like populations. Spheroid growth was significantly inhibited by curcumin and resveratrol combinations in Capan-1 cells, correlating with decreased ALDH1 activity and Nanog expression. In human pancreatic cancer tissue, various stem-like populations were identified based on expression of ALDH1 or CD24+/CD44+, which may provide a suitable target in vivo. Capan-1 cells metabolised curcumin to detectable amounts of curcumin glucuronide. However, curcumin metabolites did not show any significant activity at CACs. Curcumin alone may have activity against pancreatic cancer stem cells, and enhances efficacy at low concentrations when in combination with resveratrol. Capan-1 cells are able to internalise curcumin, and this cell line exhibited the greatest sensitivity to treatment. 
Overall, the results suggest that curcumin and resveratrol warrant further investigation as combination therapies for targeting cancer stem-like cells and stellate cells responsible for the dense stroma observed in pancreatic cancer.