To see the other types of publications on this topic, follow the link: Building physics.

Dissertations / Theses on the topic 'Building physics'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Building physics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Gábrová, Lenka. "Analýza technických požadavků na stavby se zaměřením na stavební fyziku." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-232686.

Abstract:
The thesis "Analysis of Technical Requirements for Buildings with a Focus on Building Physics" deals with the design of masonry and monolithic residential buildings in terms of the building physics requirements primarily listed in Decree No. 268/2009 Coll. and in Czech technical standards.
2

Tink, Victoria J. "The measured energy efficiency and thermal environment of a UK house retrofitted with internal wall insulation." Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/33727.

Abstract:
Approximately 30% of the UK's housing stock is comprised of older, solid wall buildings. Solid walls have no cavity and were built without insulation; therefore these buildings have high heat loss, can be uncomfortable for occupants throughout the winter and require an above-average amount of energy to heat. Solid wall buildings can be made more energy efficient by retrofitting internal wall insulation (IWI). However, there is little empirical evidence on how much energy can be saved by insulating solid wall buildings, and there are concerns that internal wall insulation could lead to overheating in the summer. This thesis reports measured results obtained from a unique facility comprised of a matched pair of unoccupied, solid wall, semi-detached houses. In the winter of 2015 one house of the pair was fitted with internal wall insulation, then both houses had their thermal performance measured to see how differently they behaved. Measuring the thermal performance consisted of measuring the wall U-values, the whole-house heat transfer coefficient and the whole-house airtightness of the original and insulated houses. Both houses were then monitored in the winter of 2015; monitoring consisted of measuring the houses' energy demand while using synthetic occupancy to create normal occupancy conditions. In the summer of 2015 indoor temperatures were monitored in the houses to assess overheating. The monitoring was done firstly to see how differently an insulated and an uninsulated house perform under normal operating conditions: with the blinds open through the day and the windows closed. Secondly, a mitigation strategy was applied to reduce high indoor operative temperatures in the houses, which involved closing the blinds in the day to reduce solar gains and opening the windows at night to purge warm air from the houses. The original solid walls were measured to have U-values of 1.72 W/m²K, while with internal wall insulation the walls had U-values of 0.21 W/m²K, a reduction of 88%. The house without IWI had a heat transfer coefficient of 238 W/K; this was reduced by 39% to 144 W/K by installing IWI. The monitored data from winter were extrapolated to yearly energy demand; the internally insulated house used 52% less gas than before the retrofit. The measured U-values, whole-house heat loss and energy demand were all compared to those produced from RdSAP models. The house was found to be more energy efficient than expected in its original state and to continue to use less energy than modelled once insulated. This has important implications for potential carbon savings and for calculating pay-back times for retrofit measures. In summer, operative temperatures in the living room and main bedroom were observed to be higher, by 2.2 °C and 1.5 °C respectively, in the internally insulated house in comparison to the uninsulated house. Both of these rooms overheated according to the CIBSE TM52 criteria; however, the tests were conducted during an exceptionally warm period of weather. With the simple mitigation strategy applied, the indoor operative temperature in the internally insulated house was reduced to a similar level as observed in the uninsulated house. This demonstrates that any increased overheating risk due to the installation of internal wall insulation can be mitigated through the use of simple, low-cost mitigation measures.
This research contributes field-measured evidence gathered under realistic controlled conditions to show that internal wall insulation can significantly reduce the energy demand of a solid wall house; this in turn can reduce greenhouse gas emissions and could help alleviate fuel poverty. Further, it has been demonstrated that in this archetype and location IWI would cause overheating only in unusually hot weather, and that indoor temperatures can be reduced to those found in an uninsulated house through the use of a simple and low-cost mitigation strategy. It is concluded that IWI can provide a comfortable indoor environment, and that overheating should not be considered a barrier to the uptake of IWI in the UK.
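A quick arithmetic check of the headline figures quoted in this abstract (the script is purely illustrative; the numbers are the abstract's own):

```python
# Verify the percentage reductions reported in the abstract.
u_before, u_after = 1.72, 0.21        # wall U-values, W/m²K
hlc_before, hlc_after = 238.0, 144.0  # heat transfer coefficients, W/K

u_reduction = 100 * (u_before - u_after) / u_before
hlc_reduction = 100 * (hlc_before - hlc_after) / hlc_before

print(f"U-value reduction:   {u_reduction:.0f}%")    # ≈ 88%, as reported
print(f"Heat loss reduction: {hlc_reduction:.0f}%")  # ≈ 39%, as reported
```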
3

Jack, Richard. "Building diagnostics : practical measurement of the fabric thermal performance of houses." Thesis, Loughborough University, 2015. https://dspace.lboro.ac.uk/2134/19274.

Abstract:
This thesis is concerned with measuring the fabric thermal performance of houses. This is important because the evidence shows that predictions of performance, based upon a summation of expected elemental performance, are prone to significant inaccuracy, and in-situ performance is invariably worse than expected: the so-called 'performance gap'. Accurate knowledge of the thermal performance of houses could cause a shift in the way that houses are built, retrofitted and managed. It would enable quality assurance of newly built and retrofitted houses, driving an improvement in the energy performance of the housing stock. The current barrier to achieving these benefits is that existing measurement methods are impractically invasive for use on a mass scale. The aim of this research is to address this issue by developing non-invasive fabric thermal performance measurement methods for houses. The co-heating test is currently the most used method for measuring whole-house fabric thermal performance; it is used to measure the Heat Loss Coefficient (HLC) of a house, which is a measure of the rate of heat loss with units of watts per kelvin. It has been used extensively in a research context, but its more widespread use has been limited. This is due to a lack of confidence in the accuracy of its results and the test's invasiveness (the house must be vacant for two weeks during testing, which has so far been limited to the winter months, and testing cannot be carried out in newly built houses for a period of approximately one year due to the drying-out period). To build confidence in the results of co-heating testing, the precision with which test results can be reported was determined by the combination of a sensitivity analysis to quantify measurement errors and an analysis of the reproducibility of the test. Reproducibility refers to the precision of a measurement when test results are obtained in different locations, with different operators and equipment. The analysis of the reproducibility of the test was based upon a direct comparison of seven co-heating tests carried out by different teams in a single building. This is the first such analysis and therefore provides a uniquely powerful analysis of the co-heating test. The reproducibility and sensitivity analyses showed that, provided best-practice data collection and analysis methods are followed, the HLC measured by a co-heating test can be reported with an uncertainty of ±10%. The sensitivity analysis identified solar heat gains as the largest source of measurement error in co-heating tests. In response, a new approach for co-heating data collection and analysis, called the facade solar gain estimation method, has been developed and successfully demonstrated. This method offers a clear advancement upon existing analysis methods, which were shown to be prone to inaccuracy due to inappropriate statistical assumptions. The facade method allowed co-heating tests to be carried out with accuracy during the summer months, which had not previously been considered feasible. The demonstration of the facade method included a direct comparison against other reported methods for estimating solar gains. The comparison was carried out for co-heating tests undertaken in three buildings, with testing taking place in different seasons (winter, summer, and spring or autumn) in each case.
This comparison provides a unique analysis of the ability of the different solar gain estimation methods to return accurate measurements of a house's HLC in a wide variety of weather conditions. Building on these results, a testing method was developed: the Loughborough In-Use Heat Balance (LIUHB). The LIUHB is a non-invasive measurement method, designed and tested in this study, which can measure the HLC of a house with an accuracy of ±15% while it is occupied and used as normal. Measurements of energy consumption and internal temperature are discreetly collected over a period of three weeks and combined with data collected at a local weather station to inform an energy balance, from which the HLC is calculated. This low-impact monitoring approach removes the barriers to fabric thermal performance testing on a mass scale. The LIUHB has been successfully demonstrated in several comparative trials against a baseline measurement provided by the co-heating test. The trials have included the application of extreme examples of synthetic occupancy conditions, testing in an occupied house, and quantification of the effects of a retrofit. Subject to further validation, the LIUHB has the potential to deliver many of the benefits associated with mass-scale measurement and quality assurance of housing performance.
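The co-heating analysis described here reduces, in its simplest form, to regressing heating power against the inside-outside temperature difference with a solar correction term, Q = HLC·ΔT − R·S. A minimal sketch of that regression on synthetic daily averages (variable names and values are illustrative, not taken from the thesis):

```python
import numpy as np

# Synthetic daily averages: heating power Q [W], inside-outside
# temperature difference dT [K], solar irradiance S [W/m²].
rng = np.random.default_rng(0)
n_days = 14
dT = rng.uniform(8, 18, n_days)
S = rng.uniform(20, 150, n_days)
true_hlc, true_aperture = 150.0, 2.0          # W/K and effective m²
Q = true_hlc * dT - true_aperture * S + rng.normal(0, 50, n_days)

# Energy balance Q = HLC*dT - R*S, solved by ordinary least squares.
X = np.column_stack([dT, -S])
(hlc, aperture), *_ = np.linalg.lstsq(X, Q, rcond=None)
print(f"HLC ≈ {hlc:.0f} W/K, solar aperture ≈ {aperture:.1f} m²")
```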
4

Jordaan, Bertus Scholtz. "Building a Cross-Cavity Node for Quantum Processing Networks." Thesis, State University of New York at Stony Brook, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13424934.

Abstract:

Worldwide there are significant efforts to build networks that can distribute photonic entanglement, first with applications in communication, and with a long-term vision of constructing fully connected quantum processing networks (QPNs). We have constructed a network of atom-light interfaces, providing a scalable QPN platform by creating connected room-temperature qubit memories using dark-state polaritons (DSPs). Furthermore, we combined ideas from two leading elements of quantum information, namely the collective enhancement effects of atomic ensembles and cavity QED, to create a unique network element that can add quantum processing abilities to this network. We built a dual-connection node consisting of two moderate-finesse Fabry-Perot cavities. The cavities are configured to form a cross-cavity layout and are coupled to a cold atomic ensemble. The physical regime of interest is the intermediate case between (i) low N with high cooperativity and (ii) free-space, high-N ensembles. Lastly, we have explored how to use light-matter interfaces to implement an analog simulator of relativistic quantum particles following the Dirac and Jackiw-Rebbi model Hamiltonians. Combining this development with the cross-cavity node provides a pathway towards quantum simulation of more complex phenomena involving many interacting relativistic quantum particles.
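For orientation, the cooperativity and collective enhancement invoked in this abstract are conventionally defined as follows (standard cavity-QED expressions, with conventions differing by factors of order unity between authors; not quoted from the thesis):

```latex
% g: single-atom-cavity coupling, \kappa: cavity decay rate,
% \gamma: atomic decay rate, N: number of atoms in the ensemble.
C = \frac{g^{2}}{\kappa\,\gamma}, \qquad
g_{N} = g\sqrt{N}, \qquad
C_{N} = N\,C .
```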

5

Passaro, Davide. "Model building on gCICYs." Thesis, Uppsala universitet, Institutionen för fysik och astronomi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-411813.

Abstract:
Prompted by the success of heterotic line bundle model building on Complete Intersection Calabi-Yau (CICY) manifolds and new developments regarding a generalization thereof, I analyze the possibility of model building on generalized CICY (gCICY) manifolds. Ultimately this is realized on two examples of gCICYs, one of which is topologically equivalent to a CICY and one inequivalent to any previously studied examples. The first chapter is dedicated to reporting background information on CICYs and gCICYs. The mathematical machinery of CICYs and their generalizations is introduced alongside explicit constructions of two examples. The second chapter introduces the reader to heterotic line bundle model building on CICYs and gCICYs. In the setting of gCICYs, as for regular CICYs, model building is accomplished in two steps: first the larger $E_{8}$ gauge group is broken to an $SU(5)$ grand unified theory through a line bundle model. Then the GUT is broken using Wilson line symmetry breaking, for which the presence of a freely acting discrete symmetry must be established. To that end, I proceed to show that the two previous examples admit a freely acting $\mathbb{Z}_{2}$ discrete symmetry. Utilizing this symmetry I construct 20 and 11 explicit models for the two gCICY examples respectively, by scanning over a finite range of line bundle charges.
One of the biggest problems in modern theoretical physics is finding a theory of quantum gravity. A consistent quantum theory of gravity would be an essential piece in the puzzle of physics, connecting the gravitational physics of planets and galaxies, described by general relativity, to the physics of particles, described by quantum field theory. Among the most promising theories is string theory, which proposes replacing particles with strings as the fundamental constituent of matter. Besides solving the problem of quantum gravity, theoretical physicists hope that string theory will simplify the description of particle physics. This would be achieved by replacing the entire particle zoo with a single object: the string. Different vibrations of the string would correspond to different particles, and interactions between strings would correspond to interactions between particles. To be consistent, however, string theory requires at least six more dimensions than those we can experience. One of the strategies currently studied for reconciling the extra dimensions with modern experiments is called "compactification". The strategy proposes that these extra dimensions be compact and so small that they are invisible to observation. Interestingly, the geometry of the six-dimensional compact space largely determines the physics that string theory produces: different spaces would produce different particles and different fundamental forces of nature. In this thesis I study two examples of such six-dimensional spaces drawn from a recently discovered set of spaces known as "generalized CICYs". Using techniques similar to those developed for other, related spaces, I show that certain aspects of a string theory compactified on generalized CICYs mirror those measured by modern particle physics experiments.
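The final step mentioned in the abstract, scanning over a finite range of line bundle charges, can be illustrated schematically. In heterotic line bundle models the bundle is a sum of five line bundles whose charges must sum to zero, class by class, for the structure group to be $SU(5)$; a toy enumeration of such candidates (the ranges, the number of divisor classes, and the omitted physical conditions are all illustrative assumptions, not the thesis's actual scan):

```python
from itertools import product

# Toy scan: 5 line bundles on a space with 2 divisor classes,
# integer charges in [-1, 1], imposing c1(V) = 0.
charge_range = range(-1, 2)
n_bundles, n_classes = 5, 2

candidates = [
    charges
    for charges in product(product(charge_range, repeat=n_classes),
                           repeat=n_bundles)
    if all(sum(q[k] for q in charges) == 0 for k in range(n_classes))
]
print(f"{len(candidates)} charge assignments satisfy c1(V) = 0")
# A real scan would further impose anomaly cancellation, stability,
# and the particle-spectrum (index) conditions before counting models.
```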
6

Janovick, Patrick. "PROGRESS TOWARD BUILDING A RATCHET IN COLD ATOM DISSIPATIVELATTICES." Miami University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=miami1533338035196042.

7

Campbell, Sara L. S. B. Massachusetts Institute of Technology. "Building an apparatus for ultracold lithium-potassium Fermi-Fermi mixtures." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61204.

Abstract:
In this thesis, I designed and built laser systems to cool, trap and image lithium-6 and potassium-40 atoms. I also constructed the vacuum system for the experiment and experimentally tested a new method to coat the chamber with a titanium-zirconium-vanadium alloy that acts as a pump. The final apparatus will use a 2D magneto-optical trap (MOT) as a source of cool potassium and a Zeeman slower as a source of cool lithium. The atoms will then be trapped and cooled together in a double-species 3D MOT. In the 3D MOT, we will perform photoassociation spectroscopy on the atoms to determine the Li-K molecular energies and collisional properties. Using this information, we can transfer weakly bound Feshbach LiK molecules into their ground state. LiK has an electric dipole moment and will open the door to the study of novel materials with very long-range interactions. This new material might form a crystal, a superfluid with an anisotropic order parameter, or a supersolid.
8

I'Anson, S. J. "Physical aspects of chemical injection damp-proof courses." Thesis, University of Manchester, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.373622.

9

Stirewalt, Heather R. "Computation as a Model Building Tool in a High School Physics Classroom." Thesis, California State University, Long Beach, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10785706.

Abstract:

The Next Generation Science Standards (NGSS) have established computational thinking as one of the science and engineering practices that should be developed in high school classrooms. Much of the work done by scientists is accomplished through the use of computation, but many students leave high school with little to no exposure to coding of any kind. This study outlines an attempt to integrate computational physics lessons into a high school algebra-based physics course which utilizes Modeling Instruction. Specifically, it aims to determine whether students who complete computational physics assignments demonstrate any difference in understanding of force concepts, as measured by the Force Concept Inventory (FCI), versus students who do not. Additionally, it investigates students' attitudes about learning computation alongside physics. Students were introduced to VPython programs during the course of a semester. The FCI was administered pre- and post-instruction, and the gains were measured against a control group. The Computational Modeling in Physics Attitudinal Student Survey (COMPASS) was administered post-instruction and the responses were analyzed. While the FCI gains were slightly larger on average than those of the control group, the difference was not statistically significant. This at least suggests that incorporating computational physics assignments does not adversely affect students' conceptual learning.
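Studies of this kind usually compare FCI results through Hake's normalized gain rather than raw score differences; a minimal sketch (the statistic actually reported in the thesis may differ, and the scores below are made up):

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain: g = (post - pre) / (100 - pre)."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Illustrative class means (percent correct), not data from the study.
print(normalized_gain(30.0, 55.0))  # treatment: ≈ 0.36
print(normalized_gain(32.0, 52.0))  # control:   ≈ 0.29
```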

10

McHattie, Samuel Alexander. "Seismic Response of the UC Physics Building in the Canterbury Earthquakes." Thesis, University of Canterbury. Civil and Natural Resource Engineering, 2013. http://hdl.handle.net/10092/8801.

Abstract:
The purpose of this thesis is to evaluate the seismic response of the UC Physics Building based on recorded ground motions during the Canterbury earthquakes, and to use the recorded response to evaluate the efficacy of various conventional structural analysis modelling assumptions. The recorded instrument data are examined and analysed to determine how the UC Physics Building performed during the earthquake-induced ground motions. Ten of the largest earthquake events from the 2010-11 Canterbury earthquake sequence are selected in order to understand the seismic response under various levels of demand. Peak response amplitude values are found which characterise the demand from each event. Spectral analysis techniques are utilised to find the natural periods of the structure in each orthogonal direction. Significant torsional and rocking responses are also identified from the recorded ground motions. In addition, the observed building response is used to scrutinise the adequacy of NZ design code prescriptions for fundamental period, response spectra, floor acceleration and effective member stiffness. The efficacy of conventional numerical modelling assumptions for representing the UC Physics Building is examined using the observed building response. The numerical models comprise the following: a one-dimensional multi-degree-of-freedom model, a two-dimensional model along each axis of the building, and a three-dimensional model. Both moderate and strong ground motion records are used to examine the response and subsequently clarify the importance of linear and non-linear responses and the inclusion of base flexibility. The effects of soil-structure interaction are found to be significant in the transverse direction but not in the longitudinal direction. Non-linear models predict minor inelastic behaviour in both directions during the 4 September 2010 Mw 7.1 Darfield earthquake. The observed torsional response is found to be accurately captured by the three-dimensional model by considering the interaction between the UC Physics Building and the adjacent structure. With adequate numerical modelling assumptions, the structural response can be predicted to within 10% for the majority of the earthquake events considered.
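The spectral analysis step described above, extracting natural periods from recorded motions, amounts to locating peaks in the Fourier amplitude spectrum of the acceleration records. A minimal sketch on a synthetic record (the sampling rate and the 2 Hz mode are invented for illustration):

```python
import numpy as np

fs = 50.0                       # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)    # 60 s record
# Synthetic roof acceleration: a 2 Hz fundamental mode plus noise.
rng = np.random.default_rng(1)
acc = np.sin(2 * np.pi * 2.0 * t) + 0.3 * rng.normal(size=t.size)

spec = np.abs(np.fft.rfft(acc))
freqs = np.fft.rfftfreq(acc.size, d=1 / fs)
f0 = freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin
print(f"fundamental frequency ≈ {f0:.2f} Hz, period ≈ {1 / f0:.2f} s")
```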
11

An, Zhifeng. "The building and testing of a gas Cherenkov counter for OHIPS." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/32673.

12

Callaghan, James. "Model building and phenomenological aspects of F-Theory GUTs." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/362141/.

Abstract:
In recent years, Grand Unified Theories (GUTs) constructed from F-theory have been extensively studied due to the substantial scope for model building and phenomenology which they provide. This thesis will motivate and introduce the basic tools required for model building in the setting of local F-theory. Starting with GUT groups of E6, SO(10) and SU(5), a group theoretic dictionary between the three types of theory is formulated, which provides considerable insight into how to build a realistic model. The spectral cover formalism is then applied to each case, enabling the possible low energy spectra after flux breaking of the GUT group to be found. Using these results an E6 based model is constructed that demonstrates, for the first time, that it is possible to construct a phenomenologically viable model which leads to the MSSM at low energies. In addition to the MSSM model, the E6 starting point is also used to build F-theory models in which the low energy supersymmetric theory contains the particle content of three 27 dimensional representations of the underlying E6 gauge group, with the possibility of a gauged U(1) group surviving down to the TeV scale. The models with TeV scale exotics initially appear to be inconsistent due to a splitting of the gauge couplings at the unification scale which is too large, and incompatible with the formalism. However, in E6 models with flux breaking, there are bulk exotics coming from the 78 dimensional adjoint representation which are always present in the spectrum, and it turns out that a set of these exotics provide a natural way to achieve gauge coupling unification at the one-loop level, even for models with TeV exotics. This motivates a detailed study of bulk exotics, where specific topological formulae determining the multiplicities of bulk states are investigated, and the constraints imposed by these relations applied to the spectra of the models previously studied. In particular, bulk exotics are relevant to the almost miraculous restoration of gauge coupling unification in the case of the models with TeV scale exotics. The consistent local F-theory models with low energy exotics have distinctive characteristics when compared with other, similar models, and so provide potential opportunities to be tested at the LHC.
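The group theoretic dictionary between the E6, SO(10) and SU(5) pictures rests on the standard branchings of the matter representations; for orientation (textbook decompositions in Slansky's conventions, not results of the thesis):

```latex
E_{6} \supset SO(10)\times U(1):\quad
\mathbf{27} \;=\; \mathbf{16}_{1} \,\oplus\, \mathbf{10}_{-2} \,\oplus\, \mathbf{1}_{4},
\qquad
SO(10) \supset SU(5)\times U(1):\quad
\mathbf{16} \;=\; \mathbf{10}_{-1} \,\oplus\, \bar{\mathbf{5}}_{3} \,\oplus\, \mathbf{1}_{-5}.
```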
13

Hughes, Deirdre Patricia. "Versatile building blocks in crystal engineering." Thesis, Queen's University Belfast, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263398.

14

Spinrath, Martin. "New Aspects of Flavour Model Building in Supersymmetric Grand Unification." Diss., lmu, 2010. http://nbn-resolving.de/urn:nbn:de:bvb:19-119190.

15

Jurke, Benjamin. "Nonperturbative Type IIB Model Building in the F-Theory Framework." Diss., lmu, 2011. http://nbn-resolving.de/urn:nbn:de:bvb:19-127722.

16

Bish, Samuel Gerard. "Building and Detecting an Optical Lattice." Miami University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=miami1186456085.

17

Blumreiter, Torsten. "Building of a Thermoacoustic Refrigerator and Measuring the Basic Performance." PDXScholar, 1994. https://pdxscholar.library.pdx.edu/open_access_etds/4714.

Abstract:
The application of thermoacoustic phenomena for cooling purposes has a comparatively short history. However, recent experiments have shown that thermoacoustic refrigeration can achieve practical significance both for everyday cooling in households and for cryocooling for scientific purposes, due to its high reliability, environmental safety and functioning under extreme conditions. We built a thermoacoustic refrigerator driven by a commercial loudspeaker. It was equipped with a vacuum pump and an entrance port for introducing different gases under different pressures as working fluids. It contained two thermocouples and a pressure transducer for quantitative measurements of the basic performance. The resonance frequency of the tube for different gases was determined and compared to the theoretical value. The temperatures of the hot and the cold heat exchanger were measured. Also, a simple thermoacoustic oscillator for demonstration purposes was built. After immersing one end in liquid nitrogen or heating the other end with a Bunsen burner, it started to oscillate and emit a sound.
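The theoretical value of the tube resonance mentioned in the abstract follows from the sound speed of the working gas; a minimal estimate, assuming an ideal gas and a half-wavelength resonator (the tube length, geometry and mode here are assumptions, not values from the thesis):

```python
import math

def speed_of_sound(gamma, molar_mass, T=293.15, R=8.314):
    """Ideal-gas sound speed c = sqrt(gamma * R * T / M), M in kg/mol."""
    return math.sqrt(gamma * R * T / molar_mass)

def half_wave_fundamental(length, c):
    """Fundamental of a tube closed at both ends: f = c / (2 L)."""
    return c / (2 * length)

L = 0.5  # illustrative tube length, m
for gas, gamma, M in [("air", 1.40, 0.0290), ("helium", 1.67, 0.0040)]:
    c = speed_of_sound(gamma, M)
    print(f"{gas}: c ≈ {c:.0f} m/s, f0 ≈ {half_wave_fundamental(L, c):.0f} Hz")
```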
18

Susman, Gideon. "The application of phase change materials to cool buildings." Thesis, Brunel University, 2012. http://bura.brunel.ac.uk/handle/2438/7639.

Abstract:
Five projects improve understanding of how to use PCM to reduce building cooling energy. Firstly, a post-installation energy audit of an active cooling system with a PCM tank revealed an energy cost of 10.6% of total cooling energy, as compared to an identical tankless system, because PCM undercooling prevented heat rejection at night. Secondly, development of a new taxonomy for PCM cooling systems allowed reclassification of all systems and identified under-exploited types. Novel concept designs were generated that employ movable PCM units and insulation. Thirdly, aspects of the generated designs were tested in a passive PCM sail design, installed in an occupied office. Radiant heat transfer, external heat discharge and a narrow phase transition zone all improved performance. Fourthly, passive PCM product tests were conducted in a 4.2 m³ thermal test cell in which two types of ceiling tile, with 50 and 70% microencapsulated PCM content, and paraffin/copolymer composite wallboards yielded peak temperature reductions of 3.8, 4.4 and 5.2 °C, respectively, and peak temperature reductions per unit PCM mass of 0.28, 0.34 and 0.14 °C/kg, respectively. Heat discharge of the RACUS tiles was more effective due to their non-integration into the building fabric. The conclusions of the preceding chapters informed the design of a new system composed of an array of finned aluminium tubes containing paraffin (melt temperature 19.79 °C, latent heat 159.75 kJ/kg) located below the ceiling. Passive cooling and heat discharge are prioritised, but a chilled water loop ensures temperature control on hotter days (water circulated at 13 °C) and heat discharge on hotter nights (water circulated at 10 °C). Test cell results showed similar passive performance to the ceiling tiles and wallboards, effective active temperature control (constant 24.6 °C air temperature) and successful passive and active heat discharge. A dynamic heat balance model with an IES-generated UK office's annual cooling load and PCM temperature-enthalpy functions predicted annual energy savings of 34%.
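The per-unit-mass figures above imply how much PCM each product contained; a quick back-calculation from the abstract's own numbers:

```python
# Peak temperature reduction [°C] and reduction per kg of PCM [°C/kg].
products = {
    "50% PCM ceiling tile": (3.8, 0.28),
    "70% PCM ceiling tile": (4.4, 0.34),
    "PCM/copolymer wallboard": (5.2, 0.14),
}
for name, (dT, dT_per_kg) in products.items():
    print(f"{name}: implied PCM mass ≈ {dT / dT_per_kg:.0f} kg")
# ≈ 14, 13 and 37 kg respectively: the wallboards contain far more
# PCM but achieve less temperature reduction per kilogram.
```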
19

Arnan Vendrell, Pere. "Building a Scenario of Physics Beyond the Standard Model in the Flavor Sector." Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/670861.

Abstract:
Recently, lepton flavor universality tests at colliders have reported tensions with the Standard Model, providing an experimental hint at low energies of the effect of a new high-energy theory. These tensions are known as flavor anomalies, which is why this thesis focuses on their impact on the flavor sector of the Standard Model of particle physics. We introduce the Standard Model of particle physics and list the relevant tensions between theory and experiment in B decays, commenting briefly on the possible solutions involving new physics, as well as the main advantages and drawbacks of each new-physics scenario. We first focus on a model with new heavy scalars and fermions where we only account for left-handed couplings to the Standard Model particles. We consider two possible models: one with an additional scalar and two vector-like fermions, and another with two additional scalars and one vector-like fermion. The purpose of these models is to resolve the tensions in FCNC decays, together with the anomalous magnetic moment of the muon, which are induced at loop level in both models. We list the relevant constraints and show that the models are able to resolve the anomalies, although a relatively large coupling to muons is required. In the case of semi-leptonic decays, stringent constraints arising from B oscillations can be relaxed if the new fermions are taken to be Majorana particles. Then, since we cannot explain the muon magnetic moment with only left-handed couplings, we construct a model with new scalars and fermions that also allows for right-handed couplings, where we list the relevant Wilson coefficients for b-quark decays to a strange quark and two muons, as well as the relevant observables acting as constraints, for any number of new scalars and fermions. To illustrate this generic approach we supplement the Standard Model with a fourth generation of quarks and leptons. With this model we can explain the FCNC anomalies and the magnetic moment of the muon while avoiding all the constraints. The second kind of model that we explore is an extension of the Standard Model with scalar leptoquarks. In this case, we compute the Z- and W-decays to leptons for each of the scalar leptoquarks at next-to-leading-logarithm accuracy, and show that the finite terms can account for 20% of the total contribution for leptoquark masses below 1.5 TeV. We also show their phenomenological relevance in a model with a singlet and a triplet, where our computation pushes the fit towards a better explanation of the data. Besides, we note that B mixing has to be implemented carefully, as it is one of the main constraints missing in earlier studies of this kind of model, and we illustrate its key role in a particular model, since it rules out the pure left-handed scenario with 2018 data. Finally, we scrutinize the FCNC process of a b quark decaying into an s quark and two leptons in the framework of a two-Higgs-doublet model. We compute all the relevant Wilson coefficients, performing the matching of the full theory onto the low-energy theory, and show that it is necessary to keep the external momenta for the scalar and pseudo-scalar operators. This is the first computation of a proper matching including all the relevant operators.
We perform a phenomenological analysis of the model with the strange B meson decaying to two muons and the B meson decaying into two muons and a K meson at high momenta, where we have control of the hadronic uncertainties.
Since 2012, a series of measurements at experiments such as BaBar, Belle and LHCb have presented tensions with the Standard Model in B meson decays. These deviations are known as the flavor anomalies. In this thesis we interpret the flavor anomalies as possible effects of new physics and propose some simple models to accommodate the experimental data that deviate from the Standard Model. First, we propose a model with new heavy scalars and fermions that couple only to the left-handed fermions of the Standard Model. In this scenario we try to explain part of the anomalies together with the anomalous value of the muon's magnetic moment. The result is relatively large couplings, of order 2. As for resolving part of the anomalies, we have the option of treating the new fermions as Majorana fermions, which allows the couplings to be reduced. To relax the size of the couplings in a more general way, in the following chapter we propose a similar model but with right-handed couplings to the Standard Model. In this model we compute all the formulas generically and then specialize them to a fourth-generation model. With the introduction of these right-handed couplings combined with the left-handed ones, we can explain part of the flavor anomalies and also the magnetic moment of the muon. Next, we construct a model with scalar leptoquarks in which we study some of the most important constraints, such as Z decays and B meson oscillations. We then propose phenomenological models with two scalar leptoquarks, in which we see the impact of the constraints studied earlier. We also discuss how the models would have become obsolete had the experimental data not changed subtly after Moriond 2019. Finally, we study the impact of two decays, of the strange B meson to two muons and of the B meson to a K meson and two muons, within a two-Higgs-doublet model. Here we do not focus on the anomalies but rather examine the impact of these observables on the model, and we carry out a proper matching between the effective theory and the high-energy theory.
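The Wilson coefficients referred to throughout are defined with respect to the weak effective Hamiltonian conventionally used for b → sℓℓ transitions; for orientation (standard notation, not copied from the thesis):

```latex
\mathcal{H}_{\text{eff}} \;=\; -\frac{4 G_F}{\sqrt{2}}\, V_{tb} V_{ts}^{*}\,
\frac{e^{2}}{16\pi^{2}} \sum_{i} \left( C_i O_i + C_i' O_i' \right) + \text{h.c.},
\qquad
O_9 = (\bar{s}\gamma_\mu P_L b)(\bar{\ell}\gamma^\mu \ell), \quad
O_{10} = (\bar{s}\gamma_\mu P_L b)(\bar{\ell}\gamma^\mu \gamma_5 \ell).
```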
20

Blomberg, Thomas R. "Heat conduction in two and three dimensions : computer modelling of building physics applications /." Lund : Dept. of Building Physics [Institutionen för byggnadsteknik], Univ, 1996. http://www.byfy.lth.se/Publikationer/1000pdf/TB-1008.pdf.

21

Fields, Shaun. "Building a software tool for simulating the multi-physics of thermal protection systems." Thesis, Swansea University, 2014. https://cronfa.swan.ac.uk/Record/cronfa43072.

Abstract:
The motivation for this research is to overcome the costs of using the current wind tunnels which replicate the high speeds, temperatures and Reynolds numbers of new concept vehicles such as hypersonic passenger jets. The idea is that by employing accurate computational methods, costs can be reduced and more scenarios can be investigated. It will be argued that the characteristic-based split scheme is a modified central difference temporal scheme, and can be utilized to capture the flow regimes of interest to the European Space Agency (ESA). The hypothesis of this thesis is that it is possible to model hypersonic applications with shock capturing reliably in a collocated, unstructured polyhedral, finite volume (FV) software framework. The reason for this hypothesis is a desire to develop an alternative approach for accurate, non-oscillatory solutions to the conservation laws for high-speed flows that does away with calculating the upwind flow direction, donor nodes and Riemann solvers, and can avoid Jacobian evaluations. The finite volume method is generally preferred for industrial Computational Fluid Dynamics (CFD) because it is relatively inexpensive and lends itself well to the solution of the large sets of equations associated with complex flows, according to Greenshields et al. Usually physical variables such as velocity, temperature, density and pressure are co-located, which means that the values at the centroid of a control volume are chosen to represent these physical variables in the enclosed control volume. Co-location is popular in industrial CFD because it allows greater freedom in mesh structure for complex 3D geometries and for refinement of boundary layers, as mentioned in Greenshields et al. It is no coincidence that collocated, polyhedral, FV numerical methods are adopted by several of the best-known industrial CFD software packages, including FLUENT, STAR-CCM+ and CFD-ACE+. There is a current preference for unstructured meshes of polyhedral cells with six faces (hexahedra) or more, rather than tetrahedral cells, which are prone to numerical inaccuracy and other problems. For example, Ferguson and Peric mention that they are unsuitable for features such as boundary layers. Discontinuities, such as shocks, in hypersonic compressible computations require numerical schemes that can accurately capture these features while avoiding spurious numerical oscillations. Current methods that are effective in producing accurate non-oscillating solutions are, first of all, the monotone upstream-centred schemes for conservation laws by Van Leer; secondly, the essentially non-oscillatory (ENO) schemes by Harten, Engquist and Osher; and lastly, the weighted ENO (WENO) schemes by Liu, Osher and Chan. Unfortunately these methods typically involve Riemann solvers and Jacobian evaluation, making them complex and difficult to implement in a collocated, 3D unstructured framework. This work seeks to find a method which overcomes these disadvantages.
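The CBS scheme itself is too involved for a short sketch, but the idea it builds on, collocated finite-volume shock capturing without Riemann solvers, can be shown on the 1D inviscid Burgers equation with a Lax-Friedrichs flux (purely illustrative; this is not the CBS algorithm):

```python
import numpy as np

# 1D inviscid Burgers: u_t + (u^2/2)_x = 0, finite volume on a periodic
# grid, Lax-Friedrichs numerical flux (no Riemann solver required).
nx, L, t_end = 200, 1.0, 0.2
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
u = np.where(x < 0.5, 1.0, 0.0)       # step initial data -> shock

t = 0.0
while t < t_end:
    dt = min(0.4 * dx / max(np.abs(u).max(), 1e-12), t_end - t)  # CFL
    f = 0.5 * u**2
    up, fp = np.roll(u, -1), np.roll(f, -1)
    flux = 0.5 * (f + fp) - 0.5 * (dx / dt) * (up - u)  # flux at i+1/2
    u = u - (dt / dx) * (flux - np.roll(flux, 1))
    t += dt

mask = x > 0.3                        # avoid the wrap-around fan
i = np.argmin(np.abs(u[mask] - 0.5))
print(f"shock position ≈ {x[mask][i]:.2f} (exact: 0.5 + 0.5*t = 0.60)")
```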
22

Oliveira, Alexandra Carvalho Antunes de [UNESP]. "New physics from warped compact extra dimensions: from model building to colliders signals." Universidade Estadual Paulista (UNESP), 2014. http://hdl.handle.net/11449/127595.

Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
The Higgs field of the Standard Model theory of elementary particles and interactions can be realized as a composite state of an underlying strong sector. Such a hypothesis is very attractive as an ultraviolet completion of the Standard Model, since it solves the hierarchy problem and avoids naturalness problems. Standard perturbative methods cannot be used in the context of strongly interacting theories; however, these can be broadly described in terms of extra-dimensional models of gravity. We focus on the case of one additional Warped compact Extra Dimension (WED). The generic signatures of this scenario are heavy gravity particles, associated with the five-dimensional metric, that couple to Standard Model matter, leading to direct collider signatures. The heavy gravity particles couple to the Higgs sector. The Higgs discovery opened a new channel for direct detection at the LHC: the di-Higgs final state. We use Monte Carlo techniques to study the analysis strategies that would lead to the best recognition of new resonances decaying to a pair of Higgs bosons at hadron colliders, which can be interpreted as the gravity particles. We finally present resonance searches performed with data taken by the CMS experiment during the 8 TeV LHC run. The results are interpreted as signatures of the gravity particles in the WED context.
CNPq: 141964/2009-0
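In di-Higgs searches of the kind described above, the basic discriminating observable is the invariant mass of the reconstructed Higgs pair; a minimal sketch (the four-momenta are invented numbers, chosen to be roughly on the Higgs mass shell):

```python
import math

def invariant_mass(p1, p2):
    """m^2 = (E1+E2)^2 - |p1+p2|^2 for four-vectors (E, px, py, pz) in GeV."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# Two illustrative Higgs candidates, each with m close to 125 GeV.
h1 = (200.0, 50.0, 40.0, 140.0)
h2 = (205.0, -50.0, -40.0, 150.0)
print(f"m(h1) ≈ {invariant_mass(h1, (0, 0, 0, 0)):.0f} GeV")  # single candidate
print(f"m(hh) ≈ {invariant_mass(h1, h2):.0f} GeV")            # resonance candidate
```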
23

Oliveira, A. C. A. (Alexandra Carvalho Antunes). "New physics from warped compact extra dimensions: from model building to colliders signals /." São Paulo, 2014. http://hdl.handle.net/11449/127595.

Abstract:
Advisor: Rogério Rosenfeld
Co-advisor: Maxime Gouzevich
Committee member: Eduardo Pontón Bayona
Committee member: André Sznajder
Committee member: Sérgio Ferraz Novaes
Committee member: Oscar José Pinto Éboli
Abstract: The Higgs field of the Standard Model theory of elementary particles and interactions can be realized as a composite state of an underlying strong sector. Such a hypothesis is very attractive as an ultraviolet completion of the Standard Model, since it solves the hierarchy problem and avoids naturalness problems. Standard perturbative methods cannot be used in the context of strongly interacting theories; however, these can be broadly described in terms of extra-dimensional models of gravity. We focus on the case of one additional Warped compact Extra Dimension (WED). The generic signatures of this scenario are heavy gravity particles, associated with the five-dimensional metric, that couple to Standard Model matter, leading to direct collider signatures. The heavy gravity particles couple to the Higgs sector. The Higgs discovery opened a new channel for direct detection at the LHC: the di-Higgs final state. We use Monte Carlo techniques to study the analysis strategies that would lead to the best recognition of new resonances decaying to a pair of Higgs bosons at hadron colliders, which can be interpreted as the gravity particles. We finally present resonance searches performed with data taken by the CMS experiment during the 8 TeV LHC run. The results are interpreted as signatures of the gravity particles in the WED context.
24

Witte, Wiebke [Verfasser], Andrei [Akademischer Betreuer] Vescan, and Joachim [Akademischer Betreuer] Knoch. "Building blocks of vertical GaN-based devices / Wiebke Witte ; Andrei Vescan, Joachim Knoch." Aachen : Universitätsbibliothek der RWTH Aachen, 2016. http://d-nb.info/1127337173/34.

25

Simkus, Gintautas [Verfasser], Michael [Akademischer Betreuer] Heuken, and Rainer [Akademischer Betreuer] Waser. "Building blocks for gas-phase-processed perovskite LED / Gintautas Simkus ; Michael Heuken, Rainer Waser." Aachen : Universitätsbibliothek der RWTH Aachen, 2021. http://d-nb.info/1238365337/34.

26

Vaz, Amali L. "Building a better flat-field : an instrumental calibration projector for the Large Synoptic Survey Telescope." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/65435.

Abstract:
The Large Synoptic Survey Telescope (LSST) is a next-generation ground-based survey telescope whose science objectives demand photometric precision at the 1% level. Recent efforts towards 1% photometry have advocated in-situ instrumental calibration schemes that use a calibrated detector, rather than a celestial source, as the fundamental reference point for all measurements of system throughput. Results have been promising, but report systematic errors due to stray and scattered light from the flat-field screens used. The LSST calibration scheme replaces the traditional Lambertian-scattering flat-field screen with an array of projectors whose light is constrained in angle, thereby minimizing scattered light incident on the detector. This thesis presents the construction and testing of a single prototype projector within the LSST array. In particular, we evaluate the use of Engineered Diffusers to define the angular radiance of incident light, and of either a Fresnel lens or a parabolic mirror to collimate that light. We find that flat-top Engineered Diffusers produce light that is constrained in angle, but which shows persistent pixel-to-pixel non-uniformity at the 5-10% level, and color-to-color non-uniformity at the 5-15% level; unless compensated, chromatic non-uniformity renders them unsuitable for our purposes. The additional chromatic aberrations introduced by Fresnel lens collimators render such transmissive collimators infeasible. Nevertheless, we demonstrate the soundness of the flat-field projector concept by constructing an alternative projector prototype, based on an integrating sphere, that satisfies each criterion well within our tolerances. The magnitude of improvement granted by the integrating sphere projector suggests that future work should further investigate this approach.
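To see why the measured 5-15% non-uniformities matter against a 1% photometric target, convert fractional flux errors into magnitudes (a standard relation, not a result of the thesis):

```python
import math

def mag_error(flux_fractional_error):
    """Small-error limit: dm = 2.5/ln(10) * df/f ≈ 1.0857 * df/f."""
    return 2.5 / math.log(10) * flux_fractional_error

for frac in (0.01, 0.05, 0.15):
    print(f"{frac:>4.0%} flux error -> {mag_error(frac):.3f} mag")
# 1% -> 0.011 mag; uncompensated 5-15% non-uniformity is an order of
# magnitude larger than the calibration budget.
```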
27

Meyer, Nadine [Verfasser], and Klaus [Akademischer Betreuer] Sengstock. "Building and characterisation of a dual species quantum simulator / Nadine Meyer. Betreuer: Klaus Sengstock." Hamburg : Staats- und Universitätsbibliothek Hamburg, 2015. http://d-nb.info/1075317509/34.

28

Robertson, Craig Collumbine. "Building complex systems based on simple molecular architectures." Thesis, University of St Andrews, 2011. http://hdl.handle.net/10023/2573.

Abstract:
Over the past twenty years molecules capable of templating their own synthesis, so-called self-replicating molecules, have gained prominence in the literature. We show herein that mixing the reagents for replicating molecules can produce a network of self-replicators which coexist, and that the networks can be instructed by the addition of preformed template upon initiation of the reaction. Whilst self-replicating molecules offer the simplest form of replication, nature has evolved to utilise not minimal self-replication but reciprocal replication, where one strand templates the formation not of an identical copy of itself but of a reciprocal strand. Efforts thus far at producing a synthetic reciprocal replicating system are discussed, and an alternative strategy to address the problems encountered is proposed and successfully implemented. The kinetic behaviour of a self-replicating reaction exhibits two distinct time periods. Upon initiation, the reaction proceeds slowly as no template exists to catalyse the reaction. Upon production of the template, the reaction proceeds more rapidly via template direction. During this slow reaction period, the system is prone to mistakes as the reaction is slow and unselective. The creation of an [A•B] binary complex through non-covalent recognition of reagents allows the reaction to proceed at an accelerated rate upon initiation; however, the products of such a reaction are usually catalytically inert and do not promote further template-directed reaction. A strategy to combine the desired behaviour of an [A•B] binary complex with the further template-directed autocatalytic self-replicating reaction is described and implemented. Supramolecular polymers consist of repeating monomers which are held together by non-covalent interactions. The strong association of a self-replicating template dimer is comparable to that of supramolecular polymers reported thus far in the literature, which are produced by cumbersome standard linear synthetic procedures. Herein the application of self-replication to the field of supramolecular polymer synthesis is discussed. As the autocatalytic reaction to produce the template monomers occurs under the same conditions as required for polymerisation to proceed, the polymer is able to form spontaneously in situ by self-replicating supramolecular polymerisation.
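The two-phase kinetics described above, a slow uncatalysed period followed by rapid template-directed growth, is the signature of autocatalysis and can be sketched with a minimal rate model (the rate constants and the simple rate law are illustrative assumptions; the real system is far richer):

```python
import numpy as np

# A + B -> T (slow, uncatalysed) and A + B + T -> 2T (template-directed).
k_uncat, k_temp = 1e-4, 5e-2
a, b, t_conc = 1.0, 1.0, 0.0
dt, steps = 0.1, 20000

history = []
for _ in range(steps):          # forward-Euler integration
    rate = k_uncat * a * b + k_temp * a * b * t_conc
    a -= rate * dt
    b -= rate * dt
    t_conc += rate * dt
    history.append(t_conc)

# Sigmoidal growth: slow start, autocatalytic acceleration, plateau.
for frac in (0.1, 0.5, 0.9):
    idx = int(np.searchsorted(history, frac))
    print(f"[T] reaches {frac:.0%} at t ≈ {idx * dt:.0f}")
```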
29

Allin, Steven M. "Applications of 1,3-dithiane 1-oxide asymmetric building block." Thesis, University of Liverpool, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357285.

30

O'Neil, Kason M. "High-activity Cooperative and Teaming Building Games for Secondary Physical Education." Digital Commons @ East Tennessee State University, 2015. https://dc.etsu.edu/etsu-works/4042.

31

O'Neil, Kason M. "High-activity Cooperative and Teaming Building Games for Elementary Physical Education." Digital Commons @ East Tennessee State University, 2015. https://dc.etsu.edu/etsu-works/4043.

32

Ganta, Dinesh. "An Effort toward Building more Secure and Efficient Physical Unclonable Functions." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/51217.

Abstract:
Over the last decade, there has been tremendous growth in the number of electronic devices and applications. One of the most important aspects of dealing with such proliferation of ICs is their security. Establishing the identity (ID) of a device is the cornerstone of any secure application. Typically, the IDs of devices are stored in non-volatile memories (NVMs) or through burning fuses on ICs. However, with such traditional techniques, IDs are vulnerable to attacks. Further, maintaining such secrets in NVMs is expensive. Physical Unclonable Functions (PUFs) provide an alternative method for creating chip IDs. They exploit the uncontrollable variations that exist in IC manufacturing to generate identifiers. However, since PUFs exploit the small mismatch across identically designed circuits, the responses of PUFs are prone to error in the presence of unwanted variations in the operating temperature, supply voltage, and other noise. The overarching goal of this work is to develop silicon PUFs that are highly efficient and stable against such noise. In addition, to make PUFs more attractive for low-cost and tiny embedded systems, our goal is to develop PUFs with minimal area and power consumption for a given ID length and security requirement. Techniques to develop such PUFs span different abstraction levels, ranging from technology-independent application-level techniques to technology-dependent device-level ones. In this dissertation, we present different technology-independent and technology-dependent techniques and evaluate which techniques are good candidates for improving different qualities of PUFs. Among technology-independent techniques, we propose two modifications to a conventional PUF architecture, which are detailed in this thesis. Both modifications result in a PUF that is more efficient in terms of area and power. Compared to the traditional architecture, for a given amount of silicon real estate, the proposed architecture provides an over two orders of magnitude larger $C/R$ space and has higher resistance toward modeling attacks. Under technology-dependent methods, we investigate multiple techniques that improve the stability and efficiency of PUF designs. In one approach, we propose a novel PUF design with an architecture similar to that of a traditional design, where we replace large and power-hungry digital components with more efficient analog components. In another technique, we exploit the differences between pMOS and nMOS transistors in their variation of threshold voltage (Vth) and in the temperature coefficients of Vth to significantly improve the stability of bi-stable PUFs. We also use circuit-level simulations to evaluate the stability of silicon PUFs against aging degradation. We believe that our technology-independent techniques are good candidates for improving the overall efficiency of PUFs in terms of both operation and implementation costs, suitable for PUFs with tight constraints on design and test cost. However, with regard to improving the stability of PUFs, it is cost-effective to use our technology-dependent techniques as long as the extra effort for implementation and testing can be tolerated.
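The qualities being optimized here, uniqueness and stability, are conventionally scored with Hamming-distance metrics over PUF responses; a minimal sketch of those metrics on simulated IDs (standard evaluation practice, not code from the dissertation):

```python
import random

def hamming_fraction(r1, r2):
    """Fraction of differing bits between two equal-length responses."""
    return sum(b1 != b2 for b1, b2 in zip(r1, r2)) / len(r1)

random.seed(0)
n_chips, n_bits = 10, 128
ids = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(n_chips)]

# Uniqueness: inter-chip Hamming distance, ideally ~50%.
inter = [hamming_fraction(ids[i], ids[j])
         for i in range(n_chips) for j in range(i + 1, n_chips)]
print(f"mean inter-chip HD: {sum(inter) / len(inter):.1%} (ideal: 50%)")

# Reliability: intra-chip HD across noisy re-reads, ideally ~0%.
def reread(bits, flip_prob=0.03):
    return [b ^ (random.random() < flip_prob) for b in bits]

intra = [hamming_fraction(ids[0], reread(ids[0])) for _ in range(20)]
print(f"mean intra-chip HD: {sum(intra) / len(intra):.1%} (ideal: 0%)")
```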
33

Kassavou, A. "Building an evidence base for effective walking groups." Thesis, Coventry University, 2014. http://curve.coventry.ac.uk/open/items/21295df9-0d78-4227-995c-af00182d2003/1.

Abstract:
Walking groups are increasingly being set up to increase physical activity in sedentary population groups, but little is known about whether they are effective at doing so and how they work. The present thesis aims to build an evidence base on whether walking groups are effective at promoting public health and what factors account for their effectiveness. Methods: Four studies were conducted to address the overall aim. Study One: a systematic literature review with meta-analysis investigated whether interventions to promote walking in groups are effective at promoting physical activity. Study Two: a multi-perspective thematic analysis of interviews with walkers, walk leaders and walk co-ordinators, including follow-up interviews with walkers, explored whether the needs and expectations of people who participated in walking groups were satisfied. The sample was drawn from walking schemes run by Coventry City Council. Study Three: a walk-along interview study with walk leaders explored what and how environmental factors are seen to affect walking behaviours in groups. Study Four: a prospective cohort survey explored what theoretical constructs predict maintenance of attendance at walking groups in the Midlands. Results: Study One: interventions to promote walking in groups were found to be effective at promoting physical activity within efficacy studies targeting adults (d = 0.42). Study Two: walkers reported that they joined walking groups to gain social and health benefits. Three months later the same walkers reported that they continued attending walking groups when their initial needs were satisfied by the other people in the group. Walk leaders and walk co-ordinators often acknowledged the same reasons but expressed a lack of confidence in effectively addressing them. Study Three: walk leaders described environmental factors that were important facilitators for behaviours within walking places. Lap walking places were reported to facilitate physical activity, park walking places were reported to facilitate social interactions, and city centre walking places were reported to facilitate time-efficient behaviours. Study Four: recovery self-efficacy and satisfaction with outcome expectancies and overall experiences within the groups were found to predict maintenance of attendance at walking groups. Conclusions: The results of this thesis suggest that walking groups increase physical activity. Furthermore, successful walking groups should include theory-based techniques to promote behaviour change and social integration among participants. The outcomes of this thesis can be used as an evidence base for developing, implementing and evaluating effective walking groups within the community.
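The pooled effect size reported for Study One (d = 0.42) is Cohen's d; for orientation, its standard two-group definition is (not a formula quoted from the thesis):

```latex
d \;=\; \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} \;=\; \sqrt{\frac{(n_1 - 1)\,s_1^{2} + (n_2 - 1)\,s_2^{2}}{n_1 + n_2 - 2}} .
```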
APA, Harvard, Vancouver, ISO, and other styles
34

Golla, Andrea [Verfasser], and Gerd [Akademischer Betreuer] Leuchs. "Building blocks for efficient light-matter interaction in free space / Andrea Golla. Gutachter: Gerd Leuchs." Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2014. http://d-nb.info/1065270275/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Saliou, Anthony. "Building Neural Network Potentials for Lennard-Jones and Aluminium systems." Thesis, KTH, Fysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Ilina, Elena. "Understanding the application of knowledge management to the safety critical facilities." Doctoral thesis, KTH, Bro- och stålbyggnad, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-27608.

Full text
Abstract:
Challenges facing operating nuclear power plants and transport infrastructures are outlined. It is concluded that most of the aggravating factors are related to knowledge; effective knowledge management is therefore required. Knowledge management theories are reviewed in their historical perspective as a natural extension and unification of information theories and theories of learning. The first line is identified with names such as Wiener, Ashby, Shannon, Jaynes, Dretske and Harkevich; the second with Vygotsky, Engeström and Carayannis. The recent developments of knowledge management theorists such as Davenport, Prusak, Drew, Wiig and Zack are considered, stressing learning, the retention of knowledge, approaching the state of awareness of awareness, and the alignment of knowledge management with the strategy of the organizations concerned. Further, some of the details and results achieved so far are presented. More specifically, knowledge management tools are applied to practical work activities such as event reporting, data collection, condition assessment, verification of safety functions and incident investigation. Obstacles are identified and improvements are proposed. Finally, it is recommended to continue implementing and further developing knowledge management tools in the organizations involved in the various aspects of safety-critical facilities.
The challenges facing nuclear power plants and transport infrastructures have been surveyed. The survey indicates that the problems are related to gaps in knowledge. It is therefore necessary to focus on knowledge and to implement knowledge management together with its underlying theories. Such theories are described in a historical perspective. It emerges that knowledge management has several roots, the most important of which are information theories and theories of learning. The first is associated with names such as Wiener, Ashby, Jaynes, Dretske and Harkevich; the second with names such as Vygotsky, Engeström and Carayannis. Contributions from modern thinkers in knowledge management, such as Davenport, Prusak, Drew, Wiig and Zack, are also evaluated in order to understand how the organizations involved can continuously learn, retain knowledge, reach awareness regarding knowledge, and integrate knowledge management with corporate strategy. Furthermore, a selection of results is presented to illustrate what has been achieved so far. Knowledge management theories are applied to activities such as experience reporting, databases, testing, verification of safety functions and incident investigation. Knowledge management makes it possible to identify and describe shortcomings in the established activities and to propose improvements. The recommendation for the future is to continue the work of implementing and further developing knowledge management for applications relevant to nuclear power plants, transport infrastructures and other safety-critical facilities.
APA, Harvard, Vancouver, ISO, and other styles
37

Weir, Gillian Francis. "Life cycle assessment of multi-glazed windows." Thesis, Edinburgh Napier University, 1998. http://researchrepository.napier.ac.uk/Output/2747.

Full text
Abstract:
In 1987 the World Commission on Environment and Development proposed a reduction in per capita energy consumption of 50%. Increasing demands, and initiatives of this nature, produce a need for more reliable assessment methods, measurement tools and improvement regimes. Since the late 1960s, Life Cycle Assessment (LCA) has become an increasingly important tool for engineers, technologists, scientists, designers, managers and environmentalists alike. LCA enables the effects that products, processes and activities have on local, regional or global environments to be assessed, adopting a holistic, or whole-life, approach to design methodologies. The design of window systems has a large impact on the LCA results generated. Thermal performance properties influence energy consumption patterns throughout a lifetime of use, while the appropriate use of materials, window positioning and size have a knock-on effect on lighting control functions and air conditioning demands. In developed countries, residential sectors account for between 20% and 30% of the total energy used (30% in the UK). Windows in dwellings alone account for 6% of the total UK energy consumption. This thesis addresses an ongoing need to focus on sustainable development, using LCA as an assessment tool to develop a greater understanding of the window life cycle, and to highlight the improvements necessary to lessen its environmental impact and make the processes involved more benign. To do this successfully requires that the demands of modern-day living, and the comfort conditions expected, be incorporated into design criteria, whilst ensuring that the needs of future generations are not compromised by today's activities. Along with rising demands to improve efficiency and decrease energy consumption in buildings comes an expectation of continual improvement in building interiors. To this end, both the aural and visual characteristics of window installations become paramount, in addition to the well-researched thermal performance criteria. Much research has focused on investigating the social and physiological benefits associated with improved interior environments. The correlation between worker satisfaction and performance is well established. If complete physical well-being is satisfied, then an individual's mental well-being is less likely to be affected by the additional stressors of environmental dissatisfaction. An optimisation model has been developed, linking the thermal, aural and visual performance of varying window designs, such that an "advanced" window system is created. Two outputs are generated from the model, which may be used to evaluate the "optimum" window design in terms of energy consumption and global environmental impact. Optimisation of energy consumption incorporates embodied energy, thermal performance and electric lighting demand over the life cycle of a window. Global environmental impact optimisation is similar, but evaluation is based on energy generation and greenhouse gas production. Finally, a flowchart for optimisation guides the user towards a glazing solution that offers sufficient noise attenuation whilst minimising thermal losses and electric lighting demand. Each output provides a guide for design, leaving room for judgement, and is not intended to be followed definitively. Recommendations for improvements to manufacturing systems and the production of multi-glazed windows are offered, based on sustainable development criteria.
Future research needs, necessary to minimise the total environmental impact resulting from multi-glazed window production, are also discussed.
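As a rough illustration of the kind of life-cycle trade-off the optimisation model weighs for energy consumption, the sketch below compares embodied energy against lifetime transmission losses for two glazing options. Every number (U-values, embodied energy, degree-hours, lifetime, area) is an invented placeholder rather than a value from the thesis, and the real model additionally covers aural performance and electric lighting demand.

```python
# A simplified life-cycle energy comparison in the spirit of the optimisation above.
HEATING_DEGREE_HOURS = 60_000   # K·h per year, assumed climate
LIFETIME_YEARS = 25             # assumed service life of the window
AREA_M2 = 1.5                   # assumed window area

def life_cycle_energy(u_value, embodied_kwh):
    """Embodied energy plus lifetime transmission heat loss, in kWh."""
    # U [W/m^2K] * area [m^2] * degree-hours [K·h] gives W·h; divide by 1000 for kWh.
    annual_loss_kwh = u_value * AREA_M2 * HEATING_DEGREE_HOURS / 1000.0
    return embodied_kwh + annual_loss_kwh * LIFETIME_YEARS

options = {
    "double glazed": life_cycle_energy(u_value=2.8, embodied_kwh=300),
    "triple glazed, low-e": life_cycle_energy(u_value=1.0, embodied_kwh=550),
}
for name, kwh in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name}: {kwh:,.0f} kWh over the life cycle")
```

With these (invented) numbers, the higher embodied energy of the better-insulating unit is repaid many times over by reduced heat loss, which is the kind of whole-life conclusion the LCA approach is designed to expose.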
APA, Harvard, Vancouver, ISO, and other styles
38

Williams, Stacey L., and Emma G. Fredrick. "Minority Stress & LGBT Mental and Physical Health: Building Interventions & Resources." Digital Commons @ East Tennessee State University, 2014. https://dc.etsu.edu/etsu-works/8080.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Schumacher, Erik [Verfasser], Heinrich [Akademischer Betreuer] Päs, and Gudrun [Gutachter] Hiller. "A model-building approach to the origin of flavor / Erik Schumacher ; Gutachter: Gudrun Hiller ; Betreuer: Heinrich Päs." Dortmund : Universitätsbibliothek Dortmund, 2016. http://d-nb.info/1125107022/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Reininghaus, Nies [Verfasser], Carsten [Akademischer Betreuer] Agert, Sascha [Akademischer Betreuer] Schäfer, and Jürgen [Akademischer Betreuer] Parisi. "Silicon thin film concepts for building integrated photovoltaic applications / Nies Reininghaus ; Carsten Agert, Sascha Schäfer, Jürgen Parisi." Oldenburg : BIS der Universität Oldenburg, 2018. http://d-nb.info/1178680479/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Purdie, Mark. "Stereoselective control by the 1,3-dithiane 1-oxide asymmetric building block." Thesis, University of Liverpool, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262490.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Burke, Sarah Anne. "Building foundations for molecular electronics: growth of organic molecules on alkali halides as prototypical insulating substrates." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=32258.

Full text
Abstract:
The epitaxy and growth of a series of organic molecules deposited on insulating surfaces were investigated by noncontact atomic force microscopy (nc-AFM). The molecules studied, C60, 3,4,9,10-perylene tetracarboxylic dianhydride (PTCDA), 3,4,9,10-perylene tetracarboxylic diimide (PTCDI), and copper (II) phthalocyanine (CuPc), were selected to investigate the effect of different molecular geometries, charge distributions and intermolecular interactions, and as interesting candidates for molecular electronic applications. As the properties of molecules are known to be influenced by their structural arrangements, an understanding of the interactions of molecules with substrates of interest, as well as of the dominant processes involved in growth, is of great importance. The model insulating substrates KBr and NaCl were used for the growth studies, owing to the necessity of insulators for electrically isolating device regions. Dewetting processes were observed in several of these systems: C60 on KBr and NaCl, PTCDA on NaCl and PTCDI on NaCl. The specific influences of dewetting are discussed for each system, in particular the morphological impact of dewetting and the driving of dewetting by strained metastable monolayers. For C60 deposits, interesting branched structures are formed in the process of dewetting, which are remarkably stable once formed yet do not represent the equilibrium growth morphology. A determination of the large-cell coincident epitaxy reveals a small yet significant discrepancy between the observed overlayer and the calculated stable adsorption sites, indicating a dominance of the intermolecular interaction over the molecule–substrate interaction. For both PTCDA a
The epitaxy and growth of a series of organic molecules deposited on insulating surfaces were studied by nc-AFM. The molecules studied, C60, 3,4,9,10-perylene tetracarboxylic dianhydride (PTCDA), 3,4,9,10-perylene tetracarboxylic diimide (PTCDI), and copper (II) phthalocyanine (CuPc), were chosen to examine the influence of molecular geometries, charge distributions and different intermolecular interactions on the assembly of interesting candidates for molecular electronics applications. Given that structural arrangements influence molecular properties, understanding the interactions between molecules and substrates that give rise to thin-film formation is of interest from several points of view. The model insulating surfaces KBr and NaCl were used for the growth studies, because of the importance of electronically isolating certain regions of devices. Dewetting processes were observed in several systems: C60 on KBr and NaCl, PTCDA on NaCl, and PTCDI on NaCl. The influence of these processes is discussed for each system, with particular emphasis on the morphological impact of dewetting and on its driving by strained metastable monolayers. In the case of C60, branched islands form during dewetting. These structures are remarkably stable once formed, but do not represent the equilibrium structure. The determination of a coincident epitaxy indicates a small but significant difference between the observed overlayer and the calculated stable adsorption sites. This
APA, Harvard, Vancouver, ISO, and other styles
44

Silva, Maria Cecília Martins Ferreira da. "The national physics and chemistry exams and the learning of sciences." Doctoral thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/10413.

Full text
Abstract:
Dissertation presented to obtain the degree of Doctor in Educational Sciences from the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa.
This work takes as its starting point the acknowledgement of significant fluctuations in the degree of difficulty of the Physics-Chemistry national exams. The study of these fluctuations from 1949 to 2005 aims to understand to what extent the differences that occurred in the content, the structure of the exams and the adopted standards are reflected in the degree of difficulty they present. It reports and provides comparative standard-setting results for Portuguese Physics and Chemistry exams for the ninth and final years of secondary schooling, using different item-grouping approaches. Three standard-setting methods, Contrasting Groups, Beuk and Extended Angoff, were applied in order to study the differences across items and panellists in judged difficulty and final performance. Initially, my goal in this work was to investigate the existence of possible differences in exam results in a logical and holistic manner, so as to promote improvements in the teaching and learning process. I found, however, that it was very difficult to establish a single pattern of variation in difficulty, owing to the heterogeneity of the results. Even though the cognitive analysis allowed for the creation of a group of items, the evolution of the exams analysed over a 50-year period reflects the changes in educational policies and allows other considerations to be weighed in the light of different political, social and economic contexts.
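For readers unfamiliar with the standard-setting methods named above, the following sketch shows the basic Angoff-style computation of a cut score from panellists' per-item judgements. The ratings are made up, and the thesis's Extended Angoff procedure involves more than this minimal aggregation.

```python
# Sketch of a basic Angoff cut-score computation. In (Extended) Angoff
# standard setting, each panellist estimates, per item, the score a minimally
# competent examinee would obtain; the cut score aggregates those estimates.
import statistics

# panellist -> list of per-item expected scores (here: probabilities for 1-mark items)
ratings = {
    "panellist A": [0.7, 0.5, 0.9, 0.4],
    "panellist B": [0.6, 0.6, 0.8, 0.5],
    "panellist C": [0.8, 0.4, 0.9, 0.3],
}

n_items = len(next(iter(ratings.values())))
item_means = [statistics.mean(r[i] for r in ratings.values()) for i in range(n_items)]
cut_score = sum(item_means)   # expected total score of the borderline examinee
print("per-item means:", [round(m, 2) for m in item_means])
print("recommended cut score:", round(cut_score, 2), "out of", n_items)
```

Comparing cut scores set this way across exam years is one way differences in judged difficulty, across both items and panellists, can be made visible.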
APA, Harvard, Vancouver, ISO, and other styles
45

Kilbourne, John R. "Building a bridge between athletics and academics." The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1240496158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Poh, Zijie. "Model Building in the LHC Era: Vector-like Leptons and SUSY GUTs." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1502809360161742.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Agarwal, Rahul. "Graph-Based Simulation for Cyber-Physical Attacks on Smart Buildings." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103614.

Full text
Abstract:
As buildings evolve towards the envisioned smart building paradigm, their cyber-security and physical-security issues are becoming intertwined. Although research studies have been conducted to detect and prevent physical (or cyber) intrusions into smart building systems (SBS), it is still unknown (1) how one type of intrusion facilitates the other, and (2) how such synergic attacks compromise the security protection of whole systems. To investigate both research questions, the author proposes a graph-based testbed to simulate cyber-physical attacks on smart buildings. The testbed models both the cyber and the physical accesses of a smart building in an integrated graph, and simulates diverse cyber-physical attacks to assess their synergic impacts on the building and its systems. In this thesis, the author presents the testbed design and the developed prototype, SHSIM. An experiment is conducted to simulate attacks on multiple smart home designs and to demonstrate the functions and feasibility of the proposed simulation system.
Master of Science
A smart home/building is a residence containing multiple connected devices that enable remote monitoring, automation and management of appliances and systems, such as lighting, heating and entertainment. Since the early 2000s, the concept of a smart home has become quite popular due to rapid technological improvement. However, it brings with it many security issues. Typically, security issues related to smart homes can be classified into two types: (1) cyber security and (2) physical security. A cyber attack involves hacking into a network to gain remote access to a system. A physical attack deals with unauthorized access to spaces within a building by damaging or tampering with access control. So far the two kinds of attacks on smart homes have been studied independently. However, it is still unknown (1) how one type of attack facilitates the other, and (2) how the combination of the two kinds of attacks compromises the security of the whole smart home system. Thus, to investigate both research questions, we propose a graph-based approach to simulating cyber-physical attacks on smart homes/buildings. In the process, we model the smart home layout as an integrated graph and apply various cyber-physical attacks to assess the security of the smart building. In this thesis, I present the design and implementation of our tool, SHSIM. Using SHSIM we perform various experiments to mimic attacks on multiple smart home designs. Our experiments suggest that some current smart home designs are vulnerable to cyber-physical attacks.
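SHSIM's actual data model is not reproduced in this listing, but the core idea of an integrated cyber-physical access graph can be sketched as follows: nodes are spaces and devices, edges are labelled "physical" or "cyber", and an attack is a path from the attacker's starting point to a target asset. The node names and edges below are invented for illustration.

```python
# A toy integrated access graph and a breadth-first search for an attack path.
from collections import deque

edges = {
    "street":          [("lobby", "physical")],
    "lobby":           [("guest_wifi", "cyber")],       # server-room door is locked: no direct physical edge
    "guest_wifi":      [("lock_controller", "cyber")],
    "lock_controller": [("server_room", "physical")],   # compromising the lock opens the door
    "server_room":     [("bms_controller", "cyber")],   # physical presence grants network access to the BMS
}

def attack_path(start, target):
    """Shortest path over the mixed cyber/physical graph, or None."""
    queue = deque([[(start, None)]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1][0]
        if node == target:
            return path
        for nxt, kind in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [(nxt, kind)])
    return None

for node, kind in attack_path("street", "bms_controller"):
    print(f"{'start' if kind is None else kind:>8}: {node}")
```

In this toy layout the path runs through the guest Wi-Fi to a hypothetical smart-lock controller, whose compromise opens a physical door that in turn exposes a new network, exactly the kind of synergy between attack types the thesis investigates.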
APA, Harvard, Vancouver, ISO, and other styles
48

Schradin, Leslie J. "Textures, model building, and orbifold gauge anomalies research in three topics in physics beyond the standard model /." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1166571116.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Reza, Humayun. "Cleaning and restoring old masonry buildings : investigations of physical and chemical characteristics of masonry stones and clay bricks during cleaning." Thesis, Edinburgh Napier University, 2014. http://researchrepository.napier.ac.uk/Output/8851.

Full text
Abstract:
Historic buildings and monuments are a precious finite asset and powerful reminders for future generations of the work and way of life of earlier cultures and civilisations. The stone cleaning and restoration of historic buildings is a crucial element in preserving the appearance, integrity and quality of the fine art, construction methods and architecture of previous civilisations. Stone cleaning is one of the most noticeable changes a building can be subjected to, altering its appearance, character and environmental context. In this study, a series of physical and chemical tests was conducted to further investigate, evaluate and improve the efficiency of building cleaning. Seven different abrasives were adopted for air-abrasive cleaning, including copper slag (fine, medium and coarse), recycled glass (fine, medium and coarse) and hazelnut/almond shell (a natural abrasive), on a total of eight masonry stones and clay bricks: yellow sandstone, red sandstone, limestone, marble, granite, white clay brick, yellow clay brick and red clay brick. Physical investigations included sieve tests and impact tests on the abrasives, greyscale image analysis, thickness-reduction measurements, Vickers surface hardness tests, Charpy impact tests and water absorption tests. Chemical investigations included Scanning Electron Microscope (SEM) and Energy-Dispersive X-Ray Spectroscopy (EDX) analyses. Sieve tests and impact tests confirmed that the abrasives utilised were fairly reliable, and that the abrasives with high bulk densities were stronger and tougher than those with low bulk density. Greyscale digital image analysis indicated that a lower greyscale value corresponded to a dirtier masonry surface. In general, the greyscale continuously increased with increasing cleaning time and tended to stabilise once the surface became fully cleaned. A cleanness measure was also introduced for assessing the effectiveness of the building cleaning, and similar trends were observed. Both parameters proved to be significantly useful. For most of the samples, monotonic increasing trends were observed between the greyscale and the thickness reduction. Image analysis of greyscale and thickness measurement were two useful methods for assessing the degree of cleaning of a masonry stone or clay brick. Based on the analysis of all the test data, it is possible to recommend a suitable abrasive for each masonry stone or brick. For granite and red clay brick, medium glass produced the best performance, while for limestone, marble and red sandstone, fine glass was promising. For yellow clay brick, fine slag could be the best option, while for yellow sandstone the natural abrasive was found to be the most suitable. The Vickers hardness test results indicated that a larger hardness value corresponded to a harder masonry surface. The surface hardness also continuously increased with increasing cleaning time, but at a decreasing rate. Most of the increasing trends in surface hardness could be approximately expressed using parabolic relationships. Granite was found to be the hardest, followed by marble and limestone. However, there were no large differences in surface hardness between yellow clay brick, yellow sandstone, red sandstone and white clay brick. The impact resistances of seven masonry stones and bricks were obtained by conducting Charpy impact resistance tests. Granite showed the highest impact resistance among all the stones and bricks, followed by marble, limestone, clay bricks and sandstones.
The stones and bricks with higher impact resistances also had higher hardness values but lower water absorptions. The water-absorbing capacity of the seven masonry stones and bricks was quantitatively determined. The two types of clay brick showed the highest water absorptions, and the water absorptions of limestone, yellow sandstone and red sandstone were also quite high. However, the water absorption of marble and granite was found to be very low. A larger water absorption corresponded to a softer stone or brick, while a smaller water absorption corresponded to a harder one. The chemical investigations using the SEM and EDX techniques showed that the chemical substances on the masonry surface varied greatly between different types of stones and bricks. This study showed how to detect such soiling through chemical analysis by monitoring the changes in chemical elements and compounds during building cleaning. Finally, comprehensive conclusions were presented, together with useful suggestions for future work.
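The thesis's exact cleanness definition is not reproduced in this listing; the sketch below assumes one plausible form, normalising the mean greyscale of an image patch between its fully soiled and fully cleaned reference values. The patch data and reference values are synthetic.

```python
# Sketch of a greyscale-based cleanness measure (assumed normalisation, not the thesis's formula).
import numpy as np

def cleanness(patch, g_soiled, g_clean):
    """Fraction cleaned, from the mean greyscale of an image patch (0-255)."""
    g = float(np.mean(patch))
    return float(np.clip((g - g_soiled) / (g_clean - g_soiled), 0.0, 1.0))

rng = np.random.default_rng(0)
# Fake patches at increasing cleaning times: mean greyscale rises, then plateaus.
for seconds, mean_g in [(0, 60), (10, 110), (20, 150), (30, 168), (40, 170)]:
    patch = rng.normal(mean_g, 8.0, size=(64, 64))
    print(f"t={seconds:2d}s  cleanness={cleanness(patch, g_soiled=60, g_clean=170):.2f}")
```

The plateau in the printed values mirrors the behaviour reported above, where greyscale rises with cleaning time and stabilises once the surface is fully cleaned.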
APA, Harvard, Vancouver, ISO, and other styles
50

Choi, Young-Seon. "The physical environment and patient safety: an investigation of physical environmental factors associated with patient falls." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/45974.

Full text
Abstract:
Patient falls are the most commonly reported "adverse events" in hospitals, according to studies conducted in the U.S. and elsewhere. The rate of falls is not high (2.3 to 7 falls per 1,000 patient days), but about a third of falls result in injuries or even death, and these preventable events drive up the cost of healthcare and are clearly harmful outcomes for the patients involved. This study of a private hospital, Dublin Methodist Hospital, in Dublin, Ohio, analyzes data about patient falls together with the facility's floor plans and design features, and makes direct connections between hospital design and patient falls. This particular hospital, which was relatively recently constructed, offered particular advantages for investigating unit-layout-related environmental factors because of the very uniform configuration of its rooms, which greatly narrowed down the variables under study. This thesis investigated data about patients who had suffered falls as well as patients with similar characteristics (e.g., age, gender, and diagnosis) who did not suffer falls; this case-control study design helps limit differences between patients. Patient data were then correlated with the location of each fall and the environmental characteristics of those locations, analyzed in terms of their layout and floor plan. A key part of this analysis was the development of tools to measure the visibility of the patient's head and body to nurses, the relative accessibility of the patient, the distance from the patient's room to the medication area, and the location of the bathroom in patient rooms (many falls apparently occur during travel to and from these areas). From the analysis of all these data emerged a snapshot of the specific rooms in the hospital under study where there was an elevated risk of a patient falling. While this finding is useful for the administrators of that particular facility, the study also developed a number of generally applicable conclusions. The most striking was that, for a number of reasons, patients whose heads were not visible to caregivers seated at nurses' stations and/or from corridors had a higher risk of falling, in part because staff were unable to intervene in situations where a fall appeared likely to occur. The same was true of accessibility: patients less accessible within a unit had a higher risk of falling. The implications for hospital design are clear: design inpatient floors to maximize visual access to patients (especially their heads) from seats at nurses' stations and from corridors.
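The visibility tools developed in the thesis are not reproduced here, but the underlying idea can be illustrated with a toy grid-based line-of-sight check: sample points along the straight line from a nurses' station to a patient's head position and test whether any wall cell blocks it. The floor plan and coordinates below are invented.

```python
# Toy line-of-sight visibility check over a grid floor plan ('#' marks a wall cell).
def visible(grid, a, b, samples=100):
    """True if the straight line from cell a to cell b crosses no wall cell."""
    (r0, c0), (r1, c1) = a, b
    for i in range(samples + 1):
        t = i / samples
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        if grid[r][c] == "#":
            return False
    return True

plan = [
    "..........",
    "..####....",
    "..#..#....",
    "..#..#....",
    "..........",
]
nurse_station = (4, 0)
bed_open      = (4, 9)   # head position with a clear sight line along the corridor
bed_walled    = (2, 4)   # head position inside the walled room
print("open bed visible:  ", visible(plan, nurse_station, bed_open))
print("walled bed visible:", visible(plan, nurse_station, bed_walled))
```

Aggregating such checks over all bed positions and all seats at a nurses' station gives a simple visibility score per room, a crude stand-in for the kind of measure the study correlated with fall risk.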
APA, Harvard, Vancouver, ISO, and other styles