Dissertations / Theses on the topic 'Quantity and measurement'

Consult the top 50 dissertations / theses for your research on the topic 'Quantity and measurement.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Bennich-Björkman, Oscar. "A comprehensive summary and categorization of physical quantity libraries." Thesis, Uppsala universitet, Informationssystem, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-353817.

Abstract:
In scientific applications, physical quantities and units of measurement are used regularly. If the inherent incompatibility between different units is not handled properly, it can lead to major, and sometimes catastrophic, problems. Although the risk of a miscalculation is high and the cost equally so, almost no programming language has built-in support for physical quantities. Instead, developers often rely on external libraries to help them spot these mistakes or prevent them altogether. There are several hundred such libraries, spread across multiple sites and with no simple way to get an overview. No one has summarized what has and has not been achieved so far in the area, leading many developers to 'reinvent the wheel' instead of building on what has already been done. This shows a clear need for this type of research. A systematic approach was employed to find and analyze all available physical quantity libraries; the search results were condensed into the 82 libraries presented in this thesis. These are the most comprehensive and well-developed open-source libraries, chosen from approximately 3700 search results across seven repository hosting sites. In this group, 30 different programming languages are represented. The goal is for the results of this thesis to contribute to a shared foundation on which to build future libraries, as well as to provide an easy way of spreading knowledge about which libraries exist in the area, thus making it easier for more people to use them.
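The core mechanism such libraries share can be illustrated with a minimal sketch (hypothetical, not drawn from any of the surveyed libraries): a quantity type that carries exponents of the base dimensions alongside its value, so that adding incompatible quantities fails while multiplication and division combine dimensions.

```python
# Minimal sketch of a physical-quantity type with dimension checking.
# Hypothetical illustration, not taken from any of the surveyed libraries.

class Quantity:
    def __init__(self, value, dims):
        self.value = value   # magnitude in SI base units
        self.dims = dims     # exponents of (length, mass, time)

    def __add__(self, other):
        if self.dims != other.dims:
            raise TypeError(f"incompatible dimensions: {self.dims} vs {other.dims}")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

    def __truediv__(self, other):
        return Quantity(self.value / other.value,
                        tuple(a - b for a, b in zip(self.dims, other.dims)))

    def __repr__(self):
        return f"Quantity({self.value}, dims={self.dims})"

distance = Quantity(3.0, (1, 0, 0))   # 3 m
duration = Quantity(2.0, (0, 0, 1))   # 2 s
print(distance / duration)            # Quantity(1.5, dims=(1, 0, -1)), i.e. m/s
# distance + duration                 # raises TypeError: incompatible dimensions
```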
2

Inzinger, Dagmar, and Peter Haiss. "Integration of European Stock Markets. A Review and Extension of Quantity-Based Measures." Europainstitut, WU Vienna University of Economics and Business, 2006. http://epub.wu.ac.at/320/1/document.pdf.

Abstract:
We examine to what extent Europe's stock markets are integrated, and how this can be measured. We review 54 empirical studies and find an overemphasis on price-based measures and a need for more quantity-based studies. We update the Baele et al. (2004) study on investment funds' equity holdings to March 2006 for ten euro area and four non-euro area countries, provide additional quantity-based evidence, and discuss integration theories. Our results indicate a decline in home bias, particularly after the advent of the euro. We conclude that although European stock markets have undergone significant developments, the level of European integration is below expectations and there is a high joint integration with the U.S. (author's abstract)
Series: EI Working Papers / Europainstitut
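One standard quantity-based measure referred to in the abstract is equity home bias: the shortfall of foreign holdings relative to the foreign weight in the world market portfolio. A sketch with invented numbers (the paper's exact construction follows Baele et al. 2004):

```python
# Sketch of a quantity-based integration measure: equity home bias, defined as
# 1 - (foreign share in domestic portfolios) / (foreign share of the world
# market portfolio). The example values below are invented for illustration.

def home_bias(foreign_share_held, foreign_share_world):
    return 1.0 - foreign_share_held / foreign_share_world

# e.g. 30% foreign holdings against an 85% world-portfolio weight -> bias ~0.65
print(home_bias(foreign_share_held=0.30, foreign_share_world=0.85))
```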
3

Retief, Daniel Christoffel Hugo. "Investigating integrated catchment management using a simple water quantity and quality model : a case study of the Crocodile River Catchment, South Africa." Thesis, Rhodes University, 2015. http://hdl.handle.net/10962/d1017875.

Abstract:
Internationally, water resources are facing increasing pressure due to over-exploitation and pollution. Integrated Water Resource Management (IWRM) has been accepted internationally as a paradigm for integrative and sustainable management of water resources. In practice, however, the implementation and success of IWRM policies has been hampered by the lack of integrative decision support tools, especially within the context of limited resources and observed data. This is true for the Crocodile River Catchment (CRC), located within the Mpumalanga Province of South Africa. The catchment has been experiencing a decline in water quality as a result of the point-source input of a cocktail of pollutants discharged from industrial and municipal wastewater treatment plants, as well as diffuse-source runoff and return flows from the extensive areas of irrigated agriculture and mining sites. The decline in water quality has profound implications for a range of stakeholders across the catchment, including increased treatment costs and reduced crop yields. The combination of deteriorating water quality and the lack of understanding of the relationships between water quantity and quality for determining compliance/non-compliance in the CRC has resulted in collaboration between stakeholders willing to work in a participatory and transparent manner to create an Integrated Water Quality Management Plan (IWQMP). This project aimed to model water quality (combined water quality and quantity) to facilitate the IWQMP, aiding the understanding of the relationship between water quantity and quality in the CRC. A relatively simple water quality model (WQSAM) was used that receives inputs from established water quantity systems models, and was designed to be a water quality decision support tool for South African catchments. The model was applied to the CRC, achieving acceptable simulations of total dissolved solids (used as a surrogate for salinity) and nutrients (including orthophosphates, nitrates + nitrites and ammonium) for historical conditions. Validation results revealed that there is little consistency within the catchment, attributed to the non-stationary nature of water quality at many of the sites in the CRC. Analyses of the results using a number of representations, including seasonal load distributions, load duration curves and load flow plots, confirmed that the WQSAM model was able to capture the variability of relationships between water quantity and quality, provided that the simulated hydrology was sufficiently accurate. The outputs produced by WQSAM were seen as useful for the CRC, with the Inkomati-Usuthu Catchment Management Agency (IUCMA) planning to operationalise the model in 2015. The ability of WQSAM to simulate water quality in data-scarce catchments, with constituents that are appropriate for the needs of water resource management within South Africa, is highly beneficial.
4

Passelaigue, Theys Dominique. "Grandeurs et mesures à l'école élémentaire." Thesis, Montpellier 2, 2011. http://www.theses.fr/2011MON20161/document.

Abstract:
Since 2002, within a new area of mathematics entitled 'grandeurs et mesure' ('quantities and measurement'), the authors of the official French curriculum have proposed approaching quantities through activities of direct and indirect comparison, using arbitrary standards before introducing conventional units, and drawing on the experimental sciences as a reservoir of activities. These injunctions were the starting point of this work. An epistemological analysis of the concepts allowed us to show that the distinction between 'quantity' and 'measurement', presented as natural, is relevant for this level of teaching. We looked for the origin of the official prescriptions by studying the primary-school science and mathematics curricula since 1923. We found a decisive turning point in the programmes: following the 'modern mathematics' reform, the study of quantities before measurement appeared in science as well as in mathematics, drawing on the work of the psychologists of the time. This study in both disciplines was then no longer required until 2002. Our work brought to light a poor mastery of the concepts of 'quantity' and 'measurement', as well as a misconception of 'quantity', among primary-school teachers. Some of them are, moreover, reluctant to adopt the approach described in the curricula for all quantities. We studied the impact of comparison activities using arbitrary standards on the construction of the concept of mass and on the meaning of measurement, through the implementation of two comparative teaching experiments in CE1 (the second year of primary school). Our results show that the level of conceptualization of the pupils, as evaluated by our criteria, is higher, for both the sense of quantity and the sense of measurement, among pupils who experienced a sequence introducing mass through comparison activities detached from number.
5

Albinsson, Anders. "”De va svinhögt typ 250 kilo” : Förskolebarns mätande av längd, volym och tid i legoleken." Licentiate thesis, Linköpings universitet, Lärande, Estetik, Naturvetenskap (LEN), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-124659.

Abstract:
The purpose of the dissertation is to study, describe and analyse which comparative measurement activities preschool children construct and use, and how they solve problems and communicate when they use these activities whilst playing with Lego ('the Lego play'). The measurement activities chosen are length/height, quantity and time. The empirical material is based on data from two preschool classes with children aged 2-5 years; it was collected through participant observation (video captures) of the children's Lego play. The theoretical starting points of the study are grounded in childhood sociology and the sociocultural perspective. The study adopts the childhood sociology perspective by viewing children as competent and active in creating meaning as well as in controlling and influencing their own and others' social environments. The sociocultural perspective gives prominence to development and learning, and its related tools and concepts are used to analyse the results of the study. That is, the Lego play is studied in a social context from the child's perspective, and the sociocultural perspective is used to describe and analyse the child's use of mathematics and acquisition of knowledge in the Lego play. The results show that children measuring length/height and quantity explored a repertoire of measurement tools in order to make comparisons, and adapted these to the context in question: their own bodies, others' bodies, artefacts, numbers and counting. The measurements were used individually and with others, and solving their own or shared problems constituted a large share of the time spent constructing models during the Lego play. By contrast, the time concept was used mainly as a tool when the children played with their finished Lego models. Thus, a time perspective was added to the child's finished model, which inspired thoughts and reflections about time used in the Lego play. The children used the time concepts of the present, the past and the future, and also considered the concept of velocity in the context of the timescale. The children's communication had a large impact on the Lego play, and they expressed their ideas verbally, physically and through action. The children's use of mathematics was prominent and meaningful during the Lego play.
6

Ally, Abdallah K. "Quantile-based methods for prediction, risk measurement and inference." Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/5342.

Abstract:
The focus of this thesis is on the employment of theoretical and practical quantile methods in addressing prediction, risk measurement and inference problems. From a prediction perspective, the problem of creating model-free prediction intervals for a future unobserved value of a random variable drawn from a sample distribution is considered. With the objective of reducing prediction coverage error, two common distribution transformation methods based on the normal and exponential distributions are presented, and they are theoretically demonstrated to attain exact and error-free prediction intervals respectively. The second problem studied is the estimation of expected shortfall via kernel smoothing. The goal here is to introduce methods that reduce the estimation bias of expected shortfall. To this end, several one-step bias-correction expected shortfall estimators are presented, investigated via simulation studies, and compared with existing estimators. The third problem is that of constructing simultaneous confidence bands for quantile regression functions when the predictor variables are constrained within a region. In this context, a method is introduced that makes use of asymmetric Laplace errors in conjunction with a simulation-based algorithm to create confidence bands for quantile and inter-quantile regression functions. Furthermore, the simulation approach is extended to an ordinary least squares framework to build simultaneous bands for quantile functions of the classical regression model, both when the model errors are normally distributed and when this assumption is not fulfilled. Finally, attention is directed towards the construction of prediction intervals for realised volatility, exploiting an alternative volatility estimator based on the difference of two extreme quantiles. The proposed approach makes use of an AR-GARCH procedure in order to model time series of intraday quantiles and forecast the predictive distribution of intraday returns. Moreover, two simple adaptations of an existing model are also presented.
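The kernel-smoothing idea in the second problem can be sketched in a few lines: the plain empirical expected shortfall averages the observations beyond the sample quantile, while a kernel-smoothed version replaces the hard tail indicator with a smooth weight. This is a generic illustration under an assumed Gaussian kernel and rule-of-thumb bandwidth, not the thesis's specific bias-corrected estimators.

```python
# Sketch of nonparametric expected shortfall (ES) estimation at level alpha,
# comparing the plain empirical estimator with a kernel-smoothed one.
# Illustrative only; the thesis's bias-correction terms are not reproduced.

import numpy as np
from scipy.stats import norm

def es_empirical(x, alpha):
    """Average of the observations below the empirical alpha-quantile."""
    q = np.quantile(x, alpha)
    return x[x <= q].mean()

def es_kernel(x, alpha, h=None):
    """Kernel-smoothed ES: replace the indicator 1{x <= q}
    with the smooth weight Phi((q - x)/h)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if h is None:
        h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)   # rule-of-thumb bandwidth
    q = np.quantile(x, alpha)
    w = norm.cdf((q - x) / h)
    return (x * w).sum() / (n * alpha)

rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=5000)          # heavy-tailed returns
print(es_empirical(returns, 0.05), es_kernel(returns, 0.05))
```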
7

Lee, Christopher Francis. "Use of wind profilers to quantify atmospheric turbulence." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/use-of-wind-profilers-to-quantify-atmospheric-turbulence(d6a12ed2-533a-4dae-9f0d-747bc0b4c725).html.

Abstract:
Doppler radar wind profilers are already widely used to measure atmospheric winds throughout the free troposphere and stratosphere. Several methods have been developed to quantify atmospheric turbulence with such radars, but to date they have remained largely untested; this thesis presents the first comprehensive validation of one such method. Conventional in-situ measurements of turbulence have been concentrated in the surface layer, with some aircraft and balloon platforms measuring at higher altitudes on a case study basis. Radars offer the opportunity to measure turbulence near-continuously, and at a range of altitudes, to provide the first long-term observations of atmospheric turbulence above the surface layer. Two radars were used in this study: a Mesosphere-Stratosphere-Troposphere (MST) radar at Capel Dewi, West Wales, and the Facility for Ground Based Atmospheric Measurements (FGAM) mobile boundary layer profiler. In-situ measurements were made using aircraft and tethered-balloon borne turbulence probes. The spectral width method was chosen for detailed testing, which uses the width of a radar's Doppler spectrum as a measure of atmospheric velocity variance: broader Doppler spectra indicate stronger turbulence. To obtain Gaussian Doppler spectra (a requirement of the spectral width method), combination of between five and seven consecutive spectra was required. Individual MST spectra were particularly non-Gaussian, because of the sparse nature of turbulence at its observation altitudes. The widths of Gaussian fits to the Doppler spectrum were compared to those from the raw spectrum, to ensure that non-atmospheric signals were not measured. Corrections for non-turbulent broadening, such as beam broadening and signal processing, were investigated. Shear broadening was found to be small, and the errors in its calculation large, so no corrections for wind shear were applied. Beam broadening was found to be the dominant broadening contribution, and also contributed the largest uncertainty to spectral widths. Corrected spectral widths were found to correlate with aircraft measurements for both radars. Observing spectral widths over time periods of 40 and 60 minutes for the boundary layer profiler and MST radar respectively gave the best measure of turbulence intensity and variability. Median spectral widths gave the best average over that period, with two-sigma limits (where sigma is the standard deviation of spectral widths) giving the best representation of the variability in turbulence. Turbulent kinetic energies were derived from spectral widths; typical boundary layer values were 0.13 m² s⁻² with a two-sigma range of 0.04-0.25 m² s⁻², and peaked at 0.21 m² s⁻² with a two-sigma range of 0.08-0.61 m² s⁻². Turbulent kinetic energy dissipation rates were also calculated from spectral widths, requiring radiosonde measurements of atmospheric stability. Dissipation rates compared well with aircraft measurements, reaching peaks of 1×10⁻³ m² s⁻³ within 200 m of the ground, and decreasing to 1-2×10⁻⁵ m² s⁻³ near the boundary layer capping inversion. Typical boundary layer values were between 1-3×10⁻⁴ m² s⁻³. These values are in close agreement with dissipation rates from previous studies.
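The central step of the spectral width method — fitting a Gaussian to a Doppler spectrum and removing non-turbulent broadening to leave the atmospheric velocity variance — can be sketched as follows, on synthetic data and with the beam-broadening correction reduced to a single assumed term:

```python
# Sketch of the spectral-width step: fit a Gaussian to a Doppler spectrum,
# subtract an assumed beam-broadening term, and report the residual velocity
# variance. Synthetic data; the real corrections are more involved.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, a, v0, sigma):
    return a * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

v = np.linspace(-10, 10, 256)                  # radial velocity bins (m/s)
rng = np.random.default_rng(1)
spectrum = gaussian(v, 1.0, 0.8, 1.5) + 0.02 * rng.standard_normal(v.size)

(a, v0, sigma), _ = curve_fit(gaussian, v, spectrum, p0=(1.0, 0.0, 1.0))

sigma_beam = 1.1                               # assumed beam broadening (m/s)
var_turb = max(sigma**2 - sigma_beam**2, 0.0)  # corrected velocity variance
tke = 1.5 * var_turb                           # TKE under an isotropy assumption
print(f"velocity variance: {var_turb:.3f} m^2 s^-2, TKE: {tke:.3f} m^2 s^-2")
```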
8

Schulte, Marc Alan. "Dilution Gauging as a Method to Quantify Groundwater Baseflow Fluctuations in Arizona's San Pedro River." Thesis, The University of Arizona, 1997. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu_etd_hy0133_sip1_w.pdf&type=application/pdf.

9

Aamoth, Kelsey. "Instrumentation and Control System to Quantify Colonic Activity." Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1459190138.

10

Tattaris, Maria. "Investigating methods used to quantify gaseous emissions from vegetation fires using spectroscopic measurements." Thesis, King's College London (University of London), 2013. https://kclpure.kcl.ac.uk/portal/en/theses/investigating-methods-used-to-quantify-gaseous-emissions-from-vegetation-fires-using-spectroscopic-measurements(49898568-ba83-4e2a-88fe-2a6a697ae543).html.

Abstract:
This work investigates the application of ground-based trace gas spectroscopy to determine the chemical makeup and quantity of smoke emitted from vegetation fires. Ultraviolet Differential Optical Absorption Spectroscopy (UV-DOAS) has been infrequently deployed in fire emission studies, yet is potentially a portable, lightweight, inexpensive and simple method. Fourier Transform Infrared (FTIR) spectroscopy has been more commonly used in fire emission studies, but not generally in the long (> 10 m) open-path ground-based geometry explored here. This research combines these approaches to investigate their ability to quantify trace gas fluxes emitted from open vegetation fires, in part to help validate estimates of fuel consumption rate based on fire radiative power (FRP) measures. UV and IR measurements of the smoke plumes from controlled open vegetation fires (> 4 hectares) were recorded during three field campaigns in Arnhem Land (Northern Australia), Kruger Park (South Africa) and Alberta (Canada). The UV-DOAS was used to quantify NO2 and SO2 vertical column amounts (maximum column amounts approx. 200 ppm·m), allowing the determination of flux rates when used to traverse the smoke plume and coupled with plume velocity estimates. Horizontal column amounts of the main plume carbonaceous species (CO2, CO and CH4) were quantified using FTIR methods and used to calculate emission ratios and emission factors for the target gases, providing detail on inter- and intra-fire variations that is seldom available in the current literature. Provided NO2 and SO2 are detectable by the FTIR, UV-DOAS flux rates and FTIR emission ratios can be combined to calculate flux rates for all FTIR-detectable species. This allows for the determination of the total carbon flux from the fires, and its variation over time. Since vegetation is approximately 50% carbon, this flux is in theory directly proportional to the fuel consumption rate, and directly comparable to the fire's radiative power output variations as determined by airborne thermal imaging. Hence, in addition to providing the means to estimate smoke plume chemical makeup, emissions magnitude and variability, the simultaneous deployment of UV-DOAS, FTIR spectroscopy and airborne thermal imaging enables the validation of FRP-derived fuel consumption rates. The FRP method is gaining ground as a tool for improving biomass burning emissions inventories based on satellite observations, but at present has had relatively little validation. This study therefore contributes to the ongoing evaluation effort. Findings demonstrate that the UV-DOAS is an effective way to measure column amounts of SO2 and NO2 in vegetation fire plumes, provided that the fires are of an adequate size and emit smoke in sufficient quantities. The ability to accurately quantify NO2 and SO2 using the method did depend on fuel type, since the combustion of different fuel types (e.g. grasses vs. woody fuels vs. organic soils) appeared to cause more or less of these particular gases to be emitted. There was difficulty in confidently detecting NO2 via the OP-FTIR approach for the majority of the study cases, due to the relatively weak IR absorption bands used and the relative scarcity of this gas in the plumes in comparison to some others studied.
We advocate using the UV-DOAS and FTIR combination for trace gas measurements from vegetation fires, provided SO2 or NO2 can be identified by the FTIR in the particular biomass burning situation under study. Where simultaneous FRP measurements are available, the carbonaceous flux rates calculated using the FTIR/UV-DOAS method show a strong correlation with FRP, helping to confirm the relationship between FRP and fuel consumption rate at the scale of these vegetation fires. These are, to our knowledge, by far the largest fires on which this relationship has been evaluated, prior evaluations being limited to laboratory-scale events.
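The combination step described above amounts to simple proportionality: a species flux follows from the measured SO2 flux and the FTIR molar emission ratio of that species to SO2. A sketch with invented numbers:

```python
# Sketch of the flux-combination step: given a UV-DOAS flux for SO2 and FTIR
# molar emission ratios X/SO2, the flux of each species X follows by
# proportionality. All numbers below are invented for illustration.

so2_flux = 1.2                                                 # kg/s (assumed)
molar_mass = {"SO2": 64.0, "CO2": 44.0, "CO": 28.0, "CH4": 16.0}  # g/mol
emission_ratio = {"CO2": 150.0, "CO": 12.0, "CH4": 1.1}  # mol X / mol SO2 (assumed)

so2_mol_s = so2_flux * 1000.0 / molar_mass["SO2"]              # mol/s of SO2
for gas, er in emission_ratio.items():
    flux = so2_mol_s * er * molar_mass[gas] / 1000.0           # back to kg/s
    print(f"{gas}: {flux:.2f} kg/s")
```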
11

Lee, Anna Glyn. "A Novel Device and Method to Quantify Knee Stability during Anterior Cruciate Ligament Reconstruction." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu159535872238711.

12

Firesinger, Devon Robert. "Quantity Trumps Quality: Bayesian Statistical Accumulation Modeling Guides Radiocarbon Measurements to Construct a Chronology in Real-time." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6701.

Abstract:
The development of an accurate and precise geochronology is imperative to understanding archives containing information about Earth’s past. Unable to date all intervals of an archive, researchers use methods of interpolation to approximate age between dates. Sections of the radiocarbon calibration curve can induce larger chronological uncertainty independent of instrumental precision, meaning even a precise date may carry inflated error in its calibration to a calendar age. Methods of interpolation range from step-wise linear regression to, most recently, Bayesian statistical models. These employ prior knowledge of accumulation rate to provide a more informed interpolation between neighboring dates. This study uses a Bayesian statistical accumulation model to inform non-sequential dating of a sediment core using a high-throughput gas-accepting accelerator mass spectrometer. Chronological uncertainty was iteratively improved but approached an asymptote due to a blend of calibration uncertainty, instrument error and sampling frequency. This novel method resulted in a superior chronology when compared to a traditional sediment core chronology with fewer, but more precise, dates from the same location. The high-resolution chronology was constructed for a gravity core from the Pigmy Basin with an overall 95% confidence age range of 360 years, unmatched by the previously established chronology of 460 years. This research reveals that a larger number of low-precision dates requires less interpolation, resulting in a more robust chronology than one based on fewer high-precision measurements necessitating a higher degree of age interpolation.
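The real-time strategy in this work — update an age-depth model after each date and direct the next measurement at the interval where the chronology is least constrained — can be caricatured with a toy Monte Carlo interpolation (normal age uncertainties, piecewise-linear accumulation; the thesis uses a full Bayesian accumulation model, which this does not reproduce):

```python
# Toy sketch of the iterative dating strategy: Monte Carlo age-depth
# interpolation between dated depths, then choose the undated depth with the
# widest 95% age interval as the next radiocarbon sample. All depths, ages
# and uncertainties below are invented.

import numpy as np

rng = np.random.default_rng(11)
depths_dated = np.array([0.0, 50.0, 120.0, 200.0])      # cm
ages_mean    = np.array([0.0, 800.0, 2100.0, 3600.0])   # cal yr BP (assumed)
ages_sd      = np.array([30.0, 90.0, 120.0, 80.0])      # 1-sigma (assumed)

grid = np.linspace(0, 200, 81)                           # candidate depths
draws = np.empty((5000, grid.size))
for i in range(draws.shape[0]):
    a = np.sort(rng.normal(ages_mean, ages_sd))          # monotone age draw
    draws[i] = np.interp(grid, depths_dated, a)          # linear interpolation

width = np.percentile(draws, 97.5, axis=0) - np.percentile(draws, 2.5, axis=0)
print("date next at depth:", grid[np.argmax(width)], "cm")
```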
13

Martínez, Arias Borja. "Torque measurement in turbulent Couette-Taylor flows." Thesis, Le Havre, 2015. http://www.theses.fr/2015LEHA0004/document.

Abstract:
The flow between two concentric cylinders, i.e. the Couette-Taylor flow, has been investigated when only the inner cylinder rotates. Four set-ups with different radius ratios have been employed. Flow visualisations have been performed to analyse the evolution of the flow patterns with the Reynolds number, Re. The variation of the torque acting on the inner cylinder with different parameters has been quantified using the pseudo-Nusselt number, which measures the rate of energy dissipation in the flow. At low Re, the flow is laminar and azimuthal, and the torque is proportional to Re. Above a critical value of Re, Taylor vortices emerge in the flow and the slope of the torque changes drastically. At high values of Re, the vortices become turbulent and the rate of increase of the torque is enhanced by the energy dissipation of turbulence. The torque, measured up to Re = 45,000, depends on the radius ratio of the cylinders and on the number of vortices. Below the ultimate regime of turbulence, flows containing a larger number of vortices exert larger torque; above it, flows containing a larger number of vortices exert lower torque. A specific study of the torque exerted on the inner cylinder has been carried out with viscoelastic fluids containing high-molecular-weight polymers. If acceleration-deceleration cycles of the rotation of the inner cylinder are applied, the torque exhibits a hysteresis loop whose area increases with the polymer concentration. The statistics of the elastic turbulence fluctuations have been analysed, with a special focus on the torque induced by the solitary vortices obtained in the deceleration phase, before the complete relaminarisation of the flow.
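The normalisation mentioned above can be made concrete: the pseudo-Nusselt number divides the measured torque by the torque of the laminar circular Couette solution. A sketch with invented values, assuming the standard laminar result for a rotating inner and fixed outer cylinder:

```python
# Sketch of the torque normalisation: pseudo-Nusselt number Nu_omega =
# T_measured / T_laminar, with the classical laminar Couette torque
# T_lam = 4*pi*mu*L*ri^2*ro^2*omega_i / (ro^2 - ri^2). Values are invented.

import math

def laminar_torque(mu, length, r_i, r_o, omega_i):
    return 4 * math.pi * mu * length * r_i**2 * r_o**2 * omega_i / (r_o**2 - r_i**2)

mu, L = 1.0e-3, 0.4          # water-like viscosity (Pa s), cylinder length (m)
r_i, r_o = 0.04, 0.05        # radii (m), radius ratio 0.8
omega_i = 50.0               # inner-cylinder angular velocity (rad/s)

T_measured = 2.0e-3          # measured torque (N m), invented
nu_omega = T_measured / laminar_torque(mu, L, r_i, r_o, omega_i)
print(f"pseudo-Nusselt number: {nu_omega:.2f}")   # ~1.8: above laminar level
```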
14

Visi, Federico. "Methods and technologies for the analysis and interactive use of body movements in instrumental music performance." Thesis, University of Plymouth, 2017. http://hdl.handle.net/10026.1/8805.

Abstract:
A constantly growing corpus of interdisciplinary studies supports the idea that music is a complex multimodal medium that is experienced not only by means of sounds but also through body movement. From this perspective, musical instruments can be seen as technological objects coupled with a repertoire of performance gestures. This repertoire is part of an ecological knowledge shared by musicians and listeners alike. It is part of the engine that guides musical experience and has a considerable expressive potential. This thesis explores technical and conceptual issues related to the analysis and creative use of music-related body movements in instrumental music performance. The complexity of this subject required an interdisciplinary approach, which includes the review of multiple theoretical accounts, quantitative and qualitative analysis of data collected in motion capture laboratories, the development and implementation of technologies for the interpretation and interactive use of motion data, and the creation of short musical pieces that actively employ the movement of the performers as an expressive musical feature. The theoretical framework is informed by embodied and enactive accounts of music cognition as well as by systematic studies of music-related movement and expressive music performance. The assumption that the movements of a musician are part of a shared knowledge is empirically explored through an experiment aimed at analysing the motion capture data of a violinist performing a selection of short musical excerpts. A group of subjects with no prior experience playing the violin is then asked to mime a performance following the audio excerpts recorded by the violinist. Motion data is recorded, analysed, and compared with the expert's data. This is done both quantitatively through data analysis as well as qualitatively by relating the motion data to other high-level features and structures of the musical excerpts. Solutions to issues regarding capturing and storing movement data and its use in real-time scenarios are proposed. For the interactive use of motion-sensing technologies in music performance, various wearable sensors have been employed, along with different approaches for mapping control data to sound synthesis and signal processing parameters. In particular, novel approaches for the extraction of meaningful features from raw sensor data and the use of machine learning techniques for mapping movement to live electronics are described. To complete the framework, an essential element of this research project is the composition and performance of études that explore the creative use of body movement in instrumental music from a Practice-as-Research perspective. This works as a test bed for the proposed concepts and techniques. Mapping concepts and technologies are challenged in a scenario constrained by the use of musical instruments, and different mapping approaches are implemented and compared. In addition, techniques for notating movement in the score, and the impact of interactive motion sensor systems on instrumental music practice from the performer's perspective, are discussed. Finally, the chapter concluding the part of the thesis dedicated to practical implementations describes a novel method for mapping movement data to sound synthesis. This technique is based on the analysis of multimodal motion data collected from multiple subjects, and its design draws from the theoretical, analytical, and practical works described throughout the dissertation.
Overall, the parts and the diverse approaches that constitute this thesis work in synergy, contributing to the ongoing discourses on the study of musical gestures and the design of interactive music systems from multiple angles.
15

Ndoye, Abdoul Aziz Junior. "Essays on the econometrics of inequality and poverty measurements." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM1125.

Abstract:
This dissertation consists of four essays on the econometrics of inequality and poverty measurement. It provides a statistical analysis based on probabilistic models, finite mixture distributions and quantile regression models, all within a Bayesian approach. Chapter 2 models the income distribution using a mixture of lognormal densities. Using the analytical expressions of inequality indices, it shows how a Rao-Blackwellised Gibbs sampler can lead to accurate inference on income inequality measurements even in small samples. Chapter 3 develops Bayesian inference for the unconditional quantile regression model based on the Re-centered Influence Function (RIF). It models the considered distribution by a mixture of lognormal densities and then provides conditional posterior densities for the quantile regression parameters. This approach is expected to provide better estimates in the extreme quantiles in the presence of heavy tails, as well as valid small-sample confidence intervals for the Oaxaca-Blinder decomposition. Chapter 4 provides Bayesian inference for a mixture of two Pareto distributions, which is then used to approximate the upper tail of a wage distribution. This mixture model is applied to data from the CPS ORG to analyze the structure of top wages in the U.S. from 1992 through 2009. Findings are largely in accordance with explanations combining the model of superstars and the model of tournaments in hierarchical organization structures. Chapter 5 makes use of the RIF-regression to measure both changes in the return to education across quantiles and the rural-urban decomposition of inequality in consumption expenditure in Senegal, using total consumption expenditure as an indicator of income.
16

Omejer, Ole Øvergaard. "A System for the Acquisition and Analysis of Invasive and Non-invasive Measurements used to quantify Cardiovascular Performance." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for teknisk kybernetikk, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-13817.

Abstract:
Invasive measurements allow for more accurate assessment of cardiac function than non-invasive measurements. However, invasive measurements are often not available in clinical settings. By comparing invasive and non-invasive measurements collected in an experimental context, better relationships between non-invasive measurements and cardiac function may be found. This master thesis describes the development of two computer applications for simultaneous acquisition, calibration and synchronization of these measurements. The developed applications were tested during operations on pigs with all measurement sources connected. The results show that all the desired measurements were successfully acquired by the system. Calibration of the different measurements was also achieved. Different methods for synchronization were tested during the experiments, and it was possible to synchronize all clocks present in the system. Finally, all of the desired parameters were calculated.
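One simple way to synchronize two measurement channels, offered here purely as an illustration rather than the thesis's actual method, is to estimate their relative lag by cross-correlation and shift one channel onto the other's clock:

```python
# Sketch: estimate the lag between two measurement channels by
# cross-correlation. Synthetic signals; not the thesis's actual method.

import numpy as np
from scipy.signal import correlate, correlation_lags

rng = np.random.default_rng(5)
t = np.arange(0, 10, 0.01)
signal = np.sin(2 * np.pi * 1.2 * t)                   # toy pressure-like wave
invasive = signal + 0.05 * rng.standard_normal(t.size)
noninvasive = np.roll(signal, 37) + 0.05 * rng.standard_normal(t.size)  # delayed

c = correlate(noninvasive - noninvasive.mean(), invasive - invasive.mean())
lags = correlation_lags(noninvasive.size, invasive.size)
print("estimated lag:", lags[np.argmax(c)], "samples")  # ~37
```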
17

Dole, Alecia A. "The Effects of Self-Graphing and Feedback on the Quantity and Quality of Written Responses to Mathematical Word Problems." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1468921405.

18

Jian, Jun. "Predictability of Current and Future Multi-River discharges: Ganges, Brahmaputra, Yangtze, Blue Nile, and Murray-Darling Rivers." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19777.

Abstract:
Thesis (Ph.D)--Earth and Atmospheric Sciences, Georgia Institute of Technology, 2008.
Committee Chair: Judith Curry; Committee Chair: Peter J Webster; Committee Member: Marc Stieglitz; Committee Member: Robert Black; Committee Member: Rong Fu.
19

Thoreson, Erik J. "From nanoscale to macroscale using the atomic force microscope to quantify the role of few-asperity contacts in adhesion." Link to electronic dissertation, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-010906-204218/.

20

Farmer, Sybil E. "Development and clinical application of assessment measures to describe and quantify intra-limb coordination during walking in normal children and children with cerebral palsy." Thesis, University of Wolverhampton, 2008. http://hdl.handle.net/2436/49074.

Abstract:
This thesis investigates coordination of the lower limb joints within the limb during walking. The researcher was motivated by her clinical experience as a paediatric physiotherapist: she observed that the pattern of lower limb coordination differed between normal children and those with cerebral palsy, and many of the currently used interventions did not appear to influence this patterning. As a precursor to evaluating the effectiveness of treatments in modifying coordination, a tool to measure coordination was required. The researcher initially investigated qualitative and then quantitative methods of measuring within-limb coordination. A technique was developed that uses the relative angular velocity of two joints to determine when the joints are in-phase, anti-phasic or in stasis. The phasic parameters of hip/knee, knee/ankle and hip/ankle joint coordination were quantified. There were some significant differences between normal children and children with cerebral palsy. Asymmetry of these phasic parameters was identified, with children with cerebral palsy being more asymmetrical than normal children. The clinical utility of the technique was tested by comparing two groups of children before and after two surgical procedures. This showed some significant differences in phasic parameters between pre- and post-operative data for one procedure. Low sample sizes mean that further work is required to confirm these findings. Data from this work have been used to calculate sample sizes to give an a priori power of 0.8; further research is proposed and potential applications discussed. It is hoped that this technique will raise awareness of abnormal intra-limb coordination and allow therapists to identify key interactions between joints that need to be facilitated during walking training.
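The phase classification described above — using the relative angular velocities of two joints to label each instant in-phase, anti-phasic or in stasis — can be sketched on toy joint-angle data (the stasis threshold is invented):

```python
# Sketch of the phase-classification idea: from two joint-angle series, compute
# angular velocities and label each sample in-phase (same sign), anti-phasic
# (opposite sign) or stasis (either velocity near zero). Toy data; the
# threshold value is an assumption for illustration.

import numpy as np

t = np.linspace(0, 1, 101)                 # one gait cycle (s)
hip  = 30 * np.sin(2 * np.pi * t)          # toy joint angles (deg)
knee = 60 * np.sin(2 * np.pi * t - 0.6)

v_hip, v_knee = np.gradient(hip, t), np.gradient(knee, t)
eps = 10.0                                 # deg/s stasis threshold (assumed)

stasis = (np.abs(v_hip) < eps) | (np.abs(v_knee) < eps)
inphase = ~stasis & (np.sign(v_hip) == np.sign(v_knee))
anti = ~stasis & ~inphase
print(f"in-phase {inphase.mean():.0%}, anti-phasic {anti.mean():.0%}, "
      f"stasis {stasis.mean():.0%}")
```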
21

Li, Yang. "An Empirical Analysis of Family Cost of Children : A Comparison of Ordinary Least Square Regression and Quantile Regression." Thesis, Uppsala University, Department of Statistics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-126660.

Abstract:

Quantile regression has advantageous properties compared with OLS regression: it gives a full picture of the effect of a covariate on the response, and it enjoys robustness and equivariance properties. In this paper, I use survey data from Belgium and apply a linear model to illustrate these advantages. I then use a quantile regression model on the raw data to analyse how family costs vary with the number of children, and apply a Wald test. The results show that, for most family types and living standards, from the lower to the upper quantiles the family cost of children increases with the number of children, and the cost of each child is the same. We also found a common pattern: the cost of the second child is significantly higher than the cost of the first child for non-working families and for families of all living standards, at the upper quantiles (from the 0.75 to the 0.9 quantile) of the conditional distribution.
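The comparison the abstract describes can be reproduced in outline with standard tooling: fit OLS for the conditional mean and quantile regressions at several quantiles, then compare the coefficient on the number of children. Simulated heteroscedastic data stand in for the Belgian survey:

```python
# Sketch comparing OLS with quantile regression at several quantiles, on
# simulated heteroscedastic data (not the Belgian survey data).

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000
children = rng.integers(1, 4, size=n)             # number of children
income = rng.normal(30000, 8000, size=n)
cost = 2000 * children + 0.1 * income + rng.normal(0, 500 * children, size=n)

df = pd.DataFrame({"cost": cost, "children": children, "income": income})

ols = smf.ols("cost ~ children + income", df).fit()
print("OLS:", ols.params["children"])

for q in (0.25, 0.5, 0.75, 0.9):
    qr = smf.quantreg("cost ~ children + income", df).fit(q=q)
    print(f"q={q}:", qr.params["children"])       # effect across the distribution
```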

22

Marof, Ahmad, and Emilia Struijk. "PRODUKTIVITETSMÄTNINGAR : Hur definieras produktivitet och kan den mätas med avseende på kvalitet, kapacitetsutnyttjande och mängd per tidsenhet?" Thesis, KTH, Byggvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-174121.

Abstract:
To compete for the market it is necessary to be competitive, and one of the tools for getting there is high productivity in operations. An effective way to reach high productivity is to use measurements, which can be financial or non-financial depending on the goals set by management. Measurements and their results can be followed up for evaluation and further development, and then used as a basis for management's goal setting. The purpose of this thesis is to investigate how non-financial productivity measurements can most suitably be carried out at project level in the candidate areas of capacity utilization, quality and quantity per unit of time. To obtain material for this work, the following questions are asked: What is productivity? How is productivity measured in construction projects today? Which productivity measures are most suitable for use in projects? The goal of the report is to produce one or more useful measurement proposals for Skanska Region Hus Stockholm Syd. Methodologically, the thesis is based on literature studies and interviews with employees whose positions are connected to the subject. Interest in measurement was found to lie mainly in the area of quality; to ensure that the required quality is delivered, systematized self-inspections were discussed. Skanska's management system VSAA ('Vårt Sätt Att Arbeta', Our Way of Working) presents a template for carrying out self-inspections, but the interviews indicate that projects do not use the template systematically. The Analysis and Results chapter proposes a procedure for carrying out and compiling self-inspections. The interview results show that Skanska Region Hus Stockholm Syd does not perform direct measurements of productivity; instead, reconciliations of schedules and cost estimates are carried out. The interviewees would like a process to follow in order to facilitate measurement in the projects, and an approach for the design stage is discussed in the Analysis and Results chapter.
23

Bräutigam, Marcel. "Pro-cyclicality of risk measurements. Empirical quantification and theoretical confirmation." Thesis, Sorbonne université, 2020. http://www.theses.fr/2020SORUS100.

Abstract:
This thesis examines, empirically and theoretically, the pro-cyclicality of risk measurements made on historical data: the effect that risk measurements overestimate future risk in times of crisis while underestimating it in quiet times. As a starting point, we lay down a methodology to empirically evaluate the amount of pro-cyclicality when using a sample quantile (Value-at-Risk) process to measure risk. Applying this procedure to 11 stock indices, we identify two factors explaining the pro-cyclical behaviour: the clustering and mean reversion of volatility (as modelled by a GARCH(1,1)), and the very way of estimating risk on historical data (even when no volatility dynamics are present). To confirm these claims theoretically, we proceed in two steps. First, we derive bivariate (functional) central limit theorems for quantile estimators paired with different measure-of-dispersion estimators. We establish them for sequences of iid random variables as well as for the class of augmented GARCH(p,q) processes. Then, we use these asymptotics to prove theoretically the pro-cyclicality observed empirically. Extending the setting of the empirical study, we show that no matter the choice of risk measure (estimator), measure-of-dispersion estimator or underlying model, pro-cyclicality will always exist.
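The empirical starting point — a rolling sample-quantile (historical VaR) process on a volatility-clustered series — is easy to simulate. The sketch below generates a GARCH(1,1)-type series and computes the rolling historical VaR, whose lag behind realised risk is the pro-cyclical effect (all parameters invented):

```python
# Sketch of a rolling sample-quantile (historical VaR) process on a
# GARCH(1,1)-type series: the estimate is low just before volatile periods
# and high just after them. Parameters are invented for illustration.

import numpy as np

rng = np.random.default_rng(7)
n, window, alpha = 3000, 250, 0.05

returns = np.empty(n)
sigma2 = 1e-4
for t in range(n):                       # simple volatility-clustering model
    returns[t] = np.sqrt(sigma2) * rng.standard_normal()
    sigma2 = 1e-6 + 0.09 * returns[t] ** 2 + 0.90 * sigma2

var_est = np.array([-np.quantile(returns[t - window:t], alpha)
                    for t in range(window, n)])
# compare each estimate with the realised loss quantile over the NEXT window
future = np.array([-np.quantile(returns[t:t + window], alpha)
                   for t in range(window, n - window)])
corr = np.corrcoef(var_est[:len(future)], future)[0, 1]
print(corr)   # low correlation: the estimate lags the risk it should predict
```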
24

Zhang, Hanze. "Bayesian inference on quantile regression-based mixed-effects joint models for longitudinal-survival data from AIDS studies." Scholar Commons, 2017. https://scholarcommons.usf.edu/etd/7456.

Abstract:
In HIV/AIDS studies, viral load (the number of copies of HIV-1 RNA) and CD4 cell counts are important biomarkers of the severity of viral infection, disease progression, and treatment evaluation. Recently, joint models, which can reduce bias and improve the efficiency of estimates, have been developed to assess the longitudinal process, the survival process, and the relationship between them simultaneously. However, the majority of joint models are based on mean regression, which concentrates only on the mean effect of the outcome variable conditional on certain covariates. In fact, in HIV/AIDS research, the mean effect may not always be of interest. Additionally, if obvious outliers or heavy tails exist, a mean regression model may lead to non-robust results. Moreover, data features such as left-censoring caused by the limit of detection (LOD), covariates with measurement errors, and skewness mean that the analysis of such complicated longitudinal and survival data still poses many challenges; ignoring these features may result in biased inference. Compared to the mean regression model, the quantile regression (QR) model belongs to a robust model family: it can give a full scan of the covariate effects at different quantiles of the response, and it may be more robust to extreme values. QR is also more flexible, since the distribution of the outcome does not need to be fully specified by parametric assumptions. These advantages have brought QR increasing attention in diverse areas. To the best of our knowledge, few studies focus on QR-based joint models applied to longitudinal-survival data with multiple features. Thus, in this dissertation research, we developed three QR-based joint models via a Bayesian inferential approach: (i) QR-based nonlinear mixed-effects joint models for longitudinal-survival data with multiple features; (ii) QR-based partially linear mixed-effects joint models for longitudinal data with multiple features; and (iii) QR-based partially linear mixed-effects joint models for longitudinal-survival data with multiple features. The proposed joint models are applied to analyze data from the Multicenter AIDS Cohort Study (MACS). Simulation studies are also implemented to assess the performance of the proposed methods under different scenarios. Although this is a biostatistical methodology study, some interesting clinical findings are also discovered.
25

Vera-Sorroche, Javier. "Thermal homogeneity and energy efficiency in single screw extrusion of polymers : the use of in-process metrology to quantify the effects of process conditions, polymer rheology, screw geometry and extruder scale on melt temperature and specific energy consumption." Thesis, University of Bradford, 2014. http://hdl.handle.net/10454/13965.

Abstract:
Polymer extrusion is an energy intensive process whereby the simultaneous action of viscous shear and thermal conduction is used to convert solid polymer to a melt which can be formed into a shape. To optimise efficiency, a homogeneous melt is required with minimum consumption of process energy. In this work, in-process monitoring techniques have been used to characterise the thermal dynamics of the single screw extrusion process with real-time quantification of energy consumption. Thermocouple grid sensors were used to measure radial melt temperatures across the melt flow at the entrance to the extruder die. Moreover, an infrared sensor flush mounted at the end of the extruder barrel was used to measure non-invasive melt temperature profiles across the width of the screw channel in the metering section of the extruder screw. Both techniques were found to provide useful information concerning the thermal dynamics of the extrusion process; in particular, this application of infrared thermometry could prove useful for industrial extrusion process monitoring. Extruder screw geometry and extrusion variables should ideally be tailored to suit the properties of individual polymers, but in practice this is rarely achieved due to the lack of understanding. Here, LDPE, LLDPE, three grades of HDPE, PS, PP and PET were extruded using three geometries of extruder screws at several set temperatures and screw rotation speeds. Extrusion data showed that polymer rheology had a significant effect on the thermal efficiency of the extrusion process. In particular, melt viscosity was found to have a significant effect on specific energy consumption and thermal homogeneity of the melt. Extruder screw geometry, set extrusion temperature and screw rotation speed were also found to have a direct effect on energy consumption and melt consistency. Single flighted extruder screws exhibited poorer temperature homogeneity and larger fluctuations than a barrier flighted screw with a spiral mixer. These results highlighted the importance of careful selection of processing conditions and extruder screw geometry for melt homogeneity and process efficiency. Extruder scale was found to have a significant influence on thermal characteristics due to changes in the surface area of the screw, barrel and heaters, which consequently affect the effectiveness of the melting process and extrusion process energy demand. In this thesis, the thermal and energy characteristics of two single screw extruders were compared to examine the effect of extruder scale and processing conditions on measured melt temperature and energy consumption. Extrusion thermal dynamics were shown to be highly dependent upon extruder scale, whilst specific energy consumption compared more favourably, enabling prediction of a process window from lab to industrial scale within which energy efficiency can be optimised. Overall, this detailed experimental study has helped to improve understanding of the single screw extrusion process in terms of thermal stability and energy consumption. It is hoped that the findings will allow those working in this field to make more informed decisions regarding set conditions, screw geometry and extruder scale, in order to improve the efficiency of the extrusion process.
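Specific energy consumption, the efficiency measure used throughout, is simply energy input per unit mass of extruded polymer; a one-line computation (illustrative numbers, not the thesis's data):

```python
# Specific energy consumption (SEC) of an extruder: energy input per unit
# mass of polymer processed. Example values are invented for illustration.

def specific_energy_kj_per_kg(power_kw, throughput_kg_per_h):
    """SEC = power / mass flow rate, here reported in kJ/kg."""
    return power_kw / (throughput_kg_per_h / 3600.0)

print(specific_energy_kj_per_kg(power_kw=10.0, throughput_kg_per_h=25.0))  # 1440.0
```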
APA, Harvard, Vancouver, ISO, and other styles
26

Tamminen, S. (Satu). "Modelling the rejection probability of a quality test consisting of multiple measurements." Doctoral thesis, Oulun yliopisto, 2014. http://urn.fi/urn:isbn:9789526205205.

Full text
Abstract:
Quality control is an essential part of manufacturing, and the different properties of products can be tested with standardized methods. If the decision on qualification is based on only one test specimen representing a batch of products, the testing procedure is quite straightforward. However, when the measured property has high variability within the product, as is usually the case, several test specimens are needed for quality verification. When a quality property is predicted, the response value of the model that most effectively finds the critical observations should naturally be selected. In this thesis, it is shown that the LIB transformation (Larger Is Better) is a suitable method for multiple test samples, because it effectively recognizes especially the situations where one of the measurements is very low. The main contribution of this thesis is to show how to model the quality of phenomena that consist of several measurement samples for each observation. The process contains several steps, beginning with the selection of the model type. Prediction of the exceedance probability provides more information for decision making than prediction of the mean, and it is especially natural in the selected application, where the quality property has no optimal value but the interest is in an adequately high value. In industrial applications, the assumption of constant variance should be examined critically; this thesis shows that exceedance probability modelling can benefit from using an additional variance model together with the mean model in prediction. Distribution shape modelling improves the model further when the response variable may not be Gaussian. As the proposed methods are fundamentally different, the model selection criteria have to be chosen with caution. Different methods for model selection were considered and commented on, and the EPS (Exceedance Probability Score) was chosen because it is the most suitable for probability predictors. This thesis demonstrates that a process with high diversity in its production and a more challenging distribution shape gains especially from the deviation modelling, and the results can be improved further with the distribution shape modelling.
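The abstract's key idea, predicting an exceedance (rejection) probability rather than a mean when a batch is qualified on several test specimens, can be illustrated with a small sketch. Assuming a Gaussian response with a predicted mean and a predicted standard deviation, and independent specimens, the batch is rejected if any specimen falls below the limit; the distribution, threshold and parameter values here are assumptions for illustration, not the thesis's fitted models.

from scipy.stats import norm

# Predicted mean and standard deviation of the quality property of one product
mu, sigma = 42.0, 5.0        # assumed outputs of a mean model and a variance model
limit = 30.0                 # assumed qualification limit (adequately high is good)
k = 3                        # number of test specimens per product

p_single_below = norm.cdf((limit - mu) / sigma)   # P(one specimen < limit)
p_reject = 1.0 - (1.0 - p_single_below) ** k      # P(at least one of k below limit)
print(f"single-specimen exceedance prob = {p_single_below:.4f}")
print(f"batch rejection prob (k={k})    = {p_reject:.4f}")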
APA, Harvard, Vancouver, ISO, and other styles
27

Ricci, Lorenzo. "Essays on tail risk in macroeconomics and finance: measurement and forecasting." Doctoral thesis, Universite Libre de Bruxelles, 2017. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/242122.

Full text
Abstract:
This thesis is composed of three chapters that propose novel approaches to tail risk in financial markets and to forecasting in finance and macroeconomics. The first part of this dissertation focuses on financial market correlations and introduces a simple measure of tail correlation, TailCoR, while the second contribution addresses the identification of non-normal structural shocks in Vector Autoregressions, which are common in finance. The third part belongs to the vast literature on predicting economic growth; the problem is tackled using a Bayesian Dynamic Factor model to predict Norwegian GDP.

Chapter I: TailCoR. The first chapter introduces a simple measure of tail correlation, TailCoR, which disentangles linear and non-linear correlation. The aim is to capture all features of financial market co-movement when extreme events (i.e. financial crises) occur. Indeed, tail correlations may arise because asset prices are either linearly correlated (i.e. the Pearson correlations are different from zero) or non-linearly correlated, meaning that asset prices are dependent in the tails of the distribution. Since it is based on quantiles, TailCoR has three main advantages: i) it is not based on asymptotic arguments, ii) it is very general, as it applies with no specific distributional assumption, and iii) it is simple to use. We show that TailCoR also disentangles easily between linear and non-linear correlations. The measure has been successfully tested on simulated data. Several extensions useful for practitioners are presented, such as downside and upside tail correlations. In our empirical analysis, we apply this measure to eight major US banks for the period 2003-2012. For comparison purposes, we compute the upper and lower exceedance correlations and the parametric and non-parametric tail dependence coefficients. Over the whole sample, results show that both the linear and non-linear contributions are relevant, and that co-movement increases during the financial crisis because of both the linear and non-linear correlations. Furthermore, the increase of TailCoR at the end of 2012 is mostly driven by the non-linearity, reflecting the risks of tail events and their spillovers associated with the European sovereign debt crisis.

Chapter II: On the identification of non-normal shocks in structural VAR. The second chapter deals with the structural interpretation of the VAR using the statistical properties of the innovation terms. In general, financial markets are characterized by non-normal shocks. Under non-Gaussianity, we introduce a methodology based on the reduction of tail dependency to identify the non-normal structural shocks. Borrowing from statistics, the methodology can be summarized in two main steps: i) decorrelate the estimated residuals, and ii) rotate the uncorrelated residuals in order to get a vector of independent shocks using a tail dependency matrix. We do not label the shocks a priori, but post-estimation on the basis of economic judgement. Furthermore, we show through a Monte Carlo study how our approach allows all the shocks to be identified. In some cases the method turns out to be more effective when tail events are plentiful; therefore, the frequency of the series and the degree of non-normality are relevant to achieving accurate identification. Finally, we apply our method to two different VARs, both estimated on US data: i) a monthly trivariate model which studies the effects of oil market shocks, and ii) a VAR that focuses on the interaction between monetary policy and the stock market. In the first case, we validate the results obtained in the economic literature. In the second case, we cannot confirm the validity of an identification scheme based on a combination of short- and long-run restrictions which is used in part of the empirical literature.

Chapter III: Nowcasting Norway. The third chapter consists of predictions of Norwegian Mainland GDP. Policy institutions have to set their policies without knowledge of the current economic conditions. We estimate a Bayesian dynamic factor model (BDFM) on a panel of macroeconomic variables (all followed by market operators) from 1990 until 2011. First, the BDFM is an extension to the Bayesian framework of the dynamic factor model (DFM). The difference is that, compared with a DFM, there is more dynamics in the BDFM, introduced in order to accommodate the dynamic heterogeneity of different variables. However, in order to introduce more dynamics, the BDFM requires the estimation of a large number of parameters, which can easily lead to volatile predictions due to estimation uncertainty. This is why the model is estimated with Bayesian methods, which, by shrinking the factor model toward a simple naive prior model, are able to limit estimation uncertainty. The second aspect is the use of a small dataset. A common feature of the literature on DFMs is the use of large datasets; however, a strand of the literature has shown how, for the purpose of forecasting, DFMs can be estimated on a small number of appropriately selected variables. Finally, through a pseudo real-time exercise, we show that the BDFM performs well both in terms of point forecasts and in terms of density forecasts. Results indicate that our model outperforms standard univariate benchmark models, that it performs as well as the Bloomberg Survey, and that it outperforms the predictions published by the Norges Bank in its monetary policy report.
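TailCoR's exact definition is given in the thesis; as a loose illustration of a quantile-based tail co-movement measure in that spirit (not the TailCoR formula itself), one can project two robustly standardized series onto the diagonal and compare a wide interquantile range of the projection with the marginal one. All data below are simulated assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Two assumed return series with some linear dependence (illustrative data)
x = rng.standard_normal(n)
y = 0.6 * x + 0.8 * rng.standard_normal(n)

def std_iqr(v):
    # Standardize by median and interquartile range (robust, quantile-based)
    return (v - np.median(v)) / (np.quantile(v, 0.75) - np.quantile(v, 0.25))

zx, zy = std_iqr(x), std_iqr(y)
z = (zx + zy) / np.sqrt(2.0)             # projection on the 45-degree line
tau = 0.95
spread_z = np.quantile(z, tau) - np.quantile(z, 1 - tau)
spread_x = np.quantile(zx, tau) - np.quantile(zx, 1 - tau)
tail_comovement = spread_z / spread_x    # >1 suggests co-movement in the tails
print(f"tail co-movement proxy at tau={tau}: {tail_comovement:.2f}")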
Doctorate in Economics and Management Sciences
APA, Harvard, Vancouver, ISO, and other styles
28

Ligier, Simon. "Développement d’une méthodologie pour la garantie de performance énergétique associant la simulation à un protocole de mesure et vérification." Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEM083/document.

Full text
Abstract:
Discrepancies between ex-ante energy performance assessments and the actual consumption of buildings hinder the development of construction and renovation projects. Energy performance contracting (EPC) ensures a maximal level of energy consumption and secures investment. Implementation of EPC is limited by technical and methodological problems. This thesis focused on the development of an EPC methodology that combines building energy simulation (BES) with anticipation of the measurement and verification (M&V) process. The uncertainties in building parameters and the variability of dynamic loads are considered using a Monte Carlo analysis. A model generating synthetic weather data was developed. Statistical studies of the simulation results allow a guaranteed consumption limit to be evaluated according to a given risk level. Quantile regression methods jointly capture the risk level and the relationship between the guaranteed energy consumption and external adjustment factors. The statistical robustness of these methods was studied, as was the choice of the best adjustment factors, which will be measured during building operation. The impact of measurement uncertainties is statistically integrated in the methodology, and the influence of M&V process accuracy is also examined. The complete EPC methodology is finally applied to two different projects: the refurbishment of a residential building and the construction of a high energy performance office building.
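The core statistical step, fitting a conditional quantile of simulated consumption against adjustment factors, can be sketched with statsmodels' quantile regression. The data, factor names and the 90% quantile below are assumptions for illustration, not the thesis's case studies.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
# Assumed Monte Carlo simulation outputs: heating degree days and occupancy
# as adjustment factors, annual consumption as the response.
hdd = rng.uniform(1500, 3000, n)
occ = rng.uniform(0.5, 1.0, n)
kwh = 20.0 + 0.05 * hdd + 40.0 * occ + rng.normal(0, 8, n)
df = pd.DataFrame({"kwh": kwh, "hdd": hdd, "occ": occ})

# Guaranteed limit = conditional 90% quantile of consumption (10% risk level)
model = smf.quantreg("kwh ~ hdd + occ", df).fit(q=0.90)
print(model.params)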
APA, Harvard, Vancouver, ISO, and other styles
29

Pozebon, Simone. "FORMAÇÃO DE FUTUROS PROFESSORES NA ORGANIZAÇÃO DO ENSINO DE MATEMÁTICA PARA OS ANOS INICIAIS DO ENSINO FUNDAMENTAL: APRENDENDO A SER PROFESSOR EM UM CONTEXTO ESPECÍFICO ENVOLVENDO MEDIDAS." Universidade Federal de Santa Maria, 2014. http://repositorio.ufsm.br/handle/1/7130.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Departing from the theoretical assumptions of Historical-Cultural Theory, Activity Theory and the Teaching Guiding Activity, as well as from authors who address the training of teachers who teach mathematics, this research is concerned with the appropriation of theoretical knowledge and the learning of teaching by future teachers of the early years. Our main goal is to investigate the training of future teachers in a specific context of organization of measurement teaching for the early years of Elementary School, involving the study, planning, implementation and evaluation of pedagogical activities. The research was conducted within a research group, the Group of Research and Studies in Mathematical Education (GEPEMat), in the project "Mathematical Education in the early years of Elementary School: principles and practices of the teaching organization", sponsored by OBEDUC/CAPES. More specifically, it focuses on an extension project, the CluMat, linked to the project mentioned above, which has developed actions involving mathematical content with early-years classes in public schools of Santa Maria/RS since 2009. The research thus centers on the actions that the students of the Education and Mathematics degree courses, members of the group, developed in CluMat in a third-grade class of an elementary state school. As procedures for data collection we used a researcher's diary, audio and video recordings of fifteen meetings, and photographic records. The data were organized around four guiding principles that defined the panorama of the analysis: the initial discussions for the study; the planning movements and organization of activities; the mathematical knowledge in the development of the didactic unit; and the learning of teaching based on the evaluation of the students. Based on the systematization of information through these axes, we used the concept of episodes proposed by Moura (2000) to analyze them. From the analysis, we found indications that new meanings were attributed to the actions that compose the pedagogical activity, and that these new insights, together with the needs that mobilized the students and the appropriation of the mathematical knowledge necessary for teaching practice, constituted a movement of learning of teaching.
APA, Harvard, Vancouver, ISO, and other styles
30

Assis, Rogério Jorge de. "POVM no contexto de eletrodinâmica quântica de cavidades." Universidade Federal de Goiás, 2017. http://repositorio.bc.ufg.br/tede/handle/tede/7238.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq
In this work a simplified scheme is proposed to unambiguously discriminate between two nonorthogonal cavity field states. This scheme, which is based on POVM - positive operator valued measure - uses one three-level atom as the ancilla to obtain information on the cavity field state, the target. The efficiency of this scheme in discriminating the two quantum states is analyzed by comparing the maximum theoretical success probability with the maximum success probability possible for our case.
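For context, the theoretical benchmark for unambiguous discrimination of two equiprobable pure states is presumably the Ivanovic-Dieks-Peres bound, P_success = 1 - |<psi1|psi2>|. A small numeric check with an assumed overlap (the states below are illustrative, not those of the thesis):

import numpy as np

# Two assumed nonorthogonal states written in a common basis
psi1 = np.array([1.0, 0.0])
theta = np.pi / 6
psi2 = np.array([np.cos(theta), np.sin(theta)])   # overlap = cos(theta)

overlap = abs(np.dot(psi1, psi2))
p_success = 1.0 - overlap      # IDP bound for equal prior probabilities
print(f"overlap = {overlap:.3f}, optimal USD success probability = {p_success:.3f}")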
APA, Harvard, Vancouver, ISO, and other styles
31

Makich, Hamid. "Etude théorique et expérimentale de l'usure des outils de découpe : influence sur la qualité des pièces décooupées." Phd thesis, Université de Franche-Comté, 2011. http://tel.archives-ouvertes.fr/tel-01068646.

Full text
Abstract:
The quality of parts blanked for the electronics and micromechanics industries is assessed via three main criteria: the level of burr, the appearance of the cut edge, and the dimensional accuracy. The quality of blanked parts cannot be studied without an understanding of punch wear. Accordingly, methods for continuous, in-situ measurement of wear were developed and validated, namely surface activation and measurement by double replica. It was thus possible to follow the influence of a number of process parameters on the evolution of wear during blanking. In addition, we developed a method for quantifying the burr over the entire blanked contour, which made it possible to study the evolution of the burr during cutting. The appearance of the cut edges was examined through topographic surveys that tracked its evolution. A correlation between the wear kinetics of the punches and the appearance of the burr was thereby established. Moreover, an experimental simulation of punch wear was undertaken: a tribometry test rig was designed and installed on the press line, simulating the friction conditions of a punch on a sheet, and made it possible to evaluate the abrasiveness of thin sheets with respect to the punches. Finally, a finite element model of the blanking operation was developed, allowing the wear profile of a punch of cylindrical geometry to be approximated; predicting its evolution as a function of the number of blanked parts thus becomes accessible for given process parameters.
APA, Harvard, Vancouver, ISO, and other styles
32

Hill, Robert J., Miriam Steurer, and Sofie R. Waltl. "Owner Occupied Housing in the CPI and its Impact on Monetary Policy during Housing Booms and Busts." WU Vienna University of Economics and Business, 2019. http://epub.wu.ac.at/7039/1/WP285.pdf.

Full text
Abstract:
The treatment of owner-occupied housing (OOH) is probably the most important unresolved issue in inflation measurement. How -- and whether -- it is included in the Consumer Price Index (CPI) affects inflation expectations, the measured level of real interest rates, and the behavior of governments, central banks and market participants. We show that none of the existing treatments of OOH are fit for purpose. Hence we propose a new simplified user cost method with better properties. Using a micro-level dataset, we then compare the empirical behavior of eight different treatments of OOH. Our preferred user cost approach pushes up the CPI during housing booms (by 2 percentage points or more). Our findings relate to the following important debates in macroeconomics: the behavior of the Phillips curve in the US during the global financial crisis, and the response of monetary policy to housing booms, secular stagnation, and globalization.
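For readers unfamiliar with the user cost approach, a stylized version (not the authors' specific simplified formula) prices the annual cost of owner occupancy as the dwelling value times interest, depreciation and running costs, minus expected appreciation. The figures below are assumptions for illustration.

# Stylized user cost of owner-occupied housing (illustrative numbers only;
# the paper proposes its own simplified user cost variant).
house_price = 400_000.0   # assumed dwelling value
r = 0.04                  # nominal interest (opportunity) cost
delta = 0.015             # depreciation + maintenance rate
g = 0.03                  # expected house price appreciation

user_cost = house_price * (r + delta - g)
print(f"annual user cost = {user_cost:,.0f}")   # 400000 * 0.025 = 10,000

During a boom, a higher expected appreciation g lowers this textbook user cost, which is one reason the paper's treatment of expectations matters for how OOH feeds into the CPI.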
Series: Department of Economics Working Paper Series
APA, Harvard, Vancouver, ISO, and other styles
33

Ghaffarian, Roohparvar Hossein. "Study of driftwood dynamics in rivers for hazard assessment." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI094.

Full text
Abstract:
Driftwood is an integral part of river corridors, where it plays an important role in both river ecology and morphology. During the last decades, the amount of large wood transported in some European rivers has increased, notably due to modifications in human pressure on, and management of, riparian forest buffers along rivers. This causes an increase in potential hazards for hydraulic structures and urban areas. In this context, the aim of this thesis is to study driftwood dynamics in rivers in order to provide elements for hazard assessment. This is carried out in two ways: (i) using in-situ streamside videography to measure the amount of wood transported by the river during floods, and (ii) analyzing the dynamics of individual pieces of wood both in the field and in a well-controlled experimental environment combined with theoretical models. The present work provides several scientific and technical contributions. First, by studying the link between wood discharge and flood characteristics, such as flood magnitude, hydrograph and inter-flood time, we consolidate and extend present knowledge about the link between flow and wood discharges. Second, our studies show that when a piece of wood is recruited into the river, it is accelerated over a limited distance, which scales as the wood length in the flow direction. Once the wood piece reaches the flow velocity, it behaves as a flow tracer. In terms of technical contributions, by comparing the video monitoring technique at two different sites, we provide recommendations that are useful for practitioners installing new monitoring stations. This work will contribute to driftwood hazard and risk assessments, for which accurate wood dynamics quantities are required.
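The finding that a recruited log accelerates over a distance on the order of its own length before moving as a flow tracer can be illustrated with a toy relaxation model, dv/dt = (u - v)/tau. This is an assumption for illustration, not the thesis's model; all values are invented.

# Toy model: a log released at rest relaxes toward the flow velocity u.
# tau is chosen so the acceleration distance scales with the log length.
u = 2.0          # flow velocity, m/s (assumed)
L = 4.0          # log length in the flow direction, m (assumed)
tau = L / u      # relaxation time giving an acceleration distance ~ L
dt, t, v, x = 0.01, 0.0, 0.0, 0.0

while v < 0.99 * u:
    v += (u - v) / tau * dt   # explicit Euler step
    x += v * dt
    t += dt
print(f"reaches 99% of flow speed after {x:.1f} m (~{x / L:.1f} log lengths)")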
APA, Harvard, Vancouver, ISO, and other styles
34

Assaf, Elias. "Uncovering The Sub-Text: Presidents' Emotional Expressions and Major Uses of Force." Master's thesis, University of Central Florida, 2014. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6241.

Full text
Abstract:
The global context of decision making continues to adapt in response to international threats. Political psychologists have therefore considered decision-making processes regarding major uses of force a key area of interest. Although presidential personality has been widely studied as a mitigating factor in the decision-making patterns leading to uses of force, traditional theories have not accounted for the emotions of individuals as they affect political actions and are used to frame public perception of the use of force. This thesis therefore measures expressed emotion and cognitive expressions in the form of expressed aggression, passivity, blame, praise, certainty, realism, and optimism as a means of predicting subsequent major uses of force. Since aggression and blame are precipitated by anger and perceived vulnerability, they are theorized to foreshadow increased uses of force (Gardner and Moore 2008). Conversely, passivity and praise are indicative of empathy and joy respectively, and are not expected to precede aggressive behavior conducted to maintain emotional regulation (Roberton, Daffer, and Bucks 2012). Additionally, the three cognitive variables of interest expand on the existing literature on beliefs and decision making expounded by such authors as Walker (2010), Winter (2003) and Hermann (2003). DICTION 6.0 is used to analyze all text data from presidential news conferences, candidate debates, and State of the Union speeches given between 1945 and 2000, stored by The American Presidency Project (Hart and Carroll 2012). Howell and Pevehouse's (2005) quantitative assessment of quarterly U.S. uses of force between 1945 and 2000 is employed as a means of quantifying instances of major uses of force. Results show systematic differences among the traits expressed by presidents, with most expressions staying consistent across spontaneous speech contexts. Additionally, State of the Union speeches consistently yielded the highest scores across the expressed traits measured, supporting the theory that prepared speech is used to emotionally frame situations and set up emotional interpretations of events to present to the public. Time-sensitive regression analyses indicate that expressed aggression within the context of State of the Union addresses is the only significant predictor of major uses of force by the administration. That being said, other studies may use the comparative findings presented herein to further establish a robust model of personality that accounts for individual dispositions toward emotional expression as a means of framing the emotional interpretation of events by audiences.
M.A.
Masters
Political Science
Sciences
Political Science; International Studies Track
APA, Harvard, Vancouver, ISO, and other styles
35

Huang, Ya-Lun, and 黃亞倫. "Design and Analysis of Fuel Quantity Measurement System on UAV." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/49180719724374974476.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Wu, Jiin-Cheng, and 吳錦棖. "Improvement of Revenue Water Percentage by regional water quantity measurement." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/55982401701224200352.

Full text
Abstract:
Master's thesis
Tamkang University
Master's Program, Department of Civil Engineering
93
How do we choose suitable areas in which to replace old, leaking water pipes in an extensive water supply system? Formerly this depended on water engineers' experience: engineers planned, designed and constructed water pipes after consulting only the pipe blueprints, so there were no objective data with which to assess the investment benefit. A better method for improving the water pipe system is a modified version of the leakage-prevention strategy of the Tokyo Water Department; the Taipei Water Department marks out districts for leakage-prevention investigation in the same way as the Tokyo Water Department. The steps of this study on improving the revenue water percentage by regional water quantity measurement are as follows. First, the leakage-prevention strategy in the city water pipe system is introduced. Second, a complete record is made of the first execution of regional water quantity measurement in the city. Building a fully closed water pipe district takes a long time because of missing control valves and mistakes in the pipe blueprints; this research shows the importance of setting up regional management books to maintain the control valves. An analysis of two particular districts identifies the most important factors influencing the percentage of revenue water. The percentages of improvement in investment benefit are 463% and 317% after following the improvement policy based on the regional water quantity measurement strategy.
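The revenue (sold) water percentage tracked throughout this entry is a simple ratio of billed to supplied volume. A minimal sketch with assumed district figures:

# Revenue water percentage for a metered district (numbers are assumed).
supplied_m3 = 120_000.0   # water delivered into the closed district
billed_m3 = 78_000.0      # water actually billed to customers

revenue_pct = 100.0 * billed_m3 / supplied_m3
nrw_pct = 100.0 - revenue_pct   # non-revenue water: leaks, theft, metering error
print(f"revenue water = {revenue_pct:.1f}%, non-revenue water = {nrw_pct:.1f}%")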
APA, Harvard, Vancouver, ISO, and other styles
37

Guo, You-Ting, and 郭有廷. "Micro Particle Image Velocimetry in Continuous Quantity Mixing Flow Field Measurement." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/8u86rb.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Li-Pin, Yu, and 俞立平. "The utilization of regional water quantity measurement to decrease the Non-Revenue water in Taipei." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/28579870796488660040.

Full text
Abstract:
Master's thesis
Chung Hua University
Graduate Institute of Construction Management
95
Since 1976, the water distribution network in Taiwan has been actively constructed, and the total length of distribution pipes now exceeds 50,000 kilometers. Construction of water supply works progressed rapidly; however, the pursuit of efficiency let the quality of the engineering slip, so the precision of the piping system is relatively poor and the leakage rate correspondingly high. In recent years the percentage of unaccounted-for water in Taiwan has been about 33%, a leakage rate roughly 3.5 times that of Japan. The revenue water percentage of the system is consequently low, at 65% or below, because the percentage of unaccounted-for water is so high. The main reasons for the high leakage rate are that pipe layout planning remains uneven and the quality of pipe-laying construction is poor; environmental conditions also strongly affect the pipes. Because regional pipes and their interconnecting networks are extremely complicated, it is very difficult to apply regional water measurement everywhere, and regional leak detection, whether applied directly or indirectly, has disadvantages of its own. To address the leakage problem, this study therefore applies regional water quantity measurement. Using several cases and comparing and analysing their outcomes directly, the study shows that regional water quantity measurement allows the main factors behind unaccounted-for water to be controlled; with improvement of the network, the amount of unaccounted-for water falls step by step. The revenue water percentages of the studied areas improved from 47%, 67% and 58% to 86%, 87% and 92%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
39

"The Impact of Information Quantity and Quality on Parameter Estimation for a Selection of Dynamic Bayesian Network Models with Latent Variables." Doctoral diss., 2018. http://hdl.handle.net/2286/R.I.50531.

Full text
Abstract:
Dynamic Bayesian networks (DBNs; Reye, 2004) are a promising tool for modeling student proficiency under rich measurement scenarios (Reichenberg, in press). These scenarios often present assessment conditions far more complex than what is seen with more traditional assessments and require assessment arguments and psychometric models capable of integrating those complexities. Unfortunately, DBNs remain understudied and their psychometric properties relatively unknown. If the apparent strengths of DBNs are to be leveraged, then the body of literature surrounding their properties and use needs to be expanded. To this end, the current work explored the properties of DBNs under a variety of realistic psychometric conditions. A two-phase Monte Carlo simulation study was conducted in order to evaluate parameter recovery for DBNs using maximum likelihood estimation with the Netica software package. Phase 1 included a limited number of conditions and was exploratory in nature, while Phase 2 included a larger and more targeted complement of conditions. Manipulated factors included sample size, measurement quality, test length, and the number of measurement occasions. Results suggested that measurement quality has the most prominent impact on estimation quality, with more distinct performance categories yielding better estimation. While increasing sample size tended to improve estimation, there were a limited number of conditions under which greater sample size led to more estimation bias; an exploration of this phenomenon is included. From a practical perspective, parameter recovery appeared to be sufficient with samples as low as N = 400 as long as measurement quality was not poor and at least three items were present at each measurement occasion. Tests consisting of only a single item required exceptional measurement quality in order to adequately recover model parameters. The study was somewhat limited due to potentially software-specific issues as well as a non-comprehensive collection of experimental conditions. Further research should replicate and potentially expand the current work using other software packages, including exploring alternative estimation methods (e.g., Markov chain Monte Carlo).
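As a small illustration of the kind of model under study (a sketch only; the dissertation used Netica and richer structures), the code below simulates a two-state latent proficiency that evolves across measurement occasions and filters it from noisy binary items with the forward algorithm. All parameter values are assumptions.

import numpy as np

rng = np.random.default_rng(7)
T, items = 4, 3                      # measurement occasions, items per occasion
P = np.array([[0.8, 0.2],            # transition: P(next | not proficient)
              [0.1, 0.9]])           # transition: P(next | proficient)
emit = np.array([0.2, 0.8])          # P(correct item | state): measurement quality
pi = np.array([0.5, 0.5])            # initial proficiency distribution

# Simulate one student
s = rng.choice(2, p=pi)
states, obs = [], []
for t in range(T):
    states.append(s)
    obs.append(rng.random(items) < emit[s])   # binary item responses
    s = rng.choice(2, p=P[s])

# Forward filtering: P(state_t | responses up to occasion t)
belief = pi.copy()
for t in range(T):
    like = np.array([
        np.prod([emit[k] if o else 1 - emit[k] for o in obs[t]])
        for k in (0, 1)
    ])
    belief = belief * like
    belief /= belief.sum()
    print(f"occasion {t}: true state={states[t]}, P(proficient)={belief[1]:.2f}")
    if t < T - 1:
        belief = belief @ P   # propagate belief through the transition model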
Dissertation/Thesis
Doctoral Dissertation Family and Human Development 2018
APA, Harvard, Vancouver, ISO, and other styles
40

Lin, Shu-Mei, and 林淑眉. "An Analysis of Growth in “Quantity and Measurement” Skills in Elementary School Students Using a Two-factor Latent Growth Curve Model." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/36155764614729093839.

Full text
Abstract:
Master's thesis
National Taichung University of Education
Graduate Institute of Educational Measurement and Statistics
101
Based on the mathematics learning materials for each grade in the Grade 1-9 Integrated Curriculum and the National Assessment of Educational Progress (NAEP) 2003, this study designed three tests and administered them to 351 fourth-grade elementary school students in a three-year longitudinal study of growth in their "quantity and measurement" skills, using a latent growth model. The main findings were as follows: 1. The growth of "conceptual understanding", "procedural knowledge", and "problem solving" skills among the students followed a linear trend, and growth in "problem solving" skills varied significantly across students. 2. Students with better mathematical skills were not found to grow faster in these skills with the passing of time or the accumulation of learning experiences. 3. Students' "procedural knowledge" and "problem solving" skills with respect to quantity and measurement differed across gender: male students were significantly better in these aspects than female students, but female students showed higher growth over time. 4. Students' growth in "conceptual understanding", "procedural knowledge" and "problem solving" skills was affected by their sibship size. 5. Students' growth in "procedural knowledge" and "problem solving" skills was affected by their father's education: students whose father held a college or higher degree showed higher growth than those whose father held a high school or lower degree. 6. Students' "conceptual understanding" skills were affected by their mother's birthplace: those whose mother was born in Taiwan had better "conceptual understanding" skills than those whose mother was not. 7. Ethnic group, father's birthplace, mother's education, television watching, computer use, and reading or homework time were not significantly related to the students' quantity and measurement skills.
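A linear latent growth model of the kind used here posits, for student i at wave t, y_it = intercept_i + slope_i * t + error. The tiny simulation below (all parameter values assumed) shows how per-student slopes capture growth and its variability across students:

import numpy as np

rng = np.random.default_rng(3)
n_students, waves = 351, 3
intercepts = rng.normal(50, 8, n_students)   # latent initial skill (assumed)
slopes = rng.normal(4, 1.5, n_students)      # latent yearly growth (assumed)
t = np.arange(waves)
y = intercepts[:, None] + slopes[:, None] * t + rng.normal(0, 3, (n_students, waves))

# OLS slope per student recovers the latent growth trajectories on average
est_slopes = np.polyfit(t, y.T, 1)[0]
print(f"mean growth per wave: {est_slopes.mean():.2f} (true mean 4)")
print(f"between-student slope SD: {est_slopes.std():.2f}")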
APA, Harvard, Vancouver, ISO, and other styles
41

CHUN, Fan Jui, and 范瑞君. "Competence Indicators Test and Remedial Instruction Developments Based on Bayesian Networks-The Quantity and Measurement Related Indicators of Mathematics in Grade 6." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/03723215392927040466.

Full text
Abstract:
Master's thesis
Asia University
Master's Program, Department of Computer Science and Information Engineering
94
Traditional paper tests yield only a score and rarely provide detailed analysis, such as students' weaknesses and mistaken types. Under these circumstances, teachers can offer remedial instruction based only on the most common mistaken types, even though different weaknesses lead to different mistaken types. Bayesian networks are a very popular statistical analysis tool, applied successfully in artificial intelligence and medical treatment; they judge and integrate problem uncertainties by probabilistic methods, and many scholars have applied Bayesian networks to educational assessment. The main idea of this study is to investigate the "quantity and measurement" competence indicators for Grade 6 and to demonstrate the applicability of analysing students' mistaken types on the basis of Bayesian networks. The four purposes are as follows: 1. Design the computerized test items and establish reliable indicators. 2. Establish a computerized diagnostic model based on Bayesian networks, covering the quantity-and-measurement competence indicators for Grade 6. 3. Study the mistaken types using Bayesian networks. 4. Produce remedial instruction Flash materials and demonstrate the results of the remedial instruction program.
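The diagnostic core, inferring mastery of a competence indicator from item responses, can be sketched as a one-skill Bayesian network with slip and guess probabilities. The parameter values and responses below are assumptions for illustration, not the thesis's calibrated network.

# One-skill Bayesian network: P(mastery | item responses), with slip/guess.
prior = 0.5     # prior probability the student has mastered the indicator
slip = 0.1      # P(wrong answer | mastery)        -- assumed
guess = 0.2     # P(correct answer | no mastery)   -- assumed
responses = [1, 0, 1, 1]   # observed item correctness (assumed)

p = prior
for r in responses:
    like_m = (1 - slip) if r else slip        # likelihood under mastery
    like_n = guess if r else (1 - guess)      # likelihood under non-mastery
    p = p * like_m / (p * like_m + (1 - p) * like_n)   # Bayes update
print(f"P(mastery | responses) = {p:.2f}")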
APA, Harvard, Vancouver, ISO, and other styles
42

FEI, LIU CHENG, and 劉政霏. "Competence Indicators Test and Remedial Instruction Developments Based on Bayesian Networks-The quantity and measurement Related Indicators of Mathematics in Grade 5." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/03833035583011718289.

Full text
Abstract:
Master's thesis
Asia University
Master's Program, Department of Computer Science and Information Engineering
94
In a rapidly changing information society it is important to improve learning effectiveness for students and to reduce the workload of teachers. The main idea of this study is to investigate the measurement competence indicators for Grade 5 and to demonstrate the applicability of analysing students' mistaken types with Bayesian networks, a probabilistic analysis method. Students take the test through an online learning program developed from the Grade 1-9 Curriculum indicators; the system can show their comprehension of the subject and start remedial instruction based on the distribution of mistaken types. The four main purposes of the study are: 1. Identify the mistaken types for the Grade 5 measurement indicators. 2. Apply Bayesian networks to analyse these mistaken types, design the test items and set up the Bayesian network structure. 3. Establish the remedial learning program based on the mistaken types. 4. Demonstrate the effectiveness of the computerized adaptive Flash materials. Adopting the program has two advantages: first, students receive remedial instruction based on their test results without the stress that comes from peers and teachers; second, teachers do not need to spend much time marking answers.
APA, Harvard, Vancouver, ISO, and other styles
43

Ying-min, Liu, and 劉穎民. "Competence Indicators Test and Remedial Instruction Developments Based on Bayesian Networks -The ”Quantity and Measurement"Related Indicators of Mathematics in Grade 3." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/79910509834407267601.

Full text
Abstract:
Master's thesis
Asia University
Master's Program, Department of Computer Science and Information Engineering
94
The main idea of this study is to establish a suitable remedial instruction program based on Bayesian networks and the "Quantity and Measurement" competence indicators for Grade 3. It offers real-time remedial instruction keyed to the mistaken types identified after an online test. Traditional paper testing is time consuming; this program is designed to analyze mistaken types and give individual remedial instruction suited to different students, which also reduces teachers' workload and improves learning effectiveness. There are four conclusions: 1. The items designed in the program can differentiate students' achievement of the competence indicators. 2. It is workable to diagnose the mistaken types of the Grade 3 "Quantity and Measurement" indicators with the program. 3. Online test results feed back into the program, which then offers real-time and suitable remedial instruction. 4. Students at all ability levels improved markedly by using the program. Keywords: computerized diagnostic test, Bayesian networks, bug type, remedial instruction, quantity and measurement
APA, Harvard, Vancouver, ISO, and other styles
44

YI, CHAO HSIN, and 趙心怡. "Competence Indicators Test and Remedial Instruction Developments Base on The Structures of Bayesian Networks— The "Quantity and Measurement" Related Indicators of Mathematicsin Grade 4." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/96400720546397818910.

Full text
Abstract:
Master's thesis
Asia University
Master's Program, Department of Computer Science and Information Engineering
94
The main purpose of this research is to use educational assessment based on Evidence-Centered assessment Design (ECD) to build an intelligent diagnostic and remedial instruction system for the "Quantity and Measurement" indicators of Mathematics in Grade 4, based on Bayesian networks. The system can diagnose mistaken types, and the student can receive immediate computerized adaptive remedial instruction, so that evaluation, diagnosis, and remedy are achieved simultaneously. The results: 1. The Bayesian network evaluation model and evidence-centered assessment design apply effectively to the diagnosis of students' mistakes and sub-skills. 2. Students progress significantly after taking the computerized adaptive remedial instruction. 3. One advantage of the program is that it gives individual remedial instruction based on mistaken types, which is unavailable with traditional paper tests.
APA, Harvard, Vancouver, ISO, and other styles
45

CHANG, SHU-CHIH, and 張樹枝. "Computerized Adaptive Diagnostic Test and Remedial Instruction Developments Based on The Structures of Competence Indicators – The “ Quantity and Measurement ” of Mathematics in Grade 3." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/27176470522539786028.

Full text
Abstract:
Master's thesis
Taichung Healthcare and Management University
Master's Program, Department of Computer Science and Information Engineering
93
This research aimed at establishing a Computerized Adaptive Diagnostic Test (CADT) and remedial instruction based on the structure of competence indicators, using the "Quantity and Measurement" strand of Mathematics in Grade 3 as an example. After taking the test, students were diagnosed and given remedial instruction effectively according to their achievement on the competence indicators and the concept nodes of the knowledge structure. The items of this research were based on the knowledge structure established through the experts' and teachers' analysis of the competence indicators. After the written test, both the student item structure and the CADT item bank were established according to the OT and SS. After the CADT, students could undertake remedial instruction targeting their own weaknesses in the competence indicators. This research had the following findings: 1. The CADT based on the structure of competence indicators could effectively save the time and items needed in a test. 2. The items of this test could effectively distinguish students' achievement of the competence indicators. 3. The computerized remedial instruction could combine testing and remedial instruction consistently; the improved achievement of the students proved the effectiveness of the remedial instruction.
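One way a knowledge-structure-driven adaptive test can save items (a simplified sketch, not the thesis's exact OT/SS algorithm) is to skip every concept downstream of a failed prerequisite. The structure and answers below are invented for illustration.

# Sketch: adaptive testing over a prerequisite (knowledge) structure.
# If a prerequisite is failed, all dependent concepts are skipped.
prereqs = {                      # concept -> prerequisites (assumed structure)
    "length units": [],
    "perimeter": ["length units"],
    "area": ["perimeter"],
    "volume": ["area"],
}

def administer(concept):         # stand-in for presenting real items
    answers = {"length units": True, "perimeter": False}   # assumed results
    return answers.get(concept, True)

mastered, skipped = set(), set()
for concept in prereqs:          # dict preserves insertion (topological) order
    if any(p not in mastered for p in prereqs[concept]):
        skipped.add(concept)     # prerequisite failed: no need to test
    elif administer(concept):
        mastered.add(concept)

print("mastered:", mastered)
print("skipped without testing:", skipped)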
APA, Harvard, Vancouver, ISO, and other styles
46

MITTASCH, Marek. "Výukový text pro úvodní fyzikální praktikum." Master's thesis, 2012. http://www.nusl.cz/ntk/nusl-136447.

Full text
Abstract:
The diploma thesis presents basic concepts of measurement, physical quantities, units of measurement and physics instruments. It gives instructions on how to carry out measurements in practice and an introduction to measurement deviations. Examples show the processing of measured values, including the use of some Microsoft Excel functions.
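The processing the thesis describes, averaging repeated measurements and quoting a deviation, amounts to the sample mean, sample standard deviation, and standard deviation of the mean (in Excel: AVERAGE and STDEV). A Python equivalent on assumed readings:

import math

readings = [9.81, 9.79, 9.83, 9.80, 9.82]   # assumed repeated measurements
n = len(readings)
mean = sum(readings) / n
var = sum((x - mean) ** 2 for x in readings) / (n - 1)   # sample variance
std = math.sqrt(var)                                     # like Excel STDEV
sem = std / math.sqrt(n)      # standard deviation of the mean
print(f"result: {mean:.3f} +/- {sem:.3f}")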
APA, Harvard, Vancouver, ISO, and other styles
47

Chen, Szu-Cheng, and 陳思成. "Lasso Quantile Regression Model to Construct Asia and Taiwan Systemic Risk Measurement." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/4b2fy9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Liu, Kuan-Ting, and 劉冠廷. "Integration of Multi-satellite Measurements to Quantify the Temporal Changes of the Mekong River." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/32y23w.

Full text
Abstract:
Master's thesis
National Central University
Department of Civil Engineering
105
Water level (WL) and water volume (WV) of surface-water bodies are among the most crucial variables used in water-resources assessment and management. They fluctuate as a result of climatic forcing, and they are considered indicators of climatic impacts on water resources. Quantifying riverine WL and WV, however, usually requires the availability of timely and continuous in-situ data, which could be a challenge for rivers in remote regions, including the Mekong River basin. As one of the most developed rivers in the world, with more than 20 dams built or under construction, the Mekong River is in need of a monitoring system that could facilitate basin-scale management of water resources facing future climate change. This study used spaceborne sensors to investigate two dams in the upper Mekong River, the Xiaowan and Jinghong Dams within China, to examine river flow dynamics after these dams became operational. We integrated multi-mission satellite radar altimetry (RA, Envisat and Jason-2), satellite laser altimetry (ICESat), Landsat-5/-7/-8 Thematic Mapper (TM)/Enhanced Thematic Mapper plus (ETM+)/Operational Land Imager (OLI) optical imagery and Sentinel-1A synthetic aperture radar (SAR) remote sensing (RS) imagery to construct composite WL time series with enhanced spatial resolution and substantially extended WL data records. An empirical relationship between altimetry WL and water extent was first established for each dam and 6 checkpoints, and then the combined long-term WL time series from Landsat/Sentinel-1A images were reconstructed for all study sites. The R2 between altimetry WL and Landsat water area measurements is >0.9. Next, the Tropical Rainfall Measuring Mission (TRMM) data were used to diagnose and determine water variation caused by the precipitation anomaly within the basin. Finally, the impact of hydrologic dynamics caused by the impoundment of the dams was assessed. The discrepancy between satellite-derived WL and available in-situ gauge data, in terms of root-mean-square error (RMSE), is at the 2–5 m level at the upstream dams, and 1 m at the downstream checkpoints. Estimated WV variations derived from combined RA, RS imagery and Shuttle Radar Topography Mission (SRTM) data are consistent with results from in-situ data, with a difference of about 3%. We concluded that the river level downstream is affected by the combined operation of these two dams after 2009, which has increased WL by 0.18±0.08 m•yr-1 in dry seasons and decreased WL by 0.32±0.14 m•yr-1 in wet seasons.
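The study's key empirical step, regressing altimetry water level on image-derived water extent and then using the fit to turn a long record of areas into water levels, can be sketched as follows (synthetic numbers, not the thesis data):

import numpy as np

rng = np.random.default_rng(5)
# Assumed coincident observations: water area (km^2) from Landsat,
# water level (m) from radar altimetry.
area = rng.uniform(20, 60, 40)
level = 0.12 * area + 300.0 + rng.normal(0, 0.4, 40)

b, a = np.polyfit(area, level, 1)            # level ~ b*area + a
pred = b * area + a
r2 = 1 - np.sum((level - pred) ** 2) / np.sum((level - level.mean()) ** 2)
print(f"fit: level = {b:.3f}*area + {a:.1f},  R^2 = {r2:.2f}")

# Reconstruct water levels for image dates with no altimetry overpass
new_areas = np.array([25.0, 48.5, 57.2])     # assumed Landsat-only dates
print("reconstructed levels:", np.round(b * new_areas + a, 2))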
APA, Harvard, Vancouver, ISO, and other styles
49

Feener, Jessica S. "Safeguards for Uranium Extraction (UREX) +1a Process." Thesis, 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-05-270.

Full text
Abstract:
As nuclear energy grows in the United States and around the world, the expansion of the nuclear fuel cycle is inevitable. All currently deployed commercial reprocessing plants are based on the Plutonium-Uranium Extraction (PUREX) process. However, this process is not implemented in the U.S. for a variety of reasons, one being that it is considered by some a proliferation risk. The 2001 Nuclear Energy Policy report recommended that the U.S. "develop reprocessing and treatment technologies that are cleaner, more efficient, less waste-intensive, and more proliferation-resistant." The Uranium Extraction (UREX+) reprocessing technique has been developed to reach these goals. However, in order for UREX+ to be considered for commercial implementation, a safeguards approach is needed to show that a commercially sized UREX+ facility can be safeguarded to current international standards. A detailed safeguards approach for a UREX+1a reprocessing facility has been developed. The approach includes the use of nuclear material accountancy (MA), containment and surveillance (C/S) and solution monitoring (SM). Facility information was developed for a hypothesized UREX+1a plant with a throughput of 1000 Metric Tons Heavy Metal (MTHM) per year. Safeguards goals and safeguards measures to be implemented were established. Diversion and acquisition pathways were considered; however, the analysis focuses mainly on diversion paths. The detection systems used in the design have the ability to provide near real-time measurement of special fissionable material in feed, process and product streams. Advanced front-end techniques for the quantification of fissile material in spent nuclear fuel were also considered. The economic and operator costs of these systems were not considered. The analysis shows that the implementation of these techniques results in significant improvements in the ability of the safeguards system to achieve the objective of timely detection of the diversion of a significant quantity of nuclear material from the UREX+1a reprocessing facility and to provide deterrence against such diversion by early detection.
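The nuclear material accountancy (MA) component rests on a simple balance: material unaccounted for (MUF) is beginning inventory plus receipts, minus shipments and ending inventory, compared against a significant quantity. A sketch with assumed figures (only the 8 kg significant quantity for plutonium is a standard IAEA value):

# Material balance for one accounting period (all figures assumed, kg Pu).
beginning_inventory = 120.0
receipts = 35.0          # material entering the material balance area
shipments = 33.5         # measured product leaving the area
ending_inventory = 121.2

muf = beginning_inventory + receipts - shipments - ending_inventory
significant_quantity = 8.0   # IAEA significant quantity for Pu, kg
print(f"MUF = {muf:.2f} kg ({100 * muf / significant_quantity:.0f}% of one SQ)")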
APA, Harvard, Vancouver, ISO, and other styles
50

Alotaibi, Ahmed Mohammed. "Development of a Mechatronics Instrument Assisted Soft Tissue Mobilization (IASTM) Device to Quantify Force and Orientation Angles." Thesis, 2016. http://hdl.handle.net/1805/10333.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Instrument assisted soft tissue mobilization (IASTM) is a form of massage using rigid manufactured or cast devices. The delivered force, which is a critical parameter in massage during IASTM, has not been measured or standardized for most clinical practices. In addition to the force, the angle of treatment and stroke frequency play an important role during IASTM. As a result, there is a strong need to characterize the force delivered to a patient, the angle of treatment, and the stroke frequency. This thesis proposes two novel mechatronic designs for a specific instrument from the Graston Technique (Model GT3), a frequently used tool to clinically deliver localized pressure to soft tissue. The first design is based on compression load cells, where 4 load cells are used to measure the force components in three-dimensional space. The second design uses a 3D load cell, which can measure all three force components simultaneously. Both designs are implemented with IMUduino microcontroller chips, which can also measure tool orientation angles and provide computed stroke frequency. Both designs, which were created using the Creo CAD platform, were also analyzed for strength and integrity using the finite element analysis package ANSYS. Once the static analysis was completed, a dynamic model was created for the first design to simulate IASTM practice using the GT3 tool. The deformation and stress on skin were measured after applying force with the GT3 tool, and the relationship between skin stress and the load cell measurements was investigated. The second design of the mechatronic IASTM tool was validated for force measurements using an electronic plate scale that provided the baseline force values to compare with the applied force values measured by the tool. The load cell measurements and the scale readings were found to be in agreement within the expected degree of accuracy. The stroke frequency was computed from the force data by determining the peaks during force application. The orientation angles were obtained from the built-in sensors in the microchip.
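Two of the quantities the device reports, resultant force magnitude from the 3D load cell and stroke frequency from force peaks, can be computed as in the following sketch. The signal is simulated, and scipy's find_peaks stands in for whatever peak detection the thesis used.

import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# Assumed 3D load-cell channels during roughly 1.5 strokes per second
fx = 2.0 * np.sin(2 * np.pi * 1.5 * t)
fy = 1.0 * np.sin(2 * np.pi * 1.5 * t + 0.5)
fz = 10.0 + 8.0 * np.clip(np.sin(2 * np.pi * 1.5 * t), 0, None)

magnitude = np.sqrt(fx**2 + fy**2 + fz**2)     # resultant applied force
peaks, _ = find_peaks(magnitude, height=12.0, distance=int(0.3 * fs))
duration = t[-1] - t[0]
print(f"mean force {magnitude.mean():.1f} N, "
      f"stroke frequency ~ {len(peaks) / duration:.2f} Hz")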
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography