To see the other types of publications on this topic, follow the link: The measured object.

Dissertations / Theses on the topic 'The measured object'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'The measured object.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Jaworska, Katarzyna. "Understanding age-related differences in the speed of information processing of complex object categories measured with electroencephalography (EEG)." Thesis, University of Glasgow, 2017. http://theses.gla.ac.uk/8112/.

Full text
Abstract:
Ageing is associated with differences in visual function which can be observed, for example, as a decline in performance on face and object processing tasks. One of the most prominent accounts of age-related decrement in perceptual and cognitive tasks alike is that of a reduction in information processing speed (Salthouse, Psychological Review 1996, 103:403). Differences in myelin integrity in some parts of the cortex, as well as in neuronal responsivity are physiologically plausible as the origins of the age-related slowing-down of information processing. However, little research to date has directly investigated age-related slowing-down of visual information processing in humans. Previously, Rousselet et al. (Frontiers in Psychology 2010, 1:19) reported a 1ms/year delay in face visual processing speed in a sample of 62 subjects aged ~20-80, using event-related potentials (ERPs). This result was replicated in another 59 subjects, and was independent of stimulus luminance and senile miosis (Bieniek et al. Frontiers in Psychology 2013, 4:268). To go beyond differences in average brain activity and interpret previous findings, in the first study (Chapter 2) we investigated what information is coded by early face ERPs in younger and older observers. In a detection task, young and older observers each categorized 2,200 pictures of faces and noise textures revealed through Gaussian apertures (“Bubbles”). Using reverse correlation and Mutual Information (MI), we found that the presence of the left eye elicited fastest detection in both age groups. Older observers relied more on the eyes to be accurate, suggesting a strategy difference between groups. In both age groups, the presence of the eye contralateral to the recording electrode modulated single-trial ERPs at lateral-occipital electrodes, but this association was weaker in older observers and delayed by about 40 ms. We also observed a differentiated coding of the eyes across groups: in younger observers, both the N170 latency and amplitude coded the contralateral eye, whereas it was only the N170 amplitude in older adults. The latency modulation in younger adults was also higher in the right than in the left hemisphere, but very similar across hemispheres in older adults. Our results suggest that face detection in ageing is associated with delayed and weaker processing of the same face features, and point to potential coding differences. On the notion that incomplete or occluded stimuli (such as Bubbled images) might differentially affect older adults’ ability to perform a perceptual task, in the second study (Chapter 3) we sought to understand whether the age-related differences in eye sensitivity were preserved in a face context. Two groups of observers, young and older, performed a face detection task in which the visibility of the eye region was modulated in a parametric manner by adding phase noise. This way, we could investigate the modulation of ERPs by increasing information available in the eye region, when the face context was preserved (or absent – in control conditions). In line with behavioural results reported in Chapter 2, modulating the visibility of the left eye had a greater effect on reaction times across older participants, and this modulation increased with decreasing face context information in older adults.
Contralateral eye sensitivity was weaker than that reported in Chapter 2 and did not differ between young and older observers, suggesting that coding of the eye by the N170 acts differently when the eye is revealed through Bubble masks and when it is presented in the face context. In Chapter 4, we investigated potential origins of the large N170 responses to textures observed in a sample of older participants before (Rousselet et al. BMC Neuroscience 2009, 10:114), and quantified age-related delays in visual processing speed of stimuli other than faces: houses and letters. Two groups of participants performed three simple detection tasks: face detection, house detection, and letter detection. Perceiving textures in the context of a face detection task, but not house detection or letter detection, influenced ERP responses to textures in older participants only to a small extent and after 200 ms post-stimulus, suggesting that the large N170 responses to textures are unlikely due to a top-down influence of the task at hand. Furthermore, visual processing speed of faces, houses and letters was delayed to a smaller extent than that predicted by the original study and depended on the nature of categorical comparisons made. Overall, our results fill the big gap in the literature concerned with age-related slowing of information processing: using Bubbles, we have presented direct evidence that processing of the same facial information is slower (and weaker) in ageing. However, quantifying visual processing speed using categorical designs yielded mixed evidence for the theory of slower information processing in ageing, pointing to the need for carefully designed visual stimuli in ageing research, and for careful selection of control stimuli for comparisons.
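The Mutual Information analysis mentioned in this abstract can be illustrated with a minimal sketch: a plug-in, histogram-based MI estimate between a binary 'feature visible on this trial' variable and a single-trial ERP amplitude. The simulated trials, the binning scheme and all variable names are assumptions of the sketch; the thesis may well use a different MI estimator.

```python
import numpy as np

def mutual_information(feature_visible, erp_amplitude, n_bins=4):
    """Plug-in MI estimate (in bits) between a binary 'feature visible on
    this trial' variable and a continuous single-trial ERP amplitude,
    discretised into equal-count bins."""
    # Discretise the ERP amplitudes into quantile bins.
    edges = np.quantile(erp_amplitude, np.linspace(0, 1, n_bins + 1)[1:-1])
    erp_binned = np.digitize(erp_amplitude, edges)

    # Joint and marginal distributions from counts.
    joint = np.zeros((2, n_bins))
    for f, e in zip(feature_visible.astype(int), erp_binned):
        joint[f, e] += 1
    joint /= joint.sum()
    pf = joint.sum(axis=1, keepdims=True)
    pe = joint.sum(axis=0, keepdims=True)

    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pf @ pe)[nz])))

# Example: 2,200 simulated trials, the eye region visible on half of them.
rng = np.random.default_rng(0)
visible = rng.integers(0, 2, 2200)
amplitude = -5.0 * visible + rng.normal(0, 4.0, 2200)   # visible eye -> larger (more negative) deflection
print(mutual_information(visible, amplitude))
```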
APA, Harvard, Vancouver, ISO, and other styles
2

Linnet, Elisabeth. "Age-related differences in visuomotor integration as measured by object affordance effects : a combined behavioural and neurophysiological investigation." Thesis, University of Plymouth, 2016. http://hdl.handle.net/10026.1/4584.

Full text
Abstract:
Visuomotor behaviour – from handling simple objects to operating complex devices – is of fundamental importance in our everyday lives, yet there is relatively little evidence as to how healthy ageing affects these processes. A central role is played by the human capacity for reaching and grasping. Grasping an object requires complex visuomotor transformations, including processing of the object’s extrinsic features (its spatial location) and intrinsic features (such as size and shape). It has been documented that action relevant intrinsic object properties automatically facilitate specific motor actions despite being task-irrelevant, the so-called object affordance effect. These effects have been demonstrated for (1) grasp type (precision and power grips being facilitated by small and large objects) and (2) object-orientation (whereby right and left handed grasps are facilitated by object-orientation), and might underlie the effortlessness with which humans can interact with objects. Yet, these paradigms have not previously been employed in the study of healthy ageing, and little is known concerning how these processes change over the life span. Elucidating these changes is of particular importance as age-related degeneration of white matter integrity is well documented. Consequently, if successful visuomotor behaviour relies on white matter integrity, age-related reductions in affordance effects should be observed. This prediction was tested in a series of experiments. Experiment 1 investigated age-differences in object-size compatibility effects, and results corroborated our prediction of age-related reductions in object-size effects. Experiment 2 investigated age-differences in (1) spatial compatibility effects versus object-orientation effects, and (2) the locus of the effects (facilitation versus interference effects). Results revealed (1) some evidence of larger affordance than spatial effects in both age-groups, and (2) interference effects in the younger group and both facilitation and interference effects in the older group, showing a potential change in processing modes or strategies. Experiments 3 and 4 addressed the main competing account, the attention-directing hypothesis (according to which attentional shifts are responsible for the generation of automatic response codes, rather than the effects arising from afforded actions), by using a novel stimulus set in which such attentional differences can be ruled out. Results provided strong evidence in favour of the object-size affordance hypothesis. A final neuroimaging experiment investigated age-differences in the object-size effect and its neural correlates by combining behavioural, functional MRI and diffusion tensor imaging (DTI) data. Results revealed evidence of age-differences, both on the behavioural and functional level. For the DTI data, we investigated all four diffusion metrics (something which is not frequently reported in the healthy ageing literature), and found widespread age-related differences in white matter integrity. The empirical findings presented in this thesis offer a significant contribution to ageing research, by further elucidating the relationship between age-related neurophysiological changes and visuomotor behaviour. The overall picture which emerged from this series of experiments was consistent with our prediction of age-related reductions in affordance effects. Furthermore, it is likely that these age-differences may have, at least in part, a neurophysiological basis.
APA, Harvard, Vancouver, ISO, and other styles
3

Denneau, Larry. "Observational constraints on the steady-state catastrophic disruption rate of main belt asteroids measured using the Pan-STARRS moving object processing system." Thesis, Queen's University Belfast, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.680156.

Full text
Abstract:
I present the results of a search for main belt catastrophic disruptions over a 453-day interval in the Pan-STARRS1 survey. I describe the design and implementation of the Pan-STARRS Moving Object Processing System (MOPS), a software environment capable of both a) detecting moving objects in the Pan-STARRS1 transient detection stream and b) characterizing a general survey telescope's efficiency at detecting moving objects, providing a statistical framework from which one can characterize entire populations. I devised a simple model to describe how a catastrophic disruption would appear to the Pan-STARRS1 detection system, constructed simulations containing 1 billion synthetic catastrophic disruptions and used MOPS to measure the efficacy of Pan-STARRS1 to detect catastrophic disruption events. The catastrophic disruption search identifies a candidate catastrophic disruption, named P1010ae, whose apparent brightness V = 18.5 is used to set an upper limit to the rate at which catastrophic disruptions can occur in the main asteroid belt. I adopt the power-law formulation from Bottke et al. (2005) describing differential disruption rates as a function of diameter to compute the largest diameter (more precisely the absolute magnitude HCL) at which one disruption can be occurring per year. The computed HCL suggests that collisional catastrophic disruptions, which are predicted to exhibit brightness increases of 20 magnitudes, are occurring once per year for objects with H ~ 28.7 (about 7 m diameter). At face value this would mean that for 100 m asteroids, collisional catastrophic disruptions are occurring at ~1/500 the rate predicted by Bottke et al. (2005). Recent work by Jacobson et al. (2014) shows that disruption by rotational spin-up from the Yarkovsky-O'Keefe-Radzievskii-Paddack effect (YORP; Rubincam 2000) may occur ~400 times more frequently than collisional disruptions, effectively making up the deficit in catastrophically disrupted 100 m asteroids.
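The size quoted for H ~ 28.7 (about 7 m) follows from the standard conversion between absolute magnitude and diameter, D = (1329 km / sqrt(p_V)) * 10^(-H/5). A minimal sketch, in which the geometric albedo and the power-law slope are labelled assumptions rather than values taken from the thesis or from Bottke et al. (2005):

```python
import math

def h_to_diameter_km(H, albedo=0.1):
    """Standard absolute-magnitude-to-diameter relation for asteroids."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-H / 5.0)

def scale_rate(rate_at_d0, d0_m, d_m, slope=-2.7):
    """Scale a disruption rate with a simple power law in diameter; the slope
    here is an illustrative placeholder, not the Bottke et al. (2005) value."""
    return rate_at_d0 * (d_m / d0_m) ** slope

print(h_to_diameter_km(28.7))          # ~0.007 km, i.e. about 7 m
print(scale_rate(1.0, 7.0, 100.0))     # rate at 100 m relative to 1/yr at 7 m
```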
APA, Harvard, Vancouver, ISO, and other styles
4

Kwon, Ohkyu. "Similarity measures for object matching in computer vision." Thesis, University of Bolton, 2016. http://ubir.bolton.ac.uk/890/.

Full text
Abstract:
The similarity measures for object matching and their applications have been important topics in many fields of computer vision such as those of image recognition, image fusion, image analysis, video sequence matching, and so on. This critical commentary presents the efficiency of new metric methods such as the robust Hausdorff distance (RHD), the accurate M-Hausdorff distance (AMHD), and the fast sum of absolute differences (FSAD). The RHD measure computes the similarity distance of the occluded/noisy image pair and evaluates the performances of the multi-modal registration algorithms. The AMHD measure is utilised for aligning the pair of the occluded/noisy multi-sensor face images, and the FSAD measure in adaptive-template matching method finds the zero location of the slide in an automatic scanning microscope system. A Hausdorff distance (HD) similarity measure has been widely investigated to compare the pair of two-dimensional (2-D) images by low-level features since it is simple and insensitive to the changes in an image characteristic. In this research, novel HD measures based on the robust statistics of regression analysis are addressed for occluded and noisy object matching, resulting in two RHD measures: M-HD, based on M-estimation, and LTS-HD, based on the least trimmed squares (LTS). The M-HD is extended to a three-dimensional (3-D) version for scoring the registration algorithms of the multi-modal medical images. This 3-D measure yields the comparison results with different outlier-suppression parameters (OSP) quantitatively, even though the Computed Tomography (CT) and Positron Emission Tomography (PET) images have different distinctive features. The RHD matching technique requires a high level of complexity in computing the minimum distance from one point to the nearest point between two edge point sets and searching for the best fit of matching position. To overcome these problems, the improved 3×3 distance transform (DT) is employed. It has a separable scan structure to reduce the calculation time of the minimum distance in multi-core processors. The object matching algorithm with hierarchical structures is also demonstrated to minimize the computational complexity dramatically without failing the matching position. The object comparison between different modality images is still challenging due to the poor edge correspondence coming from heterogeneous characteristics. To improve the robustness of HD measures in comparing the pair of multi-modal sensor images, an accurate M-HD (AMHD) is proposed by utilizing the orientation information of each point in addition to the DT map. This similarity measure can precisely analyse the non-correspondent edges and noises by using the distance orientation information. The AMHD measure yields superior performance at aligning the pairs of multi-modal face images over those achieved by the conventional robust HD schemes. The sum of absolute differences (SAD) is a popular similarity measure in template matching. This thesis shows the adaptive-template matching method based on the FSAD for accurately locating the slide in an automated microscope. The adaptive-template matching method detects the fiduciary ring mark in the slide by predicting the constant used in the template, where the FSAD reduces the processing time with a low rate of error of the template matching by inducing 1-D vertical and horizontal SAD.
The proposed scheme results in an accurate performance in terms of detecting the ring mark and estimating the relative offset in slide alignment during the on-line calibration process.
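The robust Hausdorff idea underlying RHD can be sketched as follows: the classical directed distance takes the maximum of the point-to-nearest-point distances, while a least-trimmed-squares style variant averages only the smallest fraction of them, which suppresses outliers caused by occlusion and noise. This is an illustrative implementation of that general idea, not the exact M-HD or LTS-HD formulations, and it omits the 3×3 distance-transform acceleration described above.

```python
import numpy as np

def directed_distances(A, B):
    """For each point in A, the distance to its nearest neighbour in B.
    A and B are (n, 2) / (m, 2) arrays of edge-point coordinates."""
    diff = A[:, None, :] - B[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

def hausdorff(A, B):
    """Classical (max-min) Hausdorff distance."""
    return max(directed_distances(A, B).max(), directed_distances(B, A).max())

def lts_hausdorff(A, B, keep=0.8):
    """LTS-style robust variant: average the smallest `keep` fraction of the
    nearest-neighbour distances in each direction."""
    def h(X, Y):
        d = np.sort(directed_distances(X, Y))
        k = max(1, int(keep * len(d)))
        return d[:k].mean()
    return max(h(A, B), h(B, A))

rng = np.random.default_rng(1)
edges_a = rng.uniform(0, 100, (300, 2))
edges_b = edges_a + rng.normal(0, 0.5, (300, 2))
edges_b[:30] += 40.0                      # simulate occlusion/noise outliers
print(hausdorff(edges_a, edges_b))        # dominated by the outliers
print(lts_hausdorff(edges_a, edges_b))    # largely unaffected by them
```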
APA, Harvard, Vancouver, ISO, and other styles
5

Coates, Lewis Richard James. "Investigations of an "Objectness" Measure for Object Localization." PDXScholar, 2016. http://pdxscholar.library.pdx.edu/open_access_etds/2949.

Full text
Abstract:
Object localization is the task of locating objects in an image, typically by finding bounding boxes that isolate those objects. Identifying objects in images that have not had regions of interest labeled by humans often requires object localization to be performed first. The sliding window method is a common naïve approach, wherein the image is covered with bounding boxes of different sizes that form windows in the image. An object classifier is then run on each of these windows to determine if each given window contains a given object. However, because object classification algorithms tend to be computationally expensive, it is helpful to have an effective filter to reduce the number of times those classifiers have to be run. In this thesis I evaluate one promising approach to object localization: the objectness algorithm proposed by Alexe et al. Specifically, I verify the results given by Alexe et al., and further explore the weaknesses and strengths of their "objectness" measure.
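The role of such a filter in a localization pipeline can be sketched as follows: enumerate candidate sliding windows, score each with an objectness function, and pass only the top-scoring windows to the expensive object classifier. The edge-density score below is only a placeholder for a real objectness measure (it is not the cue combination of Alexe et al.), and all names and sizes are illustrative.

```python
import numpy as np

def candidate_windows(img_h, img_w, scales=(32, 64, 128), stride=16):
    """Enumerate sliding windows as (x, y, w, h) boxes."""
    boxes = []
    for s in scales:
        for y in range(0, img_h - s + 1, stride):
            for x in range(0, img_w - s + 1, stride):
                boxes.append((x, y, s, s))
    return boxes

def edge_density_objectness(edge_map, box):
    """Placeholder objectness cue: fraction of edge pixels inside the box."""
    x, y, w, h = box
    return edge_map[y:y + h, x:x + w].mean()

def filter_windows(edge_map, boxes, top_k=50):
    """Keep only the top_k most 'object-like' windows for the (expensive)
    object classifier that would run afterwards."""
    scored = sorted(boxes, key=lambda b: edge_density_objectness(edge_map, b),
                    reverse=True)
    return scored[:top_k]

rng = np.random.default_rng(2)
edges = (rng.random((240, 320)) > 0.97).astype(float)   # sparse synthetic edge map
edges[60:120, 100:180] = (rng.random((60, 80)) > 0.6)   # an edge-dense 'object'
boxes = candidate_windows(240, 320)
print(len(boxes), "candidates ->", len(filter_windows(edges, boxes)), "kept")
```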
APA, Harvard, Vancouver, ISO, and other styles
6

Mikula, Martin. "Termodiagnostika - dotykové a bezdotykové měření teploty." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-231525.

Full text
Abstract:
This thesis is concerned with thermodiagnostics in industrial practise, which is very important for the assessment of technical condition of object on the basis of temperature, in today's time. It includes summary of contact and contact-free methods and their principle, advantages and disadvantages for aplication in industrial practise. Because of thesis it was carried out measurement in company Daikin Device Czech republic with the use of contact thermometer and two available thermocameras for solving of topical tasks relating to production.
APA, Harvard, Vancouver, ISO, and other styles
7

Novák, Jindřich. "Metodika pasportizací stavebních objektů dotčených okolní činností." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2011. http://www.nusl.cz/ntk/nusl-232586.

Full text
Abstract:
The passportization of building structures is very important task efore the initiation of negative influences on to object. The aim is to prevent possible legal deputies in between owners of affected object. The demand of passportization of building structure is increasing with the exploding of developments projects from green fields to vacancies. The aim of this work is to determine the metodology of passportization of building structures and to apply it on existig object.
APA, Harvard, Vancouver, ISO, and other styles
8

Bonaventura, Brugués Xavier. "Perceptual information-theoretic measures for viewpoint selection and object recognition." Doctoral thesis, Universitat de Girona, 2015. http://hdl.handle.net/10803/302540.

Full text
Abstract:
Viewpoint selection has been an emerging area in computer graphics for some years, and it is now getting maturity with applications in fields such as scene navigation, volume visualization, object recognition, mesh simplification, and camera placement. But why is viewpoint selection important? For instance, automated viewpoint selection could play an important role when selecting a representative model by exploring a large 3D model database in as little time as possible. Such an application could show the model view that allows for ready recognition or understanding of the underlying 3D model. An ideal view should strive to capture the maximum information of the 3D model, such as its main characteristics, parts, functionalities, etc. The quality of this view could affect the number of models that the artist can explore in a certain period of time. In this thesis, we present an information-theoretic framework for viewpoint selection and object recognition. From a visibility channel between a set of viewpoints and the polygons of a 3D model we obtain several viewpoint quality measures from the respective decompositions of mutual information. We also review and compare in a common framework the most relevant viewpoint quality measures for polygonal models presented in the literature. From the information associated to the polygons of a model, we obtain several shading approaches to improve the object recognition and the shape perception. We also use this polygonal information to select the best views of a 3D model and to explore it. We use these polygonal information measures to enhance the visualization of a 3D terrain model generated from textured geometry coming from real data. Finally, we analyze the application of the viewpoint quality measures presented in this thesis to compute the shape similarity between 3D polygonal models. The information of the set of viewpoints is seen as a shape descriptor of the model. Then, given two models, their similarity is obtained by performing a registration process between the corresponding set of viewpoints
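The visibility channel described above can be made concrete with a small sketch: given the projected area of each polygon as seen from each viewpoint, p(z|v) is the normalised projected area, the viewpoint prior p(v) is taken uniform (an assumption of the sketch), and each viewpoint is scored by its mutual-information term I(v; Z) = sum_z p(z|v) log2(p(z|v)/p(z)). This is only one of several measures that can be derived from decompositions of the channel's mutual information.

```python
import numpy as np

def viewpoint_scores(projected_area):
    """projected_area[v, z]: projected area of polygon z seen from viewpoint v.
    Returns, for every viewpoint v, the term
    I(v; Z) = sum_z p(z|v) * log2(p(z|v) / p(z)), with a uniform viewpoint prior."""
    A = np.asarray(projected_area, dtype=float)
    p_z_given_v = A / A.sum(axis=1, keepdims=True)      # channel p(z|v)
    p_v = np.full(A.shape[0], 1.0 / A.shape[0])          # uniform prior p(v)
    p_z = p_v @ p_z_given_v                              # marginal p(z)
    ratio = np.where(p_z_given_v > 0, p_z_given_v / p_z, 1.0)
    return (p_z_given_v * np.log2(ratio)).sum(axis=1)

# Toy example: 4 viewpoints, 5 polygons.
area = np.array([[4.0, 3.0, 2.0, 1.0, 0.0],
                 [1.0, 1.0, 1.0, 1.0, 1.0],
                 [0.0, 0.0, 5.0, 4.0, 1.0],
                 [2.0, 2.0, 2.0, 2.0, 2.0]])
print(viewpoint_scores(area))
```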
APA, Harvard, Vancouver, ISO, and other styles
9

Tchernychova, Maria. "Carathéodory cubature measures." Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:a3a10980-d35d-467b-b3c0-d10d2e491f2d.

Full text
Abstract:
We introduce an efficient algorithm for computing Carathéodory cubature measures, which are defined as interpolatory cubature measures with positive weights, whose cardinality grows polynomially with dimension, as proved in [16]. We discuss two Carathéodory cubature problem formulations. Both are based on thinning the support of the Cartesian product cubature measure, whose cardinality grows exponentially with dimension, via a formulation of a suitable feasibility LP (Linear Programming) problem. A basic feasible solution to the latter fully characterises a Carathéodory cubature measure. The first problem formulation, initially presented in [48], employs the Simplex Algorithm or Interior Point Method to construct a basic feasible solution to the aforementioned LP problem. The complexity of this method is dependent on the number of nodes in the Cartesian product cubature and thus grows exponentially with dimension. The second problem formulation constitutes the main contribution of the present work. Starting from the LP problem, arising from the Cartesian product cubature construction, we employ a hierarchical cluster representation of the underlying constraint matrix and the strictly feasible solution, arising from the weights of the Cartesian product cubature. Applying the Recombination Algorithm, introduced in [96], to this hierarchical data structure, we recursively generate a sequence of smaller LP problems. We construct a basic feasible solution to each LP problem in turn, by employing a novel algorithm, based on the SVD (Singular Value Decomposition) of the constraint matrix, culminating in a basic feasible solution for the original LP problem. The complexity of this algorithm is independent of the number of nodes in the Cartesian product cubature, and can be shown to grow polynomially rather than exponentially with dimension. Moreover, the novel SVD-based method for computing basic feasible solutions produces a one order of magnitude speed-up of the overall algorithm, when compared to the algorithm in [96], and is therefore preferable.
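The basic reduction step behind such constructions can be sketched directly: given a discrete measure with many atoms and a set of test functions, repeatedly shift weight along a null-space direction of the moment matrix until some atom's weight reaches zero; this preserves all prescribed integrals while shrinking the support. The sketch below recomputes a plain SVD null vector at every step and ignores the hierarchical clustering and recursive recombination that constitute the thesis's actual contribution.

```python
import numpy as np

def reduce_measure(points, weights, test_functions):
    """Thin a discrete measure (points, weights > 0) so that the integrals of the
    given test functions are preserved, using Caratheodory-style weight shifts.
    Returns (points, weights) with at most len(test_functions) atoms."""
    pts = list(points)
    w = np.asarray(weights, dtype=float).copy()
    while len(pts) > len(test_functions):
        # Moment matrix: rows = test functions, columns = atoms.
        M = np.array([[phi(p) for p in pts] for phi in test_functions])
        # A null-space direction (last right singular vector).
        c = np.linalg.svd(M)[2][-1]
        if not np.any(c > 1e-12):
            c = -c
        # Largest step that keeps every weight non-negative.
        mask = c > 1e-12
        t = np.min(w[mask] / c[mask])
        w = w - t * c                      # moments unchanged since M @ c = 0
        keep = w > 1e-12                   # at least one atom drops out
        pts = [p for p, k in zip(pts, keep) if k]
        w = w[keep]
    return pts, w

# Example: preserve total mass, mean and second moment of 200 points on a line.
rng = np.random.default_rng(3)
x = rng.normal(size=200)
tests = [lambda p: 1.0, lambda p: p, lambda p: p * p]
pts, wts = reduce_measure(x, np.full(200, 1 / 200), tests)
print(len(pts), float(np.dot(wts, pts)), x.mean())   # small support; both means agree
```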
APA, Harvard, Vancouver, ISO, and other styles
10

Cheung, Yee-him (張貽謙). "Secure object spaces for global information retrieval (SOSGIR)." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B29869596.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Morissette, Laurence. "Auditory Object Segregation: Investigation Using Computer Modelling and Empirical Event-Related Potential Measures." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37856.

Full text
Abstract:
There are multiple factors that influence auditory streaming. Some, like frequency separation or rate of presentation, have effects that are well understood while others remain contentious. Human behavioural studies and event-related potential (ERP) studies have shown dissociation between a pre-attentive sound segregation process and an attention-dependent process in forming perceptual objects and streams. This thesis first presents a model that synthesises the processes involved in auditory object creation. It includes sensory feature extraction based on research by Bregman (1990), sensory feature binding through an oscillatory neural network based on work by Wang (1995; 1996; 1999; 2005; 2008), work by Itti and Koch (2001a) for the saliency map, and finally, work by Wrigley and Brown (2004) for the architecture of single feature processing streams, the inhibition of return of the activation and the attentional leaky integrate and fire neuron. The model was tested using stimuli and an experimental paradigm used by Carlyon, Cusack, Foxton and Robertson (2001). Several modifications were then implemented to the initial model to bring it closer to psychological and cognitive validity. The second part of the thesis furthers the knowledge available concerning the influence of the time spent attending to a task on streaming. Two deviant detection experiments using triplet stimuli are presented. The first experiment is a follow-up of Thompson, Carlyon and Cusack (2011) and replicated their behavioural findings, showing that the time spent attending to a task enhances streaming, and that deviant detection is easier when one stream is perceived. The ERP results showed double decision markers indicating that subjects may have made their deviant detection based on the absence of the time delayed deviant and confirmed their decision with its later presence. The second experiment investigated the effect of the time spent attending to the task in the presence of a continuity illusion on streaming. It was found that the presence of this illusion prevented streaming in such a way that the pattern of the triplet was strengthened through time instead of separated into two streams, and that the deviant detection was easier the longer the subjects attended to the sound sequence.
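The attentional leaky integrate-and-fire unit mentioned above can be illustrated with a generic LIF neuron; the time constant, threshold and constant input below are placeholder values, not parameters taken from Wrigley and Brown (2004) or from the model described in the thesis.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02, v_rest=0.0,
               v_reset=0.0, v_threshold=1.0, resistance=1.0):
    """Euler simulation of a leaky integrate-and-fire neuron.
    Returns the membrane-potential trace and the spike times (in seconds)."""
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + resistance * i_in) / tau
        v += dv * dt
        if v >= v_threshold:
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

# One second of constant supra-threshold drive.
current = np.full(1000, 1.5)
_, spike_times = lif_neuron(current)
print(len(spike_times), "spikes in 1 s")
```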
APA, Harvard, Vancouver, ISO, and other styles
12

Šulc, Zdeněk. "Similarity Measures for Nominal Data in Hierarchical Clustering." Doctoral thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-261939.

Full text
Abstract:
This dissertation thesis deals with similarity measures for nominal data in hierarchical clustering, which can cope with variables with more than two categories, and which aspire to replace the simple matching approach standardly used in this area. These similarity measures take into account additional characteristics of a dataset, such as frequency distribution of categories or number of categories of a given variable. The thesis has three main aims. The first one is an examination and clustering performance evaluation of selected similarity measures for nominal data in hierarchical clustering of objects and variables. To achieve this goal, four experiments dealing both with the object and variable clustering were performed. They examine the clustering quality of the examined similarity measures for nominal data in comparison with the commonly used similarity measures using a binary transformation, and moreover, with several alternative methods for nominal data clustering. The comparison and evaluation are performed on real and generated datasets. Outputs of these experiments lead to knowledge of which similarity measures can generally be used, which ones perform well in a particular situation, and which ones are not recommended for object or variable clustering. The second aim is to propose a theory-based similarity measure, evaluate its properties, and compare it with the other examined similarity measures. Based on this aim, two novel similarity measures, Variable Entropy and Variable Mutability, are proposed; the former, in particular, performs very well in datasets with a lower number of variables. The third aim of this thesis is to provide a convenient software implementation based on the examined similarity measures for nominal data, which covers the whole clustering process from a computation of a proximity matrix to evaluation of resulting clusters. This goal was also achieved by creating the nomclust package for the software R, which covers this issue, and which is freely available.
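The contrast drawn here can be illustrated with a small sketch: the simple matching coefficient scores every agreement between two objects equally, whereas a frequency-aware alternative can weight agreement on a variable by that variable's normalised entropy in the dataset. The weighting below is illustrative only; it is not the thesis's Variable Entropy or Variable Mutability measure.

```python
import numpy as np
from collections import Counter

def simple_matching(x, y):
    """Fraction of variables on which two objects take the same category."""
    return float(np.mean([a == b for a, b in zip(x, y)]))

def normalised_entropy(column):
    counts = np.array(list(Counter(column).values()), dtype=float)
    p = counts / counts.sum()
    h = -(p * np.log2(p)).sum()
    k = len(counts)
    return h / np.log2(k) if k > 1 else 0.0

def entropy_weighted_matching(x, y, data):
    """Agreement on a variable counts more when that variable's categories are
    spread out (high entropy) in the dataset `data` (a list of objects)."""
    weights = np.array([normalised_entropy([row[j] for row in data])
                        for j in range(len(x))])
    agree = np.array([a == b for a, b in zip(x, y)], dtype=float)
    return float((weights * agree).sum() / weights.sum())

data = [("red", "small", "metal"), ("red", "large", "metal"),
        ("blue", "small", "wood"), ("green", "small", "metal")]
print(simple_matching(data[0], data[1]))
print(entropy_weighted_matching(data[0], data[1], data))
```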
APA, Harvard, Vancouver, ISO, and other styles
13

Rasile, Karen D. "Object Relations Theory and Personal Construct Theory: Rapprochement Opportunity." Thesis, North Texas State University, 1987. https://digital.library.unt.edu/ark:/67531/metadc500772/.

Full text
Abstract:
Empirical investigation of the tenets of Object Relations Theory is recent. This study of the theoretical convergence between Object Relations Theory and Personal Construct Theory brought a new direction to the empirical investigation. It was hypothesized that individuals who displayed a well developed level of object relations, as measured by Object Relations Theory, would also display a highly adaptive blend of cognitive complexity and ordination, as described by Personal Construct Theory, and vice versa. A correlational analysis of personality measures on 136 college students approached but did not attain statistical significance. Results indicated no significant theoretical convergence between Object Relations Theory and Personal Construct Theory. Further research is warranted only if greater variability in sample age, life experience, and psychopathology is assured.
APA, Harvard, Vancouver, ISO, and other styles
14

Rahman, Md Atiqur. "Application specific performance measure optimization using deep learning." IEEE, 2016. http://hdl.handle.net/1993/31812.

Full text
Abstract:
In this thesis, we address the action retrieval and the object category segmentation problems by directly optimizing application specific performance measures using deep learning. Most deep learning methods are designed to optimize simple loss functions (e.g., cross-entropy or Hamming loss). These loss functions are suitable for applications where the performance of the application is measured by overall accuracy. But for many applications, the overall accuracy is not an appropriate performance measure. For example, applications like action retrieval often use the area under the Receiver Operating Characteristic curve (ROC curve) to measure the performance of a retrieval algorithm. Likewise, in object category segmentation from images, the intersection-over-union (IoU) is the standard performance measure. In this thesis, we propose approaches to directly optimize these complex performance measures in a deep learning framework.
October 2016
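One common way to make a measure such as IoU directly optimisable is to replace the hard intersection and union counts with predicted probabilities, giving a differentiable surrogate loss. A minimal sketch of such a soft-IoU loss follows; it is a generic formulation, not necessarily the one developed in the thesis.

```python
import torch

def soft_iou_loss(logits, targets, eps=1e-6):
    """Differentiable IoU surrogate for binary segmentation.
    logits: raw network outputs, shape (N, H, W); targets: {0,1} masks."""
    probs = torch.sigmoid(logits)
    dims = (1, 2)
    intersection = (probs * targets).sum(dims)
    union = probs.sum(dims) + targets.sum(dims) - intersection
    iou = (intersection + eps) / (union + eps)
    return (1.0 - iou).mean()          # minimise 1 - IoU, averaged over the batch

# Toy check: a near-perfect prediction gives a loss near zero.
target = torch.zeros(1, 8, 8)
target[0, 2:6, 2:6] = 1.0
perfect_logits = (target * 2 - 1) * 20.0       # large positive/negative logits
print(soft_iou_loss(perfect_logits, target).item())
```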
APA, Harvard, Vancouver, ISO, and other styles
15

Rabone, Martin Richard Kenwyn. "A measure for measure : moderation and the mean in the literature of Spain's Golden Age." Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:4163bf16-630f-4cc9-a5cb-f48fd0a3ca1c.

Full text
Abstract:
This thesis presents the first sustained analysis of the reception of the Aristotelian golden mean in early modern Spanish literature. It argues that the critically-neglected ethical credo of moderation was an important part of the classical inheritance on which Golden-Age authors frequently drew, and that despite its famous origins in moral philosophy rather than literature, it was subject to just the same kind of imitative reworking as has long been acknowledged for literary predecessors. The analysis is divided into two sections. The first takes a synoptic view of the period, assessing the transmission of Aristotle's doctrine to the Renaissance and exploring what it meant to the Golden-Age mind. That includes identifying a particular early modern reformulation of the mean, which I argue was an important factor in the popularity of the Icarus and Phaethon myths, as analogues for Aristotle's moral. The body of the thesis then comprises three case studies of the role of moderation in works which span the period's chronological and generic range: the poetry of Garcilaso; Calderón's 'El médico de su honra'; and Gracián's 'Criticón'. These studies explore three important general trends in the reception of the mean: the association of excess and moderation with particular literary models; the incorporation of the mean into Christian thought; and its parallel existence as non-technical, commonplace wisdom. However, each chapter also constitutes an innovation within its own field, offering a reassessment of Garcilaso's relationship to literary tradition; a re-reading of the characters and plot structure of 'El médico', including the controversial King Pedro; and an analysis of the elusive moral approach behind Gracián's allegorical novel. The mean is thus remarkable for both the breadth and depth of its incorporation into literature, and a focus on its treatment offers substantial new insights into some of the canonical works of the age.
APA, Harvard, Vancouver, ISO, and other styles
16

Landsberg, David. "Methods and measures for statistical fault localisation." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:cf737e06-9f12-44fa-94d2-a8d247ad808e.

Full text
Abstract:
Fault localisation is the process of finding the causes of a given error, and is one of the most costly elements of software development. One of the most efficient approaches to fault localisation appeals to statistical methods. These methods are characterised by their ability to estimate how faulty a program artefact is as a function of statistical information about a given program and test suite. However, the major problem facing statistical approaches is their effectiveness -- particularly with respect to finding single (or multiple) faults in large programs typical to the real world. A solution to this problem hinges on discovering new formal properties of faulty programs and developing scalable statistical techniques which exploit them. In this thesis I address this by identifying new properties of faulty programs, developing the formal frameworks and methods which are formally proven to exploit them, and demonstrating that many of our new techniques substantially and statistically significantly outperform competing algorithms at given fault localisation tasks (using p = 0.01) on what (to our knowledge) is one of the largest scale set of experiments in fault localisation to date. This research is thus designed to corroborate the following thesis statement: That the new algorithms presented in this thesis are effective and efficient at software fault localisation and outperform state of the art statistical techniques at a range of fault localisation tasks. In more detail, the major thesis contributions are as follows: 1. We perform a thorough investigation into the existing framework of spectrum-based fault localisation (sbfl), which currently stands at the cutting edge of statistical fault localisation. To improve on the effectiveness of sbfl, our first contribution is to introduce and motivate many new statistical measures which can be used within this framework. First, we show that many are well motivated to the task of sbfl. Second, we formally prove equivalence properties of large classes of measures. Third, we show that many of the measures perform competitively with the existing measures in experimentation -- in particular our new measure m9185 outperforms all existing measures on average in terms of effectiveness, and along with Kulczynski2, is in a class of measures which statistically significantly outperforms all other measures at finding a single fault in a program (p = 0.01). 2. Having investigated sbfl, our second contribution is to motivate, introduce, and formally develop a new formal framework which we call probabilistic fault localisation (pfl). pfl is similar to sbfl insofar as it can leverage any suspiciousness measure, and is designed to directly estimate the probability that a given program artefact is faulty. First, we formally prove that pfl is theoretically superior to sbfl insofar as it satisfies and exploits a number of desirable formal properties which sbfl does not. Second, we experimentally show that pfl methods (namely, our measure pfl-ppv) substantially and statistically significantly outperforms the best performing sbfl measures at finding a fault in large multiple fault programs (p = 0.01). Furthermore, we show that for many of our benchmarks it is theoretically impossible to design strictly rational sbfl measures which outperform given pfl techniques. 3. Having addressed the problem of localising a single fault in a program, we address the problem of localising multiple faults. Accordingly, our third major contribution is the introduction and motivation of a new algorithm MOpt(g) which optimises any ranking-based method g (such as pfl/sbfl/Barinel) to the task of multiple fault localisation. First we prove that MOpt(g) formally satisfies and exploits a newly identified formal property of multiple fault optimality. Secondly, we experimentally show that there are values for g such that MOpt(g) substantially and statistically significantly outperforms given ranking-based fault localisation methods at the task of finding multiple faults (p = 0.01). 4. Having developed methods for localising faults as a function of a given test suite, we finally address the problem of optimising test suites for the purposes of fault localisation. Accordingly, we first present an algorithm which leverages model checkers to improve a given test suite by making it satisfy a property of single bug optimality. Second, we experimentally show that on small benchmarks single bug optimal test suites can be generated (from scratch) efficiently when the algorithm is used in conjunction with the cbmc model checker, and that the test suite generated can be used effectively for fault localisation.
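The statistical (spectrum-based) setting can be made concrete with a standard suspiciousness measure such as Ochiai, computed from per-statement coverage counts over passing and failing tests. The thesis's own measures (for example m9185) and the pfl framework differ from this; the sketch only illustrates the shared starting point.

```python
import math

def ochiai(executed_failed, not_executed_failed, executed_passed):
    """Ochiai suspiciousness for one program element from its coverage spectrum:
    ef = failing tests that execute it, nf = failing tests that do not,
    ep = passing tests that execute it."""
    denom = math.sqrt((executed_failed + not_executed_failed) *
                      (executed_failed + executed_passed))
    return executed_failed / denom if denom else 0.0

def rank_statements(spectra):
    """spectra: {statement_id: (ef, nf, ep)} -> statements sorted by suspiciousness."""
    scores = {s: ochiai(*counts) for s, counts in spectra.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy spectrum: 's3' is executed by every failing test and few passing ones.
spectra = {"s1": (1, 4, 20), "s2": (3, 2, 15), "s3": (5, 0, 2), "s4": (0, 5, 30)}
print(rank_statements(spectra))
```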
APA, Harvard, Vancouver, ISO, and other styles
17

King-Lacroix, Justin. "Securing the 'Internet of Things' : decentralised security for wireless networks of embedded systems." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:b41c942f-5389-4a5b-8bb7-d5fb6a18a3db.

Full text
Abstract:
The phrase 'Internet of Things' refers to the pervasive instrumentation of physical objects with sensors and actuators, and the connection of those sensors and actuators to the Internet. These sensors and actuators are generally based on similar hardware as, and have similar capabilities to, wireless sensor network nodes. However, they operate in a completely different network environment: wireless sensor network nodes all generally belong to a single entity, whereas Internet of Things endpoints can belong to different, even competing, ones. This difference has profound implications for the design of security mechanisms in these environments. Wireless sensor network security is generally focused on defence against attack by external parties. On the Internet of Things, such an insider/outsider distinction is impossible; every entity is both an endpoint for legitimate communications, and a possible source of attack. We argue that under such conditions, the centralised models that underpin current networking standards and protocols for embedded systems are simply not appropriate, because they require such an insider/outsider distinction. This thesis serves as an exposition in the design of decentralised security mechanisms, applied both to applications, which must perform access control, and networks, which must guarantee communications security. It contains three main contributions. The first is a threat model for Internet of Things networks. The second is BottleCap, a capability-based access control module, and an exemplar of decentralised security architecture at the application layer. The third is StarfishNet, a network-layer protocol for Internet of Things wireless networks, and a similar exemplar of decentralised security architecture at the network layer. Both are evaluated with microbenchmarks on prototype implementations; StarfishNet's association protocol is additionally validated using formal verification in the protocol verification tool Tamarin.
APA, Harvard, Vancouver, ISO, and other styles
18

Zhang, Yating. "A Study on Object Search and Relationship Search from Text Archive Data." 京都大学 (Kyoto University), 2016. http://hdl.handle.net/2433/217201.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Bashon, Yasmina M. "Contributions to fuzzy object comparison and applications. Similarity measures for fuzzy and heterogeneous data and their applications." Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/6305.

Full text
Abstract:
This thesis makes an original contribution to knowledge in the field of data objects' comparison where the objects are described by attributes of fuzzy or heterogeneous (numeric and symbolic) data types. Many real world database systems and applications require information management components that provide support for managing such imperfect and heterogeneous data objects. For example, with new online information made available from various sources, in semi-structured, structured or unstructured representations, new information usage and search algorithms must consider where such data collections may contain objects/records with different types of data: fuzzy, numerical and categorical for the same attributes. New approaches to similarity have been presented in this research to support such data comparison. A generalisation of both geometric and set theoretical similarity models has enabled us to propose the new similarity measures presented in this thesis, to handle the vagueness (fuzzy data type) within data objects. A framework of new and unified similarity measures for comparing heterogeneous objects described by numerical, categorical and fuzzy attributes has also been introduced. Examples are used to illustrate, compare and discuss the applications and efficiency of the proposed approaches to heterogeneous data comparison.
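The attribute-wise comparison that the thesis generalises can be sketched with one illustrative rule per data type: range-normalised closeness for numeric values, equality for categories, and the overlap of triangular fuzzy numbers for fuzzy values, averaged over attributes. The specific formulas below are assumptions of the sketch, not the measures proposed in the thesis.

```python
import numpy as np

def numeric_sim(a, b, value_range):
    """Closeness of two numbers, normalised by the attribute's range."""
    return 1.0 - abs(a - b) / value_range if value_range else 1.0

def categorical_sim(a, b):
    return 1.0 if a == b else 0.0

def fuzzy_sim(tri_a, tri_b, grid=np.linspace(0.0, 100.0, 1001)):
    """Overlap of two triangular fuzzy numbers (l, m, r): max-min composition
    approximated on a grid."""
    def mu(tri, x):
        l, m, r = tri
        return np.clip(np.minimum((x - l) / (m - l), (r - x) / (r - m)), 0.0, 1.0)
    return float(np.max(np.minimum(mu(tri_a, grid), mu(tri_b, grid))))

def object_similarity(obj_a, obj_b, schema):
    """schema: per attribute, ('numeric', range) | ('categorical',) | ('fuzzy',)."""
    sims = []
    for a, b, spec in zip(obj_a, obj_b, schema):
        if spec[0] == "numeric":
            sims.append(numeric_sim(a, b, spec[1]))
        elif spec[0] == "categorical":
            sims.append(categorical_sim(a, b))
        else:
            sims.append(fuzzy_sim(a, b))
    return float(np.mean(sims))

schema = [("numeric", 100.0), ("categorical",), ("fuzzy",)]
x = (42.0, "steel", (20.0, 30.0, 40.0))     # fuzzy attribute: 'around 30'
y = (55.0, "steel", (35.0, 45.0, 55.0))     # fuzzy attribute: 'around 45'
print(object_similarity(x, y, schema))
```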
APA, Harvard, Vancouver, ISO, and other styles
21

DeLay, Steven. "Phenomenology and the self's measure : studies in subjectivity." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:af560322-5334-4be4-8ce4-d47c7e50540e.

Full text
Abstract:
The philosophical tradition has long understood subjectivity solely in reference to the self's place within the world and the powers of intentional transcendence which open it. Nowhere is this presupposition more apparent than in the thought of Husserl, Heidegger, Sartre, and Merleau-Ponty. Despite the precise differences among their respective philosophies of transcendence, each understands the self as little else than that which opens the exteriority of a world and is thereby exhausted and determined by it. Against this prevailing assumption that the self is a 'being-in-the-world', I contend that the essence of subjectivity instead consists in the unworldly interiority of life's affective self-revelation. The studies that follow accordingly investigate five related aspects of subjectivity: the irreducibility of the self's individuality to society; the blow of vanity that reveals this inwardness; the resultant life that marshals and in turn deploys it; the power of the work of art to express it; and finally the promise of immortality that sustains it.
APA, Harvard, Vancouver, ISO, and other styles
22

Mittal, Shruti. "Pancreas transplantation : associations with graft failure and measures of function." Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:a91ed865-8404-4be9-a11f-7e0b71357c17.

Full text
Abstract:
Pancreas transplantation, either as a simultaneous pancreas kidney (SPK) or an isolated pancreas (IP), is a successful treatment option for some people with diabetes; however, grafts at the highest risk of failing are difficult to identify, and a means of monitoring graft function effectively post-transplant is lacking. The work in this thesis aimed to identify factors associated with pancreas graft failure and explore metabolic features of early graft dysfunction. First, the published Pancreas Donor Risk Index tool was predictive using UK national data in SPK but not IP transplantation, suggesting greater influence of recipient and post-transplant factors in determining IP graft survival. HLA antibody monitoring showed that the development of de novo DSA post-transplant was associated with poorer graft outcomes, particularly in IP transplant; and assessment of ICA and GADA autoantibodies also showed a deleterious effect of pre-transplant autoantibody positivity in IP transplantation; suggesting IP recipients may be immunologically disparate from those receiving SPK. A retrospective analysis of early post-transplant oral glucose tolerance tests showed impaired glucose tolerance to be associated with later graft failure, and led to prospective studies examining glucose homeostasis in the early post-transplant period. Early impaired glucose tolerance was correlated to hyperglycaemia detected on continuous glucose monitoring. Analysis of patients pre- and post-transplant confirmed complete normalisation of glucagon concentrations early post-transplant and early impairment in insulin secretion. A comparison of the response to oral and intravenous glucose demonstrated that insulin secretion may be affected by the absence of the incretin effect early after pancreas transplantation, associated with reduced GLP-1 but normal GIP concentrations evident post-transplant. The incretin effect became established by 3 months post-transplant, and was associated with an increased GLP-1 response. Together these studies demonstrated novel features associated with high risk of graft failure which could help to identify patients who may benefit from therapeutic interventions aimed at improving graft outcomes.
APA, Harvard, Vancouver, ISO, and other styles
23

Elagab, Omer Yousif. "The legality of non-forcible counter-measures in international law." Thesis, University of Oxford, 1986. https://ora.ox.ac.uk/objects/uuid:bbb45168-8338-447a-bf4a-fe4e47834e3e.

Full text
Abstract:
The object of this thesis is to examine the legality of non-forcible counter-measures in bilateral contexts. The major questions addressed are as follows: (i) do counter-measures constitute an autonomous category of justification for wrongful conduct? (ii) if so, what are the conditions of the legality of that category? (iii) what are the more significant collateral constraints on the legality of counter-measures? and (iv) what is the extent to which policy considerations contribute towards the determination of the legality of counter-measures? The study begins with an Introduction which indicates the scope and the outline of the thesis, the terminology employed, and the approach adopted. Chapters One and Two deal with the historical development of the law of reprisals with special reference to non-forcible counter-measures. Chapter Three is entitled "The status of non-forcible counter-measures in customary international law since 1945: a preliminary sketch". Chapters Four, Five, and Six are devoted to examining the conditions of the legality of non-forcible counter-measures, viz., (i) breach, (ii) prior demand for reparation; and (iii) proportionality. Chapter Seven examines some of the more significant collateral constraints on the legality of counter-measures. In Chapter Eight an attempt will initially be made to examine the circumstances in which the performance of obligations under a treaty may be withheld as a counter-measure. The conclusions reached will then be compared and contrasted with the regime established by the Vienna Convention on the Law of Treaties. Chapter Nine considers the legality of counter-measures in the context of a commitment to peaceful settlement. Chapter Ten is concerned with the legality of economic coercion, and also with whether the legality of that concept has a bearing on the question of lawful counter-measures. The final Chapter summarises the major characteristics of the existing legal regime of counter-measures.
APA, Harvard, Vancouver, ISO, and other styles
24

Rindler, Johan Filip. "Lower Semicontinuity and Young Measures for Integral Functionals with Linear Growth." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:c4736fa2-ab51-4cb7-b1d9-cbab0ede274b.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Famfulík, Lukáš. "Modul pro sledování mobilních objektů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219329.

Full text
Abstract:
The aim of this thesis is to innovate the NCL 07 unit made by NAM system a.s. The new device is intended to have improved parameters and smaller dimensions, and it should be fully comparable to similar products on the market. The first part describes the features and the individual circuits of the NCL 07 unit. An analysis is then carried out to suggest modifications that reduce the dimensions of the PCB and enhance other parameters, including improvements to the software design. The theoretical part deals with the selection of the main measured quantities and with methods for verifying measurement accuracy, and it also describes the theoretical basis used in the subsequent sections of the thesis. The practical part presents the design procedures for the innovated product. In the conclusion, the new unit is tested and the results are compared with competing products.
APA, Harvard, Vancouver, ISO, and other styles
26

Gupta, Gaurav. "Robust digital watermarking of multimedia objects." PhD thesis, Australia : Macquarie University, 2008. http://hdl.handle.net/1959.14/28597.

Full text
Abstract:
Thesis (PhD)--Macquarie University, Division of Information and Communication Sciences, Department of Computing, 2008.
Bibliography: p. 144-153.
Introduction -- Background -- Overview of watermarking -- Natural language watermarking -- Software watermarking -- Semi-blind and reversible database watermarking -- Blind and reversible database watermarking -- Conclusion and future research -- Bibliography.
Digital watermarking has generated significant research and commercial interest in the past decade. The primary factors contributing to this surge are widespread use of the Internet with improved bandwidth and speed, regional copyright loopholes in terms of legislation; and seamless distribution of multimedia content due to peer-to-peer file-sharing applications. -- Digital watermarking addresses the issue of establishing ownership over multimedia content through embedding a watermark inside the object. Ideally, this watermark should be detectable and/or extractable, survive attacks such as digital reproduction and content-specific manipulations such as re-sizing in the case of images, and be invisible to the end-user so that the quality of the content is not degraded significantly. During detection or extraction, the only requirements should be the secret key and the watermarked multimedia object, and not the original unmarked object or the watermark inserted. Watermarking schemes that facilitate this requirement are categorized as blind. In recent times, reversibility of watermark has also become an important criterion. This is due to the fact that reversible watermarking schemes can provide security against secondary watermarking attacks by using backtracking algorithms to identify the rightful owner. A watermarking scheme is said to be reversible if the original unmarked object can be regenerated from the watermarked copy and the secret key.
This research covers three multimedia content types: natural language documents, software, and databases; and discusses the current watermarking scenario, challenges, and our contribution to the field. We have designed and implemented a natural language watermarking scheme that uses the redundancies in natural languages. As a result, it is robust against general attacks against text watermarks. It offers additional strength to the scheme by localizing the attack to the modified section and using error correction codes to detect the watermark. Our first contribution in software watermarking is identification and exploitation of weaknesses in the branch-based software watermarking scheme proposed in [71], and the software watermarking algorithm we present is an improved version of the existing watermarking schemes from [71]. Our scheme survives automated debugging attacks against which the current schemes are vulnerable, and is also secure against other software-specific attacks. We have proposed two database watermarking schemes that are both reversible and therefore resilient against secondary watermarking attacks. The first of these database watermarking schemes is semi-blind and requires the bits modified during the insertion algorithm to detect the watermark. The second scheme is an upgraded version that is blind and therefore does not require anything except a secret key and the watermarked relation. The watermark has an 89% probability of survival even when almost half of the data is manipulated. The watermarked data in this case is extremely useful from the users' perspective, since query results are preserved (i.e., the watermarked data gives the same results for a query as the unmarked data). -- The watermarking models we have proposed provide greater security against sophisticated attacks in different domains while providing sufficient watermark-carrying capacity at the same time. The false-positives are extremely low in all the models, thereby making accidental detection of watermark in a random object almost negligible. Reversibility has been facilitated in the later watermarking algorithms and is a solution to the secondary watermarking attacks. We shall address reversibility as a key issue in our future research, along with robustness, low false-positives and high capacity.
Mode of access: World Wide Web.
xxiv, 156 p. ill. (some col.)
APA, Harvard, Vancouver, ISO, and other styles
27

Gupta, Alok. "A Bayesian approach to financial model calibration, uncertainty measures and optimal hedging." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:6158b433-20b6-4f8b-9199-895ced574330.

Full text
Abstract:
In this thesis we address problems associated with financial modelling from a Bayesian point of view. Specifically, we look at the problem of calibrating financial models, measuring the model uncertainty of a claim and choosing an optimal hedging strategy. Throughout the study, the local volatility model is used as a working example to clarify the proposed methods. This thesis assumes a prior probability density for the unknown parameter in a model we try to calibrate. The prior probability density regularises the ill-posedness of the calibration problem. Further observations of market prices are used to update this prior, using Bayes law, and give a posterior probability density for the unknown model parameter. Resulting Bayes estimators are shown to be consistent for finite-dimensional model parameters. The posterior density is then used to compute the Bayesian model average price. In tests on local volatility models it is shown that this price is closer than the prices of comparable calibration methods to the price given by the true model. The second part of the thesis focuses on quantifying model uncertainty. Using the framework for market risk measures we propose axioms for new classes of model uncertainty measures. Similar to the market risk case, we prove representation theorems for coherent and convex model uncertainty measures. Example measures from the latter class are provided using the Bayesian posterior. These are used to value the model uncertainty for a range of financial contracts priced in the local volatility model. In the final part of the thesis we propose a method for selecting the model, from a set of candidate models, that optimises the hedging of a specified financial contract. In particular we choose the model whose corresponding price and hedge optimises some hedging performance indicator. The selection problem is solved using Bayesian loss functions to encapsulate the loss from using one model to price and hedge when the true model is a different model. Linkages are made with convex model uncertainty measures and traditional utility functions. Numerical experiments on a stochastic volatility model and the local volatility model show that the Bayesian strategy can outperform traditional strategies, especially for exotic options.
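As an illustration of the grid-based Bayesian updating and model-average pricing described above, the following sketch assumes a discretised parameter grid, a Gaussian pricing-error likelihood and hypothetical inputs (prior, model_prices, claim_prices); it is a generic sketch of Bayesian model averaging, not the thesis's local volatility implementation.

```python
import numpy as np

def bayesian_average_price(prior, model_prices, market_prices, sigma, claim_prices):
    """Posterior weights over a discretised parameter grid and the model-average price.

    prior         : (K,) prior weights for K candidate parameter values
    model_prices  : (K, N) prices of the N calibration instruments under each candidate
    market_prices : (N,)  observed market prices of the calibration instruments
    sigma         : assumed standard deviation of the Gaussian pricing errors
    claim_prices  : (K,)  price of the claim of interest under each candidate parameter
    """
    resid = model_prices - market_prices                  # calibration errors, shape (K, N)
    log_post = np.log(prior) - 0.5 * np.sum((resid / sigma) ** 2, axis=1)
    log_post -= log_post.max()                            # stabilise before exponentiating
    post = np.exp(log_post)
    post /= post.sum()                                    # Bayes' law on the grid
    return float(post @ claim_prices), post               # Bayesian model average price
```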
APA, Harvard, Vancouver, ISO, and other styles
28

Kim, Ji-in. "Relationships between different measures of glycaemia and their relevance to diabetic complications." Thesis, University of Oxford, 2006. https://ora.ox.ac.uk/objects/uuid:1004b345-fca6-4516-9f5c-b1c20cba72ee.

Full text
Abstract:
Diabetes is a chronic disease characterised by high blood glucose levels. Monitoring of blood glucose plays an important role in the management of diabetes. Although several different glycaemic measures are used in clinical practice, glycated haemoglobin (HbA1c) is increasingly accepted as the standard on which many clinical decisions are based. In this thesis, the relationship between HbA1c and different blood glucose measures will be examined and their relevance to diabetic complications will be investigated. This work focuses on people at increased risk of type 2 diabetes and on those with newly diagnosed type 2 diabetes. Using the blood glucose profiles available from continuous glucose monitoring, the relationships between HbA1c and derived blood glucose measures were studied. Mean blood glucose and blood glucose peaks equally predicted HbA1c. The correlation of fasting plasma glucose (FPG) with HbA1c was weaker than that of mean blood glucose. The haemoglobin glycation index (HGI) is defined as the difference between a subject's HbA1c and that expected from regression of HbA1c on FPG, and captures the variation in HbA1c not attributable to FPG. I used Early Diabetes Intervention Trial (EDIT) and UK Prospective Diabetes Study (UKPDS) data to determine whether HGI is reproducible and whether differences relate to potential confounders. In both studies, HGI was reproducible and not explained by the potential confounders examined. I examined whether there were additive effects of FPG, HbA1c or HGI on diabetic complications using UKPDS data, since it was found previously that either HbA1c or FPG was related to diabetic complications in UKPDS. For microvascular complications, both FPG and HbA1c were independent risk factors. When HGI was related to microvascular complications, HGI made an independent contribution adjusting for either FPG or HbA1c. For macrovascular complications, HbA1c but not FPG was an independent risk factor. HGI did not make an independent contribution to macrovascular complications adjusting for HbA1c. My work showed that the same HbA1c level does not necessarily have the same clinical meaning in two different subjects. There can be marked differences in individual levels of HbA1c relative to FPG values in people with type 2 diabetes. Thus, HbA1c levels should be interpreted with caution and treatment decisions need to be made considering individual glycaemic control. HGI may be useful in informing healthcare professionals of the likely magnitude and direction of difference between observed HbA1c levels and those expected in the light of prevailing blood glucose levels.
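The HGI definition above can be made concrete with a minimal sketch: regress HbA1c on FPG and take the residual. This is an illustration of the stated definition with illustrative variable names, not code from the thesis.

```python
import numpy as np

def haemoglobin_glycation_index(hba1c, fpg):
    """HGI = observed HbA1c minus the value predicted by regressing HbA1c on FPG.

    hba1c, fpg : paired 1-D arrays of measurements (illustrative units: % and mmol/L).
    """
    slope, intercept = np.polyfit(fpg, hba1c, deg=1)   # ordinary least squares line
    predicted = intercept + slope * fpg
    return hba1c - predicted    # positive HGI: HbA1c higher than FPG alone would predict
```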
APA, Harvard, Vancouver, ISO, and other styles
29

Muse, Katherine. "Developing and evaluating valid, reliable and usable measures of assessing competence in Cognitive Behavioural Therapy." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:bd6a2178-7627-4829-a25e-afcbf8414590.

Full text
Abstract:
Tools for measuring competence in delivering Cognitive Behavioural Therapy (CBT) provide a means of assessing the training of new CBT therapists and ensuring the quality of treatment provision within routine practice, provide a framework for delivering formative feedback, promote ongoing self-reflection, and are essential to establishing treatment integrity in research trials. As such, identifying an optimal strategy for assessing the competence with which CBT is delivered is crucial to the continued progression of the field. However, research in this area has been somewhat limited to date. Thus, there are at present no evidence-based best practice guidelines outlining the way CBT competence should be assessed. Furthermore, many of the assessment measures currently available have been widely criticised, indicating a need for improved tools for assessing CBT competence. To begin addressing this issue, the first two chapters of this thesis focus on reviewing and evaluating current assessment methods. Chapter one provides a systematic review of current methods of assessing CBT competence and chapter two outlines a qualitative exploration of experts’ understandings and experiences of assessing CBT competence. Findings from these studies provide tentative recommendations for practitioners and researchers assessing CBT competence. These initial studies also highlight ways in which the assessment of CBT competence could be improved and therefore provide a platform for guiding subsequent thesis chapters which focus on further developing existing assessment measures. Specifically, chapters three to six focus on the development and evaluation of a novel CBT competence rating scale: the Assessment of Core CBT Skills (ACCS). The ACCS builds upon currently available scales (especially the Cognitive Therapy Scale- Revised: CTS-R) to provide an assessment framework for assessors to deliver formative and summative feedback regarding therapists’ performance within observed CBT treatment sessions and for therapists to rate and reflect on their own performance. Development of the ACCS involved three key stages: 1- theory-driven scale development (chapter three), 2- an ‘expert’ review of the content validity, face validity, and usability of the scale (chapter four), and 3- an evaluation of the scale involving a pilot study examining its psychometric properties (chapter five) and a focus group examining its usability and utility (chapter six). Results from these studies indicate that the ACCS is a useful learning tool, is easy to use, has good psychometric properties, and offers an acceptable alternative to the CTS-R. Finally, chapter seven examines whether assessors require training in how to use the ACCS, concluding that simply reading the ACCS manual may be sufficient to achieve acceptable levels of reliability and usability. The results from the thesis are then drawn together in the final concluding comments in chapter eight, which discusses the findings within the broader context of the assessment of CBT competence.
APA, Harvard, Vancouver, ISO, and other styles
30

Shublaq, Nour. "Use of inertial sensors to measure upper limb motion : application in stroke rehabilitation." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:3b1709fb-8be6-4402-b846-096693fc75bc.

Full text
Abstract:
Stroke is the largest cause of severe adult complex disability, caused when the blood supply to the brain is interrupted, either by a clot or a burst blood vessel. It is characterised by deficiencies in movement and balance, changes in sensation, impaired motor control and muscle tone, and bone deformity. Clinically applied stroke management relies heavily on the observational opinion of healthcare workers. Despite the proven validity of a few clinical outcome measures, they remain subjective and inconsistent, and suffer from a lack of standardisation. Motion capture of the upper limb has also been used in specialised laboratories to obtain accurate and objective information, and monitor progress in rehabilitation. However, it is unsuitable in environments that are accessible to stroke patients (for example at patients’ homes or stroke clubs), due to the high cost, special set-up and calibration requirements. The aim of this research project was to validate and assess the sensitivity of a relatively low cost, wearable, compact and easy-to-use monitoring system, which uses inertial sensors in order to obtain detailed analysis of the forearm during simple functional exercises, typically used in rehabilitation. Forearm linear and rotational motion were characterised for certain movements on four healthy subjects and a stroke patient using a motion capture system. This provided accuracy and sensitivity specifications for the wearable monitoring system. With basic signal pre-processing, the wearable system was found to report reliably on acceleration, angular velocity and orientation, with varying degrees of confidence. Integration drift errors in the estimation of linear velocity were unresolved. These errors were not straightforward to eliminate due to the varying position of the sensor accelerometer relative to gravity over time. The cyclic nature of rehabilitation exercises was exploited to improve the reliability of velocity estimation with model-based Kalman filtering, and least squares optimisation techniques. Both signal processing methods resulted in an encouraging reduction of the integration drift in velocity. Improved sensor information could provide a visual display of the movement, or determine kinematic quantities relevant to the exercise performance. Hence, the system could potentially be used to objectively inform patients and physiotherapists about progress, increasing patient motivation and improving consistency in assessment and reporting of outcomes.
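The integration-drift problem described above can be illustrated with a simple sketch: velocity obtained by integrating acceleration drifts over time, and the cyclic structure of a rehabilitation exercise can be exploited to remove it. The sketch assumes gravity-compensated acceleration and known repetition boundaries at which the limb is momentarily at rest; it is a crude endpoint detrend for illustration, not the thesis's Kalman-filter or least-squares method.

```python
import numpy as np

def velocity_with_cycle_detrend(accel, dt, cycle_bounds):
    """Integrate acceleration to velocity, then remove drift per exercise cycle.

    accel        : (T,) linear acceleration with gravity already removed (assumption)
    dt           : sample interval in seconds
    cycle_bounds : list of (start, end) index pairs, one per repetition, at which the
                   limb is assumed to be momentarily at rest (zero-velocity constraint)
    """
    accel = np.asarray(accel, dtype=float)
    vel = np.cumsum(accel) * dt                 # naive integration accumulates drift
    corrected = vel.copy()
    for start, end in cycle_bounds:
        seg = vel[start:end]
        t = np.arange(len(seg))
        # straight line through the segment endpoints approximates the drift; subtract it
        drift = seg[0] + (seg[-1] - seg[0]) * t / max(len(seg) - 1, 1)
        corrected[start:end] = seg - drift
    return corrected
```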
APA, Harvard, Vancouver, ISO, and other styles
31

Brown, Phillip G. M. "2D ultrasound elastography as a functional measure of healing of the Achilles tendon in vivo." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:e2d0a97e-d557-4b5a-869a-36cbd33b9994.

Full text
Abstract:
The Achilles tendon is the largest tendon in the human body, which elastically stores and releases energy to facilitate walking and running. Tendons can suffer from a range of pathologies, most notably complete rupture, which affects athletes, physically active workers and the aged. There is a growing demand for in vivo methods of objectively measuring tendon health for aiding diagnosis, monitoring therapy and assessing new treatments. Knowledge of the changes in mechanical properties during the healing process is also limited, and new methods to accurately and consistently estimate these could provide insights into the healing process and guide future research efforts. This thesis presents the development and use of 2D ultrasound elastography, a quantitative strain estimation imaging technique, as a tool to measure changes in the tensile mechanical properties of the Achilles tendon. This technique performs frame-to-frame block matching of image texture to track motion in an ultrasound signal sequence and creates a strain estimation field from the spatial derivative of the motion. Elastography in the image-lateral direction of sagittal plane scans is of particular interest as this is in line with the longitudinal axis of the tendon, but it presents extra accuracy issues from out-of-plane motion and lower image spatial resolution. Tendon rupture also presents unique problems for image acquisition and analysis: patient pain and safety are important considerations, and disruption of the ultrasound texture can make 2D motion tracking more difficult. A new 2D elastography block matching algorithm, named 'AutoQual', was developed to enable accurate tracking of motion in the image-lateral direction and reduce the impact of artefacts and errors common in damaged Achilles tendon image sequences. It was shown to outperform a multiscale block matching method when tested using ultrasound sequences from in vivo and gelatine phantom experiments. The input parameters of this algorithm were then optimised using the phantom data for benchmarking. The AutoQual algorithm was then used to analyse ultrasound sequences from a 24-week longitudinal study of 21 subjects with ruptured Achilles tendons to assess lateral, axial and principal strains during controlled passive motion of the foot or axial palpation of the ultrasound probe. Lateral and principal strains from controlled dorsiflexion were shown to be more repeatable and more sensitive to change than axial strains with manual palpation. This experience with lateral strain imaging of ruptured Achilles tendons gave an increased knowledge of the strain imaging artefacts and features that can occur. These are described in detail in order that they may be further mitigated in quantitative analysis by optimising acquisition protocols, further amendment of the block tracking algorithm, or exclusion of erroneous areas when selecting regions of interest. Regularisation is a potential solution to some common artefacts, such as discontinuities from poor tracking in shadow regions. Regularisation of the lateral displacement fields is investigated using 2D bicubic smoothing splines. The regularisation parameters used are shown to have minimal effect on quantitative analysis and can aid visual clarity or reduce artefacts within certain settings. However, regularisation was also shown to cause large errors when parameters were set more aggressively.
Finally, it is identified that cumulative lateral strain measurement of the Achilles and other tendons is feasible but that future work is needed to further improve the quality of force and cross sectional area measurements in order to infer mechanical properties accurately. Repeatable high force motion protocols also need to be developed to measure healthy tendons and to ensure comparable results between different patients and research groups.
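As an illustration of the frame-to-frame block matching and strain estimation principle described in this abstract (not the AutoQual algorithm itself), the following sketch matches blocks between two frames by a sum-of-squared-differences criterion and differentiates the displacement fields to obtain strain; the block, search and step sizes are arbitrary.

```python
import numpy as np

def block_match_strains(frame_a, frame_b, block=16, search=8, step=16):
    """Track motion between two frames by block matching and derive strain fields.

    Returns (lateral_strain, axial_strain) on a coarse grid of block centres.
    """
    frame_a = np.asarray(frame_a, dtype=float)
    frame_b = np.asarray(frame_b, dtype=float)
    H, W = frame_a.shape
    ys = list(range(search, H - block - search, step))
    xs = list(range(search, W - block - search, step))
    ux = np.zeros((len(ys), len(xs)))   # lateral (x) displacement in pixels
    uy = np.zeros((len(ys), len(xs)))   # axial (y) displacement in pixels
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            ref = frame_a[y:y + block, x:x + block]
            best, best_d = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = frame_b[y + dy:y + dy + block, x + dx:x + dx + block]
                    ssd = np.sum((ref - cand) ** 2)   # sum of squared differences
                    if ssd < best:
                        best, best_d = ssd, (dy, dx)
            uy[i, j], ux[i, j] = best_d
    # strain is the spatial derivative of displacement (per pixel, hence / step)
    lateral_strain = np.gradient(ux, axis=1) / step
    axial_strain = np.gradient(uy, axis=0) / step
    return lateral_strain, axial_strain
```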
APA, Harvard, Vancouver, ISO, and other styles
32

Pace, Michele. "Stochastic models and methods for multi-object tracking." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2011. http://tel.archives-ouvertes.fr/tel-00651396.

Full text
Abstract:
Multi-object tracking aims to follow a set of moving targets from data obtained sequentially. The problem is particularly complex because of the unknown and time-varying number of targets, measurement noise, false alarms, detection uncertainty and data-association uncertainty. Probability Hypothesis Density (PHD) filters are a recent family of filters suited to this problem. These techniques differ from classical methods (MHT, JPDAF, particle filtering) by modelling the set of targets as a random finite set and by using the moments of its probability density. The first part of the thesis focuses on the application of PHD filters to maritime and airborne multi-target filtering in realistic scenarios and on the study of the numerical properties of these algorithms. The second part addresses the theoretical study of the branching processes associated with the multi-target filtering equations, analysing the stability properties and long-time behaviour of the semigroups of spatial branching intensities. We then analyse the exponential stability properties of a class of measure-valued equations encountered in nonlinear multi-target filtering. This analysis applies in particular to sequential Monte Carlo methods and particle algorithms in the framework of Bernoulli filters and PHD filters.
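For orientation, the PHD filter mentioned above propagates the first-order moment (intensity) of the multi-target random finite set. A standard form of the recursion found in the literature is reproduced below with generic notation, which is not necessarily the thesis's; the spawning term is omitted.

```latex
% Standard PHD recursion (no spawning): prediction of the intensity v_{k|k-1}
% and update v_k given the measurement set Z_k.
\begin{aligned}
v_{k|k-1}(x) &= \gamma_k(x)
  + \int p_{S,k}(\zeta)\, f_{k|k-1}(x \mid \zeta)\, v_{k-1}(\zeta)\, d\zeta ,\\[4pt]
v_{k}(x) &= \bigl(1 - p_{D,k}(x)\bigr)\, v_{k|k-1}(x)
  + \sum_{z \in Z_k}
    \frac{p_{D,k}(x)\, g_k(z \mid x)\, v_{k|k-1}(x)}
         {\kappa_k(z) + \int p_{D,k}(\xi)\, g_k(z \mid \xi)\, v_{k|k-1}(\xi)\, d\xi } .
\end{aligned}
```

Here gamma_k is the birth intensity, p_{S,k} and p_{D,k} the survival and detection probabilities, f_{k|k-1} the single-target transition density, g_k the measurement likelihood and kappa_k the clutter intensity.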
APA, Harvard, Vancouver, ISO, and other styles
33

Doel, Thomas MacArthur Winter. "Developing clinical measures of lung function in COPD patients using medical imaging and computational modelling." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:34bbf6fd-ea01-42a2-8e99-d1e4a3c765b7.

Full text
Abstract:
Chronic obstructive pulmonary disease (COPD) describes a range of lung conditions including emphysema, chronic bronchitis and small airways disease. While COPD is a major cause of death and debilitating illness, current clinical assessment methods are inadequate: they are a poor predictor of patient outcome and insensitive to mild disease. A new imaging technology, hyperpolarised xenon MRI, offers the hope of improved diagnostic techniques, based on regional measurements using functional imaging. There is a need for quantitative analysis techniques to assist in the interpretation of these images. The aim of this work is to develop these techniques as part of a clinical trial into hyperpolarised xenon MRI. In this thesis we develop a fully automated pipeline for deriving regional measurements of lung function, making use of the multiple imaging modalities available from the trial. The core of our pipeline is a novel method for automatically segmenting the pulmonary lobes from CT data. This method combines a Hessian-based filter for detecting pulmonary fissures with anatomical cues from segmented lungs, airways and pulmonary vessels. The pipeline also includes methods for segmenting the lungs from CT and MRI data, and the airways from CT data. We apply this lobar map to the xenon MRI data using a multi-modal image registration technique based on automatically segmented lung boundaries, using proton MRI as an intermediate stage. We demonstrate our pipeline by deriving lobar measurements of ventilated volumes and diffusion from hyperpolarised xenon MRI data. In future work, we will use the trial data to further validate the pipeline and investigate the potential of xenon MRI in the clinical assessment of COPD. We also demonstrate how our work can be extended to build personalised computational models of the lung, which can be used to gain insights into the mechanisms of lung disease.
APA, Harvard, Vancouver, ISO, and other styles
34

Jabangwe, Ronald. "Software Quality Evaluation for Evolving Systems in Distributed Development Environments." Doctoral thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00613.

Full text
Abstract:
Context: There is an overwhelming prevalence of companies developing software in global software development (GSD) contexts. The existing body of knowledge, however, falls short of providing comprehensive empirical evidence on the implication of GSD contexts on software quality for evolving software systems. Therefore there is limited evidence to support practitioners that need to make informed decisions about ongoing or future GSD projects. Objective: This thesis work seeks to explore changes in quality, as well as to gather confounding factors that influence quality, for software systems that evolve in GSD contexts. Method: The research work in this thesis includes empirical work that was performed through exploratory case studies. This involved analysis of quantitative data consisting of defects as an indicator for quality, and measures that capture software evolution, and qualitative data from company documentations, interviews, focus group meetings, and questionnaires. An extensive literature review was also performed to gather information that was used to support the empirical investigations. Results: Offshoring software development work, to a location that has employees with limited or no prior experience with the software product, as observed in software transfers, can have a negative impact on quality. Engaging in long periods of distributed development with an offshore site and eventually handing over all responsibilities to the offshore site can be an alternative to software transfers. This approach can alleviate a negative effect on quality. Finally, the studies highlight the importance of taking into account the GSD context when investigating quality for software that is developed in globally distributed environments. This helps with making valid inferences about the development settings in GSD projects in relation to quality. Conclusion: The empirical work presented in this thesis can be useful input for practitioners that are planning to develop software in globally distributed environments. For example, the insights on confounding factors or mitigation practices that are linked to quality in the empirical studies can be used as input to support decision-making processes when planning similar GSD projects. Consequently, lessons learned from the empirical investigations were used to formulate a method, GSD-QuID, for investigating quality using defects for evolving systems. The method is expected to help researchers avoid making incorrect inferences about the implications of GSD contexts on quality for evolving software systems, when using defects as a quality indicator. This in turn will benefit practitioners that need the information to make informed decisions for software that is developed in similar circumstances.
APA, Harvard, Vancouver, ISO, and other styles
35

Mittner, Ondřej. "Určování velikosti plochy a rozměrů vybraných objektů v obraze." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218637.

Full text
Abstract:
The master's thesis describes hardware opto-electronic instruments for contactless measurement of surfaces. It concentrates on instruments used for the opto-electrical conversion of the captured scene and on software processing of digital pictures. It presents selected methods of pre-processing, segmentation and subsequent final modifications of these pictures. It then deals with measuring the areas and sizes of selected objects in these pictures and converting the results from pixels to SI units; possible measurement deviations are described as well. A flow chart of a program for automatic and manual measurement of the areas and sizes of objects in a picture, which is commented in detail, is part of the thesis. The main product of this thesis is the application Merovo, which measures the area and dimensions of the objects contained in a picture. This application is analysed and described in detail in the thesis.
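A minimal sketch of the pixel-to-SI conversion described above, assuming a binary segmentation mask and a calibration factor (millimetres per pixel) obtained, for example, from a reference object of known size; the function names are illustrative and unrelated to the Merovo application.

```python
import numpy as np

def object_area_mm2(mask, mm_per_pixel):
    """Area of a segmented object: count foreground pixels, scale by calibrated pixel size.

    mask is a boolean array; mm_per_pixel is the edge length of one pixel in millimetres."""
    pixel_count = int(np.count_nonzero(mask))
    return pixel_count * mm_per_pixel ** 2      # mm^2; divide by 100 for cm^2

def object_extent_mm(mask, mm_per_pixel):
    """Width and height of the object's axis-aligned bounding box in millimetres."""
    ys, xs = np.nonzero(mask)
    width = (xs.max() - xs.min() + 1) * mm_per_pixel
    height = (ys.max() - ys.min() + 1) * mm_per_pixel
    return width, height
```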
APA, Harvard, Vancouver, ISO, and other styles
36

Rombach, Ines. "The handling, analysis and reporting of missing data in patient reported outcome measures for randomised controlled trials." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:1d038192-69ca-4d34-9974-1bc092466dee.

Full text
Abstract:
Missing data is a potential source of bias in the results of randomised controlled trials (RCTs), which can have a negative impact on guidance derived from them, and ultimately on patient care. This thesis aims to improve the understanding, handling, analysis and reporting of missing data in patient reported outcome measures (PROMs) for RCTs. A review of the literature provided evidence of discrepancies between recommended methodology and current practice in the handling and reporting of missing data. In particular, missed opportunities to minimise missing data, the use of inappropriate analytical methods and a lack of sensitivity analyses were noted. Missing data patterns were examined and found to vary between PROMs as well as across RCTs. Separate analyses illustrated difficulties in predicting missing data, resulting in uncertainty about the assumed underlying missing data mechanisms. Simulation work was used to assess the comparative performance of statistical approaches for handling missing data that are available in standard statistical software. Multiple imputation (MI) at either the item, subscale or composite score level was considered for missing PROMs data at a single follow-up time point. The choice of an MI approach depended on a multitude of factors, with MI at the item level being more beneficial than its alternatives for high proportions of item missingness. The approaches performed similarly for high proportions of unit non-response; however, convergence issues were observed for MI at the item level. Maximum likelihood (ML), MI and inverse probability weighting (IPW) were evaluated for handling missing longitudinal PROMs data. MI was less biased than ML when additional post-randomisation data were available, while IPW introduced more bias compared to both ML and MI. A case study was used to explore approaches to sensitivity analyses to assess the impact of missing data. It was found that trial results could be susceptible to varying assumptions about missing data, and the importance of interpreting the results in this context was reiterated. This thesis provides researchers with guidance for the handling and reporting of missing PROMs data in order to decrease bias arising from missing data in RCTs.
APA, Harvard, Vancouver, ISO, and other styles
37

Jebelli, Ali. "Development of Sensors and Microcontrollers for Underwater Robots." Thesis, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31283.

Full text
Abstract:
Nowadays, small autonomous underwater robots are strongly preferred for remote exploration of unknown and unstructured environments. Such robots allow the exploration and monitoring of underwater environments where a long-term underwater presence is required to cover a large area. Furthermore, reducing the robot size, embedding the electronics board inside and reducing cost are some of the challenges designers of autonomous underwater robots are facing. As a key device for the reliable operation-decision process of autonomous underwater robots, a relatively fast and cost-effective controller based on fuzzy logic and the proportional-integral-derivative method is proposed in this thesis. It efficiently models the nonlinear system behaviours largely present in robot operation, for which mathematical models are difficult to obtain. To evaluate its response, a fault-finding test approach was applied and the response of each task of the robot was depicted under different operating conditions. The robot's performance with all control programs combined and sensors included was also investigated as the number of program codes and inputs was increased.
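As a rough illustration of combining fuzzy logic with a proportional-integral-derivative controller, the sketch below scales the proportional gain with a single triangular membership on the error magnitude. The gains, the rule base and the error_range parameter are invented for illustration and do not describe the controller developed in the thesis.

```python
class FuzzyPID:
    """PID controller whose proportional gain is adjusted by a simple fuzzy rule on |error|."""

    def __init__(self, kp, ki, kd, error_range):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.error_range = error_range      # |error| at which the 'large error' set is fully active
        self.integral = 0.0
        self.prev_error = None

    def _fuzzy_gain(self, error):
        # membership of |error| in the 'large' set, clipped to [0, 1]
        mu_large = min(abs(error) / self.error_range, 1.0)
        # rule base: small error -> nominal gain, large error -> 1.5x gain (weighted-average defuzzification)
        return (1.0 - mu_large) * 1.0 + mu_large * 1.5

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self._fuzzy_gain(error) * self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a real vehicle each degree of freedom (depth, heading, pitch) would get its own loop and a richer rule base; the point here is only the structure of blending a fuzzy rule with the PID terms.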
APA, Harvard, Vancouver, ISO, and other styles
38

Schneider, Eric B. "Studies in historical living standards and health : integrating the household and children into historical measures of living standards and health." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:f2e55a37-c605-4aba-8a2e-3d699c6b82b7.

Full text
Abstract:
This dissertation attempts to integrate the household and children more fluidly into measures of well-being in the past. In part one, I develop a Monte Carlo simulation to test some of the assumptions of Allen’s welfare ratio methodology. These included his assumptions that family size was constant over time, that there were no female-headed households and that women and children did not participate in the labour force. After all of the adjustments, it appears that Allen’s welfare ratios underestimate the welfare ratios of a demographically representative group of families, especially if women and children’s labour force participation is included. However, the predicted distributions also highlight the struggles of agricultural labourers, who are given separate consideration. Even the average agricultural labourer’s family with women and children working would have had to rely on self-provisioning, gleaning, poor relief or the extension of the working year to make ends meet at the poorest point in their family life cycle. Part two adjusts Floud et al.’s estimates of calorie availability in the English economy from 1700 to 1909 for the costs of digestion, pregnancy and lactation. Taken together, these three additional costs reduced the amount of calories available by around 15 per cent in 1700 but only by 5 per cent in 1909 because of the changing composition of the English diet. Part three presents a new adaptive framework for studying changes in children’s growth patterns over time and a new methodology, longitudinal growth studies, for measuring gender disparities in health in the past. An adaptive framework for understanding growth provides a more parsimonious explanation for the vast catch-up growth achieved by slave children in the antebellum American South. The slave children were only able to achieve this catch-up growth because they were programmed for a tall height trajectory by relatively good conditions in utero. Finally, impoverished girls experienced greater catch-up growth than boys in two schools in late-nineteenth century Boston, USA and early-twentieth century London, suggesting that girls were deprived relative to boys before entering these institutions.
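The general shape of such a Monte Carlo exercise can be sketched as follows; every numerical value (wages, basket costs, participation rates, household weights) is a placeholder chosen for illustration and does not reproduce Allen's figures or the simulation in this dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_welfare_ratios(n_families=10_000, male_wage=250.0, basket_cost_per_adult=60.0,
                            female_participation=0.4, child_participation=0.3):
    """Monte Carlo welfare ratios with varying family size and optional women's/children's earnings.
    All monetary values and rates are placeholders for illustration only."""
    children = rng.poisson(2.5, size=n_families)              # number of dependent children
    adult_equivalents = 2.0 + 0.5 * children                  # husband + wife + children at half weight (assumption)
    income = np.full(n_families, male_wage)                   # male annual earnings
    income += (rng.random(n_families) < female_participation) * 0.4 * male_wage          # women's earnings
    income += (rng.random(n_families) < child_participation) * children * 0.1 * male_wage  # children's earnings
    cost = basket_cost_per_adult * adult_equivalents          # subsistence basket scaled to household size
    return income / cost                                      # welfare ratio: income over subsistence cost

ratios = simulate_welfare_ratios()
print(f"median welfare ratio: {np.median(ratios):.2f}, share below 1: {(ratios < 1).mean():.1%}")
```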
APA, Harvard, Vancouver, ISO, and other styles
39

Simpson, Andrew C. "Safety through security." Thesis, University of Oxford, 1996. http://ora.ox.ac.uk/objects/uuid:4a690347-46af-42a4-91fe-170e492a9dd1.

Full text
Abstract:
In this thesis, we investigate the applicability of the process algebraic formal method Communicating Sequential Processes (CSP) [Hoa85] to the development and analysis of safety-critical systems. We also investigate how these tasks might be aided by mechanical verification, which is provided in the form of the proof tool Failures-Divergences Refinement (FDR) [Ros94]. Initially, we build upon the work of [RWW94, Ros95], in which CSP treatments of the security property of non-interference are described. We use one such formulation to define a property called protection, which unifies our views of safety and security. As well as applying protection to the analysis of safety-critical systems, we develop a proof system for this property, which, in conjunction with the opportunity for automated analysis provided by FDR, enables us to apply the approach to problems of a sizable complexity. We then describe how FDR can be applied to the analysis of mutual exclusion, which is a specific form of non-interference. We investigate a number of well-known solutions to the problem, and illustrate how such mutual exclusion algorithms can be interpreted as CSP processes and verified with FDR. Furthermore, we develop a means of verifying the fault-tolerance of such algorithms in terms of protection. In turn, mutual exclusion is used to describe safety properties of geographic data associated with Solid State Interlocking (SSI) railway signalling systems. We show how FDR can be used to describe these properties and model interlocking databases. The CSP approach to compositionality allows us to decompose such models, thus reducing the complexity of analysing safety invariants of SSI geographic data. As such, we describe how the mechanical verification of Solid State Interlocking geographic data, which was previously considered to be an intractable problem for the current generation of mechanical verification tools, is computationally feasible using FDR. Thus, the goals of this thesis are twofold. The first goal is to establish a formal encapsulation of a theory of safety-critical systems based upon the relationship which exists between safety and security. The second goal is to establish that CSP, together with FDR, can be applied to the modelling of Solid State Interlocking geographic databases. Furthermore, we shall attempt to demonstrate that such modelling can scale up to large-scale systems.
APA, Harvard, Vancouver, ISO, and other styles
40

Karásek, Miroslav. "Aplikace rozšířené reality: Měření rozměrů objektů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-399167.

Full text
Abstract:
The goal of this diploma thesis is the design and implementation of an application for automated measurement of objects in augmented reality. It focuses on automating the entire process so that the user carries out the fewest possible manual actions. The proposed interface divides the measurement into several steps, in each of which it gives the user instructions for progressing to the next stage. The result is an Android application using ARCore technology. It is capable of determining the minimal bounding box of an object of general shape lying on a horizontal surface. The measurement error depends on ambient conditions and is in units of percent.
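The geometric core of such a measurement, finding the minimal bounding box of an object's footprint on the detected horizontal plane, can be sketched as follows (in Python rather than the Android/ARCore stack, and with a brute-force angle scan rather than whatever method the application actually uses); the height of the box would come separately from the vertical extent of the points above the plane.

```python
import numpy as np

def minimal_bounding_rectangle(points_xy, angle_step_deg=0.5):
    """Minimal-area bounding rectangle of 2-D points (the object footprint on the plane).

    Brute-force scan over rotation angles; returns (width, length, angle_deg).
    An exact solution would align candidate rectangles with convex-hull edges
    (rotating calipers), but the angle scan keeps the illustration short."""
    pts = np.asarray(points_xy, dtype=float)
    best = (np.inf, 0.0, 0.0, 0.0)                      # (area, width, length, angle)
    for angle in np.arange(0.0, 90.0, angle_step_deg):
        theta = np.deg2rad(angle)
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        rotated = pts @ rot.T                           # rotate the footprint
        extent = rotated.max(axis=0) - rotated.min(axis=0)
        area = extent[0] * extent[1]
        if area < best[0]:
            best = (area, extent[0], extent[1], angle)
    _, width, length, angle = best
    return width, length, angle
```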
APA, Harvard, Vancouver, ISO, and other styles
41

Bingue, Eugene W. P. "The Development of Reliable Metrics to Measure the Efficiency of Object-Oriented Dispatching using Ada 95 a High-Level Language implementing Hard-Deadline Real-time Programming." NSUWorks, 2002. http://nsuworks.nova.edu/gscis_etd/415.

Full text
Abstract:
The purpose of this study is to produce a metric that accurately captures the effects of real-time dispatching using object-oriented (OO) programming applied in the maintenance phase of the life cycle of hard real-time systems. The hypothesis presented is that object-oriented programming constructs can be applied in a manner that will have beneficial life-cycle maintenance effects while avoiding adverse timing side effects. This study will use complexity measurement instruments that calculate the cyclomatic complexity. This study will examine the dispatching time of each program, and utilize utilities to calculate the number of machine cycles for each program component. Coding techniques will be presented for various program design dilemmas that exercise the object-oriented dispatching features.
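For reference, McCabe's cyclomatic complexity mentioned above is computed from the control-flow graph as V(G) = E - N + 2P; a trivial sketch:

```python
def cyclomatic_complexity(edges, nodes, connected_components=1):
    """McCabe's cyclomatic complexity V(G) = E - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * connected_components

# e.g. a routine whose control-flow graph has 9 edges, 8 nodes and 1 component:
# V(G) = 9 - 8 + 2 = 3 (one independent path per decision, plus one)
print(cyclomatic_complexity(edges=9, nodes=8))
```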
APA, Harvard, Vancouver, ISO, and other styles
42

Kyriakides, Nicolas. "Judicial discretion and contempt power : two elements of equity that would benefit the EAPO and future EU-wide provisional and protective measures." Thesis, University of Oxford, 2016. http://ora.ox.ac.uk/objects/uuid:91c8379a-252c-475c-995d-7d71dbb0d24f.

Full text
Abstract:
A person filing a civil claim faces the risk of being unable to enforce a favourable judgment. This is because their opponent may dissipate his assets and consequently be unable to satisfy a judgment given against him. Several mechanisms seek to alleviate this risk by preserving the defendant's assets pending judgment. These are predominantly the civilian in rem order and the common law freezing order. Fundamental differences between the common and civil law traditions may be observed in the freezing order and its civilian counterpart. Primarily, these are to be found in the margin of discretion given to the judge and the sanctions against non-compliance. The latter issue is closely related to the entity against which an order is directed: in the common law it is directed against the person, while in the civil law, against the asset. The significantly diverse approaches in these areas show the different course each of the legal families has taken in the administration of justice. The problem of preserving assets pending judgment becomes more complicated when the assets are not located in the same country as the courts with jurisdiction on the merits. The recently introduced European Account Preservation Order (‘EAPO') regulation is a pre-judgment instrument which enables a litigant to obtain an order preventing the transfer of funds held by the respondent in a bank account within the EU. It is the first of what may become several EU-wide provisional and protective measures. At first glance, the EAPO resembles the continental model rather than its common law counterpart, and, thus, brings into the open the differences between the two traditions in the area of provisional and protective measures. This work examines whether the features of the common law tradition - which in fact derive from the law of equity - ie judicial discretion in granting or refusing relief and contempt of court sanctions, could improve the EAPO as well as other EU-wide provisional and protective measures that may follow. It is argued that greater judicial discretion and a contempt sanction, provided that they are kept within certain limits, would improve the EAPO and similar measures in terms of efficiency and fairness.
APA, Harvard, Vancouver, ISO, and other styles
43

Ovreiu, Elena. "Accurate 3D mesh simplification." Phd thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00838783.

Full text
Abstract:
Complex 3D digital objects are used in many domains such as animation films, scientific visualization, medical imaging and computer vision. These objects are usually represented by triangular meshes with many triangles. The simplification of those objects, in order to keep them as close as possible to the original, has received a lot of attention in recent years. In this context, we propose a simplification algorithm which is focused on the accuracy of the simplification. The mesh simplification uses edge collapses with vertex relocation by minimizing an error metric. Accuracy is obtained with the two error metrics we use: the Accurate Measure of Quadratic Error (AMQE) and the Symmetric Measure of Quadratic Error (SMQE). AMQE is computed as the weighted sum of squared distances between the simplified mesh and the original one. Accuracy of the measure of the geometric deviation introduced in the mesh by an edge collapse is given by the distances between surfaces. The distances are computed between sample points of the simplified mesh and the faces of the original one. SMQE is similar to the AMQE method but computed in both the direct and reverse directions, i.e. simplified-to-original and original-to-simplified. The SMQE approach is computationally more expensive than the AMQE, but the advantage of computing the AMQE in the reverse direction as well is the preservation of boundaries, sharp features and isolated regions of the mesh. For both measures we obtain better results than methods proposed in the literature.
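A possible formalisation of the two error metrics as described above (the notation is mine, not necessarily the authors'): with P(S) a set of sample points on the simplified mesh S, w_p per-point weights, and d(p, O) the distance from a sample point to the closest point of the original surface O,

```latex
% One-sided accurate measure: sample points of the simplified mesh S
% against the faces of the original mesh O, with per-point weights w_p.
E_{\mathrm{AMQE}}(S, O) = \sum_{p \in \mathcal{P}(S)} w_p \, d(p, O)^{2},
\qquad d(p, O) = \min_{q \in O} \lVert p - q \rVert .
% Symmetric measure: add the reverse direction (original sampled against simplified).
E_{\mathrm{SMQE}}(S, O) = E_{\mathrm{AMQE}}(S, O) + E_{\mathrm{AMQE}}(O, S).
```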
APA, Harvard, Vancouver, ISO, and other styles
44

Musil, Jan. "Plán opatření pro případ vzniku mimořádné události v objektu výrobního charakteru." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-232698.

Full text
Abstract:
This diploma thesis deals with creating a Plan of Measures in Case of Exceptional Incidents for manufacturing premises. The first part of the thesis is dedicated to the description of exceptional incidents, primarily accidents involving leaks of hazardous chemical substances, the legislative background of exceptional incidents and emergency planning, and civil protection as a whole within the Czech Republic; the individual measures of which the plan is assembled are then described. The Plan of Measures itself is created on the basis of an analysis of the hazardous properties of the chemical substances used in McBride, a.s., mainly acids and alcohol mixtures. The leak of these compounds was simulated in the ALOHA software, which gives information about the degree of threat to the inhabitants of the area. The suggested measures are designed to be practicable in the particular situation in the surroundings of McBride, a.s. The aim of this diploma thesis is not just to create a particular plan, but to describe the issues of emergency planning as a whole, especially for smaller subjects within a municipality that do not fall under the scope of Act No. 59/2006 Sb., and to create a tool that can help other authors of similar plans of measures to understand the underlying relations.
APA, Harvard, Vancouver, ISO, and other styles
45

Crocker, Helen. "Coeliac disease : health-related quality of life and patients' experiences of health care services." Thesis, University of Oxford, 2016. http://ora.ox.ac.uk/objects/uuid:7758bef9-d019-4b9f-b992-d7bf453ad427.

Full text
Abstract:
Coeliac disease (CD) is a chronic gastrointestinal condition, the only treatment for which is a gluten-free diet (GFD). Following a GFD is restrictive, burdensome, and can impact health-related quality of life (HRQOL). People with CD can experience long delays to diagnosis and evidence suggests large variations in follow-up care, but the relationship between health care experiences and HRQOL is unknown. The main aim of this research was to develop a patient-reported outcome measure and patient experience questionnaire, and use these to investigate the relationship between adults' experiences of health care services and HRQOL in CD. The questionnaires, named the Coeliac Disease Assessment Questionnaire (CDAQ) and the Coeliac Disease Patient Experience Questionnaire (CD-PEQ), were developed following qualitative interviews with adults with CD, and refined with input from experts, and cognitive interviews. The CDAQ was also subject to a translatability assessment to assess its linguistic and cultural translatability, and a cross-sectional survey to assist with item reduction and scale generation. Members of Coeliac UK (n=267) completed the CDAQ and CD-PEQ, together with the SF-36v2 and demographic questions as part of a postal survey. Psychological health, vitality, general health, and dietary burden were found to have the greatest impact on HRQOL, with physical health and social isolation the least affected. HRQOL was found to have a strong correlation with patients' experiences of health care services. Aspects most strongly related were: the provision of information; communication with HCPs; difficulty obtaining prescriptions; and GPs' knowledge. This research has identified aspects of health care services that are strongly related to HRQOL in CD. Health care providers are recommended to focus service improvement efforts on these areas. A reliable and valid disease-specific patient-reported outcome measure and patient experience questionnaire have been developed as part of this study. The CDAQ is suitable for use in research studies, including clinical trials, to assess HRQOL in CD.
APA, Harvard, Vancouver, ISO, and other styles
46

Sotomayor, Carlos [Verfasser], Ralf-Jürgen [Akademischer Betreuer] Dettmar, and Ulrich [Akademischer Betreuer] Klein. "Rotation Measure Synthesis as a tool to determine the intrinsic magnetic field structure of astrophysical objects / Carlos Sotomayor. Gutachter: Ralf-Jürgen Dettmar ; Ulrich Klein." Bochum : Ruhr-Universität Bochum, 2015. http://d-nb.info/107984340X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Liddle, Alexander David. "Failure of unicompartmental knee replacement." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:c5bd883f-7c6f-42fe-9231-68609acaf234.

Full text
Abstract:
Unicompartmental knee replacement (UKR) is the principal alternative to total knee replacement (TKR) in the treatment of end-stage knee osteoarthritis. It involves less tissue resection, resulting in lower rates of morbidity and faster recoveries compared to TKR. However, UKR has a significantly higher revision rate compared to TKR. As a result, whilst over a third of patients are eligible for UKR, only around 8% receive it. A comprehensive comparison of matched patients undergoing TKR and UKR was undertaken using a large dataset from the National Joint Registry for England and Wales (NJR). Failure rates (revision, reoperation, complications and mortality), length of stay and patient-reported outcomes (PROMs) were studied. Whilst patients undergoing TKR had lower reoperation and revision rates, they had higher rates of morbidity and mortality, longer hospital stays, and inferior PROMs compared to UKR. The main reason for revision in UKR was loosening. In view of the high revision rate in UKR, NJR data was studied to identify modifiable risk factors for failure in UKR. Important patient factors were identified including age, gender and pre-operative function. Surgeons with a higher UKR caseload had significantly lower revision rates and superior patient-reported outcomes. Increasing usage (offering UKR to a greater proportion of knee replacement patients) appears to be a viable method of increasing caseload and therefore of improving results. Surgeons with optimal usage (around 50% of patients, using appropriate implants) achieved revision/reoperation rates similar to matched patients undergoing TKR. Two clinical studies were conducted to establish whether the use of cementless fixation would improve fixation and reduce the revision rate of UKR. Cementless UKR was demonstrated to be safe and reliable, with PROMs similar or superior to those demonstrated in cemented UKR. Patients with suboptimal cementless fixation were examined and pre-disposing technical factors were identified. Finally, using NJR data, the effect of the introduction of cementless UKR on overall outcomes was examined. The number of cementless cases was small, and no significant effect on implant survival was demonstrated. However, patients undergoing cementless UKR demonstrated superior PROMs. These studies demonstrate that UKR has numerous advantages over TKR in terms of morbidity, mortality and PROMs. If surgeons perform high volumes of UKR (achievable by increasing their UKR usage), these advantages can be attained without the large difference in revision rates previously demonstrated. Cementless UKR is safe and provides superior fixation and outcomes in the hands of high-volume surgeons. Further work is needed to quantify the revision rate of cementless UKR, and to assess its results in the hands of less experienced surgeons.
APA, Harvard, Vancouver, ISO, and other styles
48

Moraes, Sofia Royer. "Abordagem GEOBIA para a classificação do uso e cobertura da terra em área urbana associadas ao desenvolvimento de framework para monitoramento de inundações no município de Lajeado - RS." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/180505.

Full text
Abstract:
The monitoring, forecasting and control of extreme events such as floods is essential, especially in urban settlements, owing to the greater population density, material assets, sanitation and infrastructure in these areas. This work aims to automatically classify urban land use and land cover in an orthophoto mosaic with very high spatial resolution (16 cm) covering the central district of Lajeado (Rio Grande do Sul State, Brazil), and subsequently to apply the result, together with street data and a Digital Elevation Model (DEM), to the structuring of an automated framework based on free platforms capable of monitoring flood levels in this urban area on a spatial and temporal scale. The decision tree classifiers Boosted C5.0, Random Forests and Classification and Regression Trees (CART) were tested. First, the following land use and land cover classes were identified: forest land; herbaceous land (grasses); bare soil; road system (pavement); metal roofs and light ceramic roofs; concrete and fibre cement roofs; dark metallic and ceramic roofs; and shade. The eCognition software was used to process seven segmentation levels of the orthophoto mosaic and to collect samples and attributes for each of these classes. The decision tree methods were trained in the R software. For the classification accuracy assessment, random check points were generated and compared with the classes of the three classified images in order to calculate the error matrix and the Kappa index. The image classified by the Random Forests algorithm presented the highest global accuracy (GA = 82.20%) and Kappa index (K = 0.79), followed by the images classified by the Boosted C5.0 (GA = 80.4%; K = 0.77) and CART (GA = 64.90%; K = 0.57) algorithms. The framework was based on the Encantado/Lajeado fluviometric regression equation. The results of this equation can be visualized as maps in a WebGIS interface, where the flooded areas and infrastructure in the downtown neighbourhood of Lajeado are simulated. Different historical flood levels were projected, and the model was validated by comparing the simulated data with measurements from a flood that occurred on 10 October 2015; the altimetric error obtained was less than 1 m. The framework of this study monitors the flood level for the Lajeado urban area up to 6 hours in advance, demonstrating the effectiveness of the simulation.
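For reference, the two accuracy statistics quoted above can be computed from the error (confusion) matrix of check points as in the following sketch; the matrix layout (reference classes in rows, predicted classes in columns) is an assumption of the example.

```python
import numpy as np

def overall_accuracy_and_kappa(confusion):
    """Overall (global) accuracy and Cohen's Kappa from a square error (confusion) matrix."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    observed = np.trace(confusion) / total                                    # global accuracy
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total ** 2
    kappa = (observed - expected) / (1.0 - expected)                          # chance-corrected agreement
    return observed, kappa
```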
APA, Harvard, Vancouver, ISO, and other styles
49

Tylšarová, Soňa. "Metodika ohledání nemovitostí pro ocenění jednotlivých typů objektů užívaných pro bydlení." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2011. http://www.nusl.cz/ntk/nusl-232572.

Full text
Abstract:
This diploma thesis compiles the source documents and a method of local investigation for valuing the individual types of property intended for housing. It focuses on the property inspection required for the individual types of valuation and on the documentation necessary to carry it out. As part of the thesis, a local inspection of a family house and of a flat was carried out. The outcome of the thesis is a procedure for property inspection and for securing the source documents that complete the information about the property.
APA, Harvard, Vancouver, ISO, and other styles
50

Jin, Lei. "Particle systems and SPDEs with application to credit modelling." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:07b29609-6941-4aa9-b4bc-29e7b4821b82.

Full text
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography