
Dissertations / Theses on the topic 'Content accuracy'



Consult the top 50 dissertations / theses for your research on the topic 'Content accuracy.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Apodaca, Andrew Thomas. "Content versus Context in the Accuracy of Episodic Memories." Thesis, The University of Arizona, 2009. http://hdl.handle.net/10150/192280.

2

Mohammadzadeh, Hadi. "Improving Retrieval Accuracy in Main Content Extraction from HTML Web Documents." Doctoral thesis, Universitätsbibliothek Leipzig, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-130500.

Abstract:
The rapid growth of text-based information on the World Wide Web and the variety of applications making use of this data motivate the need for efficient and effective methods to identify and separate the "main content" from additional content items, such as navigation menus, advertisements, design elements or legal disclaimers. Firstly, in this thesis, we study, develop, and evaluate R2L, DANA, DANAg, and AdDANAg, a family of novel algorithms for extracting the main content of web documents. The main concept behind R2L, which also provided the initial idea and motivation for the other three algorithms, is to exploit particularities of Right-to-Left languages for obtaining the main content of web pages. As the English character set and the Right-to-Left character sets are encoded in different intervals of the Unicode character set, we can efficiently distinguish the Right-to-Left characters from the English ones in an HTML file. This enables the R2L approach to recognize areas of the HTML file with a high density of Right-to-Left characters and a low density of characters from the English character set. Having recognized these areas, R2L can successfully separate out the Right-to-Left characters. The first extension of R2L, DANA, improves the effectiveness of the baseline algorithm by employing an HTML parser in a post-processing phase of R2L to extract the main content from areas with a high density of Right-to-Left characters. DANAg is the second extension of R2L and generalizes the idea of R2L to render it language-independent. AdDANAg, the third extension of R2L, integrates a new preprocessing step to normalize the hyperlink tags. The presented approaches are analyzed in terms of efficiency and effectiveness. We compare them to several established main content extraction algorithms and show that we extend the state of the art in terms of both efficiency and effectiveness. Secondly, automatically extracting the headline of web articles has many applications. We develop and evaluate a content-based and language-independent approach, TitleFinder, for unsupervised extraction of the headline of web articles. The proposed method achieves high performance in terms of effectiveness and efficiency and outperforms approaches operating on structural and visual features.
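The character-density idea behind R2L can be made concrete with a short sketch. The following is a minimal illustration, not the implementation from the thesis: the regex-based tag stripping, the fixed window size and threshold, and the restriction to the Hebrew and Arabic Unicode blocks are all simplifying assumptions.

```python
# Sketch of the R2L idea: locate regions of an HTML document whose
# alphabetic characters are predominantly Right-to-Left, exploiting the
# fact that RTL scripts occupy distinct Unicode ranges.
import re

def is_rtl(ch: str) -> bool:
    # Hebrew (U+0590-U+05FF) and Arabic (U+0600-U+06FF) blocks only;
    # Unicode defines further RTL ranges not covered in this sketch.
    return "\u0590" <= ch <= "\u05FF" or "\u0600" <= ch <= "\u06FF"

def rtl_dense_regions(html: str, window: int = 200, threshold: float = 0.5):
    """Return (start, end) offsets of fixed-size windows whose density of
    RTL characters among alphabetic characters exceeds the threshold."""
    text = re.sub(r"<[^>]+>", " ", html)  # crude tag stripping, sketch only
    regions = []
    for start in range(0, len(text), window):
        chunk = text[start:start + window]
        letters = [c for c in chunk if c.isalpha()]
        if letters and sum(is_rtl(c) for c in letters) / len(letters) > threshold:
            regions.append((start, start + window))
    return regions
```

In the thesis's terms, such high-density regions are the candidates from which the main content is separated; DANA then refines them with an HTML parser.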
3

Steiner, Albert. "Accuracy, alarm limits and rise times of twelve oxygen analysers." [S.l : s.n.], 1995. http://www.ub.unibe.ch/content/bibliotheken_sammlungen/sondersammlungen/dissen_bestellformular/index_ger.html.

4

White, Newel Kimball. "Accuracy and Bias of TDR Measurements in Compacted Sands." Scholar Commons, 2004. https://scholarcommons.usf.edu/etd/1300.

Abstract:
It is essential to properly monitor in-situ soil compaction properties during most earthwork construction projects. Traditional in-situ soil compaction monitoring methods are often limited in their application. As a result, new methods are being developed to more accurately measure in-situ compaction parameters. Time domain reflectometry (TDR) is one such method. Relying on the propagation of an electromagnetic wave through the soil sample, TDR can be used to measure both in-situ moisture content and soil dry density. Although TDR is relatively new to the field of geotechnical engineering, it has previously been implemented in other fields with success. Researchers at Purdue University have made several advances to further incorporate TDR technology into the field of geotechnical engineering, and as a result an innovative TDR measurement system has been developed for compaction control monitoring. The method was standardized in the form of ASTM D 6780 in 2002. Further advancements led to an improved method referred to as the Purdue one-step TDR method. Research has indicated that the ASTM TDR method is sufficiently accurate for compaction monitoring applications. A comparison between the ASTM TDR method and traditional methods was carried out to evaluate the accuracy of the TDR method relative to traditional methods. To further expand the application of the TDR method, a correlation was developed between the TDR spike-driving process and the in-situ CBR test. A comprehensive review of previous research was conducted to examine recent advancements leading to the improved Purdue one-step method. A study was also performed to evaluate the effect of variable pore fluid conductivity on the calibration of the Purdue one-step method.
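To make the measurement principle concrete: the probe's two-way travel time yields an apparent dielectric constant, which an empirical relation maps to volumetric water content. The sketch below uses Topp's (1980) polynomial, a standard relation in TDR practice but not necessarily the calibration used in this thesis; the probe length and travel time are invented for the example.

```python
# Hedged sketch of the basic TDR relations: apparent dielectric constant
# from the two-way travel time along the probe, then Topp's empirical
# polynomial for volumetric water content.
C = 2.998e8  # speed of light in vacuum, m/s

def apparent_dielectric(travel_time_s: float, probe_length_m: float) -> float:
    # Ka = (c * t / (2 * L))**2: the wave traverses the probe twice
    return (C * travel_time_s / (2.0 * probe_length_m)) ** 2

def topp_water_content(ka: float) -> float:
    # Topp et al. (1980) empirical fit for mineral soils
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka**2 + 4.3e-6 * ka**3

ka = apparent_dielectric(travel_time_s=2.0e-9, probe_length_m=0.15)
print(f"Ka = {ka:.1f}, theta_v = {topp_water_content(ka):.3f}")
```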
5

Givon, Sharon. "Predicting and using social tags to improve the accuracy and transparency of recommender systems." Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/5770.

Abstract:
This thesis describes work on using content to improve recommendation systems. Personalised recommendations help potential buyers filter information and identify products that they might be interested in. Current recommender systems are based mainly on collaborative filtering (CF) methods, which suffer from two main problems: (1) the ramp-up problem, where items that do not have a sufficient amount of meta-data associated with them cannot be recommended; and (2) lack of transparency due to the fact that recommendations produced by the system are not clearly explained. In this thesis we tackle both of these problems. We outline a framework for generating more accurate recommendations that are based solely on available textual content or in combination with rating information. In particular, we show how content in the form of social tags can help improve recommendations in the book and movie domains. We address the ramp-up problem and show how in cases where they do not exist, social tags can be automatically predicted from available textual content, such as the full texts of books. We evaluate our methods using two sets of data that differ in product type and size. Finally we show how once products are selected to be recommended, social tags can be used to explain the recommendations. We conduct a web-based study to evaluate different styles of explanations and demonstrate how tag-based explanations outperform a common CF-based explanation and how a textual review-like explanation yields the best results in helping users predict how much they will like the recommended items.
6

Meneguette, Arlete Aparecida Correia. "Cartographic accuracy and information content of space imagery for digital map compilation and map revision." Thesis, University College London (University of London), 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295491.

7

Huisman, Johan Alexander. "Measuring soil water content with time domain reflectometry and ground-penetrating radar: accuracy, reproducibility and feasibility." [S.l. : Amsterdam : s.n.] ; Universiteit van Amsterdam [Host], 2002. http://dare.uva.nl/document/64044.

8

Schinazi, Victor Roger. "Representing space : the development, content and accuracy of mental representations by the blind and visually impaired." Thesis, University College London (University of London), 2008. http://discovery.ucl.ac.uk/1445839/.

Abstract:
This thesis reports on two studies on the perception and cognition of space by individuals who are blind and visually impaired. Research was conducted with students from Dorton College at the Royal London Society for the Blind (RLSB) in Kent. The first experiment examined the content and accuracy of mental representations of a well-known environment. Students walked a route around the RLSB campus and learned the position of ten buildings and structures. They were then asked to make pointing judgments, estimate distances and complete a spatial cued model of the campus. The second experiment considered the wayfinding strategies and spatial coding heuristics used to explore a complex novel environment. Students were asked to explore a maze and learn the position of six different locations. Their search patterns were recorded and analyzed using Geographic Information Systems (GIS) software. Students were tested using the same methods as in the previous experiment and their performance was related to the type and frequency of strategies used during exploration. Results were complemented with a mobility questionnaire, a low vision quality of life questionnaire and data from a literacy and numeracy assessment as well as ethnographic material collected by the author during the two years spent working and living at the RLSB. The thesis begins with a discussion of disability and society framed within the context of geography, urban planning and design. The concepts of blindness and visual impairment are then examined with particular attention given to the psychosocial implications of visual loss. This is followed by a discussion of growth and development, and an in-depth review of research on the development, content and accuracy of mental representations by the blind and visually impaired. Finally, the methods used to collect and analyse data for both experiments are considered in light of individual differences and the inadequacy of some statistical techniques to account for the heterogeneous nature of visual impairment. Results from the first experiment revealed significant differences in the accuracy and content of mental representations between the sighted, visually impaired and blind groups for the pointing and model construction tasks. Performance in the distance estimation task was similar across groups. Large individual differences were identified, with the performance of individuals in the same group varying according to the type and requirement of the task. Results from the second experiment also revealed significant differences between the different groups, this time for all three tasks. Here again, large individual differences were found within each group. An analysis of distortions revealed that, despite a disparity in accuracy, the blind and visually impaired shared many of the systematic distortions typically found in the mental representations of sighted individuals, further confirming their ability to develop functional mental representations of space. Performance in the pointing, distance estimation and model construction tasks was also related to the type and frequency of strategies used to explore the maze, with the best performers using a combination of egocentric and allocentric strategies. In general, results from the two experiments support the amodal notion that the construction of accurate mental representations of space is not limited to any particular sensory modality but is facilitated by the visual system. They also emphasize the need for mutually supportive techniques that incorporate both quantitative and qualitative methods in the collection and analysis of cognitive data.
9

Runkles, Brian David. "A study on the calibration and accuracy of the one-step TDR method." [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001701.

10

Dayaram, Moti Michael. "The extent to which teachers' judgements are influenced by linguistic accuracy when grading English compositions for content." Thesis, Hong Kong : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B14778014.

11

Zhao, Richard Folger. "Can model-based forecasts predict stock market volatility using range-based and implied volatility as proxies?" Master's thesis, Instituto Superior de Economia e Gestão, 2017. http://hdl.handle.net/10400.5/13917.

Abstract:
This thesis attempts to evaluate the performance of parametric time series models and the RiskMetrics methodology to predict volatility. Range-based price estimators and model-free implied volatility are used as a proxy for actual ex-post volatility, with data collected from ten prominent global volatility indices. To better understand how volatility behaves, different models from the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) class were selected with Normal, Student-t and Generalized Error distribution (GED) innovations. A fixed rolling window methodology was used to estimate the models and predict the movements of volatility and, subsequently, their forecasting performances were evaluated using loss functions and regression analysis. The findings are not clear-cut; there does not seem to be a single best performing GARCH model. Depending on the indices chosen, for the range-based estimator, the APARCH (1,1) model with normal distribution overall outperforms the other models, with the noticeable exception of HSI and KOSPI, where RiskMetrics seems to take the lead. When it comes to implied volatility prediction, GARCH (1,1) with Student-t performs relatively well, with the exception of the UKX and SMI indices, where GARCH (1,1) with Normal innovations and GED seem to do well respectively. Moreover, we also find evidence that all volatility forecasts are somewhat biased but that they bear information about the future volatility.
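As a point of reference for the model class named in the abstract, here is a minimal sketch of the GARCH(1,1) variance recursion behind a one-step-ahead volatility forecast. The parameter values and return series are illustrative assumptions, not estimates from the thesis.

```python
# GARCH(1,1): sigma2_t = omega + alpha * eps2_{t-1} + beta * sigma2_{t-1}
def garch11_forecast(returns, omega=1e-6, alpha=0.08, beta=0.90):
    """Filter a return series through the GARCH(1,1) recursion and
    return the one-step-ahead conditional variance forecast."""
    sigma2 = omega / (1.0 - alpha - beta)  # unconditional variance as seed
    for r in returns:
        sigma2 = omega + alpha * r * r + beta * sigma2
    return sigma2

daily_returns = [0.001, -0.004, 0.002, -0.010, 0.003]
print(garch11_forecast(daily_returns) ** 0.5)  # forecast daily volatility
```

In a fixed rolling window scheme, the parameters are re-estimated on each window before producing the next forecast.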
12

Raina, Seemin. "Critical Content Analysis of Postcolonial Texts: Representations of Muslims within Children's and Adolescent Literature." Diss., The University of Arizona, 2009. http://hdl.handle.net/10150/194400.

Abstract:
This study is based on 72 children's and young adult books that met the criteria of being about Muslims and being published and circulated in the U.S. They divide by genre into 49 contemporary realistic fiction, 6 historical fiction, and 17 autobiographies, biographies, and memoirs. In-depth reading and coding were used to identify patterns based on a theoretical frame of postcolonial theory and the lens of cultural authenticity. The exploration of ideas focuses on the following research questions related to children's and adolescent literature published and distributed in the U.S. that depicts Muslim cultures: What are the overall characteristics of the books? What are the background experiences of the authors, illustrators, and translators who write and distribute literature within the U.S. that reflects Muslim cultures? How do the genres of contemporary realistic fiction, historical fiction, and biographies published for adolescents and children within the U.S. represent and frame the varied Muslim cultures? What are the relationships between the background experiences of the authors and the representations of Muslim cultures in their books? This work is grounded in the assumption that Muslims are presented in a certain manner in popular culture and literature in the U.S., and thus postcolonial theory is relevant in unpacking issues within the literature about these people. This theory draws on these suppositions to unveil how knowledge is constructed and circulated in dealing with global power relations. It also sheds light on how the identities of natives become hybrids as the process of colonization in certain cases impacts the psyche of the inhabitants of these regions. This study is a 'critical content analysis', examining how texts are grounded in the social, cultural, and political contexts in which they are created and read. Content analyses examine what texts are about, considering the content from a particular perspective. This method scaffolds and explains my research, supporting my analysis of the texts through postcolonial perspectives to observe how Muslims are portrayed within adolescent and children's literature in the U.S.
13

Michaud, Danielle. "The differential influence of knowledge of signals to importance on eighth graders' accuracy in representing content and organization of essays." Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61926.

14

Mohammadzadeh, Hadi [Verfasser], Gerhard [Akademischer Betreuer] Heyer, Gerhard [Gutachter] Heyer, and Jinan [Gutachter] Fiaidhi. "Improving Retrieval Accuracy in Main Content Extraction from HTML Web Documents / Hadi Mohammadzadeh ; Gutachter: Gerhard Heyer, Jinan Fiaidhi ; Betreuer: Gerhard Heyer." Leipzig : Universitätsbibliothek Leipzig, 2013. http://d-nb.info/1237818303/34.

15

Gendron, Marie-Josée. "The effects of arousal on memorial accuracy, a comparison of arousal as part of content material and arousal as part of contextual environment." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ54381.pdf.

16

Kosalla, Marc [Verfasser], Michael [Akademischer Betreuer] Raupach, and Wolfgang [Akademischer Betreuer] Breit. "Critical chloride content of reinforcing steel in concrete : influence of anodic polarization, steel/concrete interface quality and sampling accuracy / Marc Kosalla ; Michael Raupach, Wolfgang Breit." Aachen : Universitätsbibliothek der RWTH Aachen, 2018. http://d-nb.info/1192375416/34.

17

Mousah, Alkir. "Effects of filler content and coupling agents on the mechanical properties and geometrical accuracy of selective laser sintered parts in glass bead-filled polyamide 12 composites." Thesis, Cardiff University, 2011. http://orca.cf.ac.uk/11094/.

Abstract:
Improvements in existing materials and the development of new materials for use in selective laser sintering are continually being pursued in the industrial and academic domains. This research will focus on the possibilities available for improving the mechanical properties and geometric accuracy of selective laser sintered parts produced from non-commercial polyamide 12 (PA12) composites. The specific material of interest is PA 12 filled with glass beads. This thesis describes a study of the relationship between filler particles, the polymer matrix and processing parameters and their influence on the mechanical properties and geometric accuracy of the composite. The aim of the study is to better understand the effect of coupling agents on the above mentioned properties as there is a lack of information in this area. Most of the experiments described in the thesis were performed with PA 12 filled with coated and uncoated glass beads. The production of test samples was carried out on a selective laser sintering machine (DTM Sinterstation 2000). Knowledge about different machine-material combinations has been extended by performing additional PA 12 composite experiments. The accessible information from PA 12 and glass-filled PA 12 (PA12/GF) manufacturer’s data was used as a reference in comparison and assessment of the results obtained from the new machine-material combinations. This research has shown that adding coated glass beads to PA 12 improves the tensile strength and elastic modulus but reduces the impact strength and ductility of the resulting material. Under suitable processing conditions, the geometrical accuracy of sintered parts also improves with the addition of glass beads. The work has also shown that good interfacial bonding between the polyamide matrix and the glass beads, particularly when a coupling agent is used, is a likely cause for the observed improvements.
18

Antoniadis, Antonios. "Moisture calibration of an R.F based inline moisture sensor : An inline moisture sensor based on radio wave attenuation, Microtec M3 Scan, was calibrated to maximise correlation between real water content in wood and received signal." Thesis, Luleå tekniska universitet, Träteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-77359.

Abstract:
Sensors based on radio waves are used for inline moisture determination in the sawn wood industry. Placed at the final sorting line, such a device can be invaluable, giving the operators real-time information on the moisture content of the boards passing through. This information can then be used to improve process control both upstream and downstream. The equipment must first be trained to correlate radio wave attenuation and phase shift to water content.
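To illustrate the calibration idea in its simplest form, the sketch below fits a linear least-squares map from attenuation readings to reference moisture content measured by oven drying. The actual calibration of the Microtec M3 Scan in the thesis also involves phase shift and further factors; every number below is invented.

```python
# Hedged sketch: linear calibration from sensor attenuation to moisture
# content, fitted against oven-dry reference measurements.
import numpy as np

attenuation_db = np.array([12.1, 15.4, 18.9, 22.3, 25.8])  # sensor readings
reference_mc = np.array([8.0, 10.5, 13.2, 15.9, 18.4])     # moisture content, %

slope, intercept = np.polyfit(attenuation_db, reference_mc, 1)

def predict_mc(attenuation: float) -> float:
    return slope * attenuation + intercept

print(predict_mc(20.0))  # predicted moisture content for a new board
```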
19

Landa, Arroyo César. "The Inquiry Commissions and the Differences between the Tineo Cabrera & Toledo’s Judgments." IUS ET VERITAS, 2016. http://repositorio.pucp.edu.pe/index/handle/123456789/123067.

Abstract:
In this interview, the author addresses the differences between the Tineo Cabrera and Toledo judgments, which turn on the clarifications made by the Constitutional Court regarding fundamental rights, such as prior and detailed notification and the lifting of banking secrecy, in parliamentary proceedings. He also gives his opinion on the performance of the inquiry commissions and the improvements that must be implemented so that they act objectively and not arbitrarily.
20

Nezval, Jiří. "Odhad přesnosti řečových technologií na základě měření signálové kvality a obsahové bohatosti audia." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-413168.

Abstract:
This thesis presents a theoretical analysis of the origin of speech, introduces applications of speech technologies, and explains the contemporary approach to the phonetic transcription of speech recordings. Furthermore, it describes metrics for assessing the quality of audio recordings, which are split into two discrete classes: the first groups signal quality metrics, while the other groups content richness metrics. The first goal of the practical section is to create a statistical model for predicting the accuracy of machine transcription of speech recordings based on a measurement of their quality. The second goal is to evaluate which partial metrics are the most essential for predicting the accuracy of machine transcription.
21

Wang, Zhewei. "Punishment and accuracy level in contests." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4465.

Abstract:
In the literature on contests, punishments have received much less attention than prizes. One possible reason is that punishing the bottom player(s) in a contest where all contestants are not allowed to quit, while effective in increasing contestants' total effort, often violates individual rationality constraints. But what will happen in an open contest where all potential contestants can choose whether or not to participate? In chapter 1, we study a model of this type and allow the contest designer to punish the bottom participant according to their performance. We conclude that punishment is often not desirable (optimal punishment is zero) when the contest designer wants to maximize the expected total effort, while punishment is often desirable (optimal punishment is strictly positive) when the contest designer wants to maximize the expected highest individual effort. In the literature on imperfectly discriminating contests, researchers normally assume that the contest designer has a certain level of accuracy in choosing the winner, which can be represented by the discriminatory power r in the Power Contest Success Function (the Power CSF, proposed by Tullock in 1980). With symmetric contestants, it is well known that increasing accuracy (r) always increases total effort when the pure-strategy equilibrium exists. In chapter 2, we look at the cases where the contestants are heterogeneous in ability. We construct an equilibrium set on r > 0, where a unique pure-strategy equilibrium exists for any r below a critical value and a mixed-strategy equilibrium exists for any r above this critical value. We find that if the contestants are sufficiently different in ability, there always exists an optimal accuracy level for the contest designer. Additionally, as we increase the difference in their abilities, the optimal accuracy level decreases. The above conclusions provide an explanation for many phenomena in the real world and may give guidance in some applications. In chapter 3, we propose the Power Contest Defeat Function (the Power CDF), which eliminates one player at a time over successive rounds. We show that the Power CDF has the same good qualities as the Power Contest Success Function (the Power CSF) and is more realistic in some cases. We look at both the Power CSF mechanism (selecting winners in sequence) and the Power CDF mechanism (selecting losers in sequence) and show that punishments increase expected total efforts significantly. More interestingly, we also find that when the contestants' effort levels are different, the Power CDF mechanism is more accurate in finding the correct winner (the one who makes the greatest effort) and the Power CSF mechanism is more accurate in finding the correct loser (the one who makes the smallest effort).
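For reference, the Power CSF named in the abstract is Tullock's contest success function: with discriminatory power r, contestant i wins with probability

```latex
p_i(x_1,\dots,x_n) \;=\; \frac{x_i^{\,r}}{\sum_{j=1}^{n} x_j^{\,r}}, \qquad r > 0,
```

where x_j is the effort of contestant j. A larger r makes the contest more discriminating, i.e., more accurate in rewarding the higher effort.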
22

König, Immanuel [Verfasser]. "An algorithmic approach to increase the context prediction accuracy by utilizing multiple context sources / Immanuel König." Kassel : Universitätsbibliothek Kassel, 2017. http://d-nb.info/1155326016/34.

23

Wall, Helen. "Conversation with a cue : personality judgments and observer accuracy across context." Thesis, Lancaster University, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.618819.

Abstract:
The impact of context on the 'accuracy' of zero-acquaintance personality judgments is examined to understand when and how context affects such judgments. The thesis begins by conceptualising 'context' and discussing its potential impact on the judgment process. It then describes the methodology used in this thesis. Study 1 examines the impact of context richness on accuracy and shows that more information does not always result in better judgements. Study 2 shows that differences in interaction task do not impact on information quality and subsequent accuracy. Study 3 analyses the impact of context on 'behaviour' and shows that a judge's perspective impacts on the cues employed in judgments. Study 4 found that judge characteristics relate to specific aspects of context, not context per se. Study 5 found that trait-similarity between target and judge negatively impacts on accuracy. The thesis concludes with a field study in which the findings were largely replicated.
24

Hoffner, Rebecca Ann. "Measuring Personality in Context: Improving Predictive Accuracy in Selection Decision Making." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/37859.

Abstract:
This study examines the accuracy of a context-sensitive (i.e., goal dimensions) measure of personality compared to a traditional measure of personality (NEO-PI-R) and generalized self-efficacy (GSE) to predict variance in task performance. The goal dimensions measure takes a unique perspective in the conceptualization of personality. While traditional measures differentiate within person and collapse across context (e.g., Big Five), the goal dimensions measure employs a hierarchical structure where the item level (i.e., first-order) is based on behaviors in a given context, and at the dimension level (i.e., second-order) each behavior is organized by organizational goals. As such, at the item level, the person is differentiated within context, but at the dimension-level, person is undifferentiated and the situation is differentiated by goals. To develop this measure, the behavior-in-situation items were identified, a goal taxonomy that captures the work context was developed, and the items were linked to the goal dimensions. The predictive accuracy of the goal dimensions measure was compared to that of the NEO-PI-R and GSE for performance on four tasks (creative, mundane, conflict management, and persuasive) and an overall performance composite. The results were modest in that the goal dimensions models did not perform substantially better than the traditional measure of personality. Specifically, the bivariate correlations between the goal dimensions and each criterion ranged from 0.00 to 0.30 and 19 out of 80 correlations (23.75%) were significant; compared to the absolute values of the correlations between the NEO-PI-R facets and each criterion that ranged from 0.00 to 0.24 with 26/240 significant correlations (10.83%). However, the results indicate that the goal dimensions model accounted for significant variance in task performance beyond that accounted for by the best traditional model for one or more of the criteria in the conflict management task and the persuasive task. These results suggest that future research on the goal dimensions measure is warranted.
25

Meyers, Sandra Michelle. "Accurate measurement of brain water content by magnetic resonance." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54704.

Abstract:
Accurate measurement of total water content (TWC) is valuable for assessing changes in brain water such as edema, which occurs with many neurological diseases, as well as monitoring the effects of treatments. T2 relaxation has been used to measure TWC in brain on 1.5T magnetic resonance imaging (MRI) scanners. This method was modified for 3T in order to ensure accuracy in the presence of increased radiofrequency field inhomogeneities. Phantom validations demonstrated excellent agreement between MRI-measured TWC and known water concentrations of tubes. Simulations indicated a 3% mean error in TWC estimation. Homogeneous TWC maps were produced in the brain of 10 healthy human subjects; TWC values agreed with literature. Two different receiver coil inhomogeneity corrections were compared in the same 10 subjects, as well as 2 multiple sclerosis (MS) patients – one which requires the measurement of a low flip angle image, and the other based on comparison to a homogeneous pseudo TWC map calculated from T1. Both techniques resulted in similar, homogeneous TWC maps in healthy brain, although differences up to 2% were measured in abnormal MS brain tissue. Finally, the TWC method was implemented for two applications. 20 subjects were scanned after consuming 3L of water and again after 9 hours of fasting to determine whether hydration affects brain TWC and volume. No significant changes were measured, indicating that homeostasis mechanisms likely regulated brain TWC during the short term fluid shifts. Some MS drugs have been shown to cause initial accelerated brain volume loss, which is hypothesized to be due to water loss. In the second application, TWC of normal appearing tissue and whole brain and brain volume were measured in 16 MS patients over a 6 month course of treatment with interferon beta. A trend of decreasing brain volume between months 3 and 6 was concurrent with a reduction in whole brain TWC, suggesting that accelerated brain volume loss on interferon beta may be due to reduced inflammation or edema in abnormal appearing tissue. Here we present a useful tool that can be used at 3T to simultaneously assess changes in water and myelin in neurological diseases.
26

Keylin, Alexander. "Analytical Evaluation of the Accuracy of Roller Rig Data for Studying Creepage in Rail Vehicles." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/49607.

Abstract:
The primary purpose of this research is to investigate the effectiveness of a scaled roller rig for accurately assessing the contact mechanics and dynamics between a profiled steel wheel and rail, as is commonly used in rail vehicles. The established creep models of Kalker and of Johnson and Vermeulen are used to establish correction factors, scaling factors, and transformation factors that allow us to relate the results from a scaled rig to those of a tangent track. Correction factors, which are defined as the ratios of a given quantity (such as creep coefficient) between a roller rig and a track, are derived and used to relate the results between a full-size rig and a full-size track. Scaling factors are derived to relate the same quantities between roller rigs of different scales. Finally, transformation factors are derived by combining scaling factors with correction factors in order to relate the results from a scaled roller rig to a full-size tangent track. Closed-form formulae for creep force correction, scaling, and transformation factors are provided in the thesis, along with their full derivation and an explanation of their limitations; these formulae can be used to calculate the correction factors for any wheel-rail geometry and scaling. For Kalker's theory, it is shown that the correction factor for creep coefficients is strictly a function of wheel and rail geometry, primarily the wheel and roller diameter ratio. For Johnson and Vermeulen's theory, the effects of creepage, scale, and load on the creep force correction factor are demonstrated. It is shown that INRETS' scaling strategy causes the normalized creep curve to be identical for both a full-size and a scaled roller rig. It is also shown that the creep force correction factors for Johnson and Vermeulen's model increase linearly with creepage, starting with the values predicted by Kalker's theory. Therefore, Kalker's theory provides a conservative estimate for creep force correction factors. A case study is presented to demonstrate the creep curves, as well as the correction and transformation factors, for a typical wheel-rail configuration. Additionally, two studies by other authors that calculate the correction factor for Kalker's creep coefficients for specific wheel-rail geometries are reviewed and show full agreement with the results that are predicted by the formulae derived in this study. Based on a review of existing and past roller rigs, as well as the findings of this thesis, a number of recommendations are given for the design of a roller rig for the purpose of assessing the wheel-rail contact mechanics. A scaling strategy (INRETS') is suggested, and equations for power consumption of a roller rig are derived. Recommendations for sensors and actuators necessary for such a rig are also given. Special attention is given to the resolution and accuracy of velocity sensors, which are required to properly measure and plot the creep curves.
27

Hepford, Elizabeth Ann. "DYNAMIC SECOND LANGUAGE DEVELOPMENT: THE INTERACTION OF COMPLEXITY, ACCURACY, AND FLUENCY IN A NATURALISTIC LEARNING CONTEXT." Diss., Temple University Libraries, 2017. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/432247.

Abstract:
The purpose of this study was to examine the second language development of a native-speaker of Spanish learning English over a period of 15 months. More specifically, I explored the interaction of complexity (advanced forms of grammar and vocabulary), accuracy (grammatical and semantic), and fluency, commonly referred to as the CAF constructs. While findings in CAF literature tend to focus on one construct using experimental or cross-sectional studies (Bulté & Housen, 2012; Kormos & Dénes, 2004; Vyatkina, 2012), this case study investigated non-linear and interconnected CAF development, periods of fluctuation, and the effects of motivational factors on 14 variables. In order to explore the data as a system developing simultaneously, Complex Dynamic Systems Theory (CDST) (Larsen-Freeman, 1997; 2006) was applied as the theoretical framework. Through CDST’s theoretical lens and the tools developed for it (Verspoor, de Bot & Lowie, 2011), I found that knowledge variables (lexical diversity, accuracy, and elaboration) maintained consistent correlations, whereas their relationship with fluency variables (speed, repairs, and pauses) changed based on the cognitive strain the learner was experiencing at the time. I also found that the learner shifted his focus between the knowledge variables and that the complexity and accuracy variable on which he chose to focus appeared to be affected by changing motivational factors.
28

Cummings, Paul Christopher. "The effects of instrument type, stimulus timbre, and harmonic context on tuning accuracy." view abstract or download file of text, 2007. http://proquest.umi.com/pqdweb?did=1404343201&sid=1&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Abstract:
Thesis (D.M.A.)--University of Oregon, 2007. Typescript. Includes vita and abstract. Includes bibliographical references (leaves 155-160). Also available for download via the World Wide Web; free to University of Oregon users.
29

Zakos, John. "A Novel Concept and Context-Based Approach for Web Information Retrieval." Griffith University. School of Information and Communication Technology, 2005. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20060303.104937.

Abstract:
Web information retrieval is a relatively new research area that has attracted a significant amount of interest from researchers around the world since the emergence of the World Wide Web in the early 1990s. The problems facing successful web information retrieval are a combination of challenges that stem from traditional information retrieval and challenges characterised by the nature of the World Wide Web. The goal of any information retrieval system is to fulfil a user's information need. In a web setting, this means retrieving as many relevant web documents as possible in response to an inputted query that is typically limited to only containing a few terms expressive of the user's information need. This thesis is primarily concerned with firstly reviewing pertinent literature related to various aspects of web information retrieval research and secondly proposing and investigating a novel concept and context-based approach. The approach consists of techniques that can be used together or independently and aim to provide an improvement in retrieval accuracy over other approaches. A novel concept-based term weighting technique is proposed as a new method of deriving query term significance from ontologies that can be used for the weighting of inputted queries. A technique that dynamically determines the significance of terms occurring in documents based on the matching of contexts is also proposed. Other contributions of this research include techniques for the combination of document and query term weights for the ranking of retrieved documents. All techniques were implemented and tested on benchmark data. This provides a basis for performing comparison with previous top performing web information retrieval systems. High retrieval accuracy is reported as a result of utilising the proposed approach. This is supported through comprehensive experimental evidence and favourable comparisons against previously published results.
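The general scheme the abstract describes, combining a query-side term weight with a document-side term weight for ranking, can be sketched as follows. This is a hedged illustration, not the thesis's method: both weighting functions are hypothetical placeholders standing in for the ontology-derived query term significance and the context-based document term weights.

```python
# Hedged sketch: rank a document by summing, over shared terms, the
# product of query term significance and document term weight.
def score(query_weights: dict, doc_weights: dict) -> float:
    return sum(w * doc_weights[t] for t, w in query_weights.items() if t in doc_weights)

q = {"retrieval": 0.9, "web": 0.4}              # query term significance
d = {"retrieval": 0.7, "web": 0.2, "ir": 0.5}   # document term weights
print(score(q, d))  # 0.71
```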
30

Zakos, John. "A Novel Concept and Context-Based Approach for Web Information Retrieval." Thesis, Griffith University, 2005. http://hdl.handle.net/10072/365878.

31

Srinivasan, Soorya. "Reliability and Accuracy of Assessing TAD - Tooth Root Contact using CBCT." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1553782462280014.

32

Crockford, Chris. "Geovideoworlds : accessing and navigating video content within a geographically accurate environment." Thesis, Brunel University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.429248.

33

Holmgren, Hanna. "Towards accurate modeling of moving contact lines." Licentiate thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-266274.

Abstract:
The present thesis treats the numerical simulation of immiscible incompressible two-phase flows with moving contact lines. The conventional Navier–Stokes equations combined with a no-slip boundary condition lead to a non-integrable stress singularity at the contact line. The singularity in the model can be avoided by allowing the contact line to slip. Implementing slip conditions in an accurate way is not straightforward, and different regularization techniques exist in which ad hoc procedures are common. This thesis presents the first steps in developing the macroscopic part of an accurate multiscale model for a moving contact line problem in two space dimensions. It is assumed that a micro model has been used to determine a relation between the contact angle and the contact line velocity. An intermediate region is introduced where an analytical expression for the velocity field exists, assuming the solid wall is perfectly flat. This expression is used to implement boundary conditions for the moving contact line, at the macroscopic scale, along a fictitious boundary located a small distance away from the physical boundary. Model problems where the shape of the interface is constant throughout the simulation are introduced. For these problems, experiments show that the errors in the resulting contact line velocities converge with the grid size h at a rate of convergence p ≈ 2. Further, an analytical expression for the velocity field in the intermediate region for the case with a curved solid wall is derived. The derivation is based on perturbation analysis.
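For context, the most common way to allow slip (distinct from the multiscale construction developed in this thesis) is the Navier slip condition, which replaces no-slip by a wall-tangential velocity proportional to the local shear rate, with a slip length λ as a model parameter:

```latex
u_{\tau}\big|_{\text{wall}} \;=\; \lambda \, \frac{\partial u_{\tau}}{\partial n}\bigg|_{\text{wall}}
```

As λ → 0 the no-slip condition is recovered; a positive λ relieves the stress singularity at the moving contact line.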
34

Pei, Jiantao. "The Accuracy of Time-to-Contact Estimation in the Prediction Motion Paradigm." University of Canberra. Applied Science, 2002. http://erl.canberra.edu.au./public/adt-AUC20050627.143329.

Abstract:
This thesis is concerned with the accuracy of our estimation of time to make contact with an approaching object as measured by the “Prediction Motion” (PM) technique. The PM task has commonly been used to measure the ability to judge time to contact (TTC). In a PM task, the observer's view of the target is occluded for some period leading up to the moment of impact. The length of the occlusion period is varied and the observer signals the moment of impact by pressing a response key. The interval separating the moment of occlusion and the response is interpreted as the observer's estimate of TTC made at the moment of occlusion. This technique commonly produces large variability and systematic underestimation. The possibility that this reflects genuine perceptual errors has been discounted by most writers, since this seems inconsistent with the accuracy of interceptive actions in real life. Instead, the poor performance in the PM task has been attributed to problems with the PM technique. Several hypotheses have been proposed to explain the poor PM performance. The motion extrapolation hypothesis asserts that some form of mental representation of the occluded part of the trajectory is used to time the PM response; the errors in PM performance are attributed to errors in reconstructing the target motion. The clocking hypothesis assumes that the TTC is accurately perceived at the moment of occlusion and that errors arise in delaying the response for the required period. The fear-of-collision hypothesis proposes that the underestimation seen in the PM tasks reflects a precautionary tendency to anticipate the estimated moment of contact. This thesis explores the causes of the errors in PM measurements. Experiments 1 and 2 assessed the PM performance using a range of motion scenarios involving various patterns of movement of the target, the observer, or both. The possible contribution of clocking errors to the PM performance was assessed by a novel procedure designed to measure errors in the wait-and-respond component of the PM procedure. In both experiments, this procedure yielded a pattern of systematic underestimation and high variability similar to that in the TTC estimation task. Experiment 1 found a small effect of motion scenario on TTC estimation. However, this was not evident in Experiment 2. The collision event simulated in Experiment 2 did not involve a solid collision. The target was simply a rectangular frame marked on a tunnel wall. At the moment of “contact”, the observers passed “through” the target without collision. However, there was still systematic underestimation of TTC and there was little difference between the estimates obtained in Experiments 1 and 2. Overall, the results of Experiments 1 and 2 were seen as inconsistent with either the motion extrapolation hypothesis or the fear-of-collision hypothesis. It was concluded that observers extracted an estimate of the TTC based on optic TTC information at a point prior to the moment of collision, and used a timing process to count down to the moment of response. The PM errors were attributed to failure in this timing process. The results of these experiments were seen as implying an accurate perception of TTC. It was considered possible that in Experiments 1 and 2 observers based their TTC judgements on either the retinal size or the expansion rate of the target rather than TTC. Experiments 3 and 4 therefore investigated estimation of TTC using a range of simulated target velocities and sizes. 
TTC estimates were unaffected by the resulting variation in expansion rate and size, indicating that TTC, rather than retinal size or image expansion rate per se, was used to time the observers' response. The accurate TTC estimation found in Experiments 1-4 indicates that the TTC processing is very robust across a range of stimulus conditions. Experiment 5 further explored this robustness by requiring estimation of TTC with an approaching target which rotated in the frontoparallel plane. It was shown that moderate but not fast rates of target rotation induced an overestimation of TTC. However, observers were able to discriminate between TTCs for all rates of rotation. This shows that the extraction of TTC information is sensitive to perturbation of the local motion of the target border, but it implies that, in spite of these perturbations, the mechanism is flexible enough to pick up the optic TTC information provided by the looming of the retinal motion envelope of the rotating stimulus.
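The 'optic TTC information' referred to throughout is usually formalized as Lee's tau: for an object approaching at constant velocity, time to contact is approximated by the ratio of the image's angular size to its rate of expansion,

```latex
\mathrm{TTC} \;\approx\; \tau(t) \;=\; \frac{\theta(t)}{\dot{\theta}(t)},
```

so the observer can in principle extract TTC without knowing the object's physical size or distance.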
35

Enebo, Brian A. "Contact force production accuracy and consistency: Generalizations to a simulated clinical task." Diss., Connect to online resource, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3239423.

36

Chipanga, Tendai. "Determination of the accuracy of non-destructive residual stress measurements methods." Thesis, [S.l. : s.n.], 2009. http://dk.cput.ac.za/cgi/viewcontent.cgi?article=1100&context=td_cput.

37

Khalil, Ashraf. "Context-aware telephony and its users: methods to improve the accuracy of mobile device interruptions." [Bloomington, Ind.] : Indiana University, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3210052.

Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2006. Source: Dissertation Abstracts International, Volume: 67-03, Section: B, page: 1518. Adviser: Kay Connelly. Title from dissertation home page (viewed March 21, 2007).
38

Mullins, Joel. "Evaluation of dose calculations and couch positional accuracy in the context of dynamic couch trajectories." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=123099.

Abstract:
The Varian TrueBeam STx linear accelerator features a developer's mode in which treatment plans can be programmed that include patient couch motion during radiation delivery. The combination of synchronous couch/gantry trajectories with Varian volumetric modulated arc therapy (VMAT) optimizations, called RapidArc, can result in a treatment technique that has been designated Virtual Isocenter RapidArc (VIRA). Prior to its implementation, the accuracy of dose calculations in the Varian Eclipse treatment planning system, on which the RapidArc optimization depends, must be validated, as must the positional accuracy of the TrueBeam patient couch. The dose calculation accuracy was evaluated extrinsically through the delivery of clinical dynamic multileaf collimator (DMLC) intensity modulated radiotherapy (IMRT) treatment plans as a function of source-to-surface distance (SSD), with measurements made using an ionization chamber and Gafchromic EBT3 film. Parameters intrinsic to dose calculations in Eclipse, the dosimetric leaf gap (DLG) and leaf transmission (LT), were also investigated for their dependence on SSD. The positional accuracy of the treatment couch was assessed by generating treatment plans with static couch/static gantry, static couch/rotating gantry, and synchronous couch and gantry motion, with real-time measurement of the current from an ionization chamber positioned in a cylindrical phantom during radiation delivery. The relative agreement of ionization chamber measurements with Eclipse dose calculations for DMLC IMRT treatment plans decreased by 1.5±0.3% over SSDs in the range of 85 cm to 135 cm (less than 1.0% deviation from the standard clinical reference condition of 100 cm SSD). Gafchromic EBT3 film measurements were consistent with the ionization chamber results, though noise in the film data at low doses resulted in large uncertainties. Measurements of DLG were independent of SSD after corrections for geometric projection. LT showed a dependence on SSD of 0.09±0.02% over the SSD range investigated. The ionization chamber current measurements for synchronous couch and gantry rotation, analogous to the proposed VIRA technique, indicated a maximum deviation of 0.2 cm relative to treatment isocenter, equal to the deviation observed for the rotating gantry/static couch treatment, analogous to conventional VMAT delivery. These results indicate that the Varian TrueBeam and Eclipse maintain the positional and dosimetric accuracy required for VMAT treatments involving dynamic couch trajectories.
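The SSD trend reported above is, at bottom, a linear fit of measured-to-calculated dose ratios against SSD. The sketch below illustrates that arithmetic with invented ratios; the numbers are placeholders, not the thesis data.

```python
# Illustrative only: invented chamber/Eclipse dose ratios across SSD,
# normalized to the 100 cm clinical reference, with a linear fit to
# quantify the fall-off in agreement. Not the measured thesis values.
import numpy as np

ssd_cm = np.array([85.0, 95.0, 100.0, 110.0, 120.0, 135.0])
ratio  = np.array([1.008, 1.003, 1.000, 0.998, 0.995, 0.993])  # measured / calculated

slope, intercept = np.polyfit(ssd_cm, ratio, 1)
drop = slope * (135.0 - 85.0) * 100.0  # percent change over the studied SSD range
print(f"agreement changes by {drop:+.1f}% from 85 to 135 cm SSD")
```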
APA, Harvard, Vancouver, ISO, and other styles
39

Galuska, Chad M. "Reducing pausing during rich-to-lean schedule transitions effects of reinforcer context and cue accuracy /." Morgantown, W. Va. : [West Virginia University Libraries], 2003. http://etd.wvu.edu/templates/showETD.cfm?recnum=2944.

Full text
Abstract:
Thesis (Ph. D.)--West Virginia University, 2003. Title from document title page. Document formatted into pages; contains viii, 76 p. : ill. Includes abstract. Includes bibliographical references (p. 70-76).
APA, Harvard, Vancouver, ISO, and other styles
40

Wang, Long Qi. "Translation accuracy comparison between machine translation and context-free machine natural language grammar–based translation." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3950657.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Foroni, Daniele. "Putting Data Quality in Context. How to generate more accurate analyses." Doctoral thesis, Università degli studi di Trento, 2019. http://hdl.handle.net/11572/243318.

Full text
Abstract:
Data quality is a well-known research field that aims at estimating the quality of the data itself. The research community has, for quite some time, studied the different aspects of data quality and has developed ways to find and clean dirty and incomplete data. In particular, it has so far focused on the computation of a number of data characteristics as a means of quantifying different quality dimensions, such as freshness, consistency, accuracy, number of duplicates, or completeness. However, the proposed approaches lack an in-depth view of the quality of the data. Most works have focused on efficient and effective ways to identify and clean data inconsistencies, ignoring to a large extent the task the data is to be used for: they avoid investing in cleaning tasks that are actually needed while prioritizing repairs of errors that are not an issue. For streaming data, the concept of quality shifts slightly, since it focuses more on the results of the processing than on the actual input data. Hence, in the context of data quality, we focus on three challenges, highlighting one aspect for each use case. First, we concentrate on the TASK that the user wants to apply over the data, providing a solution to prioritize cleaning algorithms so as to improve the task results; second, the focus is on the USER, who defines a metric to optimize for a streaming application, and we dynamically scale the resources used by the application to meet the user's goal; third, the DATA is at the center, and we present a solution for entity matching that measures a profile of the data and uses it to retrieve the similarity metric that gives the best results for such data.
The first work concentrates on the context of the task that is applied to the data. We introduce F4U (which stands for FITNESS FOR USE), a framework for measuring how fit a dataset is for the intended task, that is, how good the results are that the task produces on that dataset. The system takes a dataset and performs systematic noise generation that creates a set of noisy instances from it. It then applies the user-given task to both the original dataset and the noisy instances, and measures the effect of the noise on the task results as the distance between the results obtained with the noisy instances and those obtained with the original dataset. This distance lets the user analyse which kind of noise affects the results most, enabling a prioritization of the cleaning and repairing algorithms to apply to the original data. Other works aim at identifying the most suitable data cleaning tools for a dataset, but ours is the first to do so by optimizing the results of the task the user has in mind.
The second work addresses data quality in a streaming context as a goal-oriented analysis for the given task. Streaming data has different requirements from relational data, and, in this context, data is considered of high quality if it is processed according to the user's needs. Hence, we build MOIRA on top of Apache Flink, a tool that adapts the resources needed by a query, optimizing a goal metric defined by the user. Before a query is executed, a static analysis generates an improved query plan that advances the user-defined goal through a different scaling of the resources. The plan is then submitted to Flink, while a monitoring system collects information about the cluster and the running application. Based on these collected metrics, the system systematically creates a new query plan and checks whether deploying it would improve the user's goal metric.
In the third work, the focus is on the data itself, and we propose a solution to the well-known problem of entity matching. Our framework gains insight into the data by computing a dataset profile, which is useful for understanding what kind of data the system is analyzing so that the similarity metric that best fits the data can be applied. The system has an offline and an online phase. In the offline phase, it trains its model to find duplicates on incoming datasets for which the matching tuples are known, and computes the profile of each dataset by measuring the accuracy of the results under multiple similarity metrics. In the online phase, the system divides the records into portions, minimizing the distance between the profile of each portion and the previously computed profiles known to lead to good results.
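A minimal sketch of the noise-injection loop behind the fitness-for-use idea described above: corrupt one column at several rates, rerun the task, and report the distance of each noisy result from the clean baseline. All names (`inject_noise`, `fitness_for_use`) are illustrative, not the actual F4U interface.

```python
# Sketch of a fitness-for-use check: how much does noise in one column
# degrade the result of the task the user actually cares about?
import random
from typing import Callable, List

def inject_noise(rows: List[dict], column: str, rate: float, seed: int = 0) -> List[dict]:
    """Return a copy of `rows` where `column` is blanked out in a fraction `rate` of rows."""
    rng = random.Random(seed)
    noisy = [dict(r) for r in rows]
    for r in noisy:
        if rng.random() < rate:
            r[column] = None
    return noisy

def fitness_for_use(rows, task: Callable, distance: Callable, column: str,
                    rates=(0.05, 0.1, 0.2)) -> dict:
    """Map each noise rate to the distance between the noisy and clean task results;
    larger distances mean cleaning that column matters more for this task."""
    baseline = task(rows)
    return {rate: distance(baseline, task(inject_noise(rows, column, rate)))
            for rate in rates}

# Example: the task is a mean over a numeric column; the distance is absolute difference.
rows = [{"age": a} for a in range(20, 60)]
task = lambda rs: (sum(r["age"] for r in rs if r["age"] is not None)
                   / sum(1 for r in rs if r["age"] is not None))
print(fitness_for_use(rows, task, lambda a, b: abs(a - b), "age"))
```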
APA, Harvard, Vancouver, ISO, and other styles
43

Naidoo, Rohan James. "The self and psychotherapy : are the predictions ACT makes about self-as-content accurate?" Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/12171/.

Full text
Abstract:
Objectives: The evidence base for Acceptance and Commitment Therapy's (ACT) overall effectiveness is highly promising. However, the extent to which the six processes comprising ACT have been investigated is extremely variable. In particular, the process regarding the self and therapeutic change is in need of validation, having never been subjected to empirical investigation. The objective of the present study was to achieve this by testing whether the predictions ACT makes regarding the self and therapeutic change are supported by quantitative data. The specific predictions to be tested were that (a) those with a fixed sense of self and low psychological flexibility will display high therapeutic resistance and (b) those with a fluid sense of self and high psychological flexibility will display a strong tendency towards value-based behaviour. Method: Data from 171 non-clinical participants were subjected to a two-way between-subjects ANCOVA, with self-theory and psychological flexibility as independent variables and therapeutic reactance as the dependent variable, covarying out the effects of gender. Results: A significant interaction effect between psychological flexibility and sense of self was found. Post-hoc tests revealed two specific findings: firstly, people with low psychological flexibility and a fixed sense of self displayed therapeutic reactance that was likely to impede therapeutic change; secondly, people with high psychological flexibility and a fluid sense of self displayed therapeutic reactance that was more likely to be consistent with value-driven, goal-oriented behaviour. Conclusions: These findings are consistent with ACT's theorised process regarding the self and therapeutic change. Thus, ACT's predictions regarding the self and therapeutic change have received their first empirical validation. Clinically, the overarching psychotherapeutic focus is on the client's process of relating to their self-concept, rather than altering its contents.
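For readers who want to reproduce the style of analysis rather than the study itself, here is a minimal sketch of a two-way between-subjects ANCOVA with a gender covariate, using statsmodels; the column names and values are invented, not the study's data.

```python
# Sketch: self-theory x psychological flexibility on therapeutic reactance,
# covarying out gender, via an OLS model and Type II ANOVA table.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "reactance":   [52, 61, 47, 58, 40, 66, 45, 55, 49, 63, 42, 57],
    "self_theory": ["fixed", "fixed", "fluid", "fluid"] * 3,
    "flexibility": ["low", "high"] * 6,
    "gender":      [0, 1] * 6,  # covariate
})

model = smf.ols("reactance ~ C(self_theory) * C(flexibility) + gender", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # Type II sums of squares
```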
APA, Harvard, Vancouver, ISO, and other styles
44

Bourne, Harrison W. "An algorithm for accurate ionospheric total electron content and receiver bias estimation using GPS measurements." Thesis, Colorado State University, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10138910.

Full text
Abstract:
The ionospheric total electron content (TEC) is the integrated electron density across a unit area. TEC is an important property of the ionosphere. Accurate estimation of TEC and TEC spatial distributions is needed for many space-based applications such as precise positioning, navigation, and timing. The Global Positioning System (GPS) provides one of the most versatile methods for measuring ionospheric TEC, as it has global coverage, high temporal resolution, and relatively high spatial resolution. The objective of this thesis is to develop an algorithm for accurate estimation of the TEC using dual-frequency GPS receiver measurements while simultaneously estimating the receiver hardware bias in order to mitigate its effect on the TEC. This method assumes the TEC in the portion of sky visible to the receiver can be represented as a two-dimensional sheet with an absolute value and spatial gradients with respect to latitude and longitude. A code-phase multipath noise estimation algorithm is integrated with the TEC estimation process to mitigate environmental multipath contamination of the measurements. The integrated algorithm produces an approximate map of local TEC using a single dual-frequency receiver while minimizing both multipath-induced errors and the receiver hardware bias. The goal of this method is to provide an accurate map of ionospheric TEC, in the region local to the receiver, without the need for a network of receivers and in the absence of knowledge of the receiver hardware-induced bias. This thesis describes the algorithm and its implementation, and attempts to validate the method through comparison with incoherent scatter radar (ISR) data from low-, mid-, and high-latitude locations.
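A hedged sketch of the standard dual-frequency ("geometry-free") relation that this kind of TEC estimation builds on; the thesis adds spatial gradients, multipath mitigation, and receiver-bias estimation on top, none of which are reproduced here.

```python
# Slant TEC from dual-frequency pseudoranges: the frequency-dependent
# ionospheric delay difference (P2 - P1) is proportional to TEC.
F1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F2 = 1227.60e6  # GPS L2 carrier frequency, Hz
K = 40.3        # ionospheric constant, m^3/s^2

def slant_tec_tecu(p1_m: float, p2_m: float) -> float:
    """Slant TEC in TEC units (1 TECU = 1e16 el/m^2) from pseudoranges p1, p2
    in metres. Receiver/satellite hardware biases are NOT removed here."""
    stec = (p2_m - p1_m) * (F1**2 * F2**2) / (K * (F1**2 - F2**2))  # electrons/m^2
    return stec / 1e16

# ~5 m of differential delay corresponds to roughly 48 TECU:
print(round(slant_tec_tecu(20_000_000.0, 20_000_005.0), 1))
```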
APA, Harvard, Vancouver, ISO, and other styles
45

Ye, Zhihui. "A low cost, accurate instrument to measure the moisture content of building envelopes in situ." Thesis, Open University, 2005. http://oro.open.ac.uk/54633/.

Full text
Abstract:
Buildings must be designed and built to achieve a healthy environment, low energy consumption and a predictable service life. In order to achieve these goals, the effects of combined heat, air and moisture (HAM) transfer must be understood, and a suitable moisture measurement technique is thus required. There is a pressing need for an instrument capable of in situ moisture measurements in building envelopes. Techniques do exist for such moisture measurement, but all exhibit deficiencies in at least one critical area. A thermal dual-probe is investigated as a candidate for an appropriate instrument, as such an approach offers significant potential benefits over existing methods. It is demonstrated, via extensive finite-element (FE) modelling, that the thermal dual-probe technique is indeed applicable to in situ moisture measurements in typical building fabrics. The thesis then moves on to deal with the optimisation of the design of such a probe, detailing the results of relevant simulations using the proven two- and three-dimensional FE models. Finally, the extensive experimental work undertaken to support the modelling work is described. The measured data obtained from the thermal dual-probes are compared with the results of a series of gravimetric analyses, and close agreement between the two methods is obtained. The work has successfully demonstrated that, depending upon the building fabric material, optimal probe lengths and spacings range from approximately 45-60 mm and 12-20 mm respectively. The experimental work clearly indicates that the thermal dual-probe is capable of accurate, in situ moisture measurements in building envelopes.
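A minimal sketch of the heat-pulse relation a thermal dual-probe typically exploits, assuming an instantaneous line-source pulse (the Campbell-style approximation): the peak temperature rise at the sensing needle gives the volumetric heat capacity, from which moisture content follows. The material constants and numbers are illustrative assumptions, not values from the thesis.

```python
# Dual-probe heat-pulse sketch: C = q / (e * pi * r^2 * dT_max) for an
# instantaneous pulse, then moisture from the rise of C with water content.
import math

RHO_C_WATER = 4.18e6  # volumetric heat capacity of water, J/(m^3 K)

def volumetric_heat_capacity(q_j_per_m: float, spacing_m: float, dT_max_K: float) -> float:
    """Volumetric heat capacity from heat input per unit length q (J/m),
    probe spacing r (m), and peak temperature rise at the sensor (K)."""
    return q_j_per_m / (math.e * math.pi * spacing_m**2 * dT_max_K)

def moisture_content(C: float, C_dry: float) -> float:
    """Volumetric moisture fraction, assuming C rises linearly with water content."""
    return (C - C_dry) / RHO_C_WATER

# Example with a 15 mm spacing (within the 12-20 mm optimum reported above):
C = volumetric_heat_capacity(q_j_per_m=1700.0, spacing_m=0.015, dT_max_K=0.5)
print(round(moisture_content(C, C_dry=1.2e6), 3))  # ~0.136
```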
APA, Harvard, Vancouver, ISO, and other styles
46

Williams, Joshua Holbrook. "Examining the impact of impression management context and self-monitoring on the leniency and accuracy of self-appraisals." Thesis, This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-08222008-063252/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Chang, Yung-Yeh. "A Time-efficient Method for Accurate T1 Mapping of The Human Brain." VCU Scholars Compass, 2011. http://scholarscompass.vcu.edu/etd/2626.

Full text
Abstract:
The signal resulting from the IR-FSE sequence has been thoroughly analyzed in order to improve the accuracy of quantitative T1 mapping of the human brain. Several optimized post-processing algorithms have been studied and compared in terms of their T1 mapping accuracy. The modified multipoint two-parameter fitting method was found to produce less underestimation than the traditional multipoint three-parameter fitting method and, therefore, to result in a smaller T1 estimation error. Two correction methods were proposed to reduce the underestimation problem, which is commonly seen in IR-FSE sequences used for measuring T1, especially when a large turbo factor is used. The intra-scan linear regression method corrects the systematic error effectively, but the RMSE may still increase due to the increase of uncertainty in sequences with large turbo factors. The weighted fitting model corrects not only the systematic error but also the random error, so the aggregate RMSE for T1 mapping can be effectively reduced. A new fitting model that uses only three different TI measurements for T1 estimation was proposed. The performance of the three-point fitting method is as good as that of the multipoint fitting method with correction in the phantom simulation. In addition, a new ordering scheme that implements the three-point fitting method is proposed; it is theoretically able to reduce the total scan time by one third compared to the TESO-IRFSE sequence. The performance of the three-point fitting method on the real human brain is also evaluated, and the T1 mapping results are consistent with those of the conventional IR-FSE sequence. More samples of true anatomy are needed to thoroughly evaluate the performance of the proposed techniques when applied to T1 mapping of the human brain.
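A minimal sketch of what a three-point T1 fit looks like in practice, assuming polarity-restored (signed) inversion-recovery data and the common three-parameter model A − B·exp(−TI/T1); with exactly three TI samples, the three unknowns are fully determined. The TI values and tissue parameters below are synthetic, not study data.

```python
# Three-point inversion-recovery T1 fit on synthetic, noise-free data.
import numpy as np
from scipy.optimize import curve_fit

def ir_signal(ti, a, b, t1):
    # signed (polarity-restored) inversion-recovery model: A - B*exp(-TI/T1)
    return a - b * np.exp(-ti / t1)

ti = np.array([200.0, 800.0, 2400.0])             # inversion times, ms
true_a, true_b, true_t1 = 1000.0, 1900.0, 900.0   # B ~ 2A for near-ideal inversion
sig = ir_signal(ti, true_a, true_b, true_t1)

popt, _ = curve_fit(ir_signal, ti, sig, p0=[1000.0, 2000.0, 1000.0])
print(f"estimated T1 = {popt[2]:.0f} ms")         # recovers ~900 ms
```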
APA, Harvard, Vancouver, ISO, and other styles
48

Pertiwi, Yopina Galih. "How Does Intergroup Contact Predict Stereotypes in a Complex Social Reality?A Cross-Cultural Study of Intergroup Contact, Stereotypes, and Group Status." University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1461963876.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Woolrych, Tracey. "The influence of imagination, connectivity, and social context on the assessment and measurement of empathic accuracy using photographic stimuli." Thesis, Woolrych, Tracey (2014) The influence of imagination, connectivity, and social context on the assessment and measurement of empathic accuracy using photographic stimuli. PhD thesis, Murdoch University, 2014. https://researchrepository.murdoch.edu.au/id/eprint/28174/.

Full text
Abstract:
The ability to accurately interpret the emotions of others is known as empathic accuracy and is referred to in this thesis as Affect Recognition-Empathic Accuracy (AR-EA). This ability can facilitate pro-social behaviours, while deficits may result in anti-social behaviours. Research has demonstrated that imagination, connectivity, and social context can all influence our ability to accurately interpret the emotions of others; however, there has been little research investigating how these specific factors might be enhanced, or how they influence AR-EA abilities, when using photographic stimuli. This thesis had two aims. The first was to investigate the possibility of inserting specific empathy-related elements (imagination, connectivity, and social context) into a set of photographic stimuli to assess their potential influence on AR-EA. The second was to develop an original set of photographic stimuli for use in this thesis and to conduct psychometric evaluations of those photographs in order to develop a new photographic measure for the assessment and evaluation of AR-EA. The photographs consisted of both male and female models expressing six different basic emotions (happy, sad, fear, anger, surprise, disgust) at three different levels of intensity (low, medium and high), plus one neutral expression. Imagination and connectivity were both facilitated through the insertion of a silhouette (a blacked-out full-body figure, male or female) into the photographic stimuli. Social context was manipulated through the use of different social-setting backgrounds in the photographs: a kitchen, a bar (as in a tavern), and a neutral background. Results demonstrated that the silhouette inserted into the photographs to facilitate imagination and connectivity not only enhanced empathic processes but also produced a photographic measure of AR-EA that was superior in both reliability and validity to other presentation modes (full-body-only, and head-and-shoulders-only stimuli). The different social settings of the photographs also affected AR-EA abilities, facilitating the accurate interpretation of some emotions whilst inhibiting others. The overall findings of this thesis question past research methods and provide intriguing insights into the functioning of empathic accuracy processes that have not been previously reported. The testing and research also resulted in a new photographic measure for the assessment of AR-EA abilities, while the use of simple techniques to manipulate empathy-based elements within the photographs offers new opportunities for future research.
APA, Harvard, Vancouver, ISO, and other styles
50

Newey, Britney Ann. "The Classification Accuracy of a Dynamic Assessment of Inferential Word Learning for School-Age Children With and Without Language Disorder." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8672.

Full text
Abstract:
Purpose: This study examines the classification accuracy and interrater reliability of a dynamic assessment (DA) of inferential word learning designed to accurately identify kindergarten through sixth-grade students with and without language disorder. Method: The participants included 127 school-age children from a mountain west school district who were administered a DA of inferential word learning that entailed a pretest, a teaching phase, an examiner rating of the child's ability to infer word meaning (modifiability), and posttests. Results: Hierarchical logistic regression and receiver operating characteristic (ROC) analyses revealed that combining all posttests, the modifiability total, and the final examiner judgement scores from this DA yielded the strongest sensitivity (.83) and specificity (.80). The static measures and the dichotomized final examiner judgement had excellent reliability, yet the individual modifiability measures (with the exception of disruption and frustration) had poor reliability. Conclusion: In concordance with a previous study, the results indicate that a dynamic assessment of inferential word learning may be an efficacious method of identifying language disorders in school-age populations.
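A small sketch of how sensitivity, specificity, and ROC analysis turn combined assessment scores into a classifier, as in the analyses above; the labels and scores are invented, and scikit-learn is used purely for illustration.

```python
# ROC sketch: sweep thresholds over combined DA scores and report
# sensitivity (true positive rate) and specificity (1 - false positive rate).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

labels = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 1])   # 1 = language disorder
scores = np.array([0.9, 0.8, 0.75, 0.4, 0.3, 0.2, 0.35, 0.1, 0.45, 0.85])

fpr, tpr, thresholds = roc_curve(labels, scores)
print("AUC =", round(roc_auc_score(labels, scores), 2))
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold {th:.2f}: sensitivity {t:.2f}, specificity {1 - f:.2f}")
```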
APA, Harvard, Vancouver, ISO, and other styles