
Dissertations / Theses on the topic 'Assessment of quality'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Assessment of quality.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Tawbi, Hassan. "Translation quality assessment." Thesis, University of Western Sydney Macarthur, Faculty of Education, 1994. http://heston.uws.edu.au:8081/1959.7/57.

Full text
Abstract:
As yet, few explicit, practical and easy-to-implement marking scales for evaluating the quality of translations have been proposed. The purpose of this study is to introduce a new marking guide for making quantitative assessments of the quality of non-literary translations, and to test its practicality through a case study using the Arabic language. On the basis of the results, some generalizations about translation and translation quality assessment are made. Early treatments of the evaluation of translations are discussed, showing their merits and defects. The new marking guide is then described, including a classification of errors and examples of each type of error. Guidelines are presented for the holistic subjective assessment; the guidelines are evaluated and the outcome discussed. Master of Arts (Hons).
APA, Harvard, Vancouver, ISO, and other styles
2

Banitalebi-Dehkordi, Amin. "3D video quality assessment." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54581.

Full text
Abstract:
A key factor in designing 3D systems is to understand how different visual cues and distortions affect the perceptual quality of 3D video. The ultimate way to assess video quality is through subjective tests. However, subjective evaluation is time consuming, expensive, and in most cases not even possible. An alternative solution is objective quality metrics, which attempt to model the Human Visual System (HVS) in order to assess perceptual quality. The potential of 3D technology to significantly improve the immersiveness of video content has been hampered by the difficulty of objectively assessing Quality of Experience (QoE). A no-reference (NR) objective 3D quality metric, which could help determine capturing parameters and improve playback perceptual quality, would be welcomed by camera and display manufacturers. Network providers would embrace a full-reference (FR) 3D quality metric, as they could use it to ensure efficient QoE-based resource management during compression and Quality of Service (QoS) during transmission. In this thesis, we investigate the objective quality assessment of stereoscopic 3D video. First, we propose a full-reference Human-Visual-System-based 3D (HV3D) video quality metric, which efficiently takes into account the fusion of the two views as well as depth map quality. Subjective experiments verified the performance of the proposed method. Next, we investigate the no-reference quality assessment of stereoscopic video. To this end, we investigate the importance of various visual saliency attributes in 3D video. Based on the results gathered from our study, we design a learning-based visual saliency prediction model for 3D video. Eye-tracking experiments helped verify the performance of the proposed 3D Visual Attention Model (VAM). A benchmark dataset containing 61 captured stereo videos, their eye fixation data, and performance evaluations of 50 state-of-the-art VAMs is created and made publicly available online.
Finally, we incorporate the saliency maps generated by our 3D VAM in the design of state-of-the-art no-reference (NR) and full-reference (FR) 3D quality metrics. Faculty of Applied Science, Department of Electrical and Computer Engineering.
3

Šmida, Vladimír. "Fingerprint Image Quality Assessment." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-237090.

Full text
Abstract:
A critical element of a fingerprint recognition biometric system is the capture process: image quality affects every subsequent stage of the system, from image processing through feature extraction to the final decision. Although several methods for determining image quality have been proposed, the lack of a formal specification of fingerprint quality makes it impossible to verify their accuracy. This master's thesis deals with the evaluation of methods for assessing the quality of the fingerprint biometric signal. It describes the individual factors affecting quality together with the current approaches used for its estimation. The thesis also explains an evaluation technique designed to compare the ability of the individual methods to predict the performance of a biometric system. Several quality estimation methods were implemented and evaluated with this technique.
4

Yao, Zhigang. "Digital Fingerprint Quality Assessment." Caen, 2015. http://www.theses.fr/2015CAEN2030.

Full text
Abstract:
Digital fingerprint is one of the most reliable modalities in modern biometrics and has therefore been widely studied and deployed in real applications. The accuracy of an Automatic Fingerprint Identification System (AFIS) largely depends on the quality of the fingerprint samples, as it has an important impact on the matching (comparison) error rates. This thesis focuses on the evaluation of biometric quality metrics and on fingerprint quality assessment (FQA), particularly estimating the quality of a fingerprint from its gray-level image or from its minutiae set. After a refined review of both biometric systems and the relevant evaluation techniques, the thesis first contributes a new evaluation/validation framework for estimating the performance of biometric quality metrics. The evaluation/validation framework is defined in the enrollment phase using offline trials. The validity of a biometric quality metric can be statistically measured by the degradation of the global Equal Error Rate (EER) and the associated Confidence Intervals (CIs). Next, the thesis assesses fingerprint image quality in several different ways, comprising three parts in the context of FQA, each positioned by a systematic literature review of the existing studies of this issue.
First, a quality assessment approach based on multiple features and prior knowledge of matching performance is proposed, which qualifies fingerprint images with fusion and learning schemes and identifies some potential problems of this kind of solution. Second, a new FQA algorithm using the Delaunay triangulation is proposed to estimate the quality of a digital fingerprint via its minutiae template only; this approach demonstrates that the quality of a digital fingerprint can be estimated from the minutiae template alone. Third, another FQA framework is built on a multi-segmentation approach to the fingerprint image, giving a new solution to this problem. Together, the proposed FQA approaches provide a comparative study of the issue, since each of the proposed algorithms represents one representative class of solution among the existing studies.
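As a generic illustration of the Equal Error Rate used above as a validation criterion (a sketch under common definitions, not code from the thesis), the EER is the operating point where the false non-match rate and the false match rate of a comparison-score distribution cross:

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Approximate the EER: the threshold where the false non-match rate
    (genuine pairs rejected) equals the false match rate (impostor pairs
    accepted). Higher score means a better match."""
    best_gap = float("inf")
    eer = 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        fnmr = np.mean(genuine < t)    # genuine comparisons below threshold
        fmr = np.mean(impostor >= t)   # impostor comparisons above threshold
        if abs(fnmr - fmr) < best_gap:
            best_gap = abs(fnmr - fmr)
            eer = (fnmr + fmr) / 2.0
    return float(eer)

# Perfectly separated score distributions give an EER of zero.
print(equal_error_rate(np.array([0.8, 0.9, 0.95]), np.array([0.1, 0.2, 0.3])))
```

A quality metric is then judged by how much this EER degrades or improves when the lowest-quality samples are kept versus rejected.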
5

Cheng, Wu. "Corrupted Image Quality Assessment." University of Dayton / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1335969249.

Full text
6

Prytz, Anders. "Video Quality Assessment in Broadcasting." Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10870.

Full text
Abstract:
In broadcasting, the assessment of video quality is mostly done by a group of highly experienced people. This is a time-consuming task that demands a lot of resources. The goal of this thesis is to investigate the possibility of assessing perceived video quality with objective quality assessment methods. The work was done in collaboration with Telenor Satellite Broadcasting AS, to improve their quality verification process from a broadcasting perspective. The material used is from the SVT Fairytale tape and a tape from the Norwegian cup final in football 2009. All material is in the native resolution of 1080i and is encoded in the H.264/AVC format. All chosen compression settings are more or less used in daily broadcasting. A subjective video quality assessment has been carried out to create a comparison basis of perceived quality; the subjective assessment sessions followed ITU recommendations. Telenor SBc provided a video quality analysing system, the Video Clarity ClearView system, which contains the objective metrics PSNR, DMOS and JND. DMOS and JND are two pseudo-subjective assessment methods that map objective measurements to subjective results, with the aim of predicting perceived quality and easing quality assessment in broadcasting. The correlation between the subjective and objective results was tested with linear, exponential and polynomial fitting functions. The correlation for the different methods did not achieve a result that would justify the use of objective methods to assess perceived quality independent of content; the best correlation result is 0.75, for the objective DMOS method. The analysis shows that there are possible dependencies in the relationship between subjective and objective results. By measuring spatial and temporal information, possible content-dependent correlations are investigated, and the results for dependent relationships between subjective and objective results are good.
There are some indications that the two pseudo-subjective methods, JND and DMOS, can be used to assess perceived video quality. This applies when the mapping functions are dependent on the spatial and temporal information of the reference sequences. The correlations achieved for dependent fitting functions with a suitable progression are in the range 0.9 to 0.98. In the subjective tests, the subjects were non-experts in quality evaluation. Some of the results indicate that subjects might have a problem assessing sequences with high spatial information. This thesis creates a basis for further research on the use of objective methods to assess perceived quality.
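Of the objective metrics named above, PSNR is the simplest: it is computed directly from the mean squared error between a reference frame and a distorted frame. A minimal sketch of the standard definition (not the ClearView implementation):

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two frames of equal shape."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical frames: no distortion
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 1 gray level on an 8-bit frame gives about 48.13 dB.
frame = np.zeros((16, 16))
print(round(psnr(frame, frame + 1.0), 2))
```

PSNR, like the DMOS and JND mappings built on top of objective measurements, is a full-reference method: it requires the undistorted reference sequence.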
7

Dhakal, Prabesh, Prabhat Tiwari, and Pawan Chan. "Perceptual Video Quality Assessment Tool." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2576.

Full text
Abstract:
Subjective video quality concerns how a video is perceived by the viewer and designates his or her opinion of a particular video sequence. Subjective video quality tests are quite expensive in terms of time (preparation and running) and human resources. The main objective of this testing is to capture how humans perceive video quality, since they are the ultimate end users. There are many ways of testing the quality of videos; we have used ITU-T Recommendation P.910.
In our research work, we have designed a tool that can be used to conduct mass-scale surveys or subjective tests. ACR (Absolute Category Rating) is the only method used to carry out the subjective video assessment. The test is very useful in the context of video streaming quality. The survey can be used in various countries and sectors with low internet speeds to determine the kind of video, compression technique, bit rate, or format that gives the best quality.
8

Ray, Arjun. "Quality assessment of protein models." Licentiate thesis, KTH, Beräkningsbiofysik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-90830.

Full text
Abstract:
Proteins are crucial for all living organisms and are involved in many different processes. The function of a protein is tightly coupled to its structure, yet determining the structure experimentally is both non-trivial and expensive. Computational methods that are able to predict the structure are often the only possibility to obtain structural information for a particular protein. Structure prediction has come a long way since its inception: more advanced algorithms, refined mathematical and statistical analysis, and machine learning techniques have improved the field considerably. Generating a large number of protein models is relatively fast; the process of identifying and separating correct from less correct models within a large set of plausible models is known as model quality assessment. The Critical Assessment of Techniques for Protein Structure Prediction (CASP) is an international experiment to assess the various methods for protein structure prediction. CASP has shown the improvements of these methods in model quality assessment and structure prediction, as well as in model building. In the two studies done in this thesis, I have improved the model quality assessment part of the structure prediction problem for globular proteins, and trained the first such method dedicated to membrane proteins. The work has resulted in a much-improved version of our previous model quality assessment program ProQ, and in addition the first model quality assessment program specifically tailored for membrane proteins.
9

MORGAN, Keith J. "Quality Assessment in English Universities." 名古屋大学高等研究教育センター, 2003. http://hdl.handle.net/2237/16569.

Full text
10

Aniche, Mauricio Finavaro. "Context-based code quality assessment." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-13092016-123733/.

Full text
Abstract:
Two tasks that software engineers constantly perform are writing code that is easy to evolve and maintain, and detecting poorly written pieces of code. For the former, software engineers commonly rely on well-known software architecture styles, such as Model-View-Controller (MVC). For the latter, they rely on code metrics and code smell detection approaches. However, up to now, these code metric and code smell approaches do not take the underlying architecture into account: all classes are assessed as if they were the same. In practice, software developers know that classes differ in terms of responsibilities and implementation, and thus we expect these classes to present different levels of coupling, cohesion, and complexity. As an example, in an MVC system, Controllers are responsible for the flow between the Model and the View, and Models are responsible for representing the system's business concepts. Thus, in this thesis, we evaluate the impact of architectural roles within a system architecture on code metrics and code smells. We performed an empirical analysis of 120 open source systems, and interviewed and surveyed more than 50 software developers. Our findings show that each architectural role has a different code metric value distribution, which is a likely consequence of their specific responsibilities. Thus, we propose SATT, an approach that provides specific thresholds for architectural roles that are significantly different from others in terms of code smells. We also show that classes that play a specific architectural role contain specific code smells, which developers perceive as problems, and which can impact a class's change- and defect-proneness.
Based on our findings, we suggest that developers understand the responsibilities of each architectural role in their system architecture, so that code metric and code smell techniques can provide more accurate feedback.
11

Korn, Alexandra. "Information System Quality Assessment Methods." Master's thesis, Vysoká škola ekonomická v Praze, 2014. http://www.nusl.cz/ntk/nusl-193230.

Full text
Abstract:
This thesis explores the challenging topic of information system quality assessment, mainly process assessment. The term Information System Quality is defined, and different approaches to defining quality for different domains of information systems are outlined. The main methods of process assessment are reviewed and their relationships described. Process assessment methods are divided into two categories: ISO standards and best practices. The main objective of this work is the application of the gained theoretical knowledge to process assessment with the CobiT 4.1 and CobiT 5.0 frameworks, and a comparison of the results. The objective was achieved through consultation with the process owners and questionnaires completed by the management of OnLine S.r.l. Additionally, the targeted capability level in CobiT 5.0 is compared with the actual, achieved level.
12

Alam, Md. "Automatic ECG signal quality assessment." Master's thesis, University of Oulu, 2019. http://jultika.oulu.fi/Record/nbnfioulu-201906052442.

Full text
Abstract:
Abstract. The quality assessment of signals has been a research topic for many years, as it is closely related to the problem of false alarms. Automatic quality assessment and classification of signals can play a vital role in the development of robust unsupervised electrocardiogram (ECG) analysis, and the development of efficient algorithms for the quality control of ECG recordings is essential to improving healthcare. An ECG signal can be intermixed with many kinds of unwanted noise, so it is important to assess its quality before further biomedical inspection. To that end, we developed an efficient algorithm that uses a set of basic quality features to classify ECG signals. It offers an effective way for unskilled personnel to acquire a good-quality ECG signal in real time, for instance in rural areas where there is not enough expertise in this field: using this method, they can quickly determine whether an ECG signal is acceptable for further inspection. The method was used to assess the quality of the ECG signals in the training set of the PhysioNet/Computing in Cardiology Challenge 2011, giving a correct interpretation of signal quality of 93.08%, corresponding to a sensitivity of 96.53% and a specificity of 86.76%.
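The accuracy, sensitivity and specificity quoted above are the standard rates derived from a binary confusion matrix. Purely as an illustration of the formulas (the counts below are hypothetical, not the challenge data):

```python
def binary_rates(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) from confusion-matrix counts."""
    total = tp + fn + tn + fp
    return {
        "accuracy": (tp + tn) / total,          # all correct decisions
        "sensitivity": tp / (tp + fn),          # acceptable signals correctly accepted
        "specificity": tn / (tn + fp),          # unacceptable signals correctly rejected
    }

# Hypothetical counts, chosen only to demonstrate the computation.
print(binary_rates(tp=90, fn=10, tn=80, fp=20))
```

Reporting sensitivity and specificity alongside overall accuracy matters here because the two classes (acceptable vs. unacceptable recordings) are typically imbalanced.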
13

Lothian, Andrew. "Landscape quality assessment of South Australia." Title page, table of contents, abstract and detailed contents only, 2000. http://hdl.handle.net/2440/37804.

Full text
Abstract:
The object of this thesis is to provide, through a thorough analysis of human perception of and interaction with aesthetics and landscape quality, a comprehensive basis on which to develop a credible methodology for the large-scale assessment of perceived landscape quality. This analysis is gained by inquiring in depth into a range of theoretical constructs from key disciplines, cultural aspects, and empirical studies covering: 1. the contribution of philosophers to aesthetics; 2. the psychology of perception and colour; 3. the contribution of Gestalt psychology to aesthetics; 4. the psychoanalytical construct of human responses to aesthetics; 5. the influence of culture on landscape preferences, tracing the changing perceptions of mountains, the portrayal of landscapes in art, and the design of parks and gardens; 6. a review of over 200 surveys of landscape quality in the late 20th century, including typologies and theories of landscape quality. Based on this analysis and the knowledge gained, an empirical study is formulated and conducted, comprising a study of the landscape quality of South Australia, an area of nearly 1 million km². This involves, firstly, the acquisition of data covering the delineation of landscape character regions for the State, photography of these landscapes, derivation of a set of representative slides, and rating of these by groups of participants. Secondly, these preference ratings are comprehensively analysed on the basis of the attributes of the scenes, covering land form, land cover, land use, water bodies, naturalism, diversity and colour. Thirdly, the results are applied as follows: 1. a map of the landscape quality of South Australia is derived; 2. the results are used to predict the effect that changes in land use (e.g. clearance of trees) will have on landscape quality; 3. the theoretical constructs of landscape quality are evaluated on the basis of the preference ratings; 4. a protocol is detailed to guide the undertaking of large-scale landscape quality assessment. The thesis thus fulfils the objective of conducting a thorough analysis of human perception of and interaction with aesthetics and landscape quality, providing a basis for developing a credible methodology for the large-scale assessment of perceived landscape quality. Thesis (Ph.D.)--School of Social Sciences, 2000.
14

Gens, Rüdiger. "Quality assessment of SAR interferometric data." Hannover : Fachrichtung Vermessungswesen der Univ, 1998. http://deposit.ddb.de/cgi-bin/dokserv?idn=95607121X.

Full text
15

Tabladillo, Mark Z. "Quality management climate assessment in healthcare." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/24162.

Full text
16

Tilney, Henry Simon. "Quality assessment in rectal cancer surgery." Thesis, Imperial College London, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.502120.

Full text
17

Verhagen, Arianne Petra. "Quality assessment of randomised clinical trials." [Maastricht] : Maastricht : Universitaire Pers Maastricht ; University Library, Maastricht University [Host], 1999. http://arno.unimaas.nl/show.cgi?fid=6863.

Full text
18

Lindquist, Malin. "Electronic tongue for water quality assessment /." Örebro : Örebro universitetsbibliotek, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-870.

Full text
19

Yang, Kai-Chieh. "Perceptual quality assessment for compressed video." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2007. http://wwwlib.umi.com/cr/ucsd/fullcit?p3284171.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2007. Title from first page of PDF file (viewed Mar. 14, 2007). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 149-156).
20

Sharma, Monika. "New approaches to wood quality assessment." Thesis, University of Canterbury. School of Forestry, 2013. http://hdl.handle.net/10092/7549.

Full text
Abstract:
This study approaches wood quality in young trees by very early screening – and consequent selection for propagation – on the basis of physical and mechanical properties. In chapter 1 corewood properties are reviewed and the importance and problems associated with early screening are discussed. Due to randomly distributed reaction wood in young trees it is advantageous to lean trees to avoid intermixing of the two wood types and minimise any uncertainty in the results. In chapter 2 physical and mechanical properties are described for opposite and compression wood in a population of Pinus radiata comprising of 50 families, at a young (<3 years) age. The dynamic stiffness was determined using the resonance acoustic technique. Density was measured using water displacement method, and longitudinal and volumetric shrinkage were measured from green to ~5% moisture content. The compression wood and opposite wood differ significantly in all the measured properties. Compression wood was characterised by high density and high longitudinal shrinkage. The mean stiffness of opposite wood was 3.0 GPa with a mean standard deviation of 0.39, and the mean longitudinal shrinkage of opposite wood was 0.99% with mean standard deviation of 0.31 across the samples examined. This variation in stiffness and longitudinal shrinkage in opposite wood can be exploited to screen for wood quality. The variation in stiffness and longitudinal shrinkage within a family was comparable to variation among families. In spite of large within site variability it was possible to distinguish between the worst and the best families in opposite wood at young age. In chapter 3 ranking of selected families of Pinus radiata was done based on microfibril angle, which is considered as the main factor influencing both stiffness and longitudinal shrinkage. The ranking was compared with ranking done using acoustic velocity which is more practical and fast method of screening trees. 
The mean MFA in opposite wood was 39° with a mean standard deviation of 3.7 and in compression wood the mean MFA was 44° with a mean standard deviation of 2.9. The variation in MFA in opposite wood offers opportunities to breed for trees with low MFA. A strong negative correlation (R=-0.68) between acoustic velocity squared and MFA in opposite wood suggested that the resonance technique can be used effectively to screen very young wood rather than using MFA. At high MFA, the cell wall matrix also plays an important role in determining the mechanical and physical properties of the wood. At present the chemical composition of wood samples is determined by wet chemical analysis, which is time consuming and laborious. Therefore, it is impractical to characterise large numbers of samples. Mechanical properties, particularly tanδ (dissipation of energy), which changes with temperature and frequency as the structure of the material changes at the molecular level, was studied using dynamic mechanical analysis (DMA). The idea was to assess if it can be used as a quality trait for tree screening instead of wet chemical analysis. Compression wood and opposite wood were characterised for storage modulus and tanδ at constant moisture content. In practice the instrument used, TA instrument Q800, was unable to provide the desired range of temperature and humidity so no glass transition at 9% moisture content in the temperature range of 10°C to 85°C at 1 and 10 Hz frequency was observed that might be attributed to the hemicelluloses (or lignin). In spite of the huge difference in chemical composition of opposite and compression wood, the difference in their mean tanδ at 25°C and 1 Hz values was just 7%. The positive correlation between MFA and tanδ in opposite wood suggested that MFA also plays a role in the dissipation of energy. 
The strong relationship between storage modulus and dynamic modulus (R = 0.74) again supports the reliability of the resonance technique for screening young wood for stiffness. Concurrently, eighty-seven two-year-old leant Eucalyptus regnans trees were studied for growth strain, along with other physical and mechanical properties, independently in tension wood and opposite wood. Leant Eucalyptus regnans trees vary in their average growth strain. The strong correlation between measured and calculated strain (R = 0.93) suggests that the quick-split method can be used to screen large populations for growth stresses. Tension wood was characterised by high density and was three times as stiff as opposite wood, with twice its volumetric shrinkage. The high longitudinal shrinkage in opposite wood could be due to the comparatively high MFAs in the opposite wood of young trees. There was no correlation between growth strain values and the other measured properties in opposite wood. It is therefore possible to screen for growth strain at age two without any adverse effect on stiffness and shrinkage properties.
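The resonance acoustic stiffness measurements described in these abstracts rest on the standard relation E = ρv², with the velocity obtained from the fundamental longitudinal resonance frequency. A minimal sketch of that calculation (the sample values below are hypothetical, not taken from the thesis):

```python
def acoustic_velocity(length_m: float, fundamental_hz: float) -> float:
    """Longitudinal wave velocity from the first resonance mode: v = 2 * L * f."""
    return 2.0 * length_m * fundamental_hz

def dynamic_stiffness_gpa(density_kg_m3: float, velocity_m_s: float) -> float:
    """Dynamic modulus of elasticity E = rho * v^2, converted from Pa to GPa."""
    return density_kg_m3 * velocity_m_s ** 2 / 1e9

# Hypothetical green corewood sample: 1.0 m long, resonating at 850 Hz,
# density 1000 kg/m^3 (green wood is close to the density of water).
v = acoustic_velocity(1.0, 850.0)      # 1700 m/s
e = dynamic_stiffness_gpa(1000.0, v)   # 2.89 GPa
print(round(v, 1), round(e, 2))
```

The 2.89 GPa result is only meant to show that numbers of this kind fall in the low-GPa range the abstract reports for young opposite wood.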
APA, Harvard, Vancouver, ISO, and other styles
21

Oberoi, Usha. "Quality assessment of a service product." Thesis, Bournemouth University, 1989. http://eprints.bournemouth.ac.uk/369/.

Full text
Abstract:
This study brings together two bodies of literature, one concerned with the character of services and the other concerned with the nature of quality, in order to explore the nature and possible forms of measurement of service quality. It uses the conference hotel service product as a vehicle for examining judgements about overall service quality. A systematic approach, through a multi-staged methodology, is evolved by first identifying what the product consists of; secondly by establishing what the evaluative attributes are; thirdly by assessing levels of perceived performance on the evaluative attributes and, crucially, the assessment of the overall performance of the product. By using statistical techniques, the evaluative attributes of perceived net quality are examined. This is achieved by analysing which attributes fulfil minimum requirements and which attributes can increase a positive perception of net quality. In addition, the impact of the attributes on net quality is established. The study shows that the specific product consists of a multi-dimensional combination of attributes in varying degrees. The crucial attribute is shown to be dependability of management and staff. In addition, the study reveals that net quality is not only a reflection of incidents of satisfaction with the physical (commodities and performed activities); it also needs to take into consideration human interaction as a component in itself. In a wider context the study gives an indication of how the perceived net quality of a product, with a high degree of an activity component, can be examined.
APA, Harvard, Vancouver, ISO, and other styles
22

Rix, Antony W. "Perceptual techniques in audio quality assessment." Thesis, University of Edinburgh, 2003. http://hdl.handle.net/1842/14286.

Full text
Abstract:
This thesis discusses quality assessment of audio communications systems, in particular telephone networks. A new technique for time-delay estimation based on a smoothed weighted histogram of frame-by-frame delays is presented. This has low complexity and is found to be more robust to non-linear distortions typical of telephone networks. This technique is further extended to identify piecewise constant delay, enabling models to be used for assessing packet-based transmission such as voice over IP, where delay may change several times during a measurement. It is shown that equalisation improves the accuracy of perceptual models for measurements that may include analogue or acoustic components. Linear transfer function estimation is found to be unreliable due to non-linear distortions. Spectral difference and phaseless cross-spectrum estimation methods for identifying and equalising the linear transfer function are implemented for this application, operating in the filter-bank and short-term Fourier spectrum domains. This thesis provides the first detailed examination of the process of selecting and mapping multiple objective perceptual distortion parameters to estimated subjective quality. The systematic variation of subjective opinion between tests is examined and addressed using a new method of monotonic polynomial regression. The effect on conventional regression techniques, and a new joint optimisation process, are considered.
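The smoothed weighted histogram of frame-by-frame delays can be illustrated with a toy sketch: a delay is estimated for each frame by cross-correlation, each estimate is weighted by its correlation peak, and the mode of the smoothed histogram gives the overall delay. This is only a simplified illustration of the general idea, not the exact algorithm of the thesis:

```python
import numpy as np

def frame_delays(ref, deg, frame_len, max_lag):
    """Per-frame delay estimates via cross-correlation; the correlation
    peak height is kept as a confidence weight for each estimate."""
    delays, weights = [], []
    for start in range(0, min(len(ref), len(deg)) - frame_len, frame_len):
        r = ref[start:start + frame_len]
        d = deg[start:start + frame_len]
        corr = np.correlate(d, r, mode="full")
        lags = np.arange(-frame_len + 1, frame_len)
        keep = np.abs(lags) <= max_lag
        best = np.argmax(corr[keep])
        delays.append(lags[keep][best])
        weights.append(corr[keep][best])
    return np.array(delays), np.array(weights)

def histogram_delay(delays, weights, max_lag):
    """Weighted histogram of frame delays, lightly smoothed; its mode
    is taken as the overall delay estimate."""
    hist = np.zeros(2 * max_lag + 1)
    for d, w in zip(delays, weights):
        hist[int(d) + max_lag] += max(w, 0.0)
    hist = np.convolve(hist, np.array([0.25, 0.5, 0.25]), mode="same")
    return int(np.argmax(hist)) - max_lag

rng = np.random.default_rng(0)
ref = rng.standard_normal(4000)
deg = np.roll(ref, 5) + 0.1 * rng.standard_normal(4000)  # delayed, noisy copy
d, w = frame_delays(ref, deg, frame_len=400, max_lag=20)
print(histogram_delay(d, w, max_lag=20))  # recovers the 5-sample delay
```

The weighting means frames with weak or ambiguous correlation (e.g. through non-linear distortion) contribute less, which is the property that makes histogram-based estimators robust.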
APA, Harvard, Vancouver, ISO, and other styles
23

Aljumaili, Mustafa. "Data Quality Assessment : Applied in Maintenance." Doctoral thesis, Luleå tekniska universitet, Drift, underhåll och akustik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-26088.

Full text
Abstract:
Approved; 2016; 20160126 (musalj); The following person will defend their thesis for the degree of Doctor of Technology. Name: Mustafa Aljumaili. Subject: Operation and Maintenance Engineering. Thesis: Data Quality Assessment: Applied in Maintenance. Opponent: Docent Mirka Kans, Department of Mechanical Engineering, Linnaeus University, Växjö. Chair: Professor Uday Kumar, Division of Operation, Maintenance and Acoustics, Department of Civil, Environmental and Natural Resources Engineering, Luleå University of Technology. Time: Friday 4 March 2016, 10:00. Place: F1031, Luleå University of Technology
APA, Harvard, Vancouver, ISO, and other styles
24

Zhang, Wei. "Visual saliency in image quality assessment." Thesis, Cardiff University, 2017. http://orca.cf.ac.uk/100239/.

Full text
Abstract:
Advances in image quality assessment have shown the benefits of modelling functional components of the human visual system in image quality metrics. Visual saliency, a crucial aspect of the human visual system, has been increasingly investigated in recent years. Current applications of visual saliency in image quality metrics are limited by our knowledge of the relation between visual saliency and quality perception; issues regarding how to simulate and integrate visual saliency in image quality metrics remain. This thesis presents psychophysical experiments and computational models relevant to the perceptually-optimised use of visual saliency in image quality metrics. We first systematically validated the capability of computational saliency to improve image quality metrics, providing practical guidance on how to select suitable saliency models, which image quality metrics can benefit from saliency integration, and how the added value of saliency depends on image distortion type. To better understand the relation between saliency and image quality, an eye-tracking experiment with a reliable experimental methodology was designed to obtain ground truth fixation data, and significant findings on the interactions between saliency and visual distortion were discussed. Based on these findings, a saliency integration approach taking into account the impact of distortion on saliency deployment was proposed. We also devised an algorithm which adaptively incorporates saliency in image quality metrics based on saliency dispersion. Moreover, we further investigated the plausibility of measuring image quality based on the deviation of saliency induced by distortion, and devised an image quality metric based on measuring saliency deviation. This thesis demonstrates that the added value of saliency in image quality metrics can be optimised by taking into account the interactions between saliency and visual distortion.
This thesis also demonstrates that the deviation of fixation deployment due to distortion can be used as a proxy for the prediction of image quality.
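A common way of integrating saliency into an image quality metric, as studied in this thesis, is to pool a local quality/distortion map with saliency-derived weights instead of a uniform average. A minimal sketch with invented numbers:

```python
import numpy as np

def saliency_weighted_score(quality_map, saliency_map, eps=1e-12):
    """Pool a local quality map into one score, weighting each location
    by its normalised saliency instead of averaging uniformly."""
    w = saliency_map / (saliency_map.sum() + eps)
    return float((quality_map * w).sum())

# Hypothetical 4x4 example: distortion is worst in the salient top-left corner.
quality = np.array([[0.2, 0.3, 0.9, 0.9],
                    [0.3, 0.4, 0.9, 0.9],
                    [0.9, 0.9, 0.9, 0.9],
                    [0.9, 0.9, 0.9, 0.9]])
saliency = np.zeros((4, 4))
saliency[:2, :2] = 1.0  # viewers fixate here

print(saliency_weighted_score(quality, saliency))  # 0.3: dominated by the salient region
print(quality.mean())                              # 0.75: uniform pooling hides the artefact
```

The gap between the two scores illustrates why saliency weighting can improve agreement with subjective judgements when distortion coincides with fixated regions.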
APA, Harvard, Vancouver, ISO, and other styles
25

Lopes, Marta Filipa Lobão. "Ecological quality assessment in transitional systems." Doctoral thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/14856.

Full text
Abstract:
Doctoral Programme in Biology<br>Estuaries are poles of attraction for human settlement, which is a source of pressure on surface water bodies. The implementation of the European Water Framework Directive (WFD, 2000/60/EC) has increased research aimed at developing methodologies to assess the Ecological Quality Status (EQS) of aquatic ecosystems. Transitional systems are naturally stressed and characterized by highly dynamic physical, chemical and hydro-morphologic conditions and by species with a higher level of tolerance to change, making it more difficult to develop suitable quality indicators for these systems. The general purpose of this study is to test the ability of synthesis descriptors, including a primary (S, taxa richness) and a derived biological variable (H', Shannon-Wiener diversity), biotic indices (AMBI and M-AMBI), body size properties (abundance distribution by body size classes, length, weight and length-weight relationships) and non-taxonomic indices (ISS), as well as functional indicators related to the decomposition rates of various experimental substrates, a macrophyte (Phragmites australis) and an alga (Fucus vesiculosus), to evaluate the environmental quality of transitional systems. This study was carried out in one of the most pristine channels of the Ria de Aveiro, the Mira Channel, along a full salinity gradient, in an area of sediment contamination by metals and metalloids, the Estarreja Channel, and in two reference channels (Canelas and Salreu). Two different sampling techniques were used: the leaf-bag technique and a hand-held corer. In the Mira Channel, the alga and the macrophyte presented opposite trends in decomposition rate along the salinity gradient, with the decomposition rates of the alga always higher than those of the macrophyte. 
The decomposition rates of the macrophyte and the alga were higher in the mid estuary and in higher salinity areas, respectively, corresponding to the preferential distribution areas of each species. The macrobenthic fauna associated with the decaying substrates and with an artificial control substrate showed equally well the benthic succession from the marine to the freshwater areas and, despite the strong differences in decay rates, no significant differences were found between the benthic communities associated with the alga and the macrophyte. The body size properties of the macrobenthic fauna associated with the P. australis leaf-bags (1 mm and 5 mm) and corer samples were studied along the full salinity gradient. The dominant species of the sub-set of measured specimens were not the same as those of the original macrobenthic fauna sampled but, despite that, the sub-set of measured specimens was also able to show the benthic succession from the marine to the freshwater areas. The body size abundance distribution of the benthic macroinvertebrates according to the ISS size classes did not show a particular trend for any sampler along the salinity gradient. Significant differences were found in the length, weight and length-weight relationships of Annelids, Arthropods and Molluscs, and even of some species, along the salinity gradient. No significant differences were found in the AMBI, M-AMBI and ISS values along the salinity gradient for any of the samplers. The EQS of the corer samples obtained using the M-AMBI was lower than that of the leaf-bags. The EQS obtained with the ISS was higher than that obtained with the M-AMBI for the leaf-bags but not for the corer samples. The ecological effects of contaminated sediments associated with the industrial chemical effluents discharged into the Estarreja Channel were studied a decade after the emissions ceased, using the Sediment Quality Triad approach and two reference channels. 
The results showed that although the emissions ceased in 2004, the sediment remains polluted with high levels of metals and metalloids, available for bioaccumulation and with severe consequences at the community level. The sediment contamination problem was also studied using the leaf-bag technique with a macrophyte, an alga and a control substrate. The results showed that the decay rates, the associated macrofauna and the application of the AMBI, M-AMBI and ISS indices to the mesh-bag samples were not able to identify the sediment contamination. Contrary to the AMBI, the M-AMBI and the ISS showed significant differences between the contaminated and the reference channels for the corer samples. Despite this statistical significance, the interest of using these complex biotic indices can be questioned, when much simpler ones, like S and H', allow the same conclusions to be reached.
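The leaf-bag decomposition rates discussed above are conventionally obtained by fitting a single negative exponential to dry-mass loss; a sketch under that standard assumption (the figures are invented, and the thesis may use a different model):

```python
import math

def decay_constant(m0: float, mt: float, days: float) -> float:
    """Decay constant k from the single negative exponential model
    M(t) = M0 * exp(-k * t), solved as k = -ln(Mt / M0) / t."""
    return -math.log(mt / m0) / days

# Hypothetical leaf-bag data: 10 g initial dry mass, 6.5 g remaining after 30 days.
k = decay_constant(10.0, 6.5, 30.0)
print(round(k, 4))  # ~0.0144 per day
```

Comparing k between substrates (alga vs. macrophyte) or between stations along a salinity gradient is then a direct way to express the functional differences the abstract describes.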
APA, Harvard, Vancouver, ISO, and other styles
26

Komak, Wagma, Jeremy Smart, and Jennifer White. "Quality Assessment of Internet Pharmaceutical Products." The University of Arizona, 2007. http://hdl.handle.net/10150/624403.

Full text
Abstract:
Class of 2007 Abstract<br>Objectives: The purpose of this study is to assess the quality of study medications obtained without a prescription through international websites. Methods: Samples of levothyroxine, warfarin, and sildenafil were obtained through various websites and compared to U.S. standards. Each sample was physically evaluated for weight, color, shape, and external tablet markings. High performance liquid chromatography (HPLC) was performed to quantify the amount of active ingredient. Results: When physically inspected, only 3 of the 9 lots met FDA labeling requirements. Three of the 60 individual levothyroxine tablets (20 tablets from each of 3 lots) were out of the USP acceptable range (90%-110%). For warfarin, 16 of the 60 individual tablets (20 tablets from each of 3 lots) were out of the USP acceptable range (95%-105%). When averaged, each of the lots for both levothyroxine and warfarin was within its USP acceptable range. As sildenafil is not available as a generic in the U.S., there is no USP standard acceptable range for comparison; all of the sildenafil samples fell within 90%-105% of Viagra® tablets obtained from a local pharmacy. Conclusions: While there were a few samples outside of the U.S. acceptable range, the majority of samples analyzed for active ingredient were within the range published in the USP. While this study presented interesting findings, further evaluation in larger studies is needed to properly assess the quality of foreign medications purchased over the internet.
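The USP range check applied to the assay results can be sketched as a simple acceptance test (the assay percentages below are invented, not the study's data; the ranges are the ones quoted in the abstract):

```python
def within_usp_range(measured_pct: float, low: float, high: float) -> bool:
    """Check an assay result (percent of label claim) against an acceptance range."""
    return low <= measured_pct <= high

# Hypothetical assay results, percent of label claim:
levothyroxine = [96.5, 108.2, 89.1]   # range used in the study: 90-110%
warfarin = [97.0, 94.2, 105.5]        # range used in the study: 95-105%

levo_fails = [x for x in levothyroxine if not within_usp_range(x, 90, 110)]
warf_fails = [x for x in warfarin if not within_usp_range(x, 95, 105)]
print(levo_fails)  # [89.1]
print(warf_fails)  # [94.2, 105.5]
```

The same check applied to per-lot means rather than individual tablets explains why the averaged lots could pass even when individual tablets failed.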
APA, Harvard, Vancouver, ISO, and other styles
27

Jung, Agata. "Comparison of Video Quality Assessment Methods." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15062.

Full text
Abstract:
Context: The newest video coding standard, High Efficiency Video Coding (HEVC), needs an appropriate coder to fully use its potential. There are many video quality assessment methods; these methods are necessary to establish the quality of a video. Objectives: This thesis is a comparison of video quality assessment methods. The objective is to find out which objective method is the most similar to the subjective method. The videos used in the tests are encoded in the H.265/HEVC standard. Methods: For testing the MSE, PSNR and SSIM methods, special software was created in MATLAB; for the VQM method, downloaded software was used. Results and conclusions: For videos watched on a mobile device, PSNR is the most similar to the subjective metric; however, for videos watched on a television screen, VQM is the most similar to the subjective metric. Keywords: Video Quality Assessment, Video Quality Prediction, Video Compression, Video Quality Metrics
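The full-reference MSE and PSNR computations compared in this thesis are standard; a minimal per-frame sketch for 8-bit frames (the test frames are synthetic):

```python
import numpy as np

def mse(ref: np.ndarray, test: np.ndarray) -> float:
    """Mean squared error between a reference and a test frame."""
    return float(np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2))

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    err = mse(ref, test)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
noise = rng.integers(-5, 6, size=(64, 64))
noisy = np.clip(frame.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(round(psnr(frame, noisy), 1))  # roughly 38 dB for this noise level
```

A whole-sequence PSNR is then typically the average of these per-frame values, which is what makes PSNR cheap compared with perceptual models such as VQM.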
APA, Harvard, Vancouver, ISO, and other styles
28

Kırer, Tuğba Tayfur Gökmen. "Groundwater quality assessment in Torbalı region/." [s.l.]: [s.n.], 2002. http://library.iyte.edu.tr/tezler/master/cevremuh/T000144.rar.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Ivkovic, Goran. "An Algorithm for Image Quality Assessment." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Djedaini, Mahfoud. "Automatic assessment of OLAP exploration quality." Thesis, Tours, 2017. http://www.theses.fr/2017TOUR4038/document.

Full text
Abstract:
Before the advent of Big Data, the amount of data contained in databases was relatively small and therefore rather simple to analyse. In this context, the main challenge in the field was to optimise data storage, but above all the response time of Database Management Systems (DBMS). Many benchmarks, notably those of the TPC consortium, were established to allow different existing systems to be evaluated under similar conditions. The arrival of Big Data, however, completely changed the situation, with more and more data generated every day. Alongside the increase in available memory, new storage methods have emerged based on distributed systems, such as the HDFS file system used notably in Hadoop to cover the storage and processing needs of Big Data. This increase in data volume makes data analysis much more difficult. In this context, the goal is not so much to measure the speed of data retrieval, but rather to produce coherent sequences of queries that quickly identify areas of interest in the data, so that those areas can be analysed in more depth and information extracted to support informed decision-making<br>In a Big Data context, traditional data analysis is becoming more and more tedious. Many approaches have been designed and developed to support analysts in their exploration tasks. However, there is no automatic, unified method for evaluating the quality of support for these different approaches. Current benchmarks focus mainly on the evaluation of systems in terms of temporal, energy or financial performance. In this thesis, we propose a model, based on supervised machine learning methods, to evaluate the quality of an OLAP exploration. 
We use this model to build an evaluation benchmark for exploration support systems, the general principle of which is to let these systems generate explorations and then evaluate them through the explorations they produce.
APA, Harvard, Vancouver, ISO, and other styles
31

Kerr, Grégoire Henry Gérard. "Quality assessment for hyperspectral airborne systems." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2015. http://dx.doi.org/10.18452/17275.

Full text
Abstract:
This work proposes a methodology for performing a quality assessment on the complete airborne hyperspectral system, ranging from data acquisition up to land-product generation. It is compliant with other quality assessment initiatives, such as the European Facility for Airborne Research (EUFAR), the Quality Assessment for Earth observation work-group (QA4EO) and the Joint Committee for Guides in Metrology (JCGM). These are extended into a generic framework allowing for a flexible but reliable quality assessment strategy. Since airborne hyperspectral imagery is usually acquired in several partially overlapping flight-lines, it is proposed to use this information redundancy to retrieve the imagery's internal variability. The underlying method is generic and can be easily introduced in the hyperspectral processing chain of the German Aerospace Center (DLR). The comparison of two overlapping flight-lines is not straightforward, if only because of the geo-location errors present in the data. A first step consists in retrieving the relative variability of the pixels' geo-locations, hence providing pairs of pixels imaging the same areas. Subsequently, these pairs of pixels are used to obtain quality indicators accounting for the reproducibility of mapping-products, thus extending the EUFAR's quality layers up to land-products. The third stage of the analysis consists of using these reliability results to improve the mapping-products: it is proposed to maximise the reliability over the mapping-methods' parameters. Finally, the repeatability assessment is back-propagated to the hyperspectral data itself. As a result, an estimator of the reflectance variability (including model- and scene-induced uncertainties) is proposed by means of a blind-deconvolution approach. Altogether, this complements and extends the EUFAR quality layers with estimates of data and product repeatability, while providing confidence intervals as recommended by the JCGM and QA4EO.
APA, Harvard, Vancouver, ISO, and other styles
32

BOARELLI, Maria Chiara. "Chemical tools for food quality assessment." Doctoral thesis, Università degli Studi di Camerino, 2019. http://hdl.handle.net/11581/429270.

Full text
Abstract:
There is nowadays greater awareness of the health impact of pollutants emitted during cooking, from both commercial and domestic activities. In this study, a new system has been set up to analyse, by solid-phase microextraction and gas chromatography coupled to mass spectrometry (SPME-GC-MS), the volatile organic compounds (VOCs) emitted during cooking. This is done by aspirating the air above a cooking process into a polyethylene terephthalate (PET, Nalophan) bag. The bag allows the sample to be transported to the instrument location and the SPME extraction to be performed on the sampled air. The efficiency of different systems for performing the SPME extraction from the air contained in the bag was assessed using a standard mixture of alkanes in order to obtain sufficient sensitivity. The resulting system was then used to extract and analyse VOCs in air sampled while frying fries in sunflower oil. Several SPME extraction times (1 h, 3 h, 5 h, 7 h and 24 h) were evaluated, yielding results that are useful for both short and long extraction times. Three different filters were then evaluated. The developed system, combining the use of olfactometric bags with SPME-GC-MS, is thus applied for the first time to the study of VOCs emitted during cooking; it allows the analysis to be performed easily, even on samples produced at sites far from the instrument location, with instrumentation available in most laboratories. The results show a different retention effect for each filter under investigation on the classes of molecules detected. In particular, one of the three examined filters was found to give better filtering performance than the other two, as confirmed by statistical analysis. 
This study also addresses the characterization of high-quality monovarietal extra virgin olive oils (EVOOs) produced in the Marche Region (Italy), in order to find possible new markers of quality. Five different cultivars were selected and investigated: Ascolana Tenera, Coroncina, Mignola, Piantone di Mogliano and Raggia. The study was carried out in two different years (2015 and 2016) to highlight possible correlations between the same varieties, and a comparison with EVOOs from large-scale distribution was carried out. Chemical analysis and sensory characterization were performed, paying particular attention to the determination of molecules responsible for the sensory and health-related properties, such as volatile and polyphenolic substances. Ergosterol ((3β,22E)-ergosta-5,7,22-trien-3-ol) is produced by fungi and yeasts, i.e. organisms involved in the degradation of olives. In the present study, it was investigated whether ergosterol could be used as a marker to assess the quality of the olives from which an oil is produced. Ergosterol was quantified in EVOOs of different quality levels by developing an on-line HPLC-GC-MS method based on the on-line HPLC-GC method previously used to determine total sterol content. Oils were transmethylated to liberate ergosterol, which was isolated from the far larger amounts of other sterols by HPLC on silica gel and transferred to GC by concurrent eluent evaporation. Preliminary results are presented and discussed.
APA, Harvard, Vancouver, ISO, and other styles
33

CORCHS, SILVIA ELENA. "Image quality assessment for Digital documents." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/50461.

Full text
Abstract:
This thesis focuses on No Reference (NR) methods for Image Quality Assessment (IQA). A review of the IQA field is presented in Chapter 2, where the different IQA methods are described and classified; in particular, the application of IQA methods within a workflow chain is discussed. In Chapter 3 we focus on NR metrics for JPEG-blockiness and noise artifacts. It is generally assumed that subjective methods produce an actual estimate of the perceived quality, while objective methods produce values that should correlate as closely as possible with human perception. From the analysis of the regression curves that correlate objective and subjective data, we found that in some cases the metrics' predictions do not correspond with the subjective scores. After reviewing the available databases, we realized that the distortion ranges they consider are not in general representative of real-world applications. Therefore, in Chapter 4 the Imaging and Vision Lab (IVL) database is introduced, generated with the aim of assessing the quality of images corrupted by JPEG compression and noise. In Chapter 5 we approach the NR-IQA field as a classification problem. A framework based on machine learning classification is proposed that lets us evaluate how images can be classified into different groups or classes according to their quality. NR metrics are used as features, and the assigned classes are obtained from the psychovisual data. For the JPEG distortion case, the feature space of the classifiers is built using each NR metric as a single feature and also using a pool of eleven NR metrics. Classification into five and three classes was addressed. In the former case, the five classes correspond to the five categories recommended by the ITU (excellent, good, fair, poor, and bad) when designing image quality experiments; in the latter case, we were interested in classifying images as high, medium or low quality. 
The classifiers are trained and tested on different databases. The classifier obtained using the pool of metrics outperforms each single metric classifier. Better performance is obtained in the case of three classes. Considering an image as the combining of two signals, content and distortion, we note that the crosstalk between both signals influences both subjective and objective quality assessment. We address this problem in Chapter 6 where our working hypothesis is that regression can be improved if performed within a group of images that present similar contents in terms of low level features. The criteria chosen to divide the images in different groups is the image complexity. The proposed strategy consists on two steps: the images (of a given database) are first classified in three groups of low, medium and high complexity. In a second step, regression is performed within each of these groups separately. The strategy is tested for different NR metrics for JPEG-blockiness and noise artifacts, different databases are considered. Correlation coefficients are computed and statistical significance tests are applied. The gain in performance depends on the metric and distortion considered. Summarizing, the two main proposals of this research work, i.e. the classification approach that combines several NR metrics and the grouping strategy, are able to outperform the correlation between subjective and objective data for the case of JPEG-blockiness. Both strategies can be extended to consider other type of distortions.
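The two-step grouping strategy (classify by complexity, then regress per group) can be sketched in a few lines. The complexity score in [0, 1], the group thresholds, and the plain least-squares fit below are illustrative assumptions, not values from the thesis:

```python
import statistics

def complexity_group(complexity, low_cut=0.33, high_cut=0.66):
    """Step 1: assign an image to a complexity group (thresholds are placeholders)."""
    if complexity < low_cut:
        return "low"
    if complexity < high_cut:
        return "medium"
    return "high"

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def groupwise_regression(records):
    """Step 2: records are (complexity, metric_score, mos) triples;
    fit one metric-to-MOS line per complexity group separately."""
    groups = {}
    for c, score, mos in records:
        groups.setdefault(complexity_group(c), []).append((score, mos))
    return {g: fit_line(*zip(*pts)) for g, pts in groups.items()}
```

In practice the regression would relate an NR metric's output to the MOS within each complexity group, instead of one regression over the whole database.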
APA, Harvard, Vancouver, ISO, and other styles
34

Zhang, Zhengyu. "Quality Assessment of Light Field Images." Electronic Thesis or Diss., Rennes, INSA, 2024. http://www.theses.fr/2024ISAR0002.

Full text
Abstract:
Light Field Images (LFIs) have garnered remarkable interest and fascination due to their burgeoning significance in immersive applications. Since LFIs may be distorted at various stages from acquisition to visualization, Light Field Image Quality Assessment (LFIQA) is vitally important to monitor the potential impairments of LFI quality. The first contribution (Chapter 3) of this work focuses on developing two No-Reference (NR) LFIQA metrics based on handcrafted features, in which texture information and wavelet information are exploited for quality evaluation. Then, in the second part (Chapter 4), we explore the potential of combining deep learning technology with the quality assessment of LFIs, and propose four deep learning-based LFIQA metrics tailored to different LFI characteristics, including three NR metrics and one Full-Reference (FR) metric. In the last part (Chapter 5), we conduct subjective experiments and propose a novel standard LFIQA database. Moreover, a benchmark of numerous state-of-the-art objective LFIQA metrics on the proposed database is provided.
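For context on what an FR light-field metric must improve upon, a naive baseline treats the light field as its set of sub-aperture views and averages a per-view fidelity score such as PSNR. This is a simplification for illustration only, not one of the thesis's deep-learning metrics:

```python
import math

def psnr(ref, dist, peak=255.0):
    """PSNR between two equally sized grayscale views (lists of rows)."""
    n = 0
    se = 0.0
    for r_row, d_row in zip(ref, dist):
        for r, d in zip(r_row, d_row):
            se += (r - d) ** 2
            n += 1
    mse = se / n
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

def lf_quality(ref_views, dist_views):
    """Naive FR light-field score: mean PSNR over all sub-aperture views."""
    scores = [psnr(r, d) for r, d in zip(ref_views, dist_views)]
    return sum(scores) / len(scores)
```

Such a baseline ignores the angular consistency between views, which is one reason learned LFIQA metrics are needed.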
APA, Harvard, Vancouver, ISO, and other styles
35

Tian, Shishun. "Image Quality Assessment of 3D Synthesized Views." Thesis, Rennes, INSA, 2019. http://www.theses.fr/2019ISAR0002/document.

Full text
Abstract:
Depth-Image-Based Rendering (DIBR) is a fundamental technology in several 3D-related applications, such as Free Viewpoint Video (FVV), Virtual Reality (VR), and Augmented Reality (AR). However, assessing the quality of DIBR-synthesized views raises new challenges, since this process induces new types of distortions that are inherently different from the distortions caused by video coding. This work is dedicated to better evaluating the quality of DIBR-synthesized views in immersive multimedia. In Chapter 2, we propose two completely no-reference (NR) metrics. The principle of the first NR metric, NIQSV, is to use a couple of opening and closing morphological operations to detect and measure distortions such as "blurry regions" and "crumbling". In the second NR metric, NIQSV+, we improve NIQSV by adding "black hole" and "stretching" detection. In Chapter 3, we propose two full-reference metrics to handle geometric distortions by using a dis-occlusion mask and a multi-resolution block matching method.
In Chapter 4, we present a new DIBR-synthesized image database with its associated subjective scores. This work focuses on the distortions induced only by the different DIBR synthesis methods, which determine the quality of experience (QoE) of these DIBR-related applications. In addition, we also conduct a benchmark of state-of-the-art objective quality assessment metrics for DIBR-synthesized views on this database. Chapter 5 concludes the contributions of this thesis and gives some directions for future work.
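The principle behind NIQSV (DIBR artifacts such as thin dark cracks respond to morphological opening and closing differently from natural content) can be illustrated with flat 3x3 min/max filters. The operator sizes and the scoring below are a toy sketch, not the thesis's actual definition:

```python
def _filter(img, op):
    """Apply op (min or max) over each pixel's 3x3 neighbourhood."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [img[yy][xx]
                     for yy in range(max(0, y - 1), min(h, y + 2))
                     for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = op(neigh)
    return out

def opening(img):   # erosion then dilation: removes small bright structures
    return _filter(_filter(img, min), max)

def closing(img):   # dilation then erosion: fills small dark structures
    return _filter(_filter(img, max), min)

def niqsv_like_score(img):
    """Mean absolute difference between the image and its closed-then-opened
    version: large values indicate thin dark artifacts that closing fills."""
    restored = opening(closing(img))
    h, w = len(img), len(img[0])
    return sum(abs(restored[y][x] - img[y][x])
               for y in range(h) for x in range(w)) / (h * w)
```

Natural content is largely invariant under this pair of operations, while crack-like synthesis artifacts are not, so the residual serves as a no-reference distortion cue.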
APA, Harvard, Vancouver, ISO, and other styles
36

Fickel, Jacqueline Jean. "Quality of care assessment: state Medicaid administrators' use of quality information." Full text (PDF) from UMI/Dissertation Abstracts International; access restricted to users with UT Austin EID, 2002. http://wwwlib.umi.com/cr/utexas/fullcit?p3077639.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Alkhattabi, Mona A. "Information quality assessment in e-learning systems." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4867.

Full text
Abstract:
E-learning systems provide a promising solution as an information exchange channel. Improved technology could mean faster and easier access to information, but it does not necessarily ensure the quality of this information. It is therefore essential to develop valid and reliable methods of quality measurement and to carry out careful information quality evaluations. Information quality frameworks are developed to measure the quality of information systems, generally from the designers' viewpoint. The recent proliferation of e-services, and of e-learning in particular, raises the need for a new quality framework in the context of e-learning systems. The main contribution of this thesis is to propose a new information quality framework, with 14 information quality attributes grouped in three quality dimensions: intrinsic, contextual representation, and accessibility. We report results based on original questionnaire data and factor analysis. Moreover, we validate the proposed framework using an empirical approach: we report validation results on the basis of data collected from an original questionnaire and structural equation modeling (SEM) analysis, confirmatory factor analysis (CFA) in particular. However, it is difficult to measure information quality in an e-learning context because the concept of information quality is complex, and the measurements are expected to be multidimensional in nature. Reliable measures need to be obtained in a systematic way, while considering the purpose of the measurement. Therefore, we start by adopting a Goal Question Metric (GQM) approach to develop a set of quality metrics for the identified quality attributes within the proposed framework. We then define an assessment model and measurement scheme based on a multi-element analysis technique.
The obtained results can be considered promising and positive, and they reveal that the framework and assessment scheme give good predictions of information quality within an e-learning context. This research makes novel contributions, as it proposes a solution to the problems arising from the absence of consensus regarding evaluation standards and methods for measuring information quality within an e-learning context. Also, it anticipates the feasibility of taking advantage of web mining techniques to automate the retrieval of the information required for quality measurement. This assessment model is useful to e-learning system designers, providers, and users, as it gives a comprehensive indication of the quality of information in such systems and also facilitates evaluation, comparison, and analysis of information quality.
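The roll-up that such a framework implies, from attribute scores to dimension scores to one overall quality value, can be sketched as follows. The attribute names, groupings, and weights here are placeholders, since the thesis derives its 14 attributes and their weights from factor analysis:

```python
# Hypothetical attribute-to-dimension mapping and weights (illustrative only).
DIMENSIONS = {
    "intrinsic": ["accuracy", "objectivity", "believability"],
    "contextual_representation": ["relevancy", "timeliness", "completeness"],
    "accessibility": ["availability", "response_time"],
}
WEIGHTS = {"intrinsic": 0.4, "contextual_representation": 0.35, "accessibility": 0.25}

def dimension_score(ratings, attrs):
    """Mean of the available attribute ratings within one dimension."""
    vals = [ratings[a] for a in attrs if a in ratings]
    return sum(vals) / len(vals)

def overall_quality(ratings):
    """Weighted mean of the three dimension scores (GQM-style roll-up)."""
    return sum(w * dimension_score(ratings, DIMENSIONS[d])
               for d, w in WEIGHTS.items())
```

A single weak attribute (say, a slow response time) pulls down only its own dimension, in proportion to that dimension's weight.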
APA, Harvard, Vancouver, ISO, and other styles
38

Alkhattabi, Mona Awad. "Information quality assessment in e-learning systems." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4867.

Full text
Abstract:
E-learning systems provide a promising solution as an information exchange channel. Improved technology could mean faster and easier access to information, but it does not necessarily ensure the quality of this information. It is therefore essential to develop valid and reliable methods of quality measurement and to carry out careful information quality evaluations. Information quality frameworks are developed to measure the quality of information systems, generally from the designers' viewpoint. The recent proliferation of e-services, and of e-learning in particular, raises the need for a new quality framework in the context of e-learning systems. The main contribution of this thesis is to propose a new information quality framework, with 14 information quality attributes grouped in three quality dimensions: intrinsic, contextual representation, and accessibility. We report results based on original questionnaire data and factor analysis. Moreover, we validate the proposed framework using an empirical approach: we report validation results on the basis of data collected from an original questionnaire and structural equation modeling (SEM) analysis, confirmatory factor analysis (CFA) in particular. However, it is difficult to measure information quality in an e-learning context because the concept of information quality is complex, and the measurements are expected to be multidimensional in nature. Reliable measures need to be obtained in a systematic way, while considering the purpose of the measurement. Therefore, we start by adopting a Goal Question Metric (GQM) approach to develop a set of quality metrics for the identified quality attributes within the proposed framework. We then define an assessment model and measurement scheme based on a multi-element analysis technique.
The obtained results can be considered promising and positive, and they reveal that the framework and assessment scheme give good predictions of information quality within an e-learning context. This research makes novel contributions, as it proposes a solution to the problems arising from the absence of consensus regarding evaluation standards and methods for measuring information quality within an e-learning context. Also, it anticipates the feasibility of taking advantage of web mining techniques to automate the retrieval of the information required for quality measurement. This assessment model is useful to e-learning system designers, providers, and users, as it gives a comprehensive indication of the quality of information in such systems and also facilitates evaluation, comparison, and analysis of information quality.
APA, Harvard, Vancouver, ISO, and other styles
39

Sarikan, Selim Sefa. "Visual Quality Assessment For Stereoscopic Video Sequences." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613689/index.pdf.

Full text
Abstract:
The aim of this study is to understand the effect of different depth levels on overall 3D quality and to develop an objective video quality metric for stereoscopic video sequences. The proposed method is designed to be used in the video coding stages to improve overall 3D video quality. This study includes both objective and subjective evaluations. Test sequences with different coding schemes are used. Computer simulation results show that overall quality has a strong correlation with the quality of the background, where disparity is smaller relative to the foreground. This correlation indicates that the background layer is more prone to coding errors. The results also show that content type is an important factor in determining visual quality.
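Findings such as "overall quality has a strong correlation with the quality of the background" are usually quantified with the Pearson correlation coefficient between the two sets of scores. For reference, the standard formula in plain code (not code from the thesis):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equally long sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Here one sequence would hold per-sequence background-layer quality scores and the other the overall subjective scores.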
APA, Harvard, Vancouver, ISO, and other styles
40

Gehrmann, Christoffer. "Translation Quality Assessment : A Model in Practice." Thesis, Högskolan i Halmstad, Sektionen för humaniora (HUM), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-16041.

Full text
Abstract:
When J. R. R. Tolkien's trilogy The Lord of the Rings was published in Swedish in 1959-1961, the translation by Åke Ohlmarks was considered by most critics to be excellent. According to Ohlmarks, even J. R. R. Tolkien himself and his son Christopher were very pleased with it, which Christopher told Ohlmarks when they met in 1975. This is, however, contradicted in Carpenter's authorised biography of Tolkien (1978), in which Tolkien is said to have been most negative towards the way Ohlmarks handled the text. Before the biography was published, Christopher Tolkien and Ohlmarks had become bitter enemies, which might explain the re-evaluation. The schism has been described by Ohlmarks in his book Tolkiens arv (1978). Ever since The Lord of the Rings came out in paperback in 1971, however, the quality of the translation has also been debated in Sweden. When I first read the books in English, I had the Swedish translation beside me. I soon discovered that Ohlmarks had taken great liberties with the text; I noticed that the descriptions were often more detailed in the Swedish translation than in the original, and it was this fact that first roused my interest. I therefore decided to attempt a translation quality assessment of a part of the text, using a model by Juliane House.
APA, Harvard, Vancouver, ISO, and other styles
41

Sendashonga, Mireille. "Image quality assessment using frequency domain transforms." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99537.

Full text
Abstract:
Measurement of image quality plays a central role in the optimization and evaluation of imaging systems. The most straightforward way to assess image quality is subjective evaluation by human observers, where the mean value of their scores is used as the quality measure. However, objective (quantitative) measures are needed because subjective evaluations are impractical and expensive. The aim of this thesis is to develop simple, low-complexity metrics for quality assessment of digital images.

Traditionally, the most widely used quantitative measures are the mean squared error and measures that model the human visual system. The proposed method uses the Discrete Cosine Transform and the Discrete Wavelet Transform to divide images into four frequency bands, and relates the visual quality of the distorted images to the weighted average of the mean squared error between original and distorted images within each band.

The performance of the metrics presented in this thesis is tested and validated on a large database of subjective quality ratings. Simulations show that the proposed metrics accurately predict visual quality and outperform current state-of-the-art methods with simple and easily implemented processing steps.

Extensions of the proposed image quality metrics are also investigated. More particularly, this thesis explores image quality assessment when the reference image is only partially available (reduced-reference settings), and presents a method for successfully quantifying the quality of distorted images in such settings.
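The band-weighted error idea can be sketched directly: transform both images, split the coefficients into four coarse bands, and take a weighted average of the per-band MSE. The naive DCT, the band split by coefficient index, and the weights below are illustrative placeholders, not the thesis's fitted values:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a square block (fine for small illustrative sizes)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = s
    return out

def banded_mse(ref, dist, weights=(0.5, 0.25, 0.15, 0.1)):
    """Weighted average of per-band MSE between DCT coefficients.
    A coefficient (u, v) is assigned to a band by u + v, a crude
    low-to-high frequency split; weights are placeholders."""
    n = len(ref)
    R, D = dct2(ref), dct2(dist)
    band_err = [0.0] * 4
    band_cnt = [0] * 4
    for u in range(n):
        for v in range(n):
            b = min(3, (u + v) * 4 // (2 * n - 1))
            band_err[b] += (R[u][v] - D[u][v]) ** 2
            band_cnt[b] += 1
    return sum(w * e / c for w, e, c in zip(weights, band_err, band_cnt))
```

Weighting low-frequency bands more heavily reflects the eye's greater sensitivity to errors in coarse structure than in fine detail.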
APA, Harvard, Vancouver, ISO, and other styles
42

Al-Dossari, Hmood Zafer. "Quality of service assessment over multiple attributes." Thesis, Cardiff University, 2011. http://orca.cf.ac.uk/55108/.

Full text
Abstract:
The development of the Internet and the World Wide Web has led to many services being offered electronically. When there is sufficient demand from consumers for a certain service, multiple providers may exist, each offering identical service functionality but with varying qualities. It is therefore desirable to be able to assess the quality of a service (QoS), so that service consumers can be given additional guidance in selecting their preferred services. Various methods have been proposed to assess QoS using the data collected by monitoring tools, but they do not deal adequately with multiple QoS attributes. Typically, these methods assume that the quality of a service may be assessed by first assessing the quality level delivered by each of its attributes individually, and then aggregating these in some way to give an overall verdict for the service. These methods, however, do not consider interaction among the multiple attributes of a service when some packaging of qualities exists (i.e. multiple levels of quality over multiple attributes for the same service). In this thesis, we propose a method that gives a better prediction in assessing QoS over multiple attributes, especially when the qualities of these attributes are monitored asynchronously. We do so by assessing QoS attributes collectively rather than individually, and employ a k-nearest-neighbour-based technique to deal with asynchronous data. To quantify the confidence of a QoS assessment, we present a probabilistic model that integrates two reliability measures: the number of QoS data items used in the assessment and the variation of data in this dataset. Our empirical evaluation shows that the new method gives a better prediction over multiple attributes, and thus provides better guidance than existing methods for consumers in selecting their preferred services.
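Assessing attributes collectively from asynchronously monitored data can be sketched with a k-nearest-neighbour lookup over timestamps: each attribute is estimated at a common query time, so the multi-attribute snapshot is judged jointly. This is a minimal illustration of the idea, with hypothetical attribute names, not the thesis's full probabilistic model:

```python
def knn_estimate(observations, t_query, k=3):
    """Estimate an attribute's value at t_query from asynchronous
    (timestamp, value) observations using the k nearest timestamps."""
    nearest = sorted(observations, key=lambda tv: abs(tv[0] - t_query))[:k]
    return sum(v for _, v in nearest) / len(nearest)

def assess_service(attribute_obs, t_query, k=3):
    """Collective snapshot of all attributes at one common time point."""
    return {name: knn_estimate(obs, t_query, k)
            for name, obs in attribute_obs.items()}
```

Because every attribute is interpolated to the same instant, interactions between attributes (e.g. quality packages) can be evaluated on aligned values rather than on whichever raw samples happen to exist.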
APA, Harvard, Vancouver, ISO, and other styles
43

Taji, Bahareh. "Signal Quality Assessment in Wearable ECG Devices." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/38851.

Full text
Abstract:
There is a current trend towards the use of wearable biomedical devices for the purpose of recording various biosignals, such as electrocardiograms (ECG). Wearable devices have different issues and challenges compared to nonwearable ones, including motion artifacts and contact characteristics related to body-conforming materials. Due to this susceptibility to noise and artifacts, signals acquired from wearable devices may lead to incorrect interpretations, including false alarms and misdiagnoses. This research addresses two challenges of wearable devices. First, it investigates the effect of applied pressure on biopotential electrodes that are in contact with the skin. The pressure affects the skin–electrode impedance, which impacts the quality of the acquired signal. We propose a setup for measuring skin–electrode impedance during a sequence of applied calibrated pressures. The Cole–Cole impedance model is utilized to model the skin–electrode interface. Model parameters are extracted and compared in each state of measurement with respect to the amount of pressure applied. The results indicate that there is a large change in the magnitude of the skin–electrode impedance when pressure is applied for the first time, and slight changes in impedance are observed with successive application and release of pressure. Second, this research assesses the quality of ECG signals to reduce issues related to poor-quality signals, such as false alarms. We design an algorithm based on Deep Belief Networks (DBN) to distinguish clean from contaminated ECGs and validate it using real clean ECG signals taken from the MIT-BIH Arrhythmia Database of Physionet together with signals contaminated by motion artifacts at varying signal-to-noise ratios (SNR). The results demonstrate that the algorithm can distinguish clean from contaminated signals with an accuracy of 99.5% for signals with an SNR of -10 dB.
Once low- and high-quality signals are separated, low-quality signals can undergo additional pre-processing to mitigate the contaminants, or they can simply be discarded. This approach is applied to reduce the false alarms caused by poor-quality ECG signals in atrial fibrillation (AFib) detection algorithms. We propose a signal quality gating system based on DBN and validate it with AFib signals taken from the MIT-BIH Atrial Fibrillation Database of Physionet. Without gating, the AFib detection accuracy was 87% for clean ECGs, but it markedly decreased as the SNR decreased, with an accuracy of 58.7% at an SNR of -20 dB. With signal quality gating, the accuracy remained high for clean ECGs (87%) and increased for low-SNR signals (81% for an SNR of -20 dB). Furthermore, since the desired level of quality is application-dependent, we design a DBN-based algorithm to quantify the quality of ECG signals. Real ECG signals with various types of arrhythmias, contaminated with motion artifacts at several SNR levels, are thereby classified based on their SNRs. The results show that our algorithm can perform a multi-class classification with an accuracy of 99.4% for signals with an SNR of -20 dB and an accuracy of 91.2% for signals with an SNR of 10 dB.
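The gating logic itself is simple once a quality estimate is available. The sketch below uses an oracle SNR computed against a known clean reference, whereas the thesis estimates quality with a DBN precisely because no clean reference exists at run time:

```python
import math

def snr_db(signal, noisy):
    """SNR of a noisy segment against its clean version, in dB."""
    p_sig = sum(s * s for s in signal)
    p_noise = sum((n - s) ** 2 for s, n in zip(signal, noisy))
    return 10.0 * math.log10(p_sig / p_noise)

def quality_gate(segments, clean_refs, threshold_db=0.0):
    """Pass only segments whose SNR exceeds the threshold; the rest would be
    re-processed or discarded before reaching the AFib detector."""
    return [seg for seg, ref in zip(segments, clean_refs)
            if snr_db(ref, seg) >= threshold_db]
```

Replacing the oracle SNR with a learned quality score leaves the gate structure unchanged: only segments judged good enough are forwarded to the detector.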
APA, Harvard, Vancouver, ISO, and other styles
44

Schnetzer, Matthias, Franz Astleithner, Predrag Cetkovic, Stefan Humer, Manuela Lenk, and Mathias Moser. "Quality Assessment of Imputations in Administrative Data." De Gruyter, 2015. http://dx.doi.org/10.1515/JOS-2015-0015.

Full text
Abstract:
This article contributes a framework for the quality assessment of imputations within a broader structure to evaluate the quality of register-based data. Four quality-related hyperdimensions examine the data processing from the raw-data level to the final statistics. Our focus lies on the quality assessment of different imputation steps and their influence on overall data quality. We suggest classification rates as a measure of accuracy of imputation and derive several computational approaches. (authors' abstract)
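The proposed accuracy measure, the classification rate of an imputation, reduces to the share of imputed categories that match the true ones. A minimal sketch:

```python
def classification_rate(true_values, imputed_values):
    """Share of imputed cases whose imputed category matches the true one."""
    matches = sum(t == i for t, i in zip(true_values, imputed_values))
    return matches / len(true_values)
```

In a register-based setting the "true" categories would come from a validation subset where the attribute is actually observed.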
APA, Harvard, Vancouver, ISO, and other styles
45

Freitas, Pedro Garcia. "Using texture measures for visual quality assessment." reponame:Repositório Institucional da UnB, 2017. http://repositorio.unb.br/handle/10482/31686.

Full text
Abstract:
Tese (doutorado)—Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2017.<br>Submitted by Raquel Viana (raquelviana@bce.unb.br) on 2018-04-19T17:18:07Z No. of bitstreams: 1 2017_PedroGarciaFreitas.pdf: 42146492 bytes, checksum: 48f490751ac049a6ed8f8255d1da4b66 (MD5)<br>Approved for entry into archive by Raquel Viana (raquelviana@bce.unb.br) on 2018-04-19T17:22:15Z (GMT) No. of bitstreams: 1 2017_PedroGarciaFreitas.pdf: 42146492 bytes, checksum: 48f490751ac049a6ed8f8255d1da4b66 (MD5)<br>Made available in DSpace on 2018-04-19T17:22:16Z (GMT). No. of bitstreams: 1 2017_PedroGarciaFreitas.pdf: 42146492 bytes, checksum: 48f490751ac049a6ed8f8255d1da4b66 (MD5) Previous issue date: 2018-04-19<br>Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).<br>Na última década, diversas aplicações multimídia tem gerado e distribuído conteúdos de imagens e vídeos digitais. Serviços de multimídia que tem ganhado um vasto interesse incluem televisão digital, jogos de vídeo e aplicações em tempo real operando sobre a Internet. De acordo com predições da CiscoTM, a percentagem do tráfego de dados de vídeo sobre a Internet era de 53% em 2014 e superará os 67% em 2018. Devido à esse aumento na demanda de conteúdo de dados visuais, a necessidade de métodos e ferramentas que estimem a qualidade da experiência (QoE) do consumidor é enorme. Entre os aspectos que contribuem para a QoE, a qualidade dos estímulos visuais é uma das maiores propriedades, pois pode ser alterada em diversos estágios da cadeia de comunicação, tal como na captura, na transmissão, ou na reprodução do conteúdo. Considerando que os avaliadores naturais da qualidade visual são seres humanos, a estratégia básica para medir a qualidade visual consiste na realização de experimentos subjetivos. Esses experimentos são geralmente realizados com participantes humanos em laboratórios preparados com um ambiente controlado. 
Esses participantes avaliam a qualidade de um dado estimulo visual (imagem ou vídeo) e atribuem a eles um valor numérico associado à qualidade. Para avaliar a qualidade, os participantes seguem um conjunto de passos experimentais. Geralmente, esses passos são padronizados para favorecer a reprodutibilidade experimental. Os padrões de experimentos incluem metodologias de avaliação, tais como condições de visualização, escala de avaliação, materiais, etc. Após um conjunto de participantes avaliarem individualmente a qualidade de um dado estímulo, a média dos valores é calculada para gerar o valor médio das opiniões subjetivas (MOS). O MOS é frequentemente utilizado para representar a qualidade geral de um dado estímulo visual. Como a coleta dos MOS é realizada a partir de experimentos com seres humanos, esse processo é demorado, cansativo, caro, e laborioso. Devido ao custo dos experimentos subjetivos, um grande esforço tem sido dedicado ao desenvolvimento de técnicas objetivas para a avaliação de estímulos visuais. Essas técnicas objetivas consistem em predizer o MOS automaticamente por meio de algoritmos computacionais. Tal automação torna possível a implementação de procedimentos computacionais rápidos e baratos para monitorar e controlar a qualidade de estímulos visuais. As técnicas objetivas para a avaliação de estímulos visuais podem ser classificadas em três tipos, dependendo da quantidade de informação necessária pelo método. Se todo o estímulo de referência (original) é requerido para a estimação da qualidade do estímulo testado, então essa técnica é classificada como sendo de referência completa. Quando somente alguma informação parcial da referência é necessária, a técnica é classificada como sendo de referência reduzida. Por outro lado, quando nenhuma informação sobre o estímulo de referência é necessária, a técnica é dita como sendo sem referência. 
Since the requirement of a full or partial reference is an obstacle to the development of many multimedia applications, no-reference techniques are the most convenient in most cases. Several objective techniques for visual quality assessment have been proposed, although some open questions remain in their development. For images, many full-reference techniques with excellent performance have been produced. On the other hand, no-reference techniques still show limitations when multiple distortions are present. Moreover, the most effective no-reference image techniques still rely on computationally expensive models, which limits their use in many multimedia applications. For videos, the current state of the art still predicts MOS values worse than image methods do. In terms of prediction accuracy, objective video methods show a correlation between predicted values and MOS that is still low compared with the correlation observed for image methods. Furthermore, computational complexity is even more critical for videos, since the amount of information processed is much larger than for images. Developing an objective visual quality assessment technique requires solving three major problems. The first problem is determining a set of features that are relevant to describing visual quality. These features generally refer to measurements of physical stimuli, such as edge sharpness quantification, natural scene statistics, statistics in the curvelet domain, Prewitt filters, etc. Moreover, multiple feature types can be combined into a feature vector that better describes the quality of a given stimulus. 
The second problem is establishing a strategy for pooling the features so that the numerical values are descriptive within a model. This pooling refers to combining measurements over a measurement subspace to represent the analyzed stimulus. Finally, the third problem is building a model that maps the pooled features so that the predicted values correlate with the subjective data. In this work, we present an investigation of visual quality assessment methods based on texture measurements. The premise is that visual degradations alter the textures of images and videos and their statistics. These measurements are performed in terms of statistics extracted from the local binary pattern (LBP) operator and its extensions. This operator was chosen because it unifies other, more traditional texture analysis models, such as the texture spectrum, the gray-level run-length matrix (GLRLM), and the gray-level co-occurrence matrix (GLCM). The LBP operator, being a simple algorithm that favors fast implementations, has properties that are very useful for real-time image and video processing systems. Because of these advantages, we analyzed the LBP operator and some of its state-of-the-art extensions in order to investigate their suitability for the image quality assessment problem. To this end, this work presents an extensive review of the state of the art of these operators. Among them, we can mention local ternary patterns (LTP), local phase quantization (LPQ), binarized statistical image features (BSIF), rotated local binary patterns (RLBP), complete local binary patterns (CLBP), and local configuration patterns (LCP), among others. In addition, we also propose new extensions that improve quality prediction. 
Among the proposed extensions for measuring quality-aware features are multiscale local binary patterns (MLBP), multiscale local ternary patterns (MLTP), local variance patterns (LVP), orthogonal color plane patterns (OCPP), salient local binary patterns (SLBP), and multiscale salient local binary patterns (MSLBP). To test the suitability of the aforementioned texture operators, we propose a framework for using them to produce new image quality metrics. In this way, many no-reference metrics can be generated from the proposed strategy. Using the metrics generated by the framework, an extensive comparative analysis is presented in this work. This analysis was performed on three of the most popular publicly available image quality databases: LIVE, CSIQ, and TID2013. The results on these databases show that the state-of-the-art operators best suited to measuring image quality are BSIF, LPQ, and CLBP. However, the results also indicated that the proposed operators achieved even more promising results, with the multiscale approaches showing the best performance among all tested variations. Inspired by the experimental results of the generated image metrics, we chose a convenient texture operator to implement a video quality assessment metric. Besides texture information, we also incorporated spatial activity and temporal information. The experimental results indicate that the proposed metric performs considerably better when tested on several benchmark video databases and outperforms current video quality models.<br>In the last decade, many visual quality models have been proposed. 
However, there are some open questions involving the assessment of image and video quality. In the case of images, most of the proposed methods are very complex and require a reference content to estimate quality, limiting their use in several multimedia applications. For videos, the current state-of-the-art methods still perform worse, in terms of prediction accuracy, than their image counterparts. In this work, we present an investigation of visual quality assessment methods based on texture measurements. The premise is that visual impairments alter image and video textures and their statistics. These measurements are performed using the statistics of the local binary pattern (LBP) operator and its extensions. We chose LBP because it unifies traditional texture analysis models. In addition, LBP is a simple but effective algorithm that performs only fundamental operations, favoring fast and simple implementations that suit real-time image and video processing systems. Because of the abovementioned advantages, we analyzed the LBP operator and some of its state-of-the-art extensions for the problem of assessing image quality. Furthermore, we also propose new quality-aware LBP extensions that improve the prediction of quality. We then propose a framework for using these operators to produce new image quality metrics, from which many no-reference image quality metrics can be generated. Inspired by the experimental results of the generated no-reference image quality metrics, we chose a convenient texture operator to implement a full-reference video quality metric. In addition to texture information, we also incorporate spatial activity and temporal information features. Experimental results indicate that our metric presents superior performance when tested on several benchmark video quality databases, outperforming current state-of-the-art full-reference video quality metrics.
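The abstract's core idea can be illustrated with a minimal sketch of the basic 3x3 LBP descriptor: each pixel's eight neighbours are thresholded against the centre to form an 8-bit code, and the normalised histogram of codes serves as the texture statistics a no-reference metric would feed to a regression model. Function names and the plain-list image representation below are illustrative assumptions, not taken from the dissertation.

```python
def lbp_code(img, r, c):
    """8-bit LBP code: threshold the 8 neighbours against the centre pixel."""
    centre = img[r][c]
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """Normalised 256-bin histogram of LBP codes over interior pixels."""
    hist = [0] * 256
    rows, cols = len(img), len(img[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            hist[lbp_code(img, r, c)] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

# A flat patch: every neighbour equals the centre, so all 8 bits fire (code 255).
flat = [[7] * 4 for _ in range(4)]
print(lbp_histogram(flat)[255])  # -> 1.0
```

Multiscale variants such as the MLBP mentioned in the abstract would, roughly, repeat this computation at several neighbourhood radii and concatenate the histograms.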
APA, Harvard, Vancouver, ISO, and other styles
46

Кулік, Євгенія Сергіївна, Евгения Сергеевна Кулик, and Yevheniia Serhiivna Kulik. "System of quality assessment of educational process." Thesis, Sumy State University, 2016. http://essuir.sumdu.edu.ua/handle/123456789/46813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Гнаповська, Людмила Вадимівна, Людмила Вадимовна Гнаповская, and Liudmyla Vadymivna Hnapovska. "Quality Language Assessment in University EFL Classroom." Thesis, Sumy State University, 2017. http://essuir.sumdu.edu.ua/handle/123456789/67254.

Full text
Abstract:
In recent years, teachers have become increasingly interested in the methodology by which the attitudes, knowledge and skills of EFL learners can be constructively developed. In line with this strand, European language examinations focus upon assessing a learner's ability to use the language, and do not concentrate on testing whether learners can recite the rules of the language, or how many words they have learned, or whether they sound like a perfect native speaker. Modern language assessments are not interested in whether students can transform isolated sentences into paraphrased versions, or whether they can give a definition of a word out of – or even within – context. They are also rarely interested in whether the learner can translate sentences from his/her first language into the target language, or whether (s)he can translate sentences from the target language into the mother tongue or, indeed, whether (s)he can give the mother tongue equivalent of an underlined word in an English passage.
APA, Harvard, Vancouver, ISO, and other styles
48

Jomaa, Fadel, and Olga Cherednichenko. "Issues of after-sales service quality assessment." Thesis, Національний технічний університет "Харківський політехнічний інститут", 2011. http://repository.kpi.kharkov.ua/handle/KhPI-Press/46404.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Sultana, Sharmin. "Groundwater Quality Vulnerability Assessment in North Dakota." Thesis, North Dakota State University, 2017. https://hdl.handle.net/10365/28374.

Full text
Abstract:
In North Dakota, arsenic and nitrate are two major groundwater contaminants. These contaminants originate from either natural geologic or anthropogenic sources. Differences in geology, hydrology, geochemistry, and chemical use explain how and why concentrations of these groundwater contaminants vary across the regions. Based on these properties, a study was carried out to identify potentially vulnerable groundwater quality regions. For vulnerability assessment, a modified DRASTIC-G model and a Susceptibility Index model were used for arsenic and nitrate, respectively. Our research showed that approximately 21% and 28% of the study area falls within highly vulnerable arsenic and nitrate areas, respectively. Our study also identified that 33 of the 84 high-risk arsenic and 16 of the 28 high-risk nitrate observation wells fall within the high arsenic and nitrate vulnerability areas, respectively. These maps can be used as a starting point for identifying probable groundwater vulnerable areas and for future decision making.<br>USDA NIFA (National Institute of Food and Agriculture)
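DRASTIC-style vulnerability mapping, as named in the abstract, scores each grid cell as a weighted sum of hydrogeologic factor ratings. The sketch below uses placeholder weights and ratings for illustration only; they are not the calibrated values from the thesis, and the extra "geology" factor of the modified DRASTIC-G model is an assumed addition.

```python
# Placeholder factor weights (the seven classic DRASTIC factors plus an
# assumed geology term for the "-G" modification); not the thesis's values.
WEIGHTS = {
    "depth": 5, "recharge": 4, "aquifer": 3, "soil": 2,
    "topography": 1, "vadose": 5, "conductivity": 3, "geology": 5,
}

def drastic_index(ratings):
    """Vulnerability score for one grid cell: sum of weight * rating,
    where each rating is typically on a 1-10 scale."""
    return sum(WEIGHTS[factor] * r for factor, r in ratings.items())

# Example cell with illustrative ratings; higher scores mean higher vulnerability.
cell = {"depth": 10, "recharge": 8, "aquifer": 6, "soil": 5,
        "topography": 9, "vadose": 7, "conductivity": 4, "geology": 6}
print(drastic_index(cell))  # -> 196
```

A vulnerability map is then produced by computing this index for every cell and binning the scores into classes such as low, moderate, and high.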
APA, Harvard, Vancouver, ISO, and other styles
50

GREEN, CHRISTOPHER FRANK. "ASSESSMENT AND MODELING OF INDOOR AIR QUALITY." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1029515955.

Full text
APA, Harvard, Vancouver, ISO, and other styles