
Dissertations / Theses on the topic 'Coverage metric'

Consult the top 16 dissertations / theses for your research on the topic 'Coverage metric.'

1

Milne, Andrew Steven. "A benchmark fault coverage metric for analogue circuits." Thesis, University of Huddersfield, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285669.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bansal, Kunal. "Increasing Branch Coverage with Dual Metric RTL Test Generation." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/96581.

Full text
Abstract:
In this thesis, we present a new register-transfer level (RTL) test generation method that uses two coverage metrics, branch coverage and mutation coverage, across two stages to reach hard-to-reach points left uncovered by previous approaches. We start with a preprocessing stage that converts the RTL source to a C++ equivalent using a modified Verilator, which also automatically creates mutants, and the corresponding mutated C++ designs, based on arithmetic, logical and relational operators during conversion. With the help of extracted data-dependency and control-flow graphs, branches containing variables that depend on the mutated statement are instrumented so they can be tracked. The first stage uses evolutionary algorithms with ant colony optimization to generate test vectors with mutation coverage as the metric. Two new filtering techniques are also proposed, which optimize the first stage by eliminating the need to generate tests for redundant mutants. The next stage is the original BEACON, which now takes the generated mutation test vectors, rather than random vectors, as its initial population and outputs the final test vectors. These test vectors improve coverage by up to 70% compared to previous approaches on most of the ITC99 benchmarks. With the filtering techniques applied, we also observed a speedup of up to 85% in test generation runtime, and up to a 78% reduction in test vector size compared with vectors generated by previous techniques.
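The mutation-coverage scoring that guides the first stage can be illustrated with a minimal, hypothetical sketch (the toy design and its mutants stand in for the C++ models the modified Verilator would produce; this is not the thesis's implementation):

```python
# A test "kills" a mutant when the mutated design produces an output that
# differs from the original design on that test. Mutation coverage is the
# fraction of mutants killed by the test set.

def original(a, b):
    return a + b

# Mutants derived from operator changes, as the abstract describes.
mutants = [
    lambda a, b: a - b,   # '+' mutated to '-'
    lambda a, b: a * b,   # '+' mutated to '*'
    lambda a, b: a + b,   # an equivalent mutant: can never be killed
]

def mutation_coverage(tests):
    if not tests:
        return 0.0
    killed = {i for i, m in enumerate(mutants)
              if any(m(a, b) != original(a, b) for a, b in tests)}
    return len(killed) / len(mutants)

print(mutation_coverage([(1, 1), (2, 3)]))  # kills 2 of the 3 mutants
```

A test generator using this score would keep vectors that raise the killed-mutant fraction, exactly the kind of feedback the first stage exploits.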
MS
3

Linn, Jane Ostergar. "A Coverage Metric to Aid in Testing Multi-Agent Systems." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6666.

Full text
Abstract:
Models are frequently used to represent complex systems in order to test the systems before they are deployed. Some of the most complicated models are those that represent multi-agent systems (MAS), where there are multiple decision makers. Brahms is an agent-oriented language that models MAS. Three major qualities affect the behavior of these MAS models: workframes that change the state of the system, communication activities that coordinate information between agents, and the schedule of workframes. The primary method to test these models that exists is repeated simulation. Simulation is useful insofar as interesting test cases are used that enable the simulation to explore different behaviors of the model, but simulation alone cannot be fully relied upon to adequately cover the test space, especially in the case of non-deterministic concurrent systems. It takes an exponential number of simulation trials to uncover schedules that reveal unexpected behaviors. This thesis defines a coverage metric to make simulation more meaningful before verification of the model. The coverage metric is divided into three different metrics: workframe coverage, communication coverage, and schedule coverage. Each coverage metric is defined through static analysis of the system, resulting in the coverage requirements of that system. These coverage requirements are compared to the logged output of the simulation run to calculate the coverage of the system. The use of the coverage metric is illustrated in several empirical studies and explored in a detailed case study of the SATS concept (Small Aircraft Transportation System). SATS outlines the procedures aircraft follow around runways that do not have communication towers. The coverage metric quantifies the test effort, and can be used as a basis for future automated test generation and active test.
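The comparison of statically derived coverage requirements against logged simulation output, as described above, can be sketched minimally (the workframe identifiers are hypothetical, not taken from the Brahms models in the thesis):

```python
# Static analysis yields a set of coverage requirements; the coverage of a
# simulation run is the fraction of those requirements observed in its log.

def coverage(requirements, simulation_log):
    covered = requirements & set(simulation_log)
    return len(covered) / len(requirements)

# Hypothetical workframe-coverage requirements and a logged simulation run.
workframe_reqs = {"wf_takeoff", "wf_land", "wf_hold"}
log = ["wf_takeoff", "wf_takeoff", "wf_hold"]

print(coverage(workframe_reqs, log))  # 2 of 3 required workframes exercised
```

The thesis's communication and schedule coverage follow the same pattern with different requirement sets.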
4

Mathaikutty, Deepak Abraham. "Metamodeling Driven IP Reuse for System-on-chip Integration and Microprocessor Design." Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/29598.

Full text
Abstract:
This dissertation addresses two important problems in reusing intellectual properties (IPs) in the form of reusable design or verification components. The first problem concerns fast and effective integration of reusable design components into a System-on-chip (SoC), so that faster design turn-around time, and hence faster time-to-market, can be achieved. The second problem shares the goal of a faster product design cycle but emphasizes verification model reuse rather than design component reuse; it specifically addresses the reuse of verification IPs to enable a "write once, use many times" verification strategy. This dissertation is accordingly divided into Part I and Part II, which describe the two problems and our solutions to them. These two related but distinct problems faced by system design companies are tackled through an approach which hitherto had been used only in the software engineering domain. This approach, called metamodeling, allows creating customized meta-languages to describe the syntax and semantics of a modeling domain. It provides a way to create, transform and analyze domain-specific languages, which are themselves described by metamodels; the transformation and processing of models in such languages are also described by metamodels. This makes machine-based interpretation and translation from these models an easier and more formal task. In Part I, we consider the problem of rapid system-level integration of existing reusable components such that (i) the required architecture of the SoC can be expressed formally, (ii) components can be selected automatically from an IP library to match the needs of the system being integrated, (iii) the integrability of the components is provable or automatically checkable, and (iv) structural and behavioral type systems for each component can be exploited through inference and matching techniques to ensure their compatibility.
Our solutions include a component composition language, algorithms for component selection, type-matching and inference algorithms, temporal-property-based behavioral typing, and finally a software system on top of an existing metamodeling environment. In Part II, we use the same metamodeling environment to create a framework for modeling generative verification IPs. Our main contributions relate to INTEL's microprocessor verification environment, and our solution spans several abstraction levels (system, architectural, and microarchitectural). We provide a unified language that can be used to model verification IPs at all abstraction levels, from which verification collaterals such as testbenches, simulators, and coverage monitors can be generated, thereby enhancing reuse in verification.
Ph. D.
5

Mishra, Shashank. "Analysis of test coverage metrics in a business critical setup." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-213698.

Full text
Abstract:
Test coverage is an important parameter for analyzing how well a product is being tested in any domain within the IT industry. Unit testing is one of the important processes that has gained even more popularity with the rise of the test-driven development (TDD) culture. This degree project, conducted at NASDAQ Technology AB, analyzes the existing unit tests in one of the products and compares various coverage models in terms of quality. Further, the study examines the factors that affect code coverage, presents best practices for unit testing, and describes a proven test process used in a real-world project. To conclude, recommendations are given to NASDAQ based on the findings of this study and industry standards.
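A toy example of why coverage models differ in what they demand, in the spirit of the comparison above (the function is hypothetical, not drawn from the thesis):

```python
# One test can reach 100% line coverage of this function while exercising
# only one of its two branch outcomes, so branch coverage is the stricter model.

def clamp(x, lo=0):
    if x < lo:
        x = lo
    return x

# clamp(-1) executes every line (the 'if' body included)...
print(clamp(-1))
# ...yet the implicit 'condition false' path is only covered by a second test:
print(clamp(5))
```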
6

Acharya, Vineeth Vadiraj. "Branch Guided Metrics for Functional and Gate-level Testing." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/51661.

Full text
Abstract:
With the increasing complexity of modern-day processors and systems-on-a-chip (SoCs), designers invest a lot of time and resources in testing and validating these designs. To reduce time-to-market and cost, the techniques used to validate these designs have to improve constantly. Since most design activity has moved to the register transfer level (RTL), test methodologies at the RTL have been gaining momentum. We present a novel framework for functional test generation at the RTL. A popular software-based metric for measuring the effectiveness of an RTL test suite is branch coverage, but exercising hard-to-reach branches is still a challenge and requires a good understanding of the design semantics. The proposed framework uses static analysis to extract certain semantics of the circuit and several data structures to model those semantics; using these data structures, we assist the branch-guided search in exercising hard-to-reach branches. Since the correlation between high branch coverage and detecting defects and bugs is not clear, we present a new metric at the RTL which augments RTL branch coverage with state values. Vectors that score higher on the new metric achieve higher branch and state coverage, and can therefore be applied at different levels of abstraction, such as post-silicon validation. Experimental results show that using the new metric in our test generation framework can achieve high branch and fault coverage for several benchmark circuits while reducing the length of the vector sequence. This work was supported in part by NSF grant 1016675.
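The idea of augmenting branch coverage with state values can be sketched as follows (the branch and state names are hypothetical; this only illustrates the flavor of such a metric, not the thesis's exact definition):

```python
# Instead of counting branches alone, count distinct (branch, state) pairs a
# vector sequence exercises: two vectors hitting the same branch in different
# states are rewarded, which plain branch coverage cannot distinguish.

def branch_state_score(trace, all_branches):
    branches_hit = {branch for branch, _ in trace}
    pairs_hit = set(trace)                      # distinct (branch, state) pairs
    branch_cov = len(branches_hit) / len(all_branches)
    return branch_cov, len(pairs_hit)

# Hypothetical execution trace of a vector sequence.
trace = [("b1", "S0"), ("b1", "S1"), ("b2", "S1")]
cov, pairs = branch_state_score(trace, {"b1", "b2", "b3"})
print(cov, pairs)  # 2 of 3 branches, 3 distinct branch-state pairs
```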
Master of Science
7

Santa, Marek. "Zpětnovazební funkční verifikace hardware." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-237045.

Full text
Abstract:
In the development process of digital circuits, it is often not possible to avoid introducing errors into the systems being developed. Early detection of such errors saves money and time. This project deals with automating feedback in the functional verification of various data-processing components. The goal of automatic feedback is not only to shorten the time needed to verify the functionality of a system, but mainly to improve verification coverage of corner cases and thus increase confidence in the verified system. General functional and formal verification principles and practices are discussed, coverage metrics are presented, limitations of both techniques are mentioned, and room for improvement of the current state is identified. The design of a feedback verification environment using a genetic algorithm is described in detail. The verification results are summarized and evaluated.
8

Pagliarini, Samuel Nascimento. "VEasy : a tool suite towards the functional verification challenges." Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/34758.

Full text
Abstract:
This thesis describes a tool suite, VEasy, which was developed specifically to aid the process of functional verification. VEasy contains four main modules that perform linting, simulation, coverage collection/analysis and testcase generation, which are considered key challenges of the process. Each of those modules is described in detail throughout the chapters. All the modules are integrated and built on top of a graphical user interface. This framework enables a layer-based testcase automation methodology in which complex test scenarios can be created using drag-and-drop operations. Whenever possible, the usage of the modules is exemplified using simple Verilog designs. The capabilities of this tool and its performance were compared with some commercial and academic functional verification tools. Finally, some conclusions are drawn, showing that the overall simulation time is considerably smaller than that of commercial and academic simulators. The results also show that the methodology enables a great deal of testcase automation through the layering scheme.
9

Zachariášová, Marcela. "Metody akcelerace verifikace logických obvodů." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-261278.

Full text
Abstract:
In the development of modern digital systems, e.g., embedded systems and computer hardware, it is necessary to find ways to increase their reliability. One option is to improve the effectiveness and speed of the verification processes that are carried out in the early phases of design. This dissertation focuses on the verification approach called functional verification. Several challenges and problems concerning the effectiveness and speed of functional verification are identified and then addressed in the goals of the dissertation. The first goal targets the reduction of simulation time during the verification of complex systems, because simulating an inherently parallel hardware system takes very long compared to its run in real hardware. An optimization technique is therefore proposed that places the verified system into an FPGA accelerator while part of the verification environment still runs in simulation; this relocation significantly reduces the simulation overhead. The second goal deals with hand-written verification environments, which significantly limit verification productivity. This overhead is unnecessary, because most verification environments have a very similar structure: they use components of standard verification methodologies that are merely adapted to the verified system. The second optimization technique therefore analyses a description of the system at a higher level of abstraction and automates the construction of verification environments by generating them from this high-level description. The third goal examines how verification completeness can be achieved through intelligent automation. Completeness is typically measured by various coverage metrics, and verification ends once a high coverage level is reached. The third optimization technique therefore steers the generation of inputs for the verified system so that the inputs activate as many coverage points as possible at once and convergence towards maximum coverage is as fast as possible.
The main optimization tool is a genetic algorithm, adapted for functional verification and with its parameters tuned for this domain. It runs in the background of the verification process, analyses the achieved coverage and, based on that, dynamically adjusts the constraints for the input generator. These constraints are represented by probabilities that drive the selection of suitable values from the input domain. The fourth goal discusses whether the inputs from functional verification can be reused for regression testing and optimized so that testing is as fast as possible. In functional verification, inputs are typically highly redundant because they are produced by a generator; for regression tests this redundancy is unnecessary and can therefore be eliminated, while ensuring that the coverage achieved by the optimized set matches that of the original. The fourth optimization technique reflects this and again uses a genetic algorithm, this time applied after verification finishes rather than integrated into it. It quickly removes redundancy from the original set of inputs, so the resulting simulation time is considerably reduced.
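The coverage-feedback loop described above, in which achieved coverage dynamically adjusts the probabilities constraining the input generator, can be sketched roughly as follows (the toy design and cover points are assumptions for illustration, not the dissertation's verification environment):

```python
import random

COVER_POINTS = {0, 1, 2, 3}

def design(x):
    # Stand-in for the verified system: the input selects which point is hit.
    return x % 4

def generate_until_covered(iterations=200, seed=0):
    rng = random.Random(seed)
    values = list(range(8))
    weights = [1.0] * len(values)       # generator constraints as probabilities
    covered = set()
    for _ in range(iterations):
        x = rng.choices(values, weights=weights)[0]
        covered.add(design(x))
        # Feedback step: damp the probability of values whose cover point is
        # already hit, biasing generation toward still-uncovered points.
        weights = [w * 0.5 if design(v) in covered else w
                   for v, w in zip(values, weights)]
        if covered == COVER_POINTS:
            break
    return covered

print(sorted(generate_until_covered()))
```

A genetic algorithm, as in the dissertation, would evolve these probability vectors rather than update them with a fixed rule; the feedback structure is the same.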
10

Starigazda, Michal. "Optimalizace testování pomocí algoritmů prohledávání prostoru." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234928.

Full text
Abstract:
Testing multi-threaded programs is demanding work due to the many possible thread interleavings one should examine. The noise-injection technique helps increase the number of tested thread interleavings by injecting noise at suitable program locations. This work optimizes meta-heuristic search techniques for testing concurrent programs by using deterministic heuristics when applying genetic algorithms to the space of legal program locations suitable for noise injection. Several novel deterministic noise-injection heuristics are proposed which, unlike most currently used heuristics, do not depend on a random number generator. Eliminating the randomness should make the search process more informed and, thanks to the increased stability of the results the novel heuristics provide, yield better, more optimal solutions. Finally, a benchmark of programs used to evaluate the novel noise-injection heuristics is presented.
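A deterministic, RNG-free injection decision of the kind the abstract describes can be sketched as follows (the hash-based rule and location names are illustrative assumptions, not the thesis's heuristics):

```python
import hashlib

# The decision for a program location depends only on the location itself,
# so repeated runs make identical noise-placement choices, unlike a heuristic
# driven by a random number generator.

def should_inject(location_id, every_nth=3):
    digest = hashlib.sha256(location_id.encode()).digest()
    return digest[0] % every_nth == 0

locations = [f"Worker.java:{line}" for line in range(1, 11)]
chosen = [loc for loc in locations if should_inject(loc)]

# Determinism: a second pass selects exactly the same injection points.
assert chosen == [loc for loc in locations if should_inject(loc)]
print(len(chosen), "of", len(locations), "locations selected")
```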
11

Letko, Zdeněk. "Analýza a testování vícevláknových programů." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-261265.

Full text
Abstract:
The dissertation first presents a taxonomy of concurrency errors and an overview of techniques for their dynamic detection. It then proposes new metrics for measuring the synchronization and concurrent behaviour of programs, together with a methodology for deriving them. These techniques are useful mainly in testing based on state-space search and in saturation-based testing. The thesis further introduces a new noise-injection heuristic whose goal is to maximize the interleaving of instructions observed during testing. This heuristic is compared with existing heuristics on several benchmarks, and the results show that it outperforms the existing ones in certain cases. Finally, the thesis presents an innovative application of stochastic optimization algorithms to the process of testing multi-threaded applications. The principle of the method is to search for suitable combinations of test parameters and noise-injection methods. The method was implemented as a prototype and evaluated on a set of benchmarks; the results show that it has the potential to significantly improve the testing of multi-threaded programs.
12

Cruz, Robinson Crusoé da. "Análise empírica sobre a influência das métricas CK na testabilidade de software orientado a objetos." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-29012018-135506/.

Full text
Abstract:
Software testing aims to execute a program under test in order to reveal its failures, which makes it one of the most important phases of the software development lifecycle. Testability is a key quality attribute for the success of the testing activity: it can be understood as the effort required to create, execute and evaluate test cases for a piece of software. This attribute is not an intrinsic quality of the software, so it cannot be measured directly like, for example, the number of lines of code; it can, however, be inferred through the internal and external metrics of a software system. Among the features commonly used in testability analysis are the CK metrics, proposed by Chidamber and Kemerer to analyze object-oriented software. Most work in this line, however, relates the size and quantity of test cases to software testability, whereas it is critical to analyze the quality of the tests to see whether they achieve the objectives for which they were proposed, independently of quantity and size. This Master's thesis therefore presents an empirical study of the relationship between CK metrics and software testability, based on analyzing the adequacy of unit test cases under structural and mutation testing criteria. Initially, a systematic review was carried out to assess the state of the art of testability and the CK metrics. The results showed that although there is considerable research on the subject, gaps remain that motivate new research into analyzing test quality and identifying which metric features can be used to measure and analyze testability. Two empirical analyses were then performed. In the first, the CK metrics were correlated with line coverage, branch coverage and mutation score; the results showed the importance of each metric within the context of testability.
In the second analysis, the metrics were clustered in an attempt to identify groups of classes with similar testability-related features. In addition to the empirical analyses, a tool for collecting and analyzing CK metrics was developed and presented, with the aim of contributing to new research related to this project. Despite the limitations of the analyses, the results of this work show the importance of each CK metric within the context of testability and provide developers and designers with a support tool and empirical data to better develop and design their systems, with the goal of facilitating software testing.
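The first analysis, correlating CK metrics with coverage measures, can be illustrated with a small sketch (the per-class numbers are invented for illustration, not taken from the thesis):

```python
from statistics import mean

# Pearson correlation between one CK metric (WMC, weighted methods per class)
# and line coverage across classes: a strong negative value would suggest that
# more complex classes tend to be less well covered by their unit tests.

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

wmc      = [2, 5, 9, 14, 20]               # hypothetical WMC per class
line_cov = [0.95, 0.90, 0.70, 0.55, 0.40]  # hypothetical line coverage per class

r = pearson(wmc, line_cov)
print(round(r, 3))  # strongly negative in this made-up sample
```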
13

Jelassi, Mohamed Nidhal. "Un système personnalisé de recommandation à partir de concepts quadratiques dans les folksonomies." Thesis, Clermont-Ferrand 2, 2016. http://www.theses.fr/2016CLF22693/document.

Full text
Abstract:
Recommender systems are now popular both commercially and within the research community, where many approaches have been suggested for providing recommendations. Folksonomy users share items (e.g., movies, books, bookmarks, etc.) by annotating them with freely chosen tags. In the Web 2.0 age, users have become the core of the system, since they are both the contributors and the creators of the information. In this respect, it is of paramount importance to match their needs in order to provide more targeted recommendations. For this purpose, we consider a new dimension in a folksonomy classically composed of three dimensions and propose an approach to group users with close interests through quadratic concepts. We then use such structures to propose our personalized recommendation system of users, tags and resources. We carried out extensive experiments on two real-life datasets, MovieLens and BookCrossing, which highlight good results in terms of precision and recall as well as a promising social evaluation. Moreover, we study some of the key assessment metrics, namely coverage, diversity, adaptivity, serendipity and scalability. In addition, we conduct a user study as a valuable complement to our evaluation in order to gain further insights. Finally, we propose a new algorithm that maintains a set of triadic concepts without re-scanning the whole folksonomy. The first results, comparing the performance of our proposition with re-running the whole extraction process from scratch over four real-life datasets, show its efficiency.
14

Souza, Izabel Maria Matos de. "Avaliação da cobertura e monitoramento do branqueamento de corais nos recifes de Maracajaú/RN." Universidade Federal do Rio Grande do Norte, 2012. http://repositorio.ufrn.br:8080/jspui/handle/123456789/14046.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Coral bleaching has been increasingly the focus of research around the world since the early 1980s, when it was verified to be increasing in frequency, intensity and number of areas affected. In Brazil the phenomenon has been recorded since 1993, associated with elevation of the sea surface temperature due to El Niño events and thermal anomalies, according to most reports around the world. On the coast of Rio Grande do Norte, Brazil, a mass coral bleaching event was recorded in the Environmental Protection Area of Coral Reefs (APARC) during March and April 2010, when the water temperature reached 34°C for several days. About 80% of the corals in the Maracajaú reef complex exhibited partial or total bleaching. The aims of this study were to verify the representativeness of coral coverage and how the bleaching dynamic developed among different species. Coral coverage was estimated according to the Reef Check Brazil protocol associated with the quadrat method, and bleaching was evaluated through biweekly visual surveys of 80 colonies of Favia gravida, Porites astreoides, Siderastrea stellata and Millepora alcicornis. At the same time, temperature, pH, salinity and horizontal transparency, as well as mortality and disease occurrence, were monitored. Analysis of variance and multiple regression, from the perspective of the time-lag concept, were used to evaluate the bleaching dynamics among species and the relationship between the variation of bleaching means and variations of abiotic parameters, respectively. The species showed significant differences among themselves in the variation of bleaching means over time, but the dynamics of variation exhibited similar patterns.
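The coverage-estimation step described in the abstract (point counts within quadrats under the Reef Check protocol) can be sketched as follows; the quadrat size, point counts and bleaching scores here are hypothetical illustrations, not data from the thesis:

```python
# Hypothetical quadrat survey: for each quadrat along a transect, count how
# many of the sample points (out of points_per_quadrat) fall on live coral.
points_per_quadrat = 25
coral_hits = [6, 10, 4, 8, 7]  # five quadrats (illustrative values)

def percent_cover(hits, points):
    """Percent cover = coral points / total sampled points over all quadrats."""
    return 100.0 * sum(hits) / (points * len(hits))

def mean_bleaching(scores):
    """Mean bleaching score across monitored colonies (0 = none, 1 = total)."""
    return sum(scores) / len(scores)

cover = percent_cover(coral_hits, points_per_quadrat)
print(round(cover, 1))  # 28.0
```

A biweekly time series of `mean_bleaching` values per species is the kind of response variable the thesis compares across species with ANOVA and relates to lagged abiotic factors with multiple regression.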
APA, Harvard, Vancouver, ISO, and other styles
15

Alilovic-Curgus, Jadranka. "A metric-based theory of test selection and coverage for communication protocols." Thesis, 1993. http://hdl.handle.net/2429/1871.

Full text
Abstract:
A metric-based theory is developed that gives a solution to the problem of test selection and coverage evaluation for the control behaviour of network protocol implementations. The key idea is that a fast, completely automated process can uniformly cover the execution subspace of a network protocol control behaviour when characterized by appropriate metric functions, each concerned with some aspect of the protocol behaviour. Efficient systematic approximation of complex systems behaviours is a crucial problem in software testing. This thesis gives a theoretically sound and completely automated solution to the approximation problem for the control behaviour space of network protocols generated by many concurrent and highly recursive network connections. This objective is accomplished in a series of steps. First, a metric-based theory is developed which introduces a rigorous mathematical treatment of the discipline of testing, through the definition of testing distance, test coverage metrics, and metric-based test selection method. It involves a metric characterization of infinite trace sets of protocol behaviour within complete and totally bounded metric spaces, and captures approximations of different patterns of system behaviour due to recursion and parallelism. It is shown that classes of fault coverages of well known protocol test methods form a metric hierarchy within these metric spaces. Next, a general mathematical framework is developed for reasoning about the interoperability of communicating systems. An interoperability relation is obtained which gives a theoretical upper bound for the test selection process. The two threads are drawn together in a specific test selection algorithm, showing that the generation of arbitrarily dense sets of test sequences that approximate some original test suite to some target accuracy within the theoretical upper bound, is a convergent process. 
Finally, the theory itself is tested on the examples of a multi-media network protocol. It is shown that very high densities of selected sets and coverage calculation can be achieved within reasonable time limits in a completely automated manner. These results indicate that there is no practical impediment to applying rigorous theoretical treatment to the discipline of testing for the case of systems that derive their complexity from highly concurrent and recursive subprocesses.
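The abstract's central idea, a metric on sets of behaviour traces used to select a subset of tests that approximates the full suite to a target density, can be illustrated with a small sketch. The Baire-style prefix metric and the greedy epsilon-cover below are generic stand-ins, not the thesis's actual metric functions or selection algorithm:

```python
def trace_distance(t1, t2):
    """Baire-style metric on event traces: 1 / 2^k, where k is the length
    of the longest common prefix; identical traces are at distance 0."""
    if t1 == t2:
        return 0.0
    k = 0
    for a, b in zip(t1, t2):
        if a != b:
            break
        k += 1
    return 2.0 ** -k

def select_tests(suite, eps):
    """Greedily pick a subset such that every trace in `suite` lies within
    `eps` of some selected trace (an eps-cover of the suite)."""
    selected = []
    for t in suite:
        if all(trace_distance(t, s) > eps for s in selected):
            selected.append(t)
    return selected

# Hypothetical protocol control traces (event names are made up):
suite = [("conn", "send", "ack"),
         ("conn", "send", "nack"),
         ("conn", "close"),
         ("reset",)]
print(len(select_tests(suite, 0.3)))  # 3
```

Shrinking `eps` makes the selected set denser and the approximation finer, which is the convergence property the abstract refers to: the cover approaches the original suite as the target accuracy tightens.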
APA, Harvard, Vancouver, ISO, and other styles
16

Sha, Yuan-Bin, and 夏源斌. "The Study on Code Coverage Metrics for Verilog-A." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/25187024637892680986.

Full text
APA, Harvard, Vancouver, ISO, and other styles