To view the other types of publications on this topic, follow the link: Clone Detection and Analysis.

Dissertations on the topic "Clone Detection and Analysis"

Consult the top 50 dissertations for your research on the topic "Clone Detection and Analysis".

Next to every entry in the bibliography, the "Add to bibliography" option is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Krutz, Daniel Edward. „Code Clone Discovery Based on Concolic Analysis“. NSUWorks, 2013. http://nsuworks.nova.edu/gscis_etd/203.

Abstract:
Software is often large, complicated and expensive to build and maintain. Redundant code can make these applications even more costly and difficult to maintain. Duplicated code is often introduced into these systems for a variety of reasons, including developer churn, deficient developer application comprehension and lack of adherence to proper development practices. Code redundancy has several adverse effects on a software application, including an increased size of the codebase and inconsistent developer changes due to elevated program comprehension needs. A code clone is defined as multiple code fragments that produce similar results when given the same input. Four types of clones are generally recognized, ranging from simple type-1 and type-2 clones to the more complicated type-3 and type-4 clones. Numerous clone detection mechanisms are able to identify the simpler types of code clone candidates, but far fewer claim the ability to find the more difficult type-3 clones. Before CCCD, MeCC and FCD were the only clone detection techniques capable of finding type-4 clones. A drawback of MeCC is the excessive time required to detect clones and the likely exploration of an unreasonably large number of possible paths. FCD requires extensive amounts of random data and a significant period of time in order to discover clones. This dissertation presents a new process for discovering code clones known as Concolic Code Clone Discovery (CCCD). This technique discovers code clone candidates based on the functionality of the application, not its syntactic nature, which means that naming conventions and comments in the source code have no effect on the proposed clone detection process. CCCD finds clones by first performing concolic analysis on the targeted source code. Concolic analysis combines concrete and symbolic execution in order to traverse all possible paths of the targeted program. These paths are represented by the generated concolic output. A diff tool is then used to determine whether the concolic output for one method is identical to the output produced for another method; duplicated output is indicative of a code clone. CCCD was validated against several open source applications along with clones of all four types as defined by previous research. The results demonstrate that CCCD was able to detect all types of clone candidates with a high level of accuracy. In the future, CCCD will be used to examine how software developers work with type-3 and type-4 clones. CCCD will also be applied to various areas of security research, including intrusion detection mechanisms.
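The decisive comparison step in this pipeline is purely textual: once concolic output has been generated per method, clone candidates are simply the pairs whose output diffs to nothing. A minimal Python sketch of that step only (the file-per-method layout and helper names are illustrative assumptions; CCCD's concolic front end is not reproduced here):

```python
import difflib
from itertools import combinations

def load_concolic_output(path):
    """Read the pre-computed concolic output generated for one method."""
    with open(path) as f:
        return f.read().splitlines()

def clone_candidates(output_files):
    """Report method pairs whose concolic output is identical under a diff,
    i.e. whose explored program paths are textually the same."""
    clones = []
    for a, b in combinations(output_files, 2):
        diff = difflib.unified_diff(load_concolic_output(a),
                                    load_concolic_output(b), lineterm="")
        if not any(True for _ in diff):   # empty diff -> identical output
            clones.append((a, b))
    return clones
```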
2

Anbalagan, Sindhuja. „On Occurrence Of Plagiarism In Published Computer Science Thesis Reports At Swedish Universities“. Thesis, Högskolan Dalarna, Datateknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:du-5377.

Abstract:
In recent years, it has been observed that software clones and plagiarism are becoming an increasing threat to creativity. Clones are the result of copying and using others' work. According to the Merriam-Webster dictionary, "a clone is one that appears to be a copy of an original form"; it is a synonym for duplicate. Clones lead to redundancy of code, but not all redundant code is a clone. Against this background, and in order to safeguard original ideas and to discourage intentional code duplication that passes off others' work as one's own, software clone detection should be emphasized more. The objective of this paper is to review methods for clone detection, to apply those methods to measure the extent of plagiarism among Master-level computer science theses at Swedish universities, and to analyze the results. The remainder of the paper discusses software plagiarism detection using a data analysis technique, followed by statistical analysis of the results. Plagiarism is the act of stealing and passing off the ideas and words of another person as one's own. Using the data analysis technique, samples (Master-level computer science thesis reports) were taken from various Swedish universities and processed with the Ephorus anti-plagiarism detection software. Ephorus gives the percentage of plagiarism for each thesis document; from these results, statistical analysis was carried out using the Minitab software. The results give a very low percentage of plagiarism among the Swedish universities, which suggests that plagiarism is not a threat to Sweden's standard of education in computer science. This paper is based on data analysis and intelligence techniques, the Ephorus plagiarism detection tool, and Minitab statistical analysis software.
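The statistical part of this workflow is small enough to sketch: Ephorus yields one similarity percentage per thesis, and the conclusion rests on summary statistics over those percentages (computed in Minitab in the thesis). A hedged Python equivalent, with invented placeholder numbers rather than the study's data:

```python
import statistics

# Placeholder Ephorus similarity percentages, one per thesis report (not real data).
similarity_pct = [2.0, 5.5, 0.0, 3.1, 1.2, 7.8, 0.4, 2.9]

print(f"mean:   {statistics.mean(similarity_pct):.1f}%")
print(f"median: {statistics.median(similarity_pct):.1f}%")
print(f"stdev:  {statistics.stdev(similarity_pct):.1f}")
# Share of reports above an (assumed) 10% concern threshold:
print("above 10%:", sum(p > 10 for p in similarity_pct) / len(similarity_pct))
```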
3

Nilsson, Erik. „Abstract Syntax Tree Analysis for Plagiarism Detection“. Thesis, Linköpings universitet, Institutionen för datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-80888.

Abstract:
Today, universities rely heavily on systems for detecting plagiarism in students' essays and reports. Code submissions, however, require specific tools. A number of approaches to finding plagiarism in code have already been tried, including techniques based on comparing textual transformations of code, token strings, parse trees and graph representations. In this master's thesis, a new system, cojac, is presented which combines textual, tree and graph techniques to detect a broad spectrum of plagiarism attempts. The system finds plagiarism in C, C++ and Ada source files. This thesis discusses the method used for obtaining parse trees from the source code and the abstract syntax tree analysis. For comparison of syntax trees, we generate sets of fingerprints, digest forms of trees, which makes the comparison algorithm more scalable. To evaluate the method, a set of benchmark files was constructed containing plagiarism scenarios, which was analyzed both by our system and by Moss, another available system for plagiarism detection in code. The results show that our abstract syntax tree analysis can effectively detect plagiarism such as changing the format of the code and renaming of identifiers, and is at least as effective as Moss for detecting plagiarism of these kinds.
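The fingerprinting idea is independent of the parser: digest every sufficiently large subtree into a hash, then compare fingerprint sets instead of whole trees. A sketch of that idea using Python's own `ast` module (cojac itself parses C, C++ and Ada, and its real fingerprints also abstract over identifier names, which this simple digest does not):

```python
import ast
import hashlib

def subtree_fingerprints(source, min_nodes=4):
    """Hash every subtree with at least `min_nodes` nodes into a fingerprint."""
    fingerprints = set()
    for node in ast.walk(ast.parse(source)):
        if sum(1 for _ in ast.walk(node)) >= min_nodes:
            dump = ast.dump(node, annotate_fields=False)
            fingerprints.add(hashlib.sha1(dump.encode()).hexdigest())
    return fingerprints

def similarity(src_a, src_b):
    """Jaccard similarity of fingerprint sets; values near 1.0 flag plagiarism."""
    fa, fb = subtree_fingerprints(src_a), subtree_fingerprints(src_b)
    return len(fa & fb) / len(fa | fb) if fa | fb else 0.0
```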
4

Elva, Rochelle. „Detecting Semantic Method Clones in Java Code using Method IOE-Behavior“. Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5731.

Abstract:
The determination of semantic equivalence is an undecidable problem; however, this dissertation shows that a reasonable approximation can be obtained using a combination of static and dynamic analysis. This study investigates the detection of functional duplicates, referred to as semantic method clones (SMCs), in Java code. My algorithm extends the input-output notion of observable behavior, used in related work [1, 2], to include the effects of the method. The latter property refers to the persistent changes to the heap brought about by the execution of the method. To differentiate this from the typical input-output behavior used by other researchers, I have coined the term method IOE-Behavior, meaning input-output and effects behavior [3]. Two methods are defined as semantic method clones if they have identical IOE-Behavior; that is, for the same inputs (actual parameters and initial heap state), they produce the same output (that is, the result for non-void methods, and the final heap state). The detection process consists of two static pre-filters used to identify candidate clone sets, followed by dynamic tests that actually run the candidate methods to determine semantic equivalence. The first filter groups the methods by type. The second filter refines the output of the first, grouping methods by their effects. This algorithm is implemented in my tool JSCTracker, used to automate the SMC detection process. The algorithm and tool are validated using a case study comprising 12 open source Java projects from different application domains and ranging in size from 2 KLOC (thousand lines of code) to 300 KLOC. The objectives of the case study are posed as four research questions: 1. Can method IOE-Behavior be used in SMC detection? 2. What is the impact of the use of the pre-filters on the efficiency of the algorithm? 3. How does the performance of method IOE-Behavior compare to using only input-output for identifying SMCs? 4. How reliable are the results obtained when method IOE-Behavior is used in SMC detection? Responses to these questions are obtained by checking each software sample with JSCTracker and analyzing the results. The number of SMCs detected ranges from 0 to 45, with an average execution time of 8.5 seconds. The use of the two pre-filters reduces the number of methods that reach the dynamic test phase by an average of 34%. The IOE-Behavior approach takes an average of 0.010 seconds per method while the input-output approach takes an average of 0.015 seconds. The former also identifies an average of 32% false positives, while the SMCs identified using input-output have an average of 92% false positives. In terms of reliability, the IOE-Behavior method produces results with an average precision of 68% and an average recall of 76%. These reliability values represent an improvement of over 37% in precision over the values in related work [4]. Thus, it is my conclusion that IOE-Behavior can be used to detect SMCs in Java code with reasonable reliability.
Ph.D., Computer Science, College of Engineering and Computer Science
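JSCTracker's two-stage structure described in this abstract, cheap static filters followed by a dynamic test of results and heap effects, lends itself to a short sketch. In this hedged Python approximation, method records with a `signature` field, and argument mutation as a stand-in for Java heap effects, are my assumptions, not JSCTracker's actual representation:

```python
import copy
from collections import defaultdict

def type_filter(methods):
    """First static pre-filter: only methods sharing a type signature can be SMCs."""
    groups = defaultdict(list)
    for m in methods:
        groups[m["signature"]].append(m)
    return [g for g in groups.values() if len(g) > 1]

def same_ioe_behavior(f, g, test_inputs):
    """Dynamic test: the same inputs must give the same result AND the same
    effects (approximated here by comparing mutations of the argument objects)."""
    for args in test_inputs:
        a, b = copy.deepcopy(args), copy.deepcopy(args)
        if f(*a) != g(*b) or a != b:
            return False
    return True
```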
5

Rieger, Matthias. „Effective clone detection without language barriers /“. [S.l.] : [s.n.], 2005. http://www.zb.unibe.ch/download/eldiss/05rieger_m.pdf.

6

Ersson, Sara. „Code Clone Detection for Equivalence Assurance“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-284329.

Abstract:
To support multiple programming languages, the concept of offering application programming interfaces (APIs) in multiple programming languages has become commonplace. However, this also brings the challenge of ensuring that the APIs are equivalent regarding their interface. To achieve this, code clone detection techniques were adapted to match similar function declarations in the APIs. Firstly, existing code clone detection tools were investigated. As they did not perform well, a tree-based syntactic approach was used, where all header files were compiled with Clang. The abstract syntax trees, which were obtained during the compilation, were then traversed to locate the function declaration nodes, and to store function names and parameter variable names. When matching the function names, a textual approach was used, transforming the function names according to a set of implemented rules. A strict rule compares transformations of full function names in a precise way, whereas a loose rule only compares transformations of parts of function names, and matches anything for the remainder. The rules were applied both by themselves and in different combinations, starting with the strictest rule, followed by the second strictest rule, and so forth. The best-matching rules proved to be the ones which are strict and are not affected by the order of the functions in which they are matched. These rules also proved to be very robust to API evolution, meaning an increase in the number of public functions. Rules which are less strict and stable, and not robust to API evolution, can still be used, such as matching functions on the first or last word in the function names, but preferably as a complement to the stricter and more stable rules, once most of the functions have already been matched. The tool has been evaluated on the two APIs in King's software development kit, and covered 94% of the 124 available function matches.
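The strict-versus-loose rule scheme can be illustrated with a small sketch; splitting names on camelCase and underscores is an assumed transformation, and the rule set here is far smaller than the thesis's:

```python
import re

def words(name):
    """Transform a function name: split camelCase/underscores into lowercase words."""
    return [w.lower() for w in re.findall(r"[A-Z]?[a-z0-9]+|[A-Z]+(?![a-z])", name)]

def strict_match(a, b):
    """Strict rule: the fully transformed names must be identical."""
    return words(a) == words(b)

def loose_first_word(a, b):
    """Loose rule: only the first word must match; the rest matches anything."""
    return bool(words(a)) and words(a)[:1] == words(b)[:1]

def match_functions(api_a, api_b, rules=(strict_match, loose_first_word)):
    """Apply the rules strictest-first, removing functions once they are matched."""
    matches, left, right = [], list(api_a), set(api_b)
    for rule in rules:
        for fa in list(left):
            fb = next((f for f in sorted(right) if rule(fa, f)), None)
            if fb is not None:
                matches.append((fa, fb, rule.__name__))
                left.remove(fa)
                right.discard(fb)
    return matches

print(match_functions(["createBoard", "getScore"], ["create_board", "getScoreValue"]))
# [('createBoard', 'create_board', 'strict_match'),
#  ('getScore', 'getScoreValue', 'loose_first_word')]
```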
7

Zhang, Xianpeng. „Software Clone Detection Based on Context Information“. Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-324959.

Abstract:
Software clone detection is very promising and innovative within the industry field. Existing mainstream clone detection techniques mainly focus on detecting the similarity of the source code itself, which makes them capable of detecting Type I and Type II clones (Type I clones are two identical code fragments except for variations in format, and Type II clones are two structurally identical code fragments except for variations in format). But they rarely pay attention to the relationship between codes. It has become an important research area to detect Type III code clones, which are clones with minor differences in statements, by using the context information in the source code. I carry out a detailed analysis of existing software clone detection techniques in this thesis, raising issues with them in both theory and practice. On the basis of this analysis, I propose a new method to improve existing clone detection techniques, with a detailed theoretical analysis and experimental verification. This method makes the detection of Type III software clones possible.
8

Bahtiyar, Muhammed Yasin. „JClone: Syntax tree based clone detection for Java“. Thesis, Linnaeus University, School of Computer Science, Physics and Mathematics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-5455.

Abstract:

An unavoidable amount of money is spent on maintaining existing software systems today. Software maintenance costs are generally higher than the development cost of the system; lowering maintenance cost is therefore highly valued in the software industry.

A significant part of maintenance activities is related to repeatedly investigating problems and applying the same solutions several times. A software system may contain a common bug in several different places, and it might take extra effort and time to fix all instances of this bug. This commonly increases the cost of software maintenance activities.

Detecting duplicate code fragments can significantly decrease the time and effort, and therefore the maintenance cost. Code clone detection can be achieved by analyzing the source code of a given software system. An abstract syntax tree based clone detector for Java systems is designed and implemented in this study.

This master's thesis examines a software engineering process to create an abstract syntax tree based clone detector for projects implemented in the Java programming language.

9

Khan, Mohammed Salman. „A Topic Modeling approach for Code Clone Detection“. UNF Digital Commons, 2019. https://digitalcommons.unf.edu/etd/874.

Abstract:
In this thesis work, the potential benefits of Latent Dirichlet Allocation (LDA) as a technique for code clone detection are described. The objective is to propose a language-independent, effective, and scalable approach for identifying similar code fragments in relatively large software systems. The main assumption is that the latent topic structure of software artifacts gives an indication of the presence of code clones. It can be hypothesized that artifacts with similar topic distributions contain duplicated code fragments; to test this hypothesis, an experimental investigation using multiple datasets from various application domains was conducted. In addition, CloneTM, an LDA-based working prototype for code clone detection, was developed. Results showed that, if calibrated properly, topic modeling can deliver satisfactory performance in capturing different types of code clones, showing particularly good performance in detecting Type III clones. CloneTM also achieved levels of performance comparable to already existing practical tools that adopt different clone detection strategies.
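The hypothesis above maps directly onto off-the-shelf tooling: treat each code fragment as a bag of identifiers, fit LDA, and flag pairs with near-identical topic distributions. A sketch with scikit-learn (the topic count and similarity threshold are placeholder calibrations, not CloneTM's):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

def topic_distributions(code_fragments, n_topics=10):
    """Treat each code fragment as a 'document' of identifiers and fit LDA."""
    vec = CountVectorizer(token_pattern=r"[A-Za-z_]\w+")
    X = vec.fit_transform(code_fragments)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    return lda.fit_transform(X)   # one topic distribution per fragment

def clone_pairs(code_fragments, threshold=0.95):
    """Fragments with nearly identical topic distributions are clone candidates."""
    theta = topic_distributions(code_fragments)
    sims = cosine_similarity(theta)
    n = len(code_fragments)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if sims[i, j] >= threshold]
```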
10

Saini, Vaibhav Pratap Singh. „Towards Accurate and Scalable Clone Detection Using Software Metrics“. Thesis, University of California, Irvine, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10981732.

Abstract:

Code clone detection tools find exact or similar pieces of code, known as code clones. Code clones are categorized into four types of increasing difficulty of detection, ranging from purely textual (Type I) to purely semantic (Type IV). Most clone detectors reported in the literature work well up to Type III, which accounts for syntactic differences. In between Type III and Type IV, however, there lies a spectrum of clones that, although still exhibiting some syntactic similarities, are extremely hard to detect—the Twilight Zone. Besides correctness, scalability has become a must-have requirement for modern clone detection tools. The increase in the amount of source code in web-hosted open source repository services has presented opportunities to improve the state of the art in various modern use cases of clone detection, such as detecting similar mobile applications, license violation detection, mining library candidates, code repair, and code search, among others. Though these opportunities are exciting, scaling to such vast corpora poses a critical challenge.

Over the years, many clone detection techniques and tools have been developed. One class of these techniques is based on software metrics. Metrics-based clone detection has the potential to identify clones in the Twilight Zone. For various reasons, however, metrics-based techniques are hard to scale to large datasets. My work highlights issues which prevent metrics-based clone detection techniques from scaling to large datasets while maintaining high levels of correctness. The identification of these issues allowed me to rethink how metrics could be used for clone detection.

This dissertation starts by presenting an empirical study using software metrics to understand whether metrics can be used to identify differences between cloned and non-cloned code. The study is followed by another large scale study to explore the extent of cloning in GitHub. Here, the dissertation highlights scalability challenges in clone detection and how they were addressed. The above two studies provided a strong base for using software metrics for clone detection in a scalable manner. To this end, the dissertation presents Oreo, a novel approach capable of detecting harder-to-detect clones in the Twilight Zone. Oreo is built using a combination of machine learning, information retrieval, and software metrics. This dissertation evaluates the recall of Oreo on BigCloneBench, a benchmark of real world code clones. In experiments comparing the detection performance of Oreo with five other state-of-the-art clone detectors, we found that Oreo has both high recall and high precision. More importantly, it pushes the boundary in the detection of clones with moderate to weak syntactic similarity, in a scalable manner. Further, to address the issues identified in precision evaluations, the dissertation presents InspectorClone, a semi-automated approach to facilitate precision studies of clone detection tools. InspectorClone makes use of some of the concepts introduced in the design of Oreo to automatically resolve different types of clone pairs. Experiments demonstrate that InspectorClone has very high precision and significantly reduces the number of clone pairs that need human validation during precision experiments. Moreover, InspectorClone aggregates the individual effort of multiple teams into a single evolving dataset of labeled clone pairs, creating an important asset for software clone research. Finally, the dissertation concludes with a discussion of the lessons learned during the design and development of Oreo and lists a few areas for future work in code clone detection.
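A metrics-based candidate filter of the kind such detectors build on can be sketched in a few lines; the three metrics and the tolerance below are illustrative stand-ins, not Oreo's actual 24-metric feature set or its learned similarity model:

```python
def method_metrics(method_body):
    """A few cheap, size-style software metrics per method body (assumed set)."""
    lines = [l for l in method_body.splitlines() if l.strip()]
    return (
        len(lines),                                    # non-blank line count
        sum(l.count("(") for l in lines),              # rough call/expression count
        sum(l.strip().startswith(("if", "for", "while")) for l in lines),  # branches
    )

def metric_candidates(methods, tolerance=1):
    """Candidate clone pairs: every metric value within a small tolerance."""
    pairs = []
    for i in range(len(methods)):
        for j in range(i + 1, len(methods)):
            a, b = method_metrics(methods[i]), method_metrics(methods[j])
            if all(abs(x - y) <= tolerance for x, y in zip(a, b)):
                pairs.append((i, j))
    return pairs
```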

11

Simko, Thomas J. „Cloneless: Code Clone Detection via Program Dependence Graphs with Relaxed Constraints“. DigitalCommons@CalPoly, 2019. https://digitalcommons.calpoly.edu/theses/2040.

Abstract:
Code clones are pieces of code that have the same functionality. While some clones may structurally match one another, others may look drastically different. The inclusion of code clones clutters a code base, leading to increased costs through maintenance. Duplicate code is introduced through a variety of means, such as copy-pasting, code generated by tools, or developers unintentionally writing similar pieces of code. While manual clone identification may be more accurate than automated detection, it is infeasible due to the extensive size of many code bases. Software code clone detection methods have differing degrees of success based on the analysis performed. This thesis outlines a method of detecting clones using a program dependence graph and subgraph isomorphism to identify similar subgraphs, ultimately revealing clones. The project imposes few constraints when comparing code segments in order to potentially reveal more clones.
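The core mechanism, subgraph isomorphism over program dependence graphs with deliberately weak matching constraints, can be sketched with networkx; the node/edge labelling scheme here is an assumption, and building the PDG from real source is taken as given:

```python
import networkx as nx
from networkx.algorithms import isomorphism

def build_pdg(node_kinds, edges):
    """A program dependence graph: nodes labelled by statement kind,
    edges labelled 'data' or 'control'."""
    g = nx.DiGraph()
    for node, kind in node_kinds.items():
        g.add_node(node, kind=kind)
    for u, v, dep in edges:
        g.add_edge(u, v, dep=dep)
    return g

def contains_clone_of(pdg_big, pdg_small, relaxed=True):
    """Relaxed check: match on dependence-edge type only, ignoring node kinds,
    so structurally similar but rewritten code can still match."""
    node_match = None if relaxed else isomorphism.categorical_node_match("kind", None)
    edge_match = isomorphism.categorical_edge_match("dep", None)
    gm = isomorphism.DiGraphMatcher(pdg_big, pdg_small,
                                    node_match=node_match, edge_match=edge_match)
    return gm.subgraph_is_isomorphic()
```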
12

Al, Hakami Hosam Hasan. „Stable marriage problem based adaptation for clone detection and service selection“. Thesis, De Montfort University, 2015. http://hdl.handle.net/2086/11041.

Abstract:
Current software engineering topics such as clone detection and service selection need to improve the capabilities of the detection process and the selection process. Clone detection is the process of finding duplicated code throughout a system, for purposes such as the removal of repeated portions as part of maintaining a legacy system. Service selection is the process of finding the appropriate web service that meets the consumer's request. Both problems can be converted into a matching problem, and matching processes form an essential part of software engineering activities. In this research, a well-known mathematical algorithm, the Stable Marriage Problem (SMP), and its variations are investigated to fulfil the purposes of matching processes in the software engineering area. We aim to provide a competitive matching algorithm that can help to detect cloned software accurately and ensure high scalability, precision and recall. We also aim to apply the matching algorithm to incoming requests and service profiles so as to treat the web service as an intelligent independent object, allowing services to accept or decline requests (equal opportunity), in contrast to the current search-based state of service selection, in which a service cannot interact as an independent candidate. In order to meet the above aims, the traditional SMP algorithm has been extended to achieve many-to-many cardinality. This adaptation is achieved by defining the selective strategy, which is the main engine of the new adaptations. Two adaptations, Dual-Proposed and Dual-Multi-Allocation, have been proposed for both the service selection and the clone detection process. The proposed SMP-based approach shows very competitive results compared to existing software clone approaches, especially in identifying type 3 clones (copies with further modifications such as updated, added and deleted statements). It performs the detection process with relatively high precision and recall compared to the CloneDR tool and shows good scalability on a middle-sized program. For service selection, the proposed approach has several advantages, such as service protection and service quality: the services gain equal opportunity against the incoming requests, intelligent service interaction is achieved, and both the stability and the satisfaction of the candidates are ensured. This dissertation makes several contributions. First, the SMP algorithm is extended with a selective strategy to accommodate many-to-many matching problems and improve its overall features. Second, a new SMP-based clone detection approach detects cloned software accurately and ensures high precision and recall. Finally, a new SMP-based service selection approach allows equal opportunity between services and requests, which improves service protection and service quality. Case studies are carried out as experiments with the proposed approach, showing that the new adaptations can be applied effectively to clone detection and service selection processes with several desirable features (e.g. accuracy). It can be concluded that the match-based approach is feasible and promising in the software engineering domain.
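The starting point of this adaptation is the classic one-to-one Gale-Shapley algorithm for SMP; a compact Python version is below. The thesis's many-to-many extension via the selective strategy is not reproduced, only the base algorithm it builds on:

```python
def gale_shapley(request_prefs, service_prefs):
    """Classic one-to-one stable matching (Gale-Shapley).
    Both arguments map a participant to its ordered preference list."""
    rank = {s: {r: i for i, r in enumerate(prefs)}
            for s, prefs in service_prefs.items()}
    free = list(request_prefs)          # requests still proposing
    next_idx = {r: 0 for r in request_prefs}
    engaged = {}                        # service -> request
    while free:
        r = free.pop()
        s = request_prefs[r][next_idx[r]]
        next_idx[r] += 1
        if s not in engaged:
            engaged[s] = r
        elif rank[s][r] < rank[s][engaged[s]]:   # service prefers the newcomer
            free.append(engaged[s])
            engaged[s] = r
        else:
            free.append(r)
    return {r: s for s, r in engaged.items()}

# Example: two consumer requests matched to two services.
print(gale_shapley(
    {"req1": ["svcA", "svcB"], "req2": ["svcA", "svcB"]},
    {"svcA": ["req2", "req1"], "svcB": ["req1", "req2"]},
))  # {'req2': 'svcA', 'req1': 'svcB'}
```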
13

Mayo, Quentin R. „Detection of Generalizable Clone Security Coding Bugs Using Graphs and Learning Algorithms“. Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1404548/.

Abstract:
This research methodology isolates coding properties and identifies the probability of security vulnerabilities using machine learning and historical data. Several approaches characterize the effectiveness of detecting security-related bugs that manifest as vulnerabilities, but none utilize vulnerability patch information. The main contribution of this research is a framework to analyze LLVM Intermediate Representation code and merge core source code representations using source code properties. This research is beneficial because it allows source programs to be transformed into a graphical form from which users can extract specific code properties related to vulnerable functions. The result is an improved approach to detect, identify, and track software system vulnerabilities based on a performance evaluation. The methodology uses historical function-level vulnerability information, unique feature extraction techniques, a novel code property graph, and learning algorithms to minimize the amount of end-user domain knowledge necessary to detect vulnerabilities in applications. The analysis shows approximately 99% precision and recall in detecting known vulnerabilities in the National Institute of Standards and Technology (NIST) Software Assurance Metrics and Tool Evaluation (SAMATE) project. Furthermore, 72% of the historical vulnerabilities in the OpenSSL testing environment were detected using a linear support vector classifier (SVC) model.
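The learning stage named at the end, a linear support vector classifier over graph-derived features, is standard machinery; a hedged scikit-learn sketch, with feature extraction from the code property graph assumed to have happened upstream:

```python
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

def train_vulnerability_model(X, y):
    """X: one feature vector per function (from the code property graph);
    y: 1 if historical patch data marks the function vulnerable, else 0.
    Returns the fitted model plus held-out precision and recall."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = LinearSVC().fit(X_tr, y_tr)
    pred = model.predict(X_te)
    return model, precision_score(y_te, pred), recall_score(y_te, pred)
```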
14

Parkkila, Christoffer. „CLONE DETECTION IN MODEL-BASED DESIGN: AN EVALUATION IN THE SAFETY-CRITICAL RAILWAY DOMAIN“. Thesis, Mälardalens högskola, Inbyggda system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54936.

Abstract:
Introduction: Software reuse by copying and modifying components to fit new systems is common in industrial settings. However, it can lead to multiple variants that complicate testing and maintenance. Therefore, it is beneficial to detect the variants in existing codebases to document or incorporate them into a systematic reuse process. For this purpose, model-based clone detection and variability management can be used. Unfortunately, current tools have too high computational complexity to process multiple Simulink models while finding commonalities and differences between them. Therefore, we explore a novel approach called MatAdd that aims to enable large-scale industrial codebases to be processed. Objective: The primary objective is to process large-scale industrial Simulink codebases to detect the commonalities and differences between the models. Context and method: The work was conducted in collaboration with Addiva and Alstom to detect variants in Alstom's codebase of Simulink models. Alstom has specific modeling guidelines and conventions that the developers follow. Therefore, we used an exploratory case study to change the research direction depending on Alstom's considerations. Results and Conclusions: The results show that MatAdd can process large-scale industrial Simulink codebases and detect the commonalities and differences between its models. MatAdd processed Alstom's codebase that contained 157 Simulink models with 7820 blocks and 9627 lines in approximately 90 seconds and returned some type-1, type-2, and type-3 clones. However, current limitations cause some signals to be missed, and a more thorough evaluation is needed to assess its future potential. MatAdd's current state assists developers in finding clones to manually encapsulate into reusable library components or find variants to document to facilitate maintenance.
15

Patience, Trudy. „Sequence analysis of a Cowdria ruminantium lamdba (sic) GEM-11 clone“. Thesis, Stellenbosch : Stellenbosch University, 2002. http://hdl.handle.net/10019.1/53052.

Abstract:
Thesis (MSc)--Stellenbosch University, 2002.
Heartwater is a major threat to livestock in Africa due to its high mortality rate. The intracellular nature of the causative organism, Cowdria ruminantium, makes it difficult to study; hence an effective and user-friendly vaccine has been extremely difficult to obtain. Two C. ruminantium DNA libraries have recently been constructed, the lambda GEM11 bacteriophage DNA library and the lambda ZAPII bacteriophage DNA library, and this has led to a renewed search for protective genes that could be used as a vaccine against heartwater. In this study, several molecular techniques including PCR, cloning and sequencing were used to identify genes in the lambda GEM11 bacteriophage DNA library that code for proteins which could be used as vaccines to protect susceptible animals against heartwater. The lambda GEM11 library was screened with a rickettsial secretory protein gene sequence, known as secD. One positive colony was selected from which the bacteriophage DNA was isolated. The C. ruminantium DNA was amplified from the bacteriophage DNA by using PCR and C. ruminantium-specific primers. The C. ruminantium DNA was screened with Mycoplasma, bovine and Cowdria DNA probes. The amplified DNA was subcloned into two vectors and the clones were screened by restriction analysis to identify clones containing inserts. The appropriate clones were sequenced and overlapping sequences matched, ordered and aligned. Two sequences were continuous with a short sequence of unidentified bases in between. Oligonucleotide primers were designed to amplify the DNA sequence between the two contiguous sequences. This led to the identification of the entire sequence of the C. ruminantium genome contained within the bacteriophage plaque. The single contiguous sequence was analysed and the putative protein-coding sequences were obtained and compared to DNA sequences of known organisms using the BLAST program. Five open reading frames were identified with homology to genes encoding specific proteins in bacteria. Two open reading frames showed homology to the genes encoding the transporter proteins FtsY and the ABC transporter, and three open reading frames were found to be homologous to genes encoding the essential enzymes dethiobiotin synthetase, prolipoprotein diacylglycerol transferase and the putative NADH-ubiquinone oxidoreductase subunit. The five open reading frames encode genes which are essential for the normal functioning of the C. ruminantium organism. However, these open reading frames might not be effective for use in a DNA vaccine, since none of them showed homology to obvious genes that could play a role in immunity and therefore confer protection. The open reading frames can be used in mutagenesis studies to produce attenuated strains of the organism that possess mutated versions of these proteins. These attenuated strains could be used for the vaccination of cattle, and thereby confer protection against viable pathogenic C. ruminantium isolates.
16

Mohd, Aris Siti Norismah. „Molecular and biochemical analysis of the ERT1b ripening clone from tomato“. Thesis, University of Nottingham, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285460.

17

Ly, Kevin. „Normalizer: Augmenting Code Clone Detectors using Source Code Normalization“. DigitalCommons@CalPoly, 2017. https://digitalcommons.calpoly.edu/theses/1722.

Abstract:
Code clones are duplicate fragments of code that perform the same task. As software code bases increase in size, the number of code clones also tends to increase. These code clones, possibly created through copy-and-paste methods or unintentional duplication of effort, increase maintenance cost over the lifespan of the software. Code clone detection tools exist to identify clones where a human search would prove unfeasible; however, the quality of the clones found may vary. I demonstrate that the performance of such tools can be improved by normalizing the source code before usage. I developed Normalizer, a tool to transform C source code into normalized source code written as consistently as possible. By maintaining the code's function while enforcing a strict format, the variability of the programmer's style is taken out. Thus, code clones may be easier for tools to detect regardless of how the code was written. Reordering statements, removing useless code, and renaming identifiers are used to achieve normalized code. Normalizer was used to show that, with a small variety of code clone detection tools, more clones can be found in Introduction to Computer Networks assignments when the source code is normalized than in the original source code.
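One of the normalizing transformations named above, renaming identifiers, is easy to sketch: give every identifier a canonical name in order of first appearance, so that two fragments differing only in naming become textually identical. A simplified Python sketch for C-like source (a real tool such as Normalizer works on a proper token stream or parse tree; this regex version also renames words inside string literals and knows only a handful of keywords):

```python
import re

C_KEYWORDS = {"int", "char", "float", "double", "void", "if", "else", "for",
              "while", "return", "struct", "sizeof", "break", "continue"}

def rename_identifiers(c_source):
    """Rename identifiers to id0, id1, ... in order of first appearance."""
    mapping = {}
    def canon(match):
        word = match.group(0)
        if word in C_KEYWORDS:
            return word
        return mapping.setdefault(word, f"id{len(mapping)}")
    return re.sub(r"\b[A-Za-z_]\w*\b", canon, c_source)

print(rename_identifiers("int sum(int a, int b) { return a + b; }"))
# -> int id0(int id1, int id2) { return id1 + id2; }
```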
18

Sood, Rachit K. „Clone Detection & Cataloging Method (CDCM) Towards an automatic approach for bootstrapping reuse efforts in an organization“. The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1345436534.

19

Kasprzyk, Arkadiusz. „Investigation of clonality and minimal residual disease in haematological malignancy using fluorescent in situ hybridization“. Thesis, University College London (University of London), 1998. http://discovery.ucl.ac.uk/1317904/.

Abstract:
Cytogenetic analysis of the malignant clone is clinically important in haematological malignancy. Analysis by metaphase cytogenetics is restricted to the small proportion of malignant cells which are actively dividing. This thesis explores the dynamics of malignant clones using the technique of fluorescence in situ hybridization (FISH) to visualize chromosomal abnormalities in interphase (non-dividing) cells. Hyperdiploid (>46 chromosomes) clones have been investigated by interphase FISH in acute lymphoblastic leukaemia (ALL), acute myeloid leukaemia (AML) and myelodysplastic syndrome (MDS) using appropriate chromosome-specific probes. A hyperdiploid clone was detected in interphase cells in 9/65 patients with ALL in whom metaphase cytogenetics had failed or was normal. A single hyperdiploid cell was identified as clonal in one patient with MDS but not in six others with AML, MDS or ALL. The involvement of different cell lineages in the malignant clone was investigated by simultaneous FISH and identification of the cell type by morphology or monoclonal antibodies. In ALL, hyperdiploid clones were restricted to the lymphoid blasts in 9/9 cases, while Philadelphia (Ph) positive clones (identified by probes to the genes m-BCR or M-BCR and ABL, which fuse as a result of the translocation) were found either in lymphoid blasts alone (1/3 cases) or in both lymphoid and myeloid cells (2/3 cases). In AML, trisomy 8 (using a chromosome 8-specific probe) and an 11q23 abnormality (which split YAC 13HH4) were both found only in the myeloid blasts, in 3/3 and 2/2 cases respectively. A sensitive method for the detection of hyperdiploid (≥50 chromosomes) clones in ALL was developed for minimal residual disease detection. Simultaneous probing of three chromosomes enabled detection of one hyperdiploid cell in 10,000. Heterogeneity in the speed with which the clone was eliminated in remission was seen in 16 patients and early relapse was detected in one patient.
20

Chen, Mengsu. „How Reliable is the Crowdsourced Knowledge of Security Implementation?“ Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/86885.

Abstract:
The successful crowdsourcing model and gamification design of the Stack Overflow (SO) Q&A platform have attracted many programmers to ask and answer technical questions, regardless of their level of expertise. Researchers have recently found evidence of security-vulnerable code snippets possibly being copied from SO into production software. This inspired us to study how reliable SO is in providing secure coding suggestions. In this project, we automatically extracted answer posts related to Java security APIs from the entire SO site. Then, based on the known misuses of these APIs, we manually labeled each extracted code snippet as secure or insecure. In total, we extracted 953 groups of code snippets in terms of their similarity detected by clone detection tools, which corresponds to 785 secure answer posts and 644 insecure answer posts. Compared with secure answers, counter-intuitively, insecure answers have higher view counts (36,508 vs. 18,713), higher score (14 vs. 5), and more duplicates (3.8 vs. 3.0) on average. We also found that 34% of answers provided by so-called trusted users who have administrative privileges are insecure. Our findings reveal that there are comparable numbers of secure and insecure answers. Users cannot rely on community feedback to differentiate secure answers from insecure answers either. Therefore, solutions need to be developed beyond the current mechanism of SO, or for the use of SO in security-sensitive software development.
Master of Science
Stack Overflow (SO), the most popular question and answer platform for programmers today, has accumulated and continues to accumulate a tremendous number of question and answer posts since its launch a decade ago. Contributed by numerous users all over the world, these posts are a type of crowdsourced knowledge. In the past few years, they have been a main reference source for software developers. Studies have shown that code snippets in answer posts are copied into production software. This is a dangerous sign, because the code snippets contributed by SO users are not guaranteed to be secure implementations of critical functions, such as transferring sensitive information on the internet. In this project, we conducted a comprehensive study on answer posts related to Java security APIs. By labeling code snippets as secure or insecure and contrasting their distributions over associated attributes such as post score and user reputation, we found that there is a significant number of insecure answers (644 insecure vs. 785 secure in our study) on Stack Overflow. Our statistical analysis also revealed the infeasibility of differentiating between secure and insecure posts leveraging the current community feedback system (e.g., voting) of Stack Overflow.
21

Kerkhoff, Sebastian. „A Connection Between Clone Theory and FCA Provided by Duality Theory“. Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-73938.

Abstract:
The aim of this paper is to show how Formal Concept Analysis can be used for the benefit of clone theory. More precisely, we show how a recently developed duality theory for clones can be used to dualize clones over bounded lattices into the framework of Formal Concept Analysis, where they can be investigated with techniques very different from those that universal algebraists are usually armed with. We also illustrate this approach with some small examples.
22

Aziz, Ahmad Naseer. „Genetic analysis of the anther-derived progeny and isolated pollen grains of a culture-responsive Solanum clone“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0023/NQ38342.pdf.

23

Strouts, Fiona Rosalind. „The complete genome and functional analysis of the Brazilian Purpuric Fever clone of Haemophilus influenzae biogroup Aegyptius“. Thesis, Imperial College London, 2010. http://hdl.handle.net/10044/1/6351.

Abstract:
The Brazilian Purpuric Fever (BPF) clone of Haemophilus influenzae biogroup Aegyptius (Hae) emerged in São Paulo in 1984, causing epidemic outbreaks of a life-threatening childhood infection characterised by shock and purpura fulminans. Strains of Hae have long been known to cause highly contagious and purulent conjunctivitis, but never previously implicated in invasive disease. Laboratory studies have revealed phenotypic and genetic differences between the BPF clone and other Hae strains, but failed to identify virulence factors responsible for the unusual virulence of this clone. This thesis describes the exhaustive annotation of the whole genome sequences of the invasive BPF clone isolate F3031 and non-invasive Hae conjunctivitis isolate F3047, inferring gene function through sequence homologies, and identifying insertions, deletions, pseudogenes and regulatory sites that may underlie phenotypic variation between these strains. Pan-genomic comparison of F3031 and F3047 to 5 other complete H. influenzae genomes allowed delineation of the 'Hae accessory genome', revealing a suite of novel adhesins not previously described for H. influenzae, presumably reflecting the conjunctival niche to which Hae has specialised. These include a striking ten-member family of trimeric autotransporter adhesins (TAAs) that share homology with TAAs established to play a role in virulence of other bacterial pathogens, and were selected for further study. Functional evaluation of variants of one of these genes, b/caaA1, through cloning and expression in E. coli, revealed differences in autoaggregation and in adherence of transformants to human epithelial cells in culture. Investigating gene function in Hae has been hampered by difficulties in genetically manipulating these strains. Competence for DNA uptake and transformation in Hae was investigated through in silico analysis of the genes involved in these processes, and the development of a plate transformation protocol that appears to reliably transform certain strains of Hae, providing a valuable tool for future work investigating the virulence functions of genes in their natural background.
24

Padmanabhan, Sivasankar. „Drowsiness detection using HRV analysis“. Thesis, California State University, Long Beach, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1596988.

Abstract:

The field of drowsiness detection is gaining more attention these days. An estimate by the National Highway Traffic Safety Administration states that the total number of people falling asleep at the wheel is increasing day by day. If there were an effective way to monitor this condition and alert drivers, many fatal accidents could be prevented. This thesis work elaborates on one such simple yet effective drowsiness detection algorithm: HRV (Heart Rate Variability) analysis. Psychological researchers have found that when a person becomes drowsy, there is a variation in their heart signal. Monitoring this physiological variation is more efficient than monitoring facial movements such as blinking, eyebrow contraction, and yawning, which are said to occur much later than the immediate changes in the heart rate. Hence, an algorithm that detects drowsiness based on HRV analysis was developed and implemented by analyzing heart signals. Simple hardware setups were used to collect the ECG data, and digital filters were used to remove noise and extract the desired information for further analysis. The developed algorithm was implemented successfully and the results obtained were precise and satisfactory. This approach to monitoring drowsiness is reliable and accurate, and when implemented with its necessary features, it can monitor drowsiness effectively and save hundreds of lives every day.
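The HRV computation at the heart of this approach is compact enough to sketch: detect R-peaks in the filtered ECG, derive R-R intervals, and track standard variability measures over time. A hedged Python sketch (the sampling rate, peak-detection parameters and the choice of SDNN/RMSSD are illustrative assumptions, not the thesis's exact algorithm):

```python
import numpy as np
from scipy.signal import find_peaks

def hrv_features(ecg, fs=250):
    """Detect R-peaks in a filtered ECG trace and compute two standard HRV
    measures; a drowsiness detector would watch these drift between windows."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=np.std(ecg))
    rr = np.diff(peaks) / fs * 1000.0           # R-R intervals in milliseconds
    sdnn = np.std(rr)                           # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # short-term variability
    return sdnn, rmssd
```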

25

Orvalho, André. „Botnet Detection by Correlation Analysis“. Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-105096.

Abstract:
When a bot master uses a command and control (C&C) mechanism to assemble a large number of bots, infecting them by using well-known vulnerabilities, it forms a botnet. Botnets can vary in C&C architecture (centralized C&C or P2P are the most common), in the communication protocols used (IRC, HTTP or others like P2P) and in observable botnet activities. They are nowadays one of the largest threats to cyber security, and it is very important to specify the different characteristics of botnets in order to detect them, the same way a hunter needs to know its prey before preparing methods to catch it. There are two important places to look for botnet activity: the network and the infected host. This project presents a study that correlates the behavior on the network with the behavior on the host in order to help detection; studies like [SLWL07] (based on network behavior) and [SM07] (based on host behavior) are two good starting points for the research. The architecture was chosen by looking at botnet characteristics, especially their capacity for changing and evolving, which makes misuse-based detection methods obsolete. The system is designed to first look at four features of system calls on the host side: first, which system call it is; second, the name of the application issuing the system call; third, the time between this system call and the previous one; and last, the sequence of the past three system calls. A technique of unsupervised learning (the K-means algorithm) is used to calculate the threshold values from an unclassified training set; in deployment, the collected data is compared against these thresholds, and if a system call passes the threshold, the necessary information is passed to the network evaluation block. On the network side, before receiving any data from the host side, the system calculates the threshold for the flows given in the training set. When using the data from the host to narrow down the number of flows to look at, it verifies whether their values pass the threshold. The feature used to calculate the threshold is the time between flows. If the network evaluation block finds flows that pass its threshold, it emits reports and alarms to the user. The small experiments performed show some promising signs for use in the real world, even though much further testing is needed, especially on the network side. The prototype shows some limitations that can be overcome by further testing and by using other techniques to evolve the prototype.
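The host-side training step described above, K-means over unlabeled system-call feature vectors with thresholds derived from the clusters, can be sketched as follows (the per-cluster maximum-distance threshold is an assumed concrete reading of the prototype's thresholding; scikit-learn stands in for whatever implementation the thesis used):

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_host_model(train_features, k=8):
    """Unsupervised training: cluster system-call feature vectors and take each
    cluster's maximum training distance as its anomaly threshold."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(train_features)
    dists = np.min(km.transform(train_features), axis=1)
    thresholds = np.array([dists[km.labels_ == c].max() for c in range(k)])
    return km, thresholds

def is_suspicious(km, thresholds, feature_vector):
    """Run time: flag a system call whose distance to its nearest cluster exceeds
    that cluster's threshold (it would then go to the network evaluation block)."""
    d = km.transform([feature_vector])[0]
    c = int(np.argmin(d))
    return d[c] > thresholds[c]
```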
26

Tait, Crawford. „Wavelet analysis for onset detection“. Thesis, University of Glasgow, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.363156.

27

Nikolaidis, Dimitrios. „Detection of mines using hyperspectral analysis“. Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1996. http://handle.dtic.mil/100.2/ADA311744.

Abstract:
Thesis (M.S. in Applied Physics)--Naval Postgraduate School, June 1996.
Thesis advisor(s): David D. Cleary, Suntharalingam Gnanalingam. "June 1996." Includes bibliographical references. Also available online.
28

Al, Amro Sulaiman. „Behaviour-based virus analysis and detection“. Thesis, De Montfort University, 2013. http://hdl.handle.net/2086/9488.

Abstract:
Every day, the growing number of viruses causes major damage to computer systems, and many antivirus products have been developed to protect against them. Regrettably, existing antivirus products do not provide a full solution to the problems associated with viruses. One of the main reasons for this is that these products typically use signature-based detection, so that the rapid growth in the number of viruses means that many signatures have to be added to their signature databases each day. These signatures then have to be stored in the computer system, where they consume increasing memory space. Moreover, the large database will also affect the speed of searching for signatures and, hence, the performance of the system. As the number of viruses continues to grow, ever more space will be needed in the future. There is thus an urgent need for a novel and robust detection technique. One of the most encouraging recent developments in virus research is the use of formulae, which provides alternatives to classic virus detection methods. The proposed research uses temporal logic and behaviour-based detection to detect viruses. Interval Temporal Logic (ITL) will be used to generate virus specifications, properties and formulae based on the analysis of the behaviour of computer viruses, in order to detect them. Tempura, the executable subset of ITL, will be used to check whether good or bad behaviour occurs, with the help of ITL descriptions and system traces. The process will also use AnaTempura, an integrated workbench tool for ITL that supports our system specifications. AnaTempura offers validation and verification of the ITL specifications and provides runtime testing of these specifications.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Wang, Chen-Shan. „Moving object detection by track analysis“. Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA241007.

Der volle Inhalt der Quelle
Annotation:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, September 1990.
Thesis Advisor(s): Lee, Chin-Hwa. Second Reader: Hippenstiel, Ralph. "September 1990." Description based on title screen as viewed on December 18, 2009. DTIC Identifier(s): Underwater Object Locators, Underwater Tracking, Underwater Targets, Acoustic Detection, Computerized Simulation, Acoustic Data. Author(s) subject terms: Hough Transform, ILS, LAS, Sorting, Modification, Similarity. Includes bibliographical references (p. 76). Also available in print.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Kumar, Surinder. „Lane Detection based on Contrast Analysis“. Master's thesis, Universitätsbibliothek Chemnitz, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-206227.

Der volle Inhalt der Quelle
Annotation:
Computer vision and image processing systems are ubiquitous in the automotive domain and in manufacturing. Lane detection and warning systems have become an elementary part of the modern automotive industry. Due to recent progress in computer vision and image processing methods, economical and flexible use of computer vision is now pervasive, and computing with images is no longer just the realm of science, but also of the arts and social sciences, and even of hobbyists. Image processing is a key technology in the automotive industry; even now there is hardly a single manufacturing process that is thinkable without imaging. The application of image processing and computer vision methods on embedded systems platforms has been an ongoing research area for many years. OpenCV is an open-source computer vision library containing optimized algorithms and methods for designing and implementing applications based on video and image processing techniques; these methods are organized into modules for specific fields, including user interfaces, machine learning, feature extraction, etc. [43]. Vision-based systems have become an important mechanism for lane detection and warning, alerting a driver about the road and the localization of the vehicle [1]. In the automotive electronics market, vision-based approaches to the lane detection problem have been designed and developed using different hardware and software components, including wireless sensors, camera modules, Field-Programmable Gate Array (FPGA) based systems, GPUs and digital signal processors (DSPs) [13]. The software module sits on top of a real-time operating system and hardware described in languages such as Verilog or VHDL. One of the most time-critical tasks of vision-based systems is testing applications in real physical environments with a wide variety of driving scenarios and validating the whole system to automotive industry standards. For validating and testing advanced driver assistance systems, some commercial tools are available, including EB Assist ADTF from Elektrobit [43]. In addition to the design and strict real-time requirements of advanced driver assistance applications based on electronic components and embedded platforms, the complexity and characteristics of the implemented algorithms are two parameters to consider when choosing hardware and software components [13]. Developing vision-based automotive applications on micro-controllers alone is not a feasible approach [35] [13], and GPU-based solutions are attractive but raise other issues, including power consumption. In this thesis project, the image and video processing modules of the OpenCV library are used for the road lane detection problem. In the proposed lane detection methods, low-level image processing algorithms are used to extract information relevant to lane detection by applying contrast analysis to pixel-level intensity values. Furthermore, the work at hand presents different approaches for solving relevant partial problems in the domain of lane detection. The aim of the work is to apply contrast analysis based on low-level image processing methods to extract lane model information from the grid of pixel intensity values in an image frame.
The approaches presented in this project are based on contrast analysis of a binary mask extracted from the image frame after applying a range threshold; a sketch of this step appears below. Lane feature models based on sets of points in the image frame are used to detect lanes in colour frames captured from video. For performance measurement and evaluation, the proposed methods were tested on different system setups, including Linux, Microsoft Windows (Code::Blocks, Visual Studio 2012) and the Linux-based Raspbian Jessie operating system, running on Intel i3 and AMD A8 APU processors and on an embedded system (Raspberry Pi 2 Model B) with an ARM v7 processor, respectively.
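A minimal sketch of the range-threshold and binary-mask step described above, using OpenCV; the HSV bounds and the Hough-transform post-processing are illustrative assumptions rather than the thesis's tuned pipeline.

```python
import cv2
import numpy as np

def lane_mask(frame_bgr):
    # Range threshold in HSV for bright (white-ish) lane markings.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))  # binary mask

def lane_segments(mask):
    # Contrast edges on the binary mask, then line fitting on the points.
    edges = cv2.Canny(mask, 50, 150)
    return cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                           minLineLength=40, maxLineGap=20)
```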
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Linguraru, Marius George. „Feature detection in mammographic image analysis“. Thesis, University of Oxford, 2004. http://ora.ox.ac.uk/objects/uuid:b92185f0-c7bf-40e1-bc17-bf71065f001f.

Der volle Inhalt der Quelle
Annotation:
In modern society, cancer has become one of the most feared diseases because of its high and increasing death rate. The disease's deep impact demands extensive research to detect and eradicate it in all its forms. Breast cancer is one of the most common forms of cancer, and approximately one in nine women in the Western world will develop it over the course of their lives. Screening programmes have been shown to reduce the mortality rate, but they introduce an enormous amount of information that must be processed by radiologists on a daily basis. Computer Aided Diagnosis (CAD) systems aim to assist clinicians in their decision-making process by acting as a second opinion, helping to improve detection and classification ratios by spotting very difficult and subtle cases. Although the field of cancer detection is developing rapidly and crosses over imaging modalities, X-ray mammography remains the principal tool for detecting the first signs of breast cancer in population screening. The advantages and disadvantages of other imaging modalities for breast cancer detection are discussed, along with the improvements and difficulties encountered in screening programmes; remarkable achievements to date in breast CAD are also presented. This thesis introduces original results for the detection of features in mammographic image analysis to improve the effectiveness of early cancer screening programmes. The detection of early signs of breast cancer is vital in managing such a fast-developing disease with poor survival rates, and some of the earliest signs of cancer in the breast are clusters of microcalcifications. The proposed method is based on image filtering comprising partial differential equations (PDEs) for image enhancement; a sketch of one standard PDE filter appears below. Subsequently, microcalcifications are segmented using characteristics of the human visual system, exploiting the superior ability of the human eye to detect localised changes of intensity and appearance in an image. Parameters are set according to the image characteristics, which makes the method fully automated. The detection of breast masses in temporal mammographic pairs is also investigated as part of the development of a complete breast cancer detection tool; the design of this latter algorithm is based on the detection sequence used by radiologists in clinical routine. To support the classification of masses as benign or malignant, novel tumour features are introduced. Image normalisation is another key concept discussed in this thesis, along with its benefits for cancer detection.
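The thesis describes PDE-based filtering for enhancement; as one standard example of such a filter (not necessarily the exact PDE used there), here is a Perona-Malik anisotropic diffusion sketch, which smooths noise while preserving the sharp intensity changes that microcalcifications produce.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, dt=0.2):
    u = img.astype(float)
    for _ in range(n_iter):
        # Finite-difference gradients in the four compass directions.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance g(|grad u|) preserves boundaries.
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```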
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Marshall, Jonathan Peter. „Detection and analysis of debris discs“. Thesis, Open University, 2011. http://oro.open.ac.uk/54869/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Hallam, Robert Kenneth. „Dual optical detection and multivariate analysis“. Thesis, Loughborough University, 2003. https://dspace.lboro.ac.uk/2134/33747.

Der volle Inhalt der Quelle
Annotation:
The application of flow injection analysis to the simultaneous determination of two or more components has been challenging for many years. Various detectors, such as ultraviolet/visible absorption, fluorescence and electrochemical detectors, have been used individually or in combination. Combining two optical detectors such as fluorescence and ultraviolet/visible absorbance has always been difficult due to their incompatibilities. However, recent developments in fibre optics, solid-state light sources and miniaturised charge-coupled devices (CCDs) allow novel designs in which most of the incompatibilities can be circumvented. A flow injection manifold can now be adapted so that only one flow cell is used, along with a diode array CCD detector that can detect both fluorescence and absorbance simultaneously. The initial development and testing of such a dual-detection system is described in this thesis.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Crane, Nicola. „Debiasing reasoning : a signal detection analysis“. Thesis, Lancaster University, 2016. http://eprints.lancs.ac.uk/82265/.

Der volle Inhalt der Quelle
Annotation:
This thesis focuses on deductive reasoning and how the belief bias effect can be reduced or ameliorated. Belief bias is a phenomenon whereby the evaluation of the logical validity of an argument is skewed by the degree to which the reasoner believes the conclusion. There has been little research examining ways of reducing such bias, or whether some intervention can effectively make people reason more on the basis of logic. Traditional analyses of such data have focused on simple measures of accuracy, typically deducting the number of incorrect answers from the number of correct answers to give an accuracy score. However, recent theoretical developments have shown that this approach fails to separate reasoning biases from response biases. A reasoning bias is one which affects individuals' ability to discriminate between valid and invalid arguments, whereas a response bias is simply an individual's tendency to give a particular answer, independent of reasoning. A Signal Detection Theory (SDT) approach is used to calculate measures of reasoning accuracy and response bias (their computation is sketched below); these measures are then analysed using mixed effects models. Chapter 1 gives a general introduction to the topic and outlines the content of subsequent chapters. In Chapter 2, I review the psychological literature around belief bias, the growth of the use of SDT models, and approaches to reducing bias. Chapter 3 covers the methodology, and includes a thorough description of the calculation of the SDT measures and an explanation of the mixed effects models used to analyse them. Chapter 4 presents an experiment in which the effects of feedback on reducing belief bias are examined. Chapter 5 shifts the focus towards individual differences and looks at the effect of different instructions given to participants, and Chapter 6 examines the effects of both feedback and specific training. Chapter 7 provides a general discussion of the implications of the previous three chapters.
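The SDT measures in question are standard; a minimal sketch of their computation from response counts is shown below. The 0.5/1.0 correction for extreme hit/false-alarm rates is one common convention, not necessarily the one used in the thesis.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    # Log-linear style correction avoids infinite z-scores at rates 0 or 1.
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(h) - norm.ppf(f)      # discrimination (reasoning accuracy)
    c = -0.5 * (norm.ppf(h) + norm.ppf(f))   # criterion (response bias)
    return d_prime, c
```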
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Moussaileb, Routa. „Log analysis for malicious software detection“. Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0211.

Der volle Inhalt der Quelle
Annotation:
Ransomware remains the number one cyberthreat for individuals, enterprises and governments. The malware's aftermath can cause irreversible casualties if the attackers' demands are not met in time. This thesis targets Windows ransomware, which affects users' data and undermines many public services. Four stages of the attack are defined: delivery, deployment, destruction and dealing. The corresponding countermeasures are assigned to each phase of the attack and clustered according to the techniques used. This thesis presents three contributions. The first detection mechanism is located in the file system layer; it is based on system traversal, which is sufficient to highlight the malicious behaviour. The thesis also proposes an analysis of the network traffic generated by collected ransomware samples to perform packet-level detection, together with a study of ransom notes to define where they appear in a ransomware workflow. The last contribution provides an insight into plausible attacks, especially Doxware: a quantification model that explores the Windows file system in search of valuable data is presented, based on the term frequency-inverse document frequency (TF-IDF) solution from the information retrieval literature (a sketch follows below), and honeypot techniques are used to protect the sensitive files of users. Finally, the thesis offers perspectives granting a better roadmap for researchers.
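As a hedged sketch of the quantification idea, the snippet below ranks files by the TF-IDF weight of their extracted text to approximate which ones hold valuable data; the vectorizer settings and the per-file score are illustrative assumptions, not the thesis's model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def rank_files(file_texts):
    """file_texts: dict mapping path -> extracted text content."""
    paths = list(file_texts)
    tfidf = TfidfVectorizer(max_features=5000).fit_transform(
        file_texts[p] for p in paths)
    # Crude per-file importance: total TF-IDF mass of its terms.
    scores = tfidf.sum(axis=1).A.ravel()
    return sorted(zip(paths, scores), key=lambda x: -x[1])
```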
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Sheikhalishahi, Mina. „Spam campaign detection, analysis, and formalization“. Doctoral thesis, Université Laval, 2016. http://hdl.handle.net/20.500.11794/26935.

Der volle Inhalt der Quelle
Annotation:
Tableau d'honneur de la Faculté des études supérieures et postdoctorales, 2016-2017
Spam emails yearly impose extremely heavy costs in terms of time, storage space and money on both private users and companies. To effectively fight the spam problem, it is not enough to stop spam messages from being delivered to the end user's inbox or to collect them in a spam box. It is necessary either to find and prosecute the spammers, who generally hide behind complex networks of infected devices that send spam emails against their users' will (i.e. botnets), or to analyse spammer behaviour to find appropriate counter-strategies. Such a task is difficult, however, because camouflage techniques make a manual analysis of correlated spam emails necessary to find the spammers. To facilitate this analysis, which should be performed on large amounts of unclassified raw emails, we propose a categorical clustering methodology, named CCTree, to divide large amounts of spam emails into spam campaigns by structural similarity (a simplified sketch of the idea follows below). We show the effectiveness and efficiency of the proposed clustering algorithm through several experiments. Afterwards, a self-learning approach is proposed to label spam campaigns based on the goal of the spammer, e.g. phishing. The labelled spam campaigns are used to train a classifier, which can be applied to classify new spam emails. Furthermore, the labelled campaigns, together with a set of four additional ranking features, are ordered according to investigators' priorities. A semiring-based structure is proposed to abstract the CCTree representation; through several theorems we show that, under some conditions, the proposed approach fully abstracts the tree representation. The abstract schema of CCTree, named CCTree term, is applied to formalize CCTree parallelism. Through a number of mathematical analyses and experimental results, we show the efficiency and effectiveness of our proposed framework as an automatic tool for spam campaign detection, labelling, ranking and formalization.
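The following is a simplified sketch in the spirit of CCTree: recursively split a set of emails, described by categorical structural features, on the highest-entropy attribute until a node's entropy falls below a purity threshold. The stopping rule and feature set are assumptions; the thesis defines the actual algorithm.

```python
import math
from collections import Counter

def entropy(values):
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

def cctree_like(rows, attrs, max_entropy=0.5):
    """rows: list of dicts of categorical structural features per email."""
    if not attrs:
        return [rows]
    # Split on the attribute whose values are most mixed in this node.
    best = max(attrs, key=lambda a: entropy([r[a] for r in rows]))
    if entropy([r[best] for r in rows]) <= max_entropy:
        return [rows]  # node is pure enough: treat it as one campaign
    groups = {}
    for r in rows:
        groups.setdefault(r[best], []).append(r)
    rest = [a for a in attrs if a != best]
    return [c for g in groups.values()
            for c in cctree_like(g, rest, max_entropy)]
```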
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Ghazali, Mohd Fairusham. „Leak detection using instantaneous frequency analysis“. Thesis, University of Sheffield, 2012. http://etheses.whiterose.ac.uk/2545/.

Der volle Inhalt der Quelle
Annotation:
Leaking pipes are a primary concern for water utilities around the globe, as they account for a major portion of losses. Contemporary interest in leaks is well documented and there is a proliferation of leak detection techniques. Although the causes of leaks are well known, some current methods for leak detection and location are complicated or inaccurate, and most are time-consuming. Transient analyses offer a plausible route towards leak detection due to their robustness and simplicity; these approaches use changes in the pressure response of the fluid in a pipeline to identify features. The method used in the current study employs a single pressure transducer to obtain the time-domain signal of the pressure transient response caused by the sudden opening and closing of a solenoid valve. The device is fitted onto a standard UK hydrant and both generates the pressure wave and acquires the pressure history. The work described here shows that analysis using the Hilbert transform (HT), the Hilbert-Huang transform (HHT) and EMD-based methods is a promising tool for leak detection and location in pipeline networks. In the first part of the work, the instantaneous characteristics of the transient pressure signal are calculated using the HT and HHT for both simulated and experimental data. These instantaneous properties of the signals are shown to be capable of detecting reflections from features of the pipe, such as leakages and outlets; when tested with leaks at different locations, the processed results still reveal the features present in the system. In the second part of the work, the study applies empirical mode decomposition (EMD), a relatively new method for analysing non-stationary data, to the instantaneous frequency calculation for leak detection. First, the pressure signals are filtered using EMD to remove noise; then the instantaneous frequency is calculated and compared using different methods (the core computation is sketched below). With this approach it is possible to identify the leaks as well as the features in the pipeline network. The methods were tested at different locations of a real water distribution system in the Yorkshire Water region.
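A minimal sketch of the instantaneous-frequency computation, assuming the pressure signal has already been EMD-filtered (EMD itself needs an extra package such as PyEMD): take the Hilbert transform and differentiate the unwrapped phase.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(x, fs):
    analytic = hilbert(x)                      # analytic signal x + j*H[x]
    phase = np.unwrap(np.angle(analytic))      # instantaneous phase
    return np.diff(phase) * fs / (2 * np.pi)   # in Hz, length len(x) - 1
```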
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Lehtomäki, J. (Janne). „Analysis of energy based signal detection“. Doctoral thesis, University of Oulu, 2005. http://urn.fi/urn:isbn:9514279255.

Der volle Inhalt der Quelle
Annotation:
The focus of this thesis is on the binary signal detection problem, i.e., whether a signal or signals are present or not. Depending on the application, the signal to be detected can be either unknown or known. Detection is based on some function of the received samples, which is compared to a threshold; if the threshold is exceeded, it is decided that signal(s) are present. Energy detectors (radiometers) are often used due to their simplicity and good performance. The main goal here is to develop and analyze energy-based detectors as well as power-law based detectors. Different possibilities for setting the detection threshold of a quantized total power radiometer are analyzed, with the main emphasis on methods that use reference samples. In particular, the cell-averaging (CA) constant false alarm rate (CFAR) threshold setting method is analyzed (a sketch of the CA rule follows below); numerical examples show that the CA strategy achieves the desired false alarm probability, whereas a more conventional strategy gives values that are too high, especially with a small number of reference samples. A new performance analysis of a frequency-sweeping channelized radiometer is presented. The total power radiometer outputs from different frequencies are combined using logical-OR, sum and maximum operations, and an efficient method is presented for accurately calculating the likelihood ratio used in optimal detection; the effects of fading are also analyzed. Numerical results show that although sweeping increases the probability of intercept (POI), the final probability of detection is not increased if the number of observed hops is large. The performance of a channelized radiometer is studied when different CFAR strategies are used to set the detection threshold. The proposed iterative methods are the forward consecutive mean excision (FCME) method with CA scaling factors in the final detection decision (FCME+CA), the backward consecutive mean excision (BCME) method with CA scaling factors in detection (BCME+CA), and a method that uses CA scaling factors for both censoring and detection (CA+CA). Numerical results show that iterative CFAR methods may improve detection performance compared to baseline methods. Finally, a method is presented to set the threshold of a power-law detector that uses a nonorthogonal transform: the mean, variance and skewness of the decision variable in the noise-only case are derived and used to find a shifted log-normal approximation to its distribution. The accuracy of this method is verified through simulations.
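A minimal sketch of the cell-averaging idea analysed in the thesis: the detection threshold is a scaled mean of noise-only reference energies, so the false-alarm rate stays constant as the noise level drifts. The scaling factor alpha would be derived from chi-square statistics for the desired false-alarm probability; here it is simply a parameter.

```python
import numpy as np

def ca_cfar_detect(test_energy, reference_energies, alpha):
    # CA threshold: scaled average of the noise-only reference cells.
    threshold = alpha * np.mean(reference_energies)
    return test_energy > threshold, threshold

# Example: energy over 64 samples, against 32 noise-only reference cells.
rng = np.random.default_rng(0)
ref = np.sum(rng.normal(size=(32, 64)) ** 2, axis=1)
test = np.sum(rng.normal(size=64) ** 2)
decision, thr = ca_cfar_detect(test, ref, alpha=1.4)
```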
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Nawawi, Mustaffa bin. „Flow injection analysis with bioluminescence detection“. Thesis, Loughborough University, 1987. https://dspace.lboro.ac.uk/2134/31964.

Der volle Inhalt der Quelle
Annotation:
The detection of bacterial contamination of water, pharmaceutical products, etc. is of great importance, and is most conveniently performed by detecting bacterial ATP (adenosine triphosphate) using the luciferin-luciferase bioluminescence system. This system uses unstable and expensive reagents and emits transient light signals. In this study, an FIA (Flow Injection Analysis) system was set up to monitor the light signal produced by the reaction. Using a luminometric detector (a liquid scintillation counter) with the FIA system, the reaction length, sample volume, flow rates, pH, etc. were investigated.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Wang, Shinan. „Detection and Analysis of GNSS Multipath“. Thesis, KTH, Geodesi och satellitpositionering, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188573.

Der volle Inhalt der Quelle
Annotation:
The multipath effect arises when a signal reaches the antenna by multiple paths instead of one direct path. It depends to a large extent on the antenna's surroundings and on the satellite geometry. Despite all the efforts put into the mitigation of multipath errors, multipath remains a dominant error source that cannot be ignored in GNSS precise positioning and other GNSS applications. In this thesis, two methods have been developed with Trimble Business Center and MATLAB to study the presence and behaviour of the multipath effect. The first method, Trimble baseline analysis, focuses on the pattern of height changes of the study station with regard to its reference station over time. The second method, RINEX analysis, focuses on the change over time of the geometry-free combination of pseudorange codes and carrier phase measurements (a sketch of one such combination follows below). Both methods were first tested on station KTH and then applied to stations Vidsel and Botsmark. The results consistently indicate the existence of multipath effects at the three stations: the height value of the study station shows a daily variation pattern because of multipath, and multipath errors also add noise to the satellite signals, with pseudorange more affected than carrier phase. It is also worth noting that satellites at low elevation angles are more susceptible to multipath errors than those at high elevation angles.
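A sketch of one classic geometry-free, code-minus-carrier multipath observable for GPS L1/L2 (the MP1 combination); whether the thesis uses exactly this observable is an assumption based on the abstract. Geometry, clocks and troposphere cancel in the combination, leaving multipath, noise and a constant ambiguity bias.

```python
F1, F2 = 1575.42e6, 1227.60e6   # GPS L1/L2 carrier frequencies (Hz)
ALPHA = (F1 / F2) ** 2

def mp1(P1, L1, L2):
    """P1: L1 pseudorange (m); L1, L2: carrier phases converted to metres.
    Returns a multipath proxy whose variations track code multipath."""
    return P1 - L1 - (2.0 / (ALPHA - 1.0)) * (L1 - L2)
```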
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Abd, Yusof Noor Fazilla. „Computational approaches to depression analysis : from detection to intention analysis“. Thesis, University of Aberdeen, 2018. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=238393.

Der volle Inhalt der Quelle
Annotation:
The proliferation of social media-based research on mental health offers exciting possibilities to complement traditional methods in mental health care; as psychology experts have argued, online platforms should get priority over offline settings, as they can offer considerably more reliable diagnoses than those granted in person. Early detection not only alleviates the effects of depression on the patient but also benefits the whole community. In this thesis, we explore computational methods for tackling some of the research challenges in depression analysis and make four contributions to the body of knowledge. First, we develop a binary classification model for classifying depression-indicative text from social media. We propose three feature engineering strategies and assess the effectiveness of a supervised model in predicting posts that indicate depression. To tackle the short and sparse nature of social media data, we integrate coherent sentiment-topics extracted from a topic model, and we propose strategies to investigate the effectiveness of affective lexicons for depression classification. Second, we propose a computational method for analysing potential causes of depression from text (a sketch of the topic-modelling step appears below). We demonstrate the ability of the topic model to discover potential factors that might lead to depression, show the most prominent causes and how they evolve over time, and highlight differences in triggering causes between two groups, i.e. those at high and low risk of depression. This significantly expands the ability to discover the potential factors that trigger depression, making it possible to increase the efficiency of treatment. Third, we develop a computational method for monitoring psychotherapy outcomes from individual counselling sessions, showing that a topic model can track each patient's treatment progress by assessing the sentiment and topics discussed throughout the course of psychotherapy. Fourth, we propose an unsupervised method called split over-training for identifying users' intentions expressed in social media text, and develop a binary classification model for classifying intentions; with this study we show the possibility of applying intention analysis in the mental health domain. Overall, we demonstrate how computational analysis can be utilised to benefit clinical settings in mental health analysis, and we suggest future work to further complement traditional mental health care.
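An illustrative sketch of the topic-modelling step: fit LDA over a collection of posts and read off the top words of each topic as candidate "cause" themes. The model choice and settings are assumptions; the thesis's exact model may differ.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def prominent_topics(posts, n_topics=10, n_words=8):
    vec = CountVectorizer(stop_words="english", max_features=5000)
    X = vec.fit_transform(posts)
    lda = LatentDirichletAllocation(n_components=n_topics,
                                    random_state=0).fit(X)
    vocab = vec.get_feature_names_out()
    # Top words per topic, as a rough label for each candidate theme.
    return [[vocab[i] for i in comp.argsort()[-n_words:][::-1]]
            for comp in lda.components_]
```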
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Leite, Jose Antonio Ferreira. „Multi-scale line detection“. Thesis, University of York, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362017.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Manceau, Jérôme. „Clonage réaliste de visage“. Thesis, CentraleSupélec, 2016. http://www.theses.fr/2016SUPL0004/document.

Der volle Inhalt der Quelle
Annotation:
3D face clones can be used in many areas, such as human-computer interaction, and as a pre-processing step in applications such as emotion analysis. However, such clones should have a well-modelled facial shape while keeping the specificities of individuals, and they should be semantic: a clone is semantic when the positions of the different parts of the face (eyes, nose...) are known. In our technique, we use an RGB-D sensor to capture the specificities of individuals and a 3D Morphable Face Model to constrain the facial shape. For the reconstruction of the shape, we reverse the process that is classically used: we first perform fitting and then data fusion. For each depth frame, we keep the suitable parts of the data, called shape patches; depending on their location, we merge either sensor data or 3D Morphable Face Model data. For the reconstruction of the texture, we use shape and texture patches to preserve the person's characteristics; they are detected using the depth frames of the RGB-D sensor. The tests we performed show the robustness and accuracy of our method.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Yanilmaz, Huseyin. „Damage Detection In Beams By Wavelet Analysis“. Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12609162/index.pdf.

Der volle Inhalt der Quelle
Annotation:
In this thesis, a method proposed by Han et al. [40] for detecting and locating damage in a structural member was adapted. The method is based on energies calculated from the CWT coefficients of the vibrational response of a cantilever beam. A transverse cut at varying depths was introduced, and the presence and location of the crack were investigated by processing experimentally acquired acceleration signals. Results of modal analysis and wavelet analysis of the beam at different cut depths were compared. In addition, the effect of using different mother wavelets in the CWT analysis on damage detection capability was investigated. Acceleration data were analysed through the CWT at different scales, and the coefficients obtained at different scales were evaluated from the standpoint of damage detection (a sketch of the energy computation follows below). The effectiveness of energy indices associated with CWT coefficients in damage detection was shown to be independent of the type of mother wavelet.
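A minimal sketch of the energy index, assuming PyWavelets: compute the CWT of the acceleration signal and sum the squared coefficients per scale. The scale range, sampling rate and Morlet wavelet are illustrative choices, not the thesis's settings.

```python
import numpy as np
import pywt

def scale_energies(accel, scales=np.arange(1, 65),
                   wavelet="morl", fs=1000.0):
    coeffs, _freqs = pywt.cwt(accel, scales, wavelet,
                              sampling_period=1.0 / fs)
    # One energy value per scale; peaks can indicate damage-related content.
    return np.sum(np.abs(coeffs) ** 2, axis=1)
```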
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Alonso-Fernandez, Fernando, und Josef Bigun. „Iris Pupil Detection by Structure Tensor Analysis“. Högskolan i Halmstad, Halmstad Embedded and Intelligent Systems Research (EIS), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-14946.

Der volle Inhalt der Quelle
Annotation:
This paper presents a pupil detection/segmentation algorithm for iris images based on structure tensor analysis. The eigenvalues of the structure tensor matrix are observed to be high at pupil boundaries and at specular reflections in iris images; we exploit this fact to detect the specular reflection region and the boundary of the pupil in a sequential manner (see the sketch below). Experimental results are given using the CASIA-IrisV3-Interval database (249 contributors, 396 different eyes, 2,639 iris images). The results show that the algorithm works especially well in detecting specular reflections (98.98% success rate), and pupil boundary detection is done correctly in 84.24% of the images.
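A sketch of the underlying computation, using standard SciPy primitives: build the smoothed structure tensor from image gradients and take its eigenvalues, which the paper observes to peak at the pupil boundary and at specular reflections. The gradient operator and Gaussian scale are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_eigvals(img, sigma=2.0):
    ix = sobel(img.astype(float), axis=1)   # horizontal gradient
    iy = sobel(img.astype(float), axis=0)   # vertical gradient
    # Tensor components, smoothed over a local Gaussian window.
    jxx = gaussian_filter(ix * ix, sigma)
    jxy = gaussian_filter(ix * iy, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    tr = jxx + jyy
    disc = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    return 0.5 * (tr + disc), 0.5 * (tr - disc)   # eigenvalues l1 >= l2
```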
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Parks, Donovan H. „Object detection and analysis using coherency filtering“. Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99784.

Der volle Inhalt der Quelle
Annotation:
This thesis introduces a novel local appearance method, termed coherency filtering, which allows the robust detection and analysis of rigid objects in heterogeneous scenes by properly exploiting the wealth of information returned by a k-nearest neighbours (k-NN) classifier. A significant advantage of k-NN classifiers is their ability to indicate uncertainty in the classification of a local window by returning a list of k candidate classifications. Classification of a local window is inherently uncertain when considered in isolation, since local windows from different objects or from the background may be similar in appearance; to robustly identify objects in a query image, a process is needed to resolve this uncertainty. Coherency filtering resolves it by imposing constraints across the colour channels of the query image, along with spatial constraints between neighbouring local windows, in a manner that produces reliable classification of local windows and ultimately results in the robust identification of objects (a simplified sketch follows below).
Extensive experimental results demonstrate that the proposed system can robustly identify objects in test images exhibiting variations in pose, scale, illumination, occlusion and image noise. A qualitative comparison with four state-of-the-art systems indicates that comparable or superior performance on test sets of similar difficulty can be achieved by the proposed system, while it remains capable of robustly identifying objects under a greater range of viewing conditions.
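A much-simplified sketch of the idea (the coherency rule here is an assumption, not the thesis's actual constraints): each local window keeps only those k-NN candidate labels that spatially adjacent windows also propose, and windows with no coherent candidate are left as uncertain.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def coherent_labels(window_feats, positions, train_X, train_y, k=5):
    """window_feats: feature vectors of local windows from a query image;
    positions: their integer (row, col) grid coordinates, shape (n, 2)."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(train_X, train_y)
    idx = knn.kneighbors(window_feats, return_distance=False)
    train_y = np.asarray(train_y)
    candidates = [set(train_y[i]) for i in idx]   # k candidates per window
    labels = []
    for j, cands in enumerate(candidates):
        # Spatial coherency: keep a candidate only if some 8-neighbour
        # window also proposes it.
        neigh = [candidates[m] for m in range(len(candidates))
                 if m != j and np.abs(positions[m] - positions[j]).max() <= 1]
        coherent = [c for c in cands if any(c in n for n in neigh)]
        labels.append(coherent[0] if coherent else None)
    return labels
```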
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Behrens, Richard J. „Change detection analysis with spectral thermal imagery“. Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA356044.

Der volle Inhalt der Quelle
Annotation:
Thesis (M.S. in Space Systems Operations)--Naval Postgraduate School, September 1998.
"September 1998." Thesis advisor(s): Richard Christopher Olsen, David D. Cleary. Includes bibliographical references (p. 129-131). Also available online.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Avdiienko, Vitalii [Verfasser]. „Program analysis for anomaly detection / Vitalii Avdiienko“. Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2020. http://d-nb.info/1224883659/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Wolf, Katharine. „Flow injection analysis with photodiode array detection“. Thesis, University of Hull, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.278427.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Beaulieu, Martin Ronald. „Launch detection satellite system engineering error analysis“. Thesis, Monterey, California. Naval Postgraduate School, 1996. http://hdl.handle.net/10945/8611.

Der volle Inhalt der Quelle
Annotation:
Approved for public release; distribution is unlimited.
An orbiting detector of infrared (IR) energy may be used to detect the rocket plumes generated by ballistic missiles during the powered segment of their trajectory. By measuring the angular directions of the detections over several observations, the trajectory properties, launch location and impact area may be estimated using a nonlinear least-squares iteration procedure (a sketch of this step appears below). Observations from two or more sensors may be combined to form stereoscopic lines of sight (LOS), increasing the accuracy of the estimation algorithm. The focus of this research has been to develop a computer model of an estimation algorithm and to determine which parameter, or combination of parameters, significantly affects the error of the tactical parameter estimation. The model is coded in MATLAB; it generates observation data, produces estimates of time, position and heading at launch and at burnout, and calculates an impact time and position. The effects of timing errors, LOS measurement errors and satellite position errors on the estimation accuracy were then determined using analytical and Monte Carlo simulation techniques.
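A hedged sketch of the estimation core (the thesis implements it in MATLAB): fit the launch parameters so that the lines of sight predicted by a trajectory model match the measured ones, via nonlinear least squares. The trajectory model predict_los is a placeholder assumption standing in for the thesis's powered-flight model.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate(params0, times, measured_los, predict_los):
    """measured_los: (n, 3) array of unit LOS vectors at the given times;
    predict_los(params, t) -> predicted unit LOS vector (placeholder model)."""
    def residuals(params):
        pred = np.array([predict_los(params, t) for t in times])
        return (pred - measured_los).ravel()  # stacked LOS mismatches
    return least_squares(residuals, params0).x
```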
APA, Harvard, Vancouver, ISO und andere Zitierweisen