Dissertations / Theses on the topic 'Digital forensics'

Consult the top 50 dissertations / theses for your research on the topic 'Digital forensics.'

1

Singh, Avinash. "A Digital Forensic Readiness Approach for Ransomware Forensics." Diss., University of Pretoria, 2019. http://hdl.handle.net/2263/75610.

Full text
Abstract:
Computers play a vital role in automating tedious tasks in our everyday lives. With the adoption of advances in technology, there has been a significant increase in the exploitation of security vulnerabilities, particularly in Windows computing environments. These exploitations are mostly carried out by malicious software (malware). Ransomware, a variant of malware, encrypts user files and withholds the decryption key for ransom. Ransomware has shown its dominance over the years, wreaking havoc on many organizations and users. This global digital epidemic continues to rise with no signs of being eradicated. Current methods of mitigating malware and its variants, such as anti-virus software, have proven ineffective against most ransomware attacks. Theoretically, ransomware retains footprints of the attack process in the Windows Registry as well as in the volatile memory of the infected machine. By adopting Digital Forensic Readiness (DFR) processes, organizations can better prepare for these types of attacks. DFR provides mechanisms for the proactive collection of digital artifacts. These artifacts play a vital role when a digital investigation is conducted, because they may not be available post-incident; this unavailability can be attributed to the anti-forensic properties of the ransomware itself, which cleans up all evidence before it can be investigated. Ransomware investigation is often a lengthy process because security researchers need to disassemble and reverse engineer the ransomware in order to find an inherent flaw in the malware. In some cases, the ransomware is not available post-incident, which makes investigation even more difficult. Therefore, this study proposes a framework that integrates DFR mechanisms as a process to mitigate ransomware attacks whilst maximizing Potential Digital Evidence (PDE) collection. The proposed framework was evaluated for compliance with the ISO/IEC 27043 standard, as well as through expert review, using two prototype tools. These prototype tools realize the framework by providing a proof-of-concept implementation within an organization. The evaluation revealed that the proposed framework has the potential to harness system information prior to, and during, a ransomware attack. This information can then be used to help forensic investigators potentially decrypt the encrypted machine, as well as to provide automated analysis of the ransomware, relieving the burden of complicated analysis. The implementation of the proposed framework could be a major breakthrough in mitigating this global digital epidemic that has plagued various organizations.
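The proactive artifact collection the abstract describes can be illustrated with a short sketch. The snippet below is not Singh's prototype; it is a minimal, assumed example that snapshots common Windows autorun Registry keys (one place where ransomware leaves footprints) to a JSON file for later investigation.

```python
# Minimal sketch of proactive Registry collection for forensic readiness.
# Illustrative only; the key list and output file are assumptions, not the
# thesis's tool. Requires Windows (winreg is in the standard library there).
import json
import time
import winreg

AUTORUN_KEYS = [  # common persistence locations ransomware may touch
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def snapshot_autoruns():
    """Collect name/value pairs from autorun keys with a capture timestamp."""
    records = []
    for hive, path in AUTORUN_KEYS:
        try:
            with winreg.OpenKey(hive, path) as key:
                i = 0
                while True:
                    try:
                        name, value, _ = winreg.EnumValue(key, i)
                        records.append({"key": path, "name": name, "value": str(value)})
                        i += 1
                    except OSError:
                        break  # no more values under this key
        except OSError:
            continue  # key absent on this system
    return {"captured_at": time.time(), "entries": records}

if __name__ == "__main__":
    with open("autorun_snapshot.json", "w") as f:
        json.dump(snapshot_autoruns(), f, indent=2)
```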
2

Almarzooqi, Ahmed. "Digital forensics practices : a road map for building digital forensics capability." Thesis, De Montfort University, 2016. http://hdl.handle.net/2086/14178.

Full text
Abstract:
Identifying the needs for building and managing Digital Forensics Capability (DFC) is important because it can help organisations stay abreast of criminals' activities and the challenging pace of technological advancement. The field of Digital Forensics (DF) is witnessing rapid development in investigation procedures, the tools used, and the types of digital evidence. However, several research publications confirm that a unified standard for building and managing DF capability does not exist. Therefore, this thesis identifies, documents, and analyses existing DF frameworks and the attitudes of organisations towards establishing the DF team, staffing and training, acquiring and employing effective tools in practice, and establishing effective procedures. First, this thesis looks into the existing practices in the DF community for carrying out digital investigations and, more importantly, the precise steps taken for setting up laboratories. Second, the thesis focuses on research data collected from organisations in the United Kingdom and the United Arab Emirates; based on this collection, a framework has been developed to better understand how to build and manage the capabilities of Digital Forensics Organisations (DFOs). This framework has been developed by applying Grounded Theory as a systematic and comprehensive qualitative methodology in the emerging field of DF research. The thesis furthermore provides a systematic guideline describing the procedures and techniques of using Grounded Theory in DF research, applying the three Grounded Theory coding methods (open, axial, and selective coding) used in this thesis. The techniques presented also provide a thorough critique, making the thesis a valuable contribution to the discussion of methods of analysis in the field of DF. Finally, the thesis proposes a framework in the form of an equation for analysing the capability of DFOs. The proposed framework, called the Digital Forensics Organisation Core Capability Framework, offers an explanation of the factors involved in establishing the capability of a digital forensics organisation. Software was also developed for applying the framework in real life.
3

Mandelecha, Sonal. "Prototype Digital Forensics Repository." ScholarWorks@UNO, 2005. http://scholarworks.uno.edu/td/292.

Full text
Abstract:
The explosive growth in technology has led to a new league of crime involving identity theft, stealing trade secrets, malicious virus attacks, hacking of DVD players, etc. The law enforcement community, which has been trained to deal with traditional forms of crime, is now being trained in the new realm of digital forensics. Forensic investigators have realized that often the most valuable resource available to them is the experience and knowledge of fellow investigators. But there is seldom an explicit mechanism for disseminating this knowledge, so the same problems and mistakes continue to resurface and the same solutions are re-invented. In this thesis we design and create a knowledge base, a Digital Forensics Repository, to support the sharing of experiences about the forensic investigation process. It offers capabilities such as submission of lessons and online search and retrieval, providing a means of querying an ever-increasing knowledge base.
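The repository idea (submission of lessons plus online search and retrieval) can be sketched in a few lines. The schema, function names, and example lesson below are illustrative assumptions, not Mandelecha's implementation:

```python
# A minimal sketch of a "lessons learned" knowledge base with keyword search.
import sqlite3

conn = sqlite3.connect("df_repository.db")
conn.execute("""CREATE TABLE IF NOT EXISTS lessons (
                  id INTEGER PRIMARY KEY,
                  title TEXT, body TEXT, submitted_by TEXT)""")

def submit_lesson(title, body, author):
    conn.execute("INSERT INTO lessons (title, body, submitted_by) VALUES (?, ?, ?)",
                 (title, body, author))
    conn.commit()

def search_lessons(keyword):
    # Parameterised LIKE query keeps the search safe from SQL injection.
    pattern = f"%{keyword}%"
    return conn.execute(
        "SELECT title, body FROM lessons WHERE title LIKE ? OR body LIKE ?",
        (pattern, pattern)).fetchall()

submit_lesson("Imaging a failing drive",
              "Use ddrescue with a log file so the acquisition can resume.",
              "examiner01")
print(search_lessons("ddrescue"))
```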
4

Marziale, Lodovico. "Advanced Techniques for Improving the Efficacy of Digital Forensics Investigations." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/1027.

Full text
Abstract:
Digital forensics is the science concerned with discovering, preserving, and analyzing evidence on digital devices. The intent is to be able to determine what events have taken place, when they occurred, who performed them, and how they were performed. In order for an investigation to be effective, it must exhibit several characteristics. The results produced must be reliable, or else the theory of events based on the results will be flawed. The investigation must be comprehensive, meaning that it must analyze all targets which may contain evidence of forensic interest. Since any investigation must be performed within the constraints of available time, storage, manpower, and computation, investigative techniques must be efficient. Finally, an investigation must provide a coherent view of the events under question using the evidence gathered. Unfortunately, the set of currently available tools and techniques used in digital forensic investigations does a poor job of supporting these characteristics. Many tools contain bugs which generate inaccurate results; there are many types of devices and data for which no analysis techniques exist; most existing tools are woefully inefficient, failing to take advantage of modern hardware; and the task of aggregating data into a coherent picture of events is largely left to the investigator to perform manually. To remedy this situation, we developed a set of techniques to facilitate more effective investigations. To improve reliability, we developed the Forensic Discovery Auditing Module, a mechanism for auditing and enforcing controls on access to evidence. To improve comprehensiveness, we developed ramparser, a tool for deep parsing of Linux RAM images, which provides previously inaccessible data on the live state of a machine. To improve efficiency, we developed a set of performance optimizations and applied them to the Scalpel file carver, achieving order-of-magnitude improvements in processing speed and storage requirements. Last, to facilitate more coherent investigations, we developed the Forensic Automated Coherence Engine, which generates a high-level view of a system from the data generated by low-level forensics tools. Together, these techniques significantly improve the effectiveness of digital forensic investigations.
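As a rough illustration of what a file carver such as Scalpel does, the sketch below scans a raw disk image for JPEG header/footer signatures. It is a toy, single-pass version under stated assumptions: no fragmentation handling, no streaming, and an arbitrary size cap; the file names are placeholders.

```python
# Minimal header/footer file-carving sketch in the spirit of carving tools.
JPEG_HEADER = b"\xff\xd8\xff"
JPEG_FOOTER = b"\xff\xd9"
MAX_SIZE = 10 * 1024 * 1024  # assumed cap on a carved file's size

def carve_jpegs(image_path):
    data = open(image_path, "rb").read()  # real carvers stream; kept simple here
    carved, start = [], data.find(JPEG_HEADER)
    while start != -1:
        end = data.find(JPEG_FOOTER, start, start + MAX_SIZE)
        if end != -1:
            carved.append(data[start:end + 2])  # include the footer bytes
        start = data.find(JPEG_HEADER, start + 1)
    return carved

for i, blob in enumerate(carve_jpegs("disk.dd")):  # "disk.dd" is a placeholder
    with open(f"carved_{i}.jpg", "wb") as f:
        f.write(blob)
```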
5

Etow, Tambue Ramine. "IMPACT OF ANTI-FORENSICS TECHNIQUES ON DIGITAL FORENSICS INVESTIGATION." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97116.

Full text
Abstract:
Computer crimes have become very complex in terms of investigation and prosecution. This is mainly because forensic investigations are based on artifacts left on computers and other digital devices. In recent times, perpetrators of computer crimes are keeping abreast of digital forensics dynamics and are hence equipped to use anti-forensic measures and techniques to obfuscate the investigation process. In cases where such techniques are employed, it becomes extremely difficult, expensive, and time-consuming to carry out an effective investigation, which might cause a digital forensics expert to abandon the investigation. This project work serves to practically demonstrate how numerous anti-forensic techniques can be deployed by criminals to derail the smooth progress of a digital forensic investigation, with a main focus on data hiding and encryption techniques; subsequently, a comparative study of the effectiveness of some selected digital forensics tools in analyzing and reporting shreds of evidence is conducted.
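One common way to flag the encrypted or hidden data this project focuses on is a Shannon entropy test, since well-encrypted data approaches 8 bits of entropy per byte. A minimal sketch follows; the 7.9 bits/byte threshold is a widely used heuristic, not a figure from the thesis.

```python
# Shannon-entropy sketch for flagging possibly encrypted or hidden data.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte: H = -sum(p_i * log2(p_i))."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(path, threshold=7.9):
    with open(path, "rb") as f:
        sample = f.read(1 << 20)  # the first 1 MiB is usually enough to judge
    return shannon_entropy(sample) >= threshold
```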
6

Howard, Sean David. "Customization and automation in the future of digital forensics: live OS forensics with FENIX (Forensic Examiner Unix)." [Ames, Iowa: Iowa State University], 2007.

Find full text
7

Flory, Christopher M. "Digital forensics and community supervision: Making a case for field-based digital forensics training." Thesis, Purdue University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1597627.

Full text
Abstract:
In this paper I review the literature concerning investigator digital forensics models and how they apply to field investigators. A brief history of community supervision and how offenders are supervised is established. I also cover the differences between community supervision standards and police standards concerning searches, evidence, and standards of proof, and the difference between parole boards and courts. Currently, the burden of digital forensics for community supervision officers is placed on local or state law enforcement offices, whose personnel are trained in forensics but may not place a high priority on outside cases. Forensic field training for community supervision officers could ease the caseloads of outside forensic specialists and increase fiscal responsibility by improving efficiency and public safety in the field of community supervision.
8

Fei, Bennie Kar Leung. "Data visualisation in digital forensics." Pretoria : [s.n.], 2007. http://upetd.up.ac.za/thesis/available/etd-03072007-153241.

Full text
9

Altiero, Roberto A. "Digital Forensics Tool Interface Visualization." NSUWorks, 2015. http://nsuworks.nova.edu/gscis_etd/24.

Full text
Abstract:
Recent trends show digital devices being utilized with increasing frequency in most crimes committed. Investigating crimes involving these devices is labor-intensive for the practitioner applying digital forensics tools, which present possible evidence in tabular lists for manual review. This research investigates how enhanced digital forensics tool interface visualization techniques can improve the investigator's cognitive capacity to discover criminal evidence more efficiently. This paper presents visualization graphs and contrasts their properties with the outputs of The Sleuth Kit (TSK) digital forensic program, exhibiting the text-based interface and demonstrating the effectiveness of enhanced data presentation. Further demonstrated is the potential of the computer interface to present to the digital forensic practitioner an abstract, graphic view of an entire dataset of computer files. Enhanced interface design of digital forensic tools means more rapidly linking suspicious evidence to a perpetrator. Introduced in this study is a mixed methodology of ethnography and cognitive load measures. Ethnographically defined tasks, developed from interviews of digital forensics subject matter experts (SMEs), shape the context for the cognitive measures. Cognitive load testing of digital forensics first-responders utilizing both a text-based and a visualized application established a quantitative mean of the mental workload during operation of the applications under test. A dependent-samples t-test was used to test the null hypothesis of no significant difference between the operators' workloads under the two applications. Results of the study indicate a significant value, affirming the hypothesis that a visualized application reduces the cognitive workload of the first-responder analyst. With the supported hypothesis, this work contributes to the body of knowledge by validating a method of measurement and by providing empirical evidence that the use of a visualized digital forensics interface enables more efficient performance by the analyst, saving labor costs and compressing the time required for the discovery phase of a digital investigation.
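The paired-samples t-test the study describes can be reproduced with standard tooling. The sketch below uses SciPy on invented placeholder scores, not Altiero's data:

```python
# Sketch of a dependent-samples (paired) t-test with SciPy.
from scipy import stats

# Cognitive-load scores per analyst: textual interface vs. visualized interface.
# These numbers are illustrative placeholders only.
textual    = [62, 71, 58, 66, 74, 69, 63, 70]
visualized = [48, 55, 51, 49, 60, 54, 50, 57]

t_stat, p_value = stats.ttest_rel(textual, visualized)
if p_value < 0.05:
    print(f"t={t_stat:.2f}, p={p_value:.4f}: workload difference is significant")
```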
10

Bade, Hans, and Oscar Hedlund. "Anti-Forensik : Anti-forensiska metoder på mobila enheter." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-37701.

Full text
Abstract:
Mobile phones have become essential for the extraction of digital artifacts in forensic investigations. Android's Linux-based operating system brings greater potential for anti-forensic methods, which means that knowledge of anti-forensics is essential to today's IT forensic investigators. In this study, the effect of anti-forensics on Android-based mobile devices is highlighted, and today's anti-forensic attack methods against forensic tools are surveyed. By experiment, it is shown how to prevent a forensic tool from extracting data through the use of a simple script.
11

Montasari, Reza. "The Comprehensive Digital Forensic Investigation Process Model (CDFIPM) for digital forensic practice." Thesis, University of Derby, 2016. http://hdl.handle.net/10545/620799.

Full text
12

Qiao, Tong. "Statistical detection for digital image forensics." Thesis, Troyes, 2016. http://www.theses.fr/2016TROY0006/document.

Full text
Abstract:
The remarkable evolution of information technologies and digital imaging technology in the past decades has allowed digital images to become ubiquitous. The tampering of these images has become an unavoidable reality, especially in the field of cybercrime. The credibility and trustworthiness of digital images have been eroded, with important political, economic, and social consequences. To restore trust in digital images, the field of digital forensics was born. Three important problems are addressed in this thesis: identification of the origin of an image, detection of information hidden in an image, and detection of one example of falsification: resampling. The goal is to develop statistical decision approaches that are as reliable as possible and that guarantee a prescribed false alarm probability. To this end, the detectors are designed within the framework of hypothesis testing theory, based on a parametric model that characterizes the physical and statistical properties of natural images; this model is developed by studying the image processing pipeline of a digital camera. The methodology throughout the manuscript consists of studying the optimal detector given by the likelihood ratio test in the ideal context where all model parameters are known. When some model parameters are unknown, they are estimated in order to construct the generalized likelihood ratio test, whose statistical performance is analytically established. Numerous experiments on simulated and real images underline the relevance of the proposed approach.
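The decision-theoretic machinery the abstract refers to is the standard Neyman-Pearson setting: in the ideal context the optimal detector is the likelihood ratio test, and unknown parameters lead to the generalized version. In standard notation (not necessarily the thesis's own):

```latex
% Likelihood ratio test between H_0 and H_1, with threshold tau chosen to
% meet a prescribed false-alarm probability alpha_0:
\[
  \Lambda(\mathbf{x}) \;=\; \frac{p_1(\mathbf{x})}{p_0(\mathbf{x})}
  \;\underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}}\; \tau,
  \qquad
  \alpha_0 \;=\; \Pr\bigl[\Lambda(\mathbf{x}) \ge \tau \mid \mathcal{H}_0\bigr].
\]
% With unknown parameters replaced by their estimates, one obtains the GLRT:
\[
  \widehat{\Lambda}(\mathbf{x}) \;=\;
  \frac{\sup_{\theta_1} p(\mathbf{x}\mid\theta_1)}{\sup_{\theta_0} p(\mathbf{x}\mid\theta_0)}.
\]
```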
13

Khan, Muhammad Naeem Ahmed. "Digital Forensics using Machine Learning Methods." Thesis, University of Sussex, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487975.

Full text
Abstract:
The increase in computer related crimes, with particular reference to internet crimes, has led to an increasing demand for state-of-the-art digital forensics. Reconstruction of past events in chronological order is crucial for digital forensic investigations to pinpoint the execution of relevant application programs and the files manipulated by those applications. The event reconstruction process can be made more objective and rigorous by employing mathematical techniques, owing to their sound theoretical foundations. The focus of this research is to explore the effectiveness of employing machine learning methodologies for computer forensic analysis by tracing past file system activities and preparing a timeline to facilitate the identification of incriminating evidence. A general criterion for measuring the efficacy of an analysis tool is to corroborate how well the analysis responds to unforeseen evidence. The generation of a comprehensive timeline of past events becomes more complicated if some information is missing or certain sources of evidence are contaminated or scrubbed. This thesis provides a genuine contribution to digital forensics research by focusing on the identification of the execution of application programs, a vital area which is not usually directly accessible from the available data. In addition to neural network techniques, a Bayesian approach to data classification has been explored; this addresses the issue of missing or incomplete data. The Bayesian methodology is an improvement over existing ad hoc digital forensic analysis approaches carried out in bits and pieces. The Bayesian and neural network techniques have produced encouraging results, which are reported herein.
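A toy version of the Bayesian classification idea: given observed file-system artefacts, compute the posterior probability that each candidate application was executed. All priors and likelihoods below are invented for illustration; they are not values from the thesis.

```python
# Toy Bayesian classifier: which application most likely ran, given artefacts?
priors = {"browser": 0.5, "email_client": 0.3, "p2p_tool": 0.2}

# P(artefact observed | application ran); assumed values for illustration.
likelihoods = {
    "browser":      {"cache_write": 0.9, "prefetch_entry": 0.7},
    "email_client": {"cache_write": 0.4, "prefetch_entry": 0.6},
    "p2p_tool":     {"cache_write": 0.2, "prefetch_entry": 0.5},
}

def posterior(observed):
    """P(app | artefacts) via Bayes' rule, assuming artefact independence."""
    scores = {}
    for app, prior in priors.items():
        p = prior
        for artefact in observed:
            p *= likelihoods[app].get(artefact, 0.01)  # smoothing for unseen
        scores[app] = p
    total = sum(scores.values())
    return {app: p / total for app, p in scores.items()}

print(posterior(["cache_write", "prefetch_entry"]))
```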
14

Dontula, Varun. "Data Exploration Interface for Digital Forensics." ScholarWorks@UNO, 2011. http://scholarworks.uno.edu/td/1373.

Full text
Abstract:
The fast capacity growth of cheap storage devices presents an ever-growing problem of scale for digital forensic investigations. One aspect of the scale problem in the forensic process is the need for new approaches to visually presenting and analyzing large amounts of data. The current generation of tools universally employs three basic GUI components—trees, tables, and viewers—to present all relevant information. This approach is not scalable, as increasing the size of the input data leads to a proportional increase in the amount of data presented to the analyst. We present an alternative approach, which leverages data visualization techniques to provide a more intuitive interface for exploring the forensic target. We use tree visualization techniques to give the analyst both a high-level view of the file system and an efficient means to drill down into the details. Further, we provide means to search for keywords and filter the data by time period.
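The high-level view with drill-down described here needs per-directory aggregates to drive a tree visualization. A minimal sketch (not the authors' tool) that computes file counts and sizes per directory:

```python
# Aggregate file counts and sizes per directory to feed a tree visualization.
import os

def build_tree(root):
    """Return {path: (n_files, total_bytes)} for every directory under root."""
    summary = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        total = 0
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # unreadable entries are skipped, as a viewer would
    # note: one entry per directory, regardless of readability of its files
        summary[dirpath] = (len(filenames), total)
    return summary

# The ten largest directories give the analyst a place to start drilling down.
tree = build_tree("/mnt/evidence")  # the mount point is a placeholder
for path, (n, size) in sorted(tree.items(), key=lambda kv: -kv[1][1])[:10]:
    print(f"{size:>12} bytes  {n:>5} files  {path}")
```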
15

Calhau, João Pedro Figueira Galhardo. "Digital forensics research using constraint programming." Master's thesis, Universidade de Évora, 2018. http://hdl.handle.net/10174/24257.

Full text
Abstract:
In this dissertation we present a new and innovative approach to digital forensics analysis based on declarative programming, more specifically Constraint Programming methodologies, to describe and solve digital forensics problems. This approach allows for an intuitive, descriptive and more efficient method of analyzing data from digital equipment. The work described herein enables the description of a Digital Forensics Problem (DFP) as a Constraint Satisfaction Problem (CSP) and, with the help of a CSP solver, reaches a solution to such a problem, if one exists, in the form of a set of elements or evidence that matches the initial problem description.
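A small example of casting a DFP as a CSP, here using the third-party python-constraint package rather than the solver used in the dissertation; the toy file metadata and the constraints are assumptions:

```python
# Express an investigator's evidence description as a CSP and solve it.
from constraint import Problem

files = {
    "report.docx": {"size": 120_000, "owner": "alice", "hour": 23},
    "dump.bin":    {"size": 900_000, "owner": "bob",   "hour": 2},
    "notes.txt":   {"size": 4_000,   "owner": "alice", "hour": 14},
}

problem = Problem()
problem.addVariable("file", list(files))

# Constraints encode the description of the evidence sought:
# a large file, touched outside office hours.
problem.addConstraint(lambda f: files[f]["size"] > 100_000, ("file",))
problem.addConstraint(lambda f: files[f]["hour"] < 6 or files[f]["hour"] > 20,
                      ("file",))

for solution in problem.getSolutions():
    print("candidate evidence:", solution["file"])
```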
16

Doan, Thi Ngoc Canh. "Statistical Methods for Digital Image Forensics." Thesis, Troyes, 2018. http://www.theses.fr/2018TROY0036.

Full text
Abstract:
The explosive growth of digital imaging technology has raised tremendous concerns for information security. With the support of low-cost image editing tools, the ubiquity of tampered images has become an unavoidable reality. This situation highlights the need to improve and extend current research in the field of digital forensics in order to restore trust in digital images. Since each stage of an image's history leaves a specific trace in the data, we propose to extract these digital fingerprints as evidence of tampering. Two important problems are addressed in this thesis: quality factor estimation for a given JPEG image and image forgery authentication. For the first problem, a likelihood ratio test is constructed, relying on a spatial-domain model of the variance of 8 × 8 blocks of JPEG images. In the second part of the thesis, robust forensic detectors are designed for different types of tampering within the framework of hypothesis testing theory, based on a parametric model that characterizes the statistical properties of natural images. This model is constructed by studying the image processing pipeline of a digital camera. Statistical estimation of the unknown parameters is employed, enabling the application of these tests in practice. This approach allows the design of the most powerful test capable of warranting a prescribed false alarm probability while ensuring high detection performance. Numerical experiments on simulated and real images highlight the relevance of the proposed approach.
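For the first problem, quality factor estimation, a crude baseline can be obtained from the JPEG file itself: for standard encoders, the embedded luminance quantization table is the Annex-K base table scaled by the IJG quality rule, so the scale can be inverted. The sketch below is that baseline, not the thesis's statistical estimator; it assumes an IJG-style encoder and ignores per-coefficient rounding and clamping.

```python
# Crude JPEG quality-factor estimate from the luminance quantization table.
from PIL import Image

# Annex-K luminance base table from the JPEG standard (row-major order).
IJG_LUMA = [
    16, 11, 10, 16, 24, 40, 51, 61,
    12, 12, 14, 19, 26, 58, 60, 55,
    14, 13, 16, 24, 40, 57, 69, 56,
    14, 17, 22, 29, 51, 87, 80, 62,
    18, 22, 37, 56, 68, 109, 103, 77,
    24, 35, 55, 64, 81, 104, 113, 92,
    49, 64, 78, 87, 103, 121, 120, 101,
    72, 92, 95, 98, 112, 100, 103, 99,
]

def estimate_quality(path):
    observed = Image.open(path).quantization[0]  # table 0 is luminance
    # Comparing sums is order-invariant, so zigzag vs. row-major storage
    # of the table does not matter for this estimate.
    scale = 100.0 * sum(observed) / sum(IJG_LUMA)
    # Invert the IJG rule: scale = 5000/Q for Q < 50, else 200 - 2Q.
    quality = 5000.0 / scale if scale > 100 else (200.0 - scale) / 2.0
    return round(quality)

print(estimate_quality("photo.jpg"))  # "photo.jpg" is a placeholder path
```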
17

Di, Marino Grezia. "MATCHASH: Software per la Digital Forensics." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22832/.

Full text
Abstract:
Digital forensics is a discipline of forensic science concerned with recovering and analyzing material extracted from digital devices. It was initially called computer forensics because the data were stored on a computer. The discipline first evolved into digital forensics and then, depending on the device or domain, took the names mobile forensics for the analysis of mobile phones, network forensics for networks, web forensics for the web, IoT forensics for the Internet of Things, and so on. Auxiliary software and hardware exist for carrying out computer investigations: 1. hardware forensics tools, which are hardware components such as write-blockers or SATA readers; 2. software forensics tools, divided into command-line tools and GUI applications such as Autopsy, SMART, DEFT Zero, ProDiscover, ILook and IXimager, FTK and FTK Imager, and others. This thesis concerns the development of a stand-alone software tool, named MatcHash, aimed at supporting forensic technicians. MatcHash uses Robocopy (Robust File Copy) to copy file data while preserving its metadata. The software allows the user to choose, among the available algorithms, the one most suitable for computing the hash of both the source and the destination data, and produces a .txt document reporting information on the analysis performed. MatcHash was developed in the Python programming language, integrating the Tkinter GUI library. The software can be downloaded from the website https://matchash.altervista.org/, designed and implemented by the author.
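The core verification step described (hashing source and destination and writing a .txt report) can be sketched as follows; the function names and report layout are assumptions, not MatcHash's actual code:

```python
# Verify a Robocopy-style copy by hashing source and destination files.
import hashlib

def file_hash(path, algorithm="sha256", chunk_size=1 << 20):
    """Chunked hashing so arbitrarily large evidence files fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(source, destination, algorithm="sha256", report="report.txt"):
    src, dst = file_hash(source, algorithm), file_hash(destination, algorithm)
    with open(report, "w") as f:
        f.write(f"algorithm: {algorithm}\n")
        f.write(f"source      {source}: {src}\n")
        f.write(f"destination {destination}: {dst}\n")
        f.write(f"match: {src == dst}\n")
    return src == dst
```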
18

Karie, Nickson M. "Resolving Semantic Disparities in Digital Forensics." Thesis, University of Pretoria, 2016. http://hdl.handle.net/2263/57356.

Full text
Abstract:
Digital forensics is a growing field that is gaining popularity among many computer professionals, law enforcement agencies, investigators and other digital forensic practitioners. For this reason, several investigation process models have been developed to offer direction on how to recognize and preserve potential digital evidence obtained from a crime scene. However, the vast number of existing models and frameworks has added to the complexity of the digital forensic field. This situation has further created an environment replete with semantic disparities in the domain, which need to be resolved. Note that the term 'semantic disparities' is used in this thesis to refer to disagreements about the interpretation, description and representation of the same or related digital forensic data, information and terminologies. In a world where digital technology keeps changing and the evolution of the digital forensic domain continues, it would be appropriate to develop and standardise dynamic and practical methods that can help to resolve many of the present and future disparities bound to occur in digital forensics. Such methods will further aid in creating uniformity in the interpretation, description and representation of the same or related digital forensic data or information. This uniformity is important, especially during the digital forensic investigation process, so that investigative terminologies are used consistently and misunderstandings between investigators and other parties, e.g. judges, do not arise. In this research study, therefore, the researcher employs a pragmatic approach to research and proposes a semantic reconciliation model for resolving semantic disparities in digital forensics. The study is conducted in two phases: the first phase involves investigating the various challenges that digital forensics has faced to date, in a bid to demonstrate the semantic disparities that exist in digital forensics. In the second phase, a model coined the Digital Forensic Semantic Reconciliation (DFSR) model is presented in an attempt to provide direction in resolving the semantic disparities that occur in the digital forensic domain. The researcher also demonstrates a prototype implementation of the DFSR model, called the DFSR prototype. Finally, to assess the efficiency of the DFSR prototype, several experiments are conducted and the results discussed. All the experiments conducted to test the feasibility and implementation of the proposed DFSR model delivered remarkable results. Therefore, the proposed DFSR model can be used as an initial guide towards resolving semantic disparities in digital forensics. The proposed DFSR model can also be helpful in facilitating harmonisation and uniformity in the interpretation, description and representation of the same or related digital forensic data or information within the field of digital forensics.
19

MARTURANA, FABIO. "Device classification in digital forensics triage." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2014. http://hdl.handle.net/2108/214142.

Full text
Abstract:
The Internet's pervasiveness and the wide availability of multimedia devices with networking capability have contributed to the global diffusion of cyber threats and cybercrimes, posing serious challenges from the digital forensic perspective. As a consequence, the potential amount of data requiring analysis is increasing, creating an urgent need for new forensic techniques and tools. Those currently in use, being more focused on full device extraction and basic statistics than on reducing and correlating data for case-relevant device identification, tend to be outdated. In this situation, which may be described as data rich but information poor, a practical solution is represented by Digital Forensics Triage, a promising new branch of the digital forensics science whose aim is to extract evidence and provide vital intelligence in a timely manner. Digital Forensics Triage, or simply Digital Triage, generally refers to a framework that can be adopted in time-critical situations to assign certain digital devices a higher priority than others, according to their relevance to the criminal case. Digital Triage has been characterized by the development of rapid data extraction techniques and tools whereas, despite some categorisation functions, determining the relevance of a digital device to a criminal case, also known as classification, tends to be a mostly manual process. Based on substantial research carried out to establish current methodologies in the field of Digital Triage and their potential use, this thesis describes an original methodology for digital device pre-examination and classification, either at the crime scene or at Digital Forensic Laboratories (DFLs). The aim of the methodology is to automate the identification of relevant devices in criminal cases where time, huge backlogs, and the suspect's privacy protection are critical factors. Resulting from the merger of digital forensics best practices and machine learning supervised classification, the novel approach described hereafter presents several advantages with respect to other techniques currently in use. Selective device pre-examination at the crime scene or at DFLs allows the timely identification of case-relevant devices while reducing the volume of data that needs to be exhaustively examined. Experimental results from the criminal case studies of child pornography exchange and copyright infringement suggest that the methodology described in this thesis is effective and viable, and can provide a basis for an automated digital device classification tool for use in real-life criminal cases.
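The supervised classification step at the heart of this methodology can be sketched with scikit-learn; the feature vectors, labels, and the choice of a random forest below are assumptions for illustration, not the classifier or data from the thesis.

```python
# Supervised device triage: train on labelled device feature vectors,
# then score a newly seized device before full examination.
from sklearn.ensemble import RandomForestClassifier

# Per-device features, e.g. [n_images, n_p2p_artefacts, n_chat_logs, n_archives]
X_train = [[5200, 14, 3, 40], [120, 0, 1, 2], [8800, 22, 9, 75], [60, 0, 0, 1]]
y_train = ["relevant", "not_relevant", "relevant", "not_relevant"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Predicted relevance (and its probability) drives the examination priority.
print(clf.predict([[4100, 9, 2, 33]]))
print(clf.predict_proba([[4100, 9, 2, 33]]))
```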
20

Al, Fahdi Mahmood. "Automated digital forensics and computer crime profiling." Thesis, University of Plymouth, 2016. http://hdl.handle.net/10026.1/8090.

Full text
Abstract:
Over the past two decades, technology has developed tremendously, at an almost exponential rate. While this development has served the nation in numerous positive ways, negatives have also emerged, one such negative being computer crime. This criminality has grown so fast as to leave current digital forensic tools lagging behind in terms of development and their capability to manage such increasing and sophisticated types of crime. In essence, the time taken to analyse a case is huge and increasing, and cases are not fully or properly investigated, resulting in an ever-increasing number of pending and unsolved cases pertaining to computer crime. Digital forensics has become an essential tool in the fight against computer crime, providing both procedures and tools for the acquisition, examination and analysis of digital evidence. However, the use of technology is expanding at an ever-increasing rate: the number of devices a single user might engage with has increased from one device to three or more, the data capacity of those devices reaches far into the terabytes, and the nature of the underlying technology is evolving (for example, the use of cloud services). This presents an incredible challenge for forensic examiners seeking to process and analyse cases in an efficient and effective manner. This thesis focuses upon the examination and analysis phases of the investigative process and considers whether automation of the process is possible. The investigation begins by researching the current state of the art and illustrates a wide range of challenges facing digital forensics investigators when analysing a case. Supported by a survey of forensic researchers and practitioners, key challenges were identified and prioritised. It was found that 95% of participants believed that the number of forensic investigations would increase in the coming times, with 75% believing that the time consumed by such cases would increase. With regard to digital forensic sophistication, 95% of the participants expected a rise in the complexity level and sophistication of digital forensics. To this end, an automated intelligent system that could be used to reduce the investigator's time and cognitive load was identified as a promising solution. A series of experiments is devised around the use of Self-Organising Maps (SOMs), a technique well known for unsupervised clustering of objects. The analysis is performed on a range of file system and application-level objects (e.g. email, internet activity) across four forensic cases. Experimental evaluations revealed that SOMs are able to successfully cluster forensic artefacts apart from the remaining files. Having established that SOMs are capable of clustering wanted artefacts from the case, a novel algorithm, referred to as the Automated Evidence Profiler (AEP), is proposed to encapsulate the process and provide further refinement of the artefact identification process. The algorithm achieved identification rates of 100% in two of the examined cases and 94% in a third. A novel architecture is proposed to support the algorithm in an operational capacity, incorporating standard forensic techniques such as hashing of known files, file signature analysis and application-level analysis. This provides a mechanism capable of utilising the AEP alongside several other components that filter, prioritise and visualise artefacts of interest to the investigator. The approach, known as the Automated Forensic Examiner (AFE), is capable of identifying potential evidence in a more efficient and effective manner. The approach was evaluated by a number of experts in the field, who unanimously agreed that the chosen research problem was one of great validity, and all showed support for the Automated Forensic Examiner based on the results of the cases analysed.
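A minimal sketch of SOM-based clustering of artefact feature vectors, using the third-party MiniSom package rather than the thesis's implementation; the data, feature choice, and grid size are placeholders:

```python
# Cluster forensic artefacts with a Self-Organising Map (MiniSom).
import numpy as np
from minisom import MiniSom

# One row per artefact, e.g. [size_kb, age_days, path_depth], scaled to [0, 1].
artefacts = np.random.rand(200, 3)  # stand-in for real extracted features

som = MiniSom(6, 6, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(artefacts, num_iteration=1000)

# Artefacts that map to the same grid cell form a candidate cluster the
# examiner can review together.
clusters = {}
for vec in artefacts:
    clusters.setdefault(som.winner(vec), []).append(vec)
print({cell: len(members) for cell, members in clusters.items()})
```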
21

Kohn, Michael Donovan. "Integrated digital forensic process model." Diss., University of Pretoria, 2012. http://hdl.handle.net/2263/25433.

Full text
Abstract:
The Information and Communications Technology (ICT) environment constitutes an integral part of our daily lives. Individual computer users and large corporate companies are increasingly dependent on services provided by ICT. These services range from basic communication to managing large databases with corporate client information. Within these ICT environments something is bound to go wrong for a number of reasons, which include an intentional attack on information services provided by an organisation. These organisations have in turn become interested in tracing the root cause of such an incident, with the intent of successfully prosecuting a suspected malicious user. Digital forensics has developed significantly towards prosecuting such criminals. The volumes of information and rapid technological developments have contributed to making simple investigations rather cumbersome. In the digital forensics community a number of digital forensic process models have been proposed, each encapsulating a complete methodology for an investigation. Software developers have also greatly contributed toward the development of digital forensics tools. These developments have resulted in divergent views on digital forensic investigations. This dissertation presents the IDFPM - Integrated Digital Forensic Process Model. The model is presented after examining digital forensic process models within the current academic and law enforcement literature. An adapted sequential logic notation is used to represent the forensic models. The terminology used in the various models is examined and standardised to suit the IDFPM. Finally, a prototype supports a limited selection of the IDFPM processes, which will aid the digital forensic investigator.
22

Schatz, Bradley Lawrence. "Digital evidence : representation and assurance." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16507/1/Bradley_Schatz_Thesis.pdf.

Full text
Abstract:
The field of digital forensics is concerned with finding and presenting evidence sourced from digital devices, such as computers and mobile phones. The complexity of such digital evidence is constantly increasing, as is the volume of data which might contain evidence. Current approaches to interpreting and assuring digital evidence rely implicitly on the use of tools and representations made by experts in addressing the concerns of juries and courts. Current forensics tools are best characterised as not easily verifiable, lacking in ease of interoperability, and burdensome on human process. The tool-centric focus of current digital forensics practice impedes access to and transparency of the information represented within digital evidence as much as it assists, by nature of the tight binding between a particular tool and the information that it conveys. We hypothesise that a general and formal representational approach will benefit digital forensics by enabling higher degrees of machine interpretation, facilitating improvements in tool interoperability and validation. Additionally, such an approach will increase human readability. This dissertation summarises research which examines at a fundamental level the nature of digital evidence and digital investigation, in order that improved techniques which address investigation efficiency and assurance of evidence might be identified. The work follows three themes related to this: representation, analysis techniques, and information assurance. The first set of results describes the application of a general purpose representational formalism towards representing diverse information implicit in event based evidence, as well as domain knowledge and investigator hypotheses. This representational approach is used as the foundation of a novel analysis technique which uses a knowledge based approach to correlate related events into higher level events, which correspond to situations of forensic interest. The second set of results explores how digital forensic acquisition tools scale and interoperate, while assuring evidence quality. An improved architecture is proposed for storing digital evidence, analysis results and investigation documentation in a manner that supports arbitrary composition into a larger corpus of evidence. The final set of results focuses on assuring the reliability of evidence. In particular, these results focus on assuring that timestamps, which are pervasive in digital evidence, can be reliably interpreted to a real world time. Empirical results are presented which demonstrate how simple assumptions cannot be made about computer clock behaviour. A novel analysis technique for inferring the temporal behaviour of a computer clock is proposed and evaluated.
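The clock-behaviour inference in the final set of results can be approximated crudely: given pairs of (trusted reference time, time reported by the suspect clock), a least-squares fit yields an offset and a drift rate. The sketch and sample values below are assumptions, far simpler than the thesis's actual technique:

```python
# Infer a suspect clock's offset and drift from paired timestamps:
# fit reported = offset + slope * reference by least squares.
import numpy as np

reference = np.array([0, 3600, 7200, 10800, 14400], dtype=float)   # trusted clock
reported  = np.array([120, 3725, 7330, 10935, 14540], dtype=float) # suspect clock

slope, offset = np.polyfit(reference, reported, 1)
drift_ppm = (slope - 1.0) * 1e6

print(f"offset at t0: {offset:.1f} s, drift: {drift_ppm:.0f} ppm")
# A nonzero offset/drift means naive timestamp interpretation would misdate events.
```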
23

Watson, Michael Charles. "Certifying Computer Forensics Skills." BYU ScholarsArchive, 2021. https://scholarsarchive.byu.edu/etd/9131.

Full text
Abstract:
Computer forensics is an ever-growing technological field of complexity and depth. Individuals must strive to keep learning and growing their skills as they help combat cybercrime throughout the world. This study attempts to establish a method of evaluating conceptual expertise in computer forensics, to help indicate whether or not an individual understands the five basic phases of computer forensics: preparation, seizure of evidence, acquisition of data, analysis of data, and reporting the findings of the analysis. A survey was presented to a university class of 30 students taking a computer forensics course and was also posted online, asking computer forensics professionals to participate. Results show that novices enrolled in a computer forensics course were able to identify the phases of computer forensics more readily than professionals.
24

Thai, Thanh Hai. "Statistical modeling and detection for digital image forensics." Thesis, Troyes, 2014. http://www.theses.fr/2014TROY0024/document.

Full text
Abstract:
The twenty-first century is witnessing the digital revolution that has allowed digital media to become ubiquitous; they play an increasingly important role in our everyday life. Similarly, sophisticated image editing software has become more accessible, with the result that falsified images are appearing with growing frequency and sophistication, and the credibility and trustworthiness of digital images have been eroded. To restore trust in digital images, the field of digital image forensics was born. This thesis is part of that field. Two important problems are addressed: image origin identification and hidden data detection. These problems are cast into the framework of hypothesis testing theory. The approach proposes to design a statistical test that allows us to guarantee a prescribed false alarm probability. In order to achieve high detection performance, it is proposed to exploit the statistical properties of natural images by modeling the main steps of the image processing pipeline of a digital camera. The methodology throughout this manuscript consists of studying an optimal test given by the Likelihood Ratio Test in the ideal context where all model parameters are known in advance. When the model parameters are unknown, a method is proposed for parameter estimation in order to design a Generalized Likelihood Ratio Test whose statistical performance is analytically established. Numerical experiments on simulated and real images highlight the relevance of the proposed approach.
25

Lin, Xufeng. "Digital image forensics based on sensor pattern noise." Thesis, University of Warwick, 2016. http://wrap.warwick.ac.uk/89715/.

Full text
Abstract:
With the advent of low-cost and high-quality digital imaging devices and the availability of user-friendly and powerful image-editing software, digital images can be easily manipulated without leaving obvious traces. The credibility of digital images is often challenged when they are presented as crucial evidence for news photography, scientific discovery, law enforcement, etc. In this context, digital image forensics emerges as an essential approach for ensuring the credibility of digital images. Sensor pattern noise consists mainly of photo response non-uniformity noise, arising primarily from manufacturing imperfections and the inhomogeneity of silicon wafers during the manufacturing process. It has been proven to be an effective and robust device fingerprint that can be used for a variety of important digital image forensic tasks, such as source device identification, device linking, and image forgery detection. The objective of this thesis is to design effective and robust algorithms for better fulfilling forensic tasks based on sensor pattern noise. We found that non-unique periodic artifacts, typically shared amongst cameras subjected to the same or similar in-camera processing procedures, often give rise to false positives. These periodic artifacts manifest themselves as salient peaks in the magnitude spectrum of the reference sensor pattern noise. We propose a spectrum equalization algorithm to detect and suppress these salient peaks, aiming to improve the accuracy and reliability of source camera identification based on sensor pattern noise. We also propose a framework for large-scale image clustering based on device fingerprints (sensor pattern noises). The proposed clustering framework deals with large-scale and high-dimensional device fingerprint databases and is capable of overcoming the NC >> SC problem, i.e., the case where the number of cameras is much higher than the average number of images acquired by each camera. Additionally, for the task of image forgery detection based on sensor pattern noise, we propose a refining algorithm to address missed detections along the boundary between forged and non-forged regions. The proposed algorithms are evaluated on either a public benchmarking database or our own image databases. Experimental results, as well as comparisons with state-of-the-art algorithms, confirm their effectiveness and robustness.
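A toy version of sensor-pattern-noise matching: extract a noise residual with a simple denoiser and correlate it against a camera's reference pattern. Real PRNU pipelines use wavelet denoising and peak-to-correlation-energy statistics; everything below, including the threshold, is an illustrative assumption.

```python
# Toy PRNU-style source matching via noise residual correlation.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img):
    """Residual = image - denoised(image); approximates the sensor noise."""
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma=1.5)

def normalized_correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# reference_pattern: average of residuals from many images of the same camera.
def matches_camera(query_img, reference_pattern, threshold=0.01):
    rho = normalized_correlation(noise_residual(query_img), reference_pattern)
    return rho > threshold, rho  # the threshold is an assumed toy value
```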
26

Hansen, Tone. "A Digital Tool to Improve the Efficiency of IT Forensic Investigations." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-40232.

Full text
Abstract:
The IT forensic process causing bottlenecks in investigations is an identified issue, with multiple underlying causes – one of the main causes being the lack of expertise among those responsible for ordering IT forensic investigations. The focus of the study is to create and evaluate a potential solution to this problem, aiming to answer research questions related to a suitable architecture, structure and design of a digital tool that would assist individuals in creating IT forensic orders. This work evaluates concepts for such a digital tool using a grounded theory approach, in which a series of test sessions, together with the answers from a survey, were examined and analyzed in an iterative process. A low-fidelity prototype is used in the process. The resulting conclusion of the study is a set of concepts, ideas and principles for a digital tool that would aid the IT forensic ordering process and improve the efficiency of the IT forensic process itself. Future work could involve developing the concept further into a finished product, or using it to improve already existing systems and tools, raising the efficiency and quality of the IT forensic process.
APA, Harvard, Vancouver, ISO, and other styles
28

Alqahtany, Saad. "A forensically-enabled IaaS cloud computing architecture." Thesis, University of Plymouth, 2017. http://hdl.handle.net/10026.1/9508.

Full text
Abstract:
Cloud computing has been advancing at an intense pace. It has become one of the most important research topics in computer science and information systems. Cloud computing offers enterprise-scale platforms in a short time frame with little effort. Thus, it delivers significant economic benefits to both commercial and public entities. Despite this, the security and subsequent incident management requirements are major obstacles to adopting the cloud. Current cloud architectures do not support digital forensic investigators, nor comply with today’s digital forensics procedures – largely due to the fundamentally dynamic nature of the cloud. When an incident has occurred, an organization-based investigation will seek to provide potential digital evidence while minimising the cost of the investigation. Data acquisition is the first and most important process within digital forensics – to ensure data integrity and admissibility. However, access to data and the control of resources in the cloud is still very much provider-dependent and complicated by the very nature of the multi-tenanted operating environment. Thus, investigators have no option but to rely on the Cloud Service Providers (CSPs) to acquire evidence for them. Due to the cost and time involved in acquiring the forensic image, some cloud providers will not provide evidence beyond 1TB despite a court order served on them. Assuming they would be willing or are required to by law, the evidence collected is still questionable as there is no way to verify the validity of the evidence and whether evidence has already been lost. Therefore, dependence on the CSPs is considered one of the most significant challenges investigators face when they need to acquire evidence in a timely yet forensically sound manner from cloud systems. This thesis proposes a novel architecture to support forensic acquisition and analysis of IaaS cloud-based systems. The approach, known as the Cloud Forensic Acquisition and Analysis System (Cloud FAAS), is based on a cluster analysis of non-volatile memory that achieves forensically reliable images at the same level of integrity as the normal “gold standard” computer forensic acquisition procedures, with the additional capability to reconstruct the image at any point in time. Cloud FAAS fundamentally shifts access to the data back to the data owner rather than relying on a third party. In this manner, organisations are free to undertake investigations at will, requiring no intervention or cooperation from the cloud provider. The novel architecture is validated through a proof-of-concept prototype. A series of experiments is undertaken to illustrate and model how Cloud FAAS is capable of providing a richer and more complete set of admissible evidence than current CSPs are able to provide. Using Cloud FAAS, investigators have the ability to obtain a forensic image of the system after, just prior to, or hours before the incident. Therefore, this approach can not only create images that are forensically sound but also provide access to deleted and, more importantly, overwritten files – which current computer forensic practices are unable to achieve. This results in an increased level of visibility for the forensic investigator and removes any limitations that data carving and fragmentation may introduce. In addition, an analysis of the economic overhead of operating Cloud FAAS is performed. This shows the level of disk change that occurs is well within acceptable limits and relatively small in comparison to the total volume of memory available. The results show Cloud FAAS has both a technical and an economic basis for solving investigations involving cloud computing.
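The point-in-time reconstruction idea can be pictured with a toy model in which acquisition consists of a base image plus a stream of timestamped incremental writes; the record format and function below are assumptions for illustration, not the Cloud FAAS design.

    import hashlib

    def reconstruct(base, deltas, t):
        # Apply the incremental writes recorded up to time t to a base image.
        # Each delta is a (timestamp, offset, data) tuple.
        img = bytearray(base)
        for ts, off, data in sorted(deltas):
            if ts > t:
                break
            img[off:off + len(data)] = data
        return bytes(img)

    base = b"\x00" * 32
    deltas = [(10, 0, b"AAAA"), (20, 4, b"BBBB"), (30, 0, b"CCCC")]
    pre_incident = reconstruct(base, deltas, t=25)   # state just before t=30
    print(hashlib.sha256(pre_incident).hexdigest())  # integrity hash of the image

Because every write is retained, an overwritten region (here, the bytes replaced at t=30) remains recoverable from any earlier reconstruction point.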
APA, Harvard, Vancouver, ISO, and other styles
29

Al, Mutawa Noora Ahmad Khurshid. "Integrating behavioural analysis within the digital forensics investigation process." Thesis, University of Central Lancashire, 2018. http://clok.uclan.ac.uk/25412/.

Full text
Abstract:
This programme of research focused on incorporating Behavioural Analysis (BA) within the digital forensics investigation process. A review of previously developed digital forensics investigation models indicated a lack of sufficient consideration of the behavioural and motivational dimensions of offending, and of the way in which digital evidence can be used to address these issues during the investigation process. This programme of research aimed to build on previous work by scientific researchers and investigators by developing a digital forensics investigation model which incorporates greater consideration of the behavioural and motivational implications of case-related digital evidence, based on current theoretical understandings of these aspects of offending from forensic psychology. This can aid in understanding and reconstructing crime events, and lead to the development of more detailed models and guidelines for examining computer-facilitated interpersonal crimes. The first study employed an abductive approach to forensically analyse individual cases (real cases obtained from the Dubai Police archives), applying BA to online Sexually Exploitative Imagery of Children (SEIC) and cyberstalking. Its aim was to investigate what BA could contribute to the digital forensics investigation of cases within these crime categories. It identified five benefits: (1) providing focus, speed and investigative directions, (2) inferring victim/offender behaviours, (3) inferring offender motivation(s), (4) identifying potential victims, and (5) eliminating suspects. This was followed by a survey study empirically examining the perceptions of national and international digital forensics practitioners regarding the use and utility of BA during the process of investigating SEIC and cyberstalking cases. The results indicated that while the majority believed that BA has the potential to contribute to many aspects of digital forensics investigations, their daily investigative activities involved limited use of this technique. The implications of the study were outlined, and emphasised the need to design a digital forensics investigation model that provides guiding steps and illustrations on how to utilise BA in digital forensics investigations. Based on the findings from the conducted studies, a digital forensics investigation model that incorporates aspects of BA was designed. It aimed to provide a pragmatic, structured, multidisciplinary approach to performing a post-mortem examination, analysis, and interpretation of the content of digital devices associated with computer-facilitated interpersonal crimes. Two comprehensive case studies were also used to illustrate the investigative importance of the model in investigating computer-facilitated interpersonal crimes.
APA, Harvard, Vancouver, ISO, and other styles
30

Alfraih, Areej S. "Feature extraction and clustering techniques for digital image forensics." Thesis, University of Surrey, 2015. http://epubs.surrey.ac.uk/808306/.

Full text
Abstract:
This thesis proposes an adaptive algorithm which applies feature extraction and clustering techniques for cloning detection and localization in digital images. Multiple contributions have been made to test the performance of different feature detectors for forensic use. The first contribution is to improve a previously published algorithm by Wang et al. by localizing tampered regions using the grey-level co-occurrence matrix (GLCM) for extracting texture features from the chromatic component of an image (Cb or Cr component). The main trade-off is a diminishing detection accuracy as the region size decreases. The second contribution is based on extracting Maximally Stable Extremal Regions (MSER) features for cloning detection, followed by k-means clustering for cloning localization. Then, for comparison purposes, we implement the same approach using Speeded Up Robust Features (SURF) and the Scale-Invariant Feature Transform (SIFT). Experimental results show that we can detect and localize cloning in tampered images with an accuracy reaching 97% using MSER features. The usability and efficacy of our approach are verified by comparison with recent state-of-the-art approaches. For the third contribution we propose a flexible methodology for detecting cloning in images, based on the use of feature detectors. We determine whether a particular match is the result of a cloning event by clustering the matches using k-means clustering and using a Support Vector Machine (SVM) to classify the clusters. This descriptor-agnostic approach allows us to combine the results of multiple feature descriptors, increasing the potential number of keypoints in the cloned region. Results using MSER, SURF and SIFT outperform the state of the art, with the highest true positive rate at approximately 99.60% and a false positive rate of 1.6% when different descriptors are combined. A statistical filtering step, based on computing the median value of the dissimilarity matrix, is also proposed. Moreover, our algorithm uses an adaptive technique for selecting the optimal k value for each image independently, allowing our method to detect multiple cloned regions. Finally, we propose an adaptive technique that chooses feature detectors based on the type of image being tested. Some detectors are robust in detecting features in textured images while others are robust in detecting features in smooth images. Combining the detectors makes them complementary to each other and can generate optimal results. The highest value for the area under the ROC curve is achieved at approximately 98.87%. We also test the performance of agglomerative hierarchical clustering for cloning localization. Hierarchical and k-means clustering techniques have a similar performance for cloning localization. The True Positive Rate (TPR) for match-level localization is approximately 97.59% and 96.43% for the k-means and hierarchical clustering techniques, respectively. The robustness of our technique is analyzed against additive white Gaussian noise and JPEG compression. Our technique is still reliable even at a low signal-to-noise ratio (SNR = 20 dB) or a low JPEG compression quality factor (QF = 50).
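A minimal sketch of the keypoint-matching half of such a pipeline is shown below, using SIFT with a ratio test and a fixed two-cluster k-means step. The thesis instead selects k adaptively, combines several descriptors and classifies clusters with an SVM; the file name and thresholds here are illustrative assumptions.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    img = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    kp, des = cv2.SIFT_create().detectAndCompute(img, None)

    # Match descriptors against themselves; m[0] is the trivial self-match.
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des, des, k=3)
    pts = []
    for m in matches:
        if len(m) < 3:
            continue
        if m[1].distance < 0.6 * m[2].distance:              # Lowe-style ratio test
            p1, p2 = kp[m[1].queryIdx].pt, kp[m[1].trainIdx].pt
            if np.hypot(p1[0] - p2[0], p1[1] - p2[1]) > 10:  # skip near-duplicates
                pts.append([*p1, *p2])

    # Dense clusters of matched coordinate pairs suggest cloned regions.
    if len(pts) >= 2:
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(np.array(pts))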
APA, Harvard, Vancouver, ISO, and other styles
31

Necaise, Nathan Joseph. "Empirical analysis of disk sector prefixes for digital forensics." Master's thesis, Mississippi State : Mississippi State University, 2007. http://library.msstate.edu/etd/show.asp?etd=etd-03282007-151218.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Samsuddin, Samsuddin Wira Bin Tu Manghui. "Digital forensics curriculum for undergraduate and master graduate students." [Cedar City, Utah] : Southern Utah University, 2009. http://unicorn.li.suu.edu/ScholarArchive/ForensicScience/SamsuddinWira.pdf.

Full text
Abstract:
Thesis (M.S.)--Southern Utah University, 2009. Title from PDF title page. "Thesis presented to the faculty of the Graduate School of Southern Utah University in partial fulfillment of the requirements for the degree of Master of Science in Forensic Science, Computer Forensics Emphasis." Manghui Tu, Advisor. Includes bibliographical references (p. 83-88).
APA, Harvard, Vancouver, ISO, and other styles
33

Berthet, Alexandre. "Deep learning methods and advancements in digital image forensics." Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS252.

Full text
Abstract:
The volume of digital visual data is increasing dramatically year after year. At the same time, image editing has become easier and more precise, so malicious modifications are more accessible. Image forensics provides solutions to ensure the authenticity of digital visual data; recognition of the source camera and detection of falsified images are among its main tasks. At first, the solutions were classical methods based on the artifacts produced during the creation of a digital image. Then, as in other areas of image processing, the methods moved to deep learning. First, we present a state-of-the-art survey of deep learning methods for image forensics. Our survey highlights the need to apply pre-processing modules to extract artifacts hidden by image content, as well as problems concerning image recognition evaluation protocols. Furthermore, we address counter-forensics and present compression based on artificial intelligence, which could be considered an attack. In a second step, this thesis details three progressive evaluation protocols that address camera recognition problems. The final protocol, which is more reliable and reproducible, highlights the inability of state-of-the-art methods to recognize cameras in a challenging context. In a third step, we study the impact of compression based on artificial intelligence on two tasks that analyze compression artifacts: tamper detection and social network recognition. The results show, on the one hand, that this compression must be taken into account as an attack and, on the other, that it causes a greater drop in forensic performance than other manipulations producing an equivalent level of image degradation.
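As a hint of what such a pre-processing module can look like, the sketch below applies one common high-pass kernel to suppress image content and expose noise-like artifacts before a network sees the data. The specific kernel is a well-known SRM-style filter chosen here for illustration; it is not claimed to be the module used in the thesis.

    import numpy as np
    from scipy.signal import convolve2d

    # One classic high-pass kernel from the steganalysis/forensics literature.
    HP = np.array([[-1,  2, -1],
                   [ 2, -4,  2],
                   [-1,  2, -1]], dtype=np.float32)

    def residual(img):
        # img: 2-D float array; the residual, not the image, feeds the CNN.
        return convolve2d(img, HP, mode="same", boundary="symm")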
APA, Harvard, Vancouver, ISO, and other styles
34

Ademu, Inikpi. "A comprehensive digital forensics investigation model and guidelines for establishing admissible digital evidence." Thesis, University of East London, 2013. http://roar.uel.ac.uk/3992/.

Full text
Abstract:
Technology systems are attacked by offenders using digital devices and networks to facilitate their crimes and hide their identities, creating new challenges for digital investigators. Malicious programs that exploit vulnerabilities also serve as threats to digital investigators. Since digital devices such as computers and networks are used by organisations and digital investigators, malicious programs and risky practices that may contaminate the integrity of digital evidence can lead to loss of evidence. Digital investigators therefore face a major challenge in preserving the integrity of digital evidence. Not only is there no definitive comprehensive model of digital forensic investigation for ensuring the reliability of digital evidence, but there has to date been no intensive research into methods of achieving this. To address the issue of preserving the integrity of digital evidence, this research improves upon other digital forensic investigation models by creating a Comprehensive Digital Forensic Investigation Model (CDFIM), a model that results in an improvement in the investigation process, as well as in the security mechanisms and guidelines applied during investigation. The improvement is also effected by implementing Proxy Mobile Internet Protocol version 6 (PMIPv6) with improved buffering, based on the Open Air Interface PMIPv6 (OAI PMIPv6) implementation, to provide reliable services during handover of a Mobile Node (MN) and to improve performance measures that minimize loss of data, which this research identified as a factor affecting the integrity of digital evidence. The point of this is to show that the integrity of digital evidence can be preserved if loss of data is prevented. This research supports the integration of security mechanisms and intelligent software into digital forensic investigation, which assists in preserving the integrity of digital evidence; two different attack experiments were conducted to test CDFIM. It was found that when CDFIM combined security mechanisms and guidelines with the investigation process, it was able to identify the attack and also ensured that the integrity of the digital evidence was preserved. It was also found that the security mechanisms and guidelines incorporated in the digital investigative process are of little use when the security guidelines are ignored by digital investigators, which then poses a threat to the integrity of digital evidence.
APA, Harvard, Vancouver, ISO, and other styles
35

Solanke, Abiodun Abdullahi <1983&gt. "Digital Forensics AI: on Practicality, Optimality, and Interpretability of Digital Evidence Mining Techniques." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10400/1/SOLANKE-ABIODUN-ABDULLAHI-Tesi.pdf.

Full text
Abstract:
Digital forensics as a field has progressed alongside technological advancements over the years, just as digital devices have become more robust and sophisticated. However, criminals and attackers have devised means of exploiting the vulnerabilities or sophistication of these devices to carry out malicious activities in unprecedented ways. Their belief is that electronic crimes can be committed without identities being revealed or trails being established. Several applications of artificial intelligence (AI) have demonstrated interesting and promising solutions to seemingly intractable societal challenges. This thesis aims to advance the concept of applying AI techniques in digital forensic investigation. Our approach involves experimenting with a complex case scenario in which suspects corresponded by e-mail and suspiciously deleted certain communications, presumably to conceal evidence. The purpose is to demonstrate the efficacy of Artificial Neural Networks (ANN) in learning and detecting communication patterns over time, and then predicting the possibility of missing communication(s) along with potential topics of discussion. To do this, we developed a novel approach and included other existing models. The accuracy of our results is evaluated, and their performance on previously unseen data is measured. Second, we proposed conceptualizing the term “Digital Forensics AI” (DFAI) to formalize the application of AI in digital forensics. The objective is to highlight the instruments that facilitate the best evidential outcomes and presentation mechanisms that are adaptable to the probabilistic output of AI models. Finally, we enhanced our notion in support of the application of AI in digital forensics by recommending methodologies and approaches for bridging trust gaps through the development of interpretable models that facilitate the admissibility of digital evidence in legal proceedings.
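The thesis trains artificial neural networks for this task; as a much simpler stand-in that conveys the underlying idea, the sketch below flags inter-message gaps that deviate strongly from a correspondent pair's historical rhythm. The z-score threshold and the synthetic data are assumptions for illustration.

    import numpy as np

    def flag_suspicious_gaps(timestamps, z=3.0):
        # Flag gaps whose z-score against the historical gap distribution
        # exceeds the threshold; such holes are candidates for deleted mail.
        ts = np.sort(np.asarray(timestamps, dtype=float))
        gaps = np.diff(ts)
        mu, sigma = gaps.mean(), gaps.std()
        return [(ts[i], ts[i + 1]) for i, g in enumerate(gaps)
                if sigma > 0 and (g - mu) / sigma > z]

    week = 7 * 24 * 3600
    times = [i * week for i in range(12)] + [15 * week, 16 * week]
    print(flag_suspicious_gaps(times))  # reports the four-week hole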
APA, Harvard, Vancouver, ISO, and other styles
36

Kruger, Jaco-Louis. "Digital forensic readiness for IOT devices." Diss., University of Pretoria, 2019. http://hdl.handle.net/2263/73385.

Full text
Abstract:
The Internet of Things (IoT) has evolved to be an important part of modern society. IoT devices can be found in several environments such as smart homes, transportation, the health sector and smart cities, and they even facilitate automation in organisations. The increasing dependence on IoT devices increases the possibility of security incidents in the physical or cyber environment. Traditional methods of digital forensic (DF) investigation are not always applicable to IoT devices due to their limited data processing resources. A possible solution for conducting forensic investigations on IoT devices is to utilise a proactive approach known as digital forensic readiness (DFR). This dissertation firstly conducts a thorough review of the available literature in the current body of knowledge to identify a clear process that can be followed to implement DFR tailored for IoT devices. The dissertation then formulates requirements for DFR in IoT based on existing forensic techniques. These requirements give rise to the development of a model for DFR in IoT, which is then implemented in a prototype for IoT devices. The prototype is subsequently tested and evaluated on IoT devices that conduct proactive DFR in a simulation of a smart home system. Finally, the dissertation illustrates the feasibility of the DFR processes for IoT and serves as a basis for future research with regard to DFR in IoT. This dissertation will inform future research with regard to developing a standard for DFR in IoT. Dissertation (MSc)--University of Pretoria, 2019. Computer Science. MSc. Unrestricted.
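One way to picture proactive DFR on a constrained device is an append-only, hash-chained event log whose integrity can be verified after an incident. The sketch below is a generic illustration of that idea, not the prototype built in the dissertation.

    import hashlib, json, time

    class ReadinessLog:
        def __init__(self):
            self.entries = []
            self.prev = "0" * 64

        def append(self, event):
            record = {"ts": time.time(), "event": event, "prev": self.prev}
            blob = json.dumps(record, sort_keys=True).encode()
            record["hash"] = hashlib.sha256(blob).hexdigest()
            self.prev = record["hash"]
            self.entries.append(record)

        def verify(self):
            # Recompute the chain; any post-hoc tampering breaks a link.
            prev = "0" * 64
            for r in self.entries:
                blob = json.dumps({k: r[k] for k in ("ts", "event", "prev")},
                                  sort_keys=True).encode()
                if r["prev"] != prev or r["hash"] != hashlib.sha256(blob).hexdigest():
                    return False
                prev = r["hash"]
            return True

    log = ReadinessLog()
    log.append({"sensor": "door", "state": "open"})
    log.append({"sensor": "motion", "state": "detected"})
    print(log.verify())  # True until an entry is altered after the fact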
APA, Harvard, Vancouver, ISO, and other styles
37

Chabot, Yoan. "Construction, enrichment and semantic analysis of timelines : application to digital forensics." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS037/document.

Full text
Abstract:
Having a clear view of the events that occurred over time is a difficult objective to achieve in digital investigations (DI). Event reconstruction, which allows investigators to build and understand the timeline of an incident, is one of the most important steps of a DI process. The complete understanding of an incident and its circumstances requires, on the one hand, associating each piece of information with its meaning and, on the other hand, identifying semantic relationships between these fragments. This complex task requires the exploration of a large and heterogeneous amount of information found at the crime scene. Investigators therefore encounter cognitive overload problems when processing this data, causing them to make mistakes or omit information that could have high added value for the progress of the investigation. In addition, any result produced by the reconstruction process must meet several legal requirements to be admissible at trial, including the ability to explain how the results were produced. To help investigators deal with these problems, this thesis introduces a semantic-based approach called SADFC. The main objective of this approach is to provide investigators with tools that help them find the meaning of the entities composing the crime scene and understand the relationships linking these entities, while respecting the legal requirements. To achieve this goal, SADFC is composed of two elements. First, SADFC is based on theoretical foundations ensuring the credibility of the results produced by the tools, via a formal and rigorous definition of the processes used. The approach then proposes an architecture centered on an ontology to model and structure the knowledge inherent to an incident and to assist the investigator in the analysis of this knowledge. The relevance and effectiveness of this architecture are demonstrated through a case study describing a fictitious investigation.
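SADFC's ontology-centred architecture is beyond a few lines of code, but the basic move of fusing heterogeneous events into one semantically linked timeline can be sketched as follows; the event schema and the subject-based linking rule are toy assumptions standing in for the ontology.

    from dataclasses import dataclass

    @dataclass
    class Event:
        ts: float        # normalised timestamp (epoch seconds, UTC)
        source: str      # artefact origin, e.g. "browser", "filesystem"
        subject: str     # entity involved (user, file, host)
        action: str

    def build_timeline(*event_sets):
        timeline = sorted((e for es in event_sets for e in es), key=lambda e: e.ts)
        # Naive semantic link: consecutive events sharing a subject are related.
        links = [(a, b) for a, b in zip(timeline, timeline[1:])
                 if a.subject == b.subject]
        return timeline, links

    browser = [Event(100.0, "browser", "alice", "visited example.com")]
    fs = [Event(160.0, "filesystem", "alice", "created report.docx")]
    timeline, links = build_timeline(browser, fs)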
APA, Harvard, Vancouver, ISO, and other styles
38

Lawgaly, Ashref. "Digital camera identification using sensor pattern noise for forensics applications." Thesis, Northumbria University, 2017. http://nrl.northumbria.ac.uk/32314/.

Full text
Abstract:
Nowadays, millions of pictures are shared over the internet without any authentication system being applied. This may cause serious problems, particularly in situations where a digital image is an important component of the decision-making process, for example in child pornography and movie piracy cases. Motivated by this, the present research investigates the performance of estimating Photo Response Non-Uniformity (PRNU) and develops new estimation approaches to improve the performance of digital source camera identification. The PRNU noise is a sensor pattern noise characterizing the imaging device. Nonetheless, the PRNU estimation procedure is hampered by the presence of image-dependent information as well as other non-unique noise components. This thesis primarily focuses on efficiently estimating the physical PRNU components during different stages. First, an image sharpening technique is proposed as a pre-processing approach for source camera identification. The sharpening method aims to amplify the PRNU components for better estimation. In the estimation stage, a new weighted averaging (WA) technique is presented. Most existing PRNU techniques estimate PRNU using constant averaging of the residue signals extracted from a set of images. However, treating all residue signals equally through constant averaging is optimal only if they carry undesirable noise of the same variance. Moreover, an improved version of the locally adaptive discrete cosine transform (LADCT) filter is proposed in the filtering stage to reduce the effect of scene details on noise residues. Finally, the post-estimation stage consists of combining the PRNU estimated from each colour plane, aiming to reduce the effect of colour interpolation and increase the amount of physical PRNU components. The aforementioned techniques have been assessed on two image datasets acquired by several camera devices. Experimental results have shown a significant improvement obtained with the proposed enhancements over related state-of-the-art systems. Nevertheless, the experiments in this thesis do not include images taken at a variety of acquisition resolutions, which would allow the effect of these settings on PRNU performance to be evaluated. Moreover, images captured by scanners and cell phones could be included for a more comprehensive study. Another limitation is that the work does not investigate how the improvement may change with JPEG compression or gamma correction. Additionally, the proposed methods have not been considered in cases of geometrical processing, for instance cropping or resizing.
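The contrast between constant and weighted averaging can be shown in a few lines. The global per-residue weights below (inverse variance) are a simplification for illustration and not necessarily the weighting scheme developed in the thesis.

    import numpy as np

    def weighted_prnu(residuals):
        # residuals: list of equally sized 2-D noise residues from one camera.
        W = np.stack(residuals)                       # (n_images, H, Wd)
        weights = 1.0 / (W.var(axis=(1, 2)) + 1e-8)   # noisier residues count less
        weights /= weights.sum()
        return np.tensordot(weights, W, axes=1)       # weighted mean over images

    def constant_prnu(residuals):
        # The constant-averaging baseline that WA aims to improve upon.
        return np.mean(np.stack(residuals), axis=0)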
APA, Harvard, Vancouver, ISO, and other styles
39

Homem, Irvin. "Towards Automation in Digital Investigations : Seeking Efficiency in Digital Forensics in Mobile and Cloud Environments." Licentiate thesis, Stockholms universitet, Institutionen för data- och systemvetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-130742.

Full text
Abstract:
Cybercrime and related malicious activity in our increasingly digital world has become more prevalent and sophisticated, evading traditional security mechanisms. Digital forensics has been proposed to help investigate, understand and eventually mitigate such attacks. The practice of digital forensics, however, is still fraught with various challenges. Some of the most prominent of these challenges include the increasing amounts of data and the diversity of digital evidence sources appearing in digital investigations. Mobile devices and cloud infrastructures are an interesting specimen, as they inherently exhibit these challenging circumstances and are becoming more prevalent in digital investigations today. Additionally, they embody further characteristics such as large volumes of data from multiple sources, dynamic sharing of resources, limited individual device capabilities and the presence of sensitive data. This combined set of circumstances makes digital investigations in mobile and cloud environments particularly challenging. This is not aided by the fact that digital forensics today still involves manual, time-consuming tasks within the processes of identifying evidence, performing evidence acquisition and correlating multiple diverse sources of evidence in the analysis phase. Furthermore, the industry-standard tools that have been developed are largely evidence-oriented, have limited support for evidence integration and only automate certain precursory tasks, such as indexing and text searching. In this study, efficiency, in the form of reducing the time and human labour expended, is sought in digital investigations in highly networked environments through the automation of certain activities in the digital forensic process. To this end, requirements are outlined and an architecture designed for an automated system that performs digital forensics in highly networked mobile and cloud environments. Part of the remote evidence acquisition activity of this architecture is built and tested on several mobile devices in terms of speed and reliability. A method for integrating multiple diverse evidence sources in an automated manner, supporting correlation and automated reasoning, is developed and tested. Finally, the proposed architecture is reviewed and enhancements proposed in order to further automate it by introducing decentralization, particularly within the storage and processing functionality. This decentralization also improves machine-to-machine communication, supporting several digital investigation processes enabled by the architecture through harnessing the properties of various peer-to-peer overlays. Remote evidence acquisition helps to improve efficiency (the time and effort involved) in digital investigations by removing the need for proximity to the evidence. Experiments show that a single-TCP-connection client-server paradigm does not offer the required scalability and reliability for remote evidence acquisition and that a multi-TCP-connection paradigm is required. The automated integration, correlation and reasoning on multiple diverse evidence sources demonstrated in the experiments improves speed and reduces the human effort needed in the analysis phase by removing the need for time-consuming manual correlation.
Finally, informed by published scientific literature, the proposed enhancements for further decentralizing the Live Evidence Information Aggregator (LEIA) architecture offer a platform for increased machine-to-machine communication thereby enabling automation and reducing the need for manual human intervention.
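The multi-connection acquisition finding can be illustrated with a chunked, hashed transfer. In this sketch a local file stands in for the remote device and each worker stands in for one TCP stream, so the I/O layer is an assumption rather than the architecture's actual protocol.

    import hashlib
    from concurrent.futures import ThreadPoolExecutor

    CHUNK = 1 << 20  # 1 MiB

    def acquire_chunk(path, index):
        # One "stream": pull a byte range and hash it on receipt.
        with open(path, "rb") as f:
            f.seek(index * CHUNK)
            data = f.read(CHUNK)
        return index, data, hashlib.sha256(data).hexdigest()

    def acquire(path, n_chunks, workers=4):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            results = sorted(pool.map(lambda i: acquire_chunk(path, i),
                                      range(n_chunks)))
        image = b"".join(data for _, data, _ in results)
        manifest = {i: h for i, _, h in results}      # per-chunk integrity record
        return image, manifest

A failed or corrupted chunk can then be re-fetched individually and re-verified against the manifest, which is part of what makes the multi-connection paradigm more reliable than a single stream.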
APA, Harvard, Vancouver, ISO, and other styles
40

Hannay, Peter. "A non-device specific framework for the development of forensic locational data analysis procedure for consumer grade small and embedded devices." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2017. https://ro.ecu.edu.au/theses/2026.

Full text
Abstract:
Portable and wearable computing devices such as smart watches, navigation units, mobile phones, and tablet computers commonly ship with Global Navigation Satellite System (GNSS) supported locational awareness. Locational functionality is no longer limited to navigation-specific devices such as satellite navigation devices and location tracking systems. Instead, the use of these technologies has extended to become secondary functionality on many devices, including mobile phones, cameras, portable computers, and video game consoles. The increase in use of location-aware technology is of use to forensic investigators as it has the potential to provide historic locational information. The evidentiary value of these devices to forensic investigators is currently limited due to the lack of available forensic tools and published methods to properly acquire and analyse these data sources. This research addresses this issue through the synthesis of common processes for the development of forensic procedure to acquire and interpret historic locational data from embedded, locationally aware devices. The research undertaken provides a framework for the generation of forensic procedure to enable the forensic extraction of historical locational data. The framework is device-agnostic, relying instead on differential analysis and structured testing to produce a validated method for the extraction of locational history. This framework was evaluated against five devices, selected on the basis of market penetration, availability, and a deduplication stage. The examination of the framework took place in a laboratory developed specifically for the research. This laboratory replicates all identified sources of location data for the devices selected. In this case the laboratory is able to simulate cellular (2G and 3G), GNSS (NAVSTAR and GLONASS), and Wi-Fi location services. The laboratory is a closed-sky facility, meaning that it is contained within a Faraday cage and all signals are produced and broadcast internally. Each selected device was run through a series of simulations. These simulations involved the broadcast of signals replicating the travel of a specific path. Control data were established through the use of appropriate data recording systems for each of the simulated location signals. On completion of the simulation, each device was forensically acquired and analysed in accordance with the proposed framework. For each experiment carried out against the five devices, the control and experimental data were compared. In this examination, any divergence smaller than that expected for GNSS was ignored. Any divergence greater than this was examined to establish its cause. Predictable divergence was accepted, and non-predictable divergence would have been noted as a limitation. In all instances where data were recovered, all divergences were found to be predictable. Following the analysis, the research found that the proposed framework was successful in producing locational forensic procedure in a non-device-specific manner. This success was confirmed for all the devices tested.
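Comparing control and recovered coordinates reduces to a great-circle distance check. The haversine computation below is standard; the 15 m tolerance is an assumed stand-in for the divergence expected of GNSS, and the coordinates are invented for illustration.

    from math import radians, sin, cos, asin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in metres between two WGS-84 points.
        R = 6371000.0
        dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
        a = (sin(dphi / 2) ** 2
             + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlmb / 2) ** 2)
        return 2 * R * asin(sqrt(a))

    TOLERANCE_M = 15.0  # assumed acceptable GNSS divergence
    control   = [(57.7089, 11.9746), (57.7095, 11.9750)]
    recovered = [(57.7090, 11.9747), (57.7180, 11.9750)]
    flags = [haversine_m(*c, *r) > TOLERANCE_M for c, r in zip(control, recovered)]
    # flags marks points whose divergence from the control path needs explaining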
APA, Harvard, Vancouver, ISO, and other styles
41

Brandt, Johan, and Oscar Wärnling. "Addressing the Digital Forensic Challenges Within Modern Law Enforcement : A study of digital forensics and organizational buying behavior from a DF-company perspective." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279595.

Full text
Abstract:
Today’s law enforcement agencies are presented with challenges regarding how to navigate the rapidly changing technical landscape. The proliferation of digital devices in society has presented opportunities for modern criminals, resulting in substantial changes in criminal behavior. Digital devices have thus become a crucial piece of evidence within forensic investigation processes, which has caused the field of digital forensics to emerge as a central part of modern law enforcement. However, law enforcement is experiencing substantial challenges with regard to handling the complexity of modern digital devices, as well as the data quantities that these devices imply. While the digital forensics literature commonly discusses the challenges that law enforcement agencies are facing, it fails to address the role and responsibilities that digital forensic companies have in ensuring that law enforcement agencies possess the necessary means to counteract criminal activity. Therefore, this study aims to investigate how the companies that supply the tools that law enforcement depends on can help the agencies to overcome these challenges. Although the need for digital forensics is at an all-time high, the consensus among practitioners is that they lack the necessary means to adequately handle digital evidence. Moreover, it is identified that a lack of organizational understanding is impeding law enforcement from prioritizing the allocation of capital towards digital forensics. Thus, this study also assesses how digital forensic companies can adapt their marketing approaches based on the purchasing behavior of law enforcement, in order to efficiently communicate the need within the customer organizations and ensure that law enforcement agencies possess the necessary means to counteract modern criminality. In order to investigate this area of research and address the identified problems, the study is set up as a case study in collaboration with a European digital forensic company. The study includes several internal interviews with company representatives, as well as a number of external interviews with digital forensic experts from different European law enforcement agencies. The empirical evidence is assessed against renowned literature within digital forensics and organizational buying behavior to acquire a comprehensive understanding of the problems and help answer the proposed research questions. The study concludes that the main challenges that law enforcement faces with regard to digital forensics originate from a lack of organizational understanding. This results in insufficient resources being allocated to digital forensics. In turn, this limits law enforcement’s ability to properly educate their staff and purchase the necessary tools to effectively handle the complexity and quantity of evidence that modern digital forensics implies. To address this, digital forensic companies are required to adapt their business models to the resource limitations of their customers by offering more flexible training solutions and tailoring their tools to specific user needs. Moreover, it is determined that companies should be involved in persuading law enforcement management to improve the organizational understanding of the importance of digital forensics. The study also identifies that the organizational structure of law enforcement agencies highly impacts their purchasing behavior. Depending on the degree of law enforcement centralization, the buying center structure varies. For centralized organizations, the scale of the buying center is generally larger, and the same applies to its purchases. The individual members of the buying center have minor influence over the decision-making process; instead, decisions are a consequence of collective decision making by different departments. In contrast, decentralized organizations make smaller purchases through smaller buying centers. The individual members within a decentralized buying center have far more influence over the buying behaviour in comparison with the members of a centralized organization’s buying center. Therefore, digital forensic companies need to employ different marketing strategies to anchor their products within different law enforcement organizations. It is established that companies should aspire to identify the buying centers of potential and existing customer organizations to improve the efficiency of marketing efforts. Mapping out organizational and authoritative structure is thus concluded to be crucial in order to successfully capitalize on the purchasing behavior of different law enforcement organizations. To enable digital forensic companies to practically apply these suggestions within the context of their marketing strategies, applicable models based on theory and empirics are proposed.
APA, Harvard, Vancouver, ISO, and other styles
42

Van, Ramesdonk Paul. "Continued forensic development - investigation into current trends and proposed model for digital forensic practitioners." Master's thesis, University of Cape Town, 2016. http://hdl.handle.net/11427/20707.

Full text
Abstract:
Continuous professional development has been looked at in many professions over the years, most notably in primary and secondary education and in the medical fields. With digital forensics being cast into the limelight due to rapid advancements in technology, academic institutions have added courses to address the void created by the boom in the industry. Little research has been done to address the issues that have now become apparent concerning continued learning in this field. The purpose of this research was to investigate the kinds of frameworks and methods used in other professions and how practitioners themselves see career development, and then to create a framework that could be used to keep abreast of developments in the field of digital forensics, be they changes in the law, case law, or changes in software. The data analysis showed quite a number of continued learning approaches that could be employed in the digital/computer forensic fields to achieve the objective of keeping abreast of changes in the field. Some, understandably, are due to the nature of the discipline. As part of their current approach to continued learning, practitioners rely heavily on knowledge sharing in the form of learning from other professionals, self-study by reading books, articles and research conducted in the forensic field, the use of Information and Communications Technology (ICT) for education, and the use of Internet sources such as user forums, Facebook groups, and web blogs. The majority of the respondents had received formal training in digital forensics, and of the total number of participants, only six percent had not been involved in any form of continued learning activity in the past five years. Looking at the data obtained, and because there are no formal requirements to undertake continued learning in the digital/computer forensic field, it becomes clear that individuals need to be self-driven to keep up to date with changes in the field. As seen in studies focused on continued learning activities in other professions, the research shows that digital/computer forensic practitioners experience similar barriers in their own approaches to continued learning.
APA, Harvard, Vancouver, ISO, and other styles
43

Börjesson, Holme, and Filiph Lindskog. "Går det att köpa personuppgifter på bilskroten? : Ett arbete om digital forensik på begagnade bildelar." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-42369.

Full text
Abstract:
In modern vehicles, data from the user is often stored when a mobile phone or other device is paired via a Bluetooth or USB connection. Where this data contains personal information, it may be of interest in an investigation and worth protecting from a privacy perspective. What happens to this data when the car is scrapped? When a car is scrapped, it is dismantled and the parts that can be resold are sold by the dismantling company. These can be anything from shock absorbers, wheels and steering wheels to electronic components and infotainment units. In this report, personal data was extracted from three such infotainment units purchased from vehicle dismantlers. The most successful method was to remove the relevant storage chip from the infotainment unit's circuit board and extract its data through a direct connection. In all cases, the information was structured in a familiar file system that could be mounted. Personal data was extracted from all three infotainment units examined. The results show that there are deficiencies in the handling of personal data when a car is scrapped.
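Even when a dump's file system cannot be mounted, a raw scan can surface personal data. The sketch below carves Swedish-style mobile numbers from an image with a regular expression; the pattern, chunk size and file name are illustrative assumptions, and matches spanning chunk boundaries are missed in this simplified version.

    import re

    PHONE = re.compile(rb"(?:\+46|0)7\d[\s-]?\d{3}[\s-]?\d{2}[\s-]?\d{2}")

    def carve_numbers(image_path, chunk=1 << 20):
        hits = set()
        with open(image_path, "rb") as f:
            while (buf := f.read(chunk)):
                hits.update(m.group().decode(errors="replace")
                            for m in PHONE.finditer(buf))
        return sorted(hits)

    # print(carve_numbers("infotainment_dump.bin"))  # hypothetical dump file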
APA, Harvard, Vancouver, ISO, and other styles
44

Brand, Murray. "Analysis avoidance techniques of malicious software." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2010. https://ro.ecu.edu.au/theses/138.

Full text
Abstract:
Anti-Virus (AV) software generally employs signature matching and heuristics to detect the presence of malicious software (malware). The generation of signatures and determination of heuristics depend upon an AV analyst having successfully determined the nature of the malware, not only for recognition purposes, but also for the determination of infected files and startup mechanisms that need to be removed as part of the disinfection process. If a specimen of malware has not previously been extensively analyzed, it is unlikely to be detected by AV software. In addition, malware is becoming increasingly profit-driven and more likely to incorporate stealth and deception techniques to avoid detection and analysis, so as to remain on infected systems for a myriad of nefarious purposes. Malware extends beyond the commonly imagined virus or worm to customized malware that has been developed for specific and targeted miscreant purposes. Such customized malware is highly unlikely to be detected by AV software because it will not have been previously analyzed and no signature will exist. Analysis in such a case will have to be conducted by a digital forensics analyst to determine the functionality of the malware. Malware can employ a plethora of techniques to hinder the analysis process conducted by AV and digital forensics analysts. The purpose of this research has been to answer three research questions directly related to the employment of these techniques: 1. What techniques can malware use to avoid being analyzed? 2. How can the use of these techniques be detected? 3. How can the use of these techniques be mitigated?
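The evasion premise rests on how signature matching works: a scanner can only recognise content it has seen before. The toy scanner below makes that limitation obvious; the signature store is fabricated for illustration.

    import hashlib

    SIGNATURES = {  # digest of known-malicious content -> detection label
        hashlib.sha256(b"malicious-payload-v1").hexdigest(): "Example.Trojan.A",
    }

    def scan(path):
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return SIGNATURES.get(digest)  # None for anything never analysed before

    # Flipping a single byte of the payload yields a new digest, so customised
    # or previously unanalysed malware passes unnoticed.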
APA, Harvard, Vancouver, ISO, and other styles
45

Mohammed, Hussam J. "Automated identification of digital evidence across heterogeneous data resources." Thesis, University of Plymouth, 2018. http://hdl.handle.net/10026.1/12839.

Full text
Abstract:
Digital forensics has become an increasingly important tool in the fight against cyber and computer-assisted crime. However, with an increasing range of technologies at people's disposal, investigators find themselves having to process and analyse many systems with large volumes of data (e.g., PCs, laptops, tablets, and smartphones) within a single case. Unfortunately, current digital forensic tools operate in an isolated manner, investigating systems and applications individually. The heterogeneity and volume of evidence place time constraints and a significant burden on investigators. Examples of heterogeneity include applications such as messaging (e.g., iMessenger, Viber, Snapchat, and WhatsApp), web browsers (e.g., Firefox and Google Chrome), and file systems (e.g., NTFS, FAT, and HFS). Being able to analyse and investigate evidence from across devices and applications in a universal and harmonized fashion would enable investigators to query all data at once. In addition, successfully prioritizing evidence and reducing the volume of data to be analysed reduces the time taken and the cognitive load on the investigator. This thesis focuses on the examination and analysis phases of the digital investigation process. It explores the feasibility of dealing with big and heterogeneous data sources in order to correlate the evidence from across these evidential sources in an automated way. Therefore, a novel approach was developed to solve the heterogeneity issues of big data using three developed algorithms: the harmonising, clustering, and automated identification of evidence (AIE) algorithms. The harmonisation algorithm seeks to provide an automated framework for merging similar datasets by characterising similar metadata categories and then harmonising them in a single dataset. This algorithm overcomes heterogeneity issues and eases examination and analysis by allowing evidential artefacts across devices and applications to be investigated through the harmonised categories, so that all data can be queried at once. Based on the merged datasets, the clustering algorithm is used to identify the evidential files and isolate the non-related files based on their metadata. Afterwards, the AIE algorithm tries to identify the cluster holding the largest number of evidential artefacts by searching based on two methods: criminal profiling activities and some information obtained from the criminals themselves. Then, the related clusters are identified through timeline analysis and a search for associated artefacts of the files within the first cluster. A series of experiments using real-life forensic datasets was conducted to evaluate the algorithms across five different categories of datasets (i.e., messaging, graphical files, file system, internet history, and emails), each containing data from different applications across different devices. The results of the characterisation and harmonisation process show that the algorithm can merge all fields successfully, with the exception of some binary-based data found within the messaging datasets (Viber and SMS). The error occurred because of a lack of information for the characterisation process to make a useful determination. However, on further analysis, it was found that the error had a minimal impact on the subsequently merged data. The results of the clustering process and the AIE algorithm showed that the two algorithms can collaborate and identify more than 92% of evidential files.
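The harmonisation step can be pictured as mapping per-source metadata fields onto one canonical schema so that records from different applications can be queried together. The field names below are invented for illustration; the thesis derives such mappings automatically by characterising metadata categories.

    CANONICAL = ("sender", "receiver", "timestamp", "body")  # target schema

    FIELD_MAPS = {  # hypothetical per-source field names -> canonical schema
        "whatsapp": {"from_jid": "sender", "to_jid": "receiver",
                     "ts": "timestamp", "text": "body"},
        "sms":      {"address": "sender", "recipient": "receiver",
                     "date": "timestamp", "message": "body"},
    }

    def harmonise(source, records):
        fmap = FIELD_MAPS[source]
        return [{fmap[k]: v for k, v in r.items() if k in fmap} for r in records]

    merged = (harmonise("whatsapp", [{"from_jid": "a@s.whatsapp.net",
                                      "to_jid": "b@s.whatsapp.net",
                                      "ts": 1000, "text": "hi"}])
              + harmonise("sms", [{"address": "+46701234567",
                                   "recipient": "+46731234567",
                                   "date": 1200, "message": "hello"}]))
    # merged records now share one schema and can be clustered or queried at once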
46

Delport, Waldo. "Forensic evidence isolation in clouds." Diss., University of Pretoria, 2013. http://hdl.handle.net/2263/33490.

Full text
Abstract:
Cloud computing is gaining acceptance and increasing in popularity. Organisations often rely on cloud resources as an effective replacement for their 'in-house' computer systems. In the cloud, virtual resources are provided from a larger pool of resources that is available to multiple different clients. When something suspicious happens within a digital environment, a digital forensic investigation may be conducted to gather information about the event. Such an investigation follows digital forensic procedures: the steps to be taken to aid the successful completion of the investigation. One possible step involves isolating possible evidence in order to protect it from contamination and tampering. Clouds may provide a multi-tenancy solution across multiple geographical locations. When conducting an investigation into physical equipment, the equipment itself may be isolated, for example by placing a cell phone in a Faraday bag to block signals, or by unplugging a computer's network cable to stop it from sending or receiving network traffic. In the cloud, however, it may not be feasible to isolate the underlying equipment because of the multi-tenant and geographically separated nature of the cloud, and there is currently little research on how isolation can be accomplished inside the cloud environment. This dissertation addresses the need for isolation in the cloud by creating new methods and techniques that may be incorporated into an investigation in order to isolate cloud resources. Isolation can be achieved by moving the unnecessary evidence to a different location while retaining the required evidence, or by moving the required evidence in such a manner that it is not contaminated. If isolated evidence were moved to a digital forensic laboratory, the question arises as to whether such a laboratory could be created in the cloud itself, utilising the benefits of cloud computing and enabling the investigation to be conducted in the cloud without moving the isolated evidence out of it. The dissertation develops various models of isolation, which were then tested under experimental conditions on Nimbula Director 1.0.3 and VMware vSphere 5.0. The models were successfully applied in the experiments, and it was found that investigations could benefit from their use. However, the experiments also highlighted that some of the models are not applicable on their own or that a combination should be used, and that the methods chosen depend on the circumstances of the investigation. A preliminary "cloud laboratory" was also designed and described, in terms of which a digital forensic laboratory can be created on cloud resources, thus enabling an investigation to be conducted inside the cloud environment.
Dissertation (MSc)--University of Pretoria, 2013. Computer Science. Unrestricted.
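One of the isolation models described above, moving the suspect instance onto a quarantined virtual network while preserving its state, can be sketched in outline. `CloudClient` and its methods below are hypothetical stand-ins for a provider's management API, not the Nimbula or vSphere interfaces the experiments actually used:

```python
# Sketch of cloud-side evidence isolation under a hypothetical API.
import hashlib


class CloudClient:
    """Hypothetical management API for a cloud provider."""

    def snapshot(self, instance_id: str) -> bytes:
        raise NotImplementedError  # provider-specific

    def move_to_network(self, instance_id: str, network_id: str) -> None:
        raise NotImplementedError  # provider-specific


def isolate_instance(client: CloudClient, instance_id: str,
                     quarantine_net: str = "forensic-quarantine") -> str:
    """Isolate a suspect instance; return a hash of its pre-move snapshot."""
    # Snapshot first, so the state of the evidence before isolation is
    # preserved and can later be verified against the recorded digest.
    image = client.snapshot(instance_id)
    digest = hashlib.sha256(image).hexdigest()
    # Detaching the instance from tenant networks is the cloud analogue
    # of unplugging a physical machine's network cable.
    client.move_to_network(instance_id, quarantine_net)
    return digest
```

Snapshotting before the move mirrors the dissertation's concern that isolation itself must not contaminate the evidence being protected.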
47

Hales, Gavin. "Assisting digital forensic analysis via exploratory information visualisation." Thesis, Abertay University, 2016. https://rke.abertay.ac.uk/en/studentTheses/774128b9-957e-4a05-aa74-dbeefebb8113.

Full text
Abstract:
Background: Digital forensics is a rapidly expanding field, due to the continuing advances in computer technology and increases in the data storage capabilities of devices. However, the tools supporting digital forensics investigations have not kept pace with this evolution, often leaving the investigator to analyse large volumes of textual data and rely heavily on their own intuition and experience. Aim: This research proposes that, given the ability of information visualisation to provide an end user with an intuitive way to rapidly analyse large volumes of complex data, such approaches could be applied to digital forensics datasets. These methods are investigated, supported by a review of literature regarding the use of such techniques in other fields. The hypothesis of this research is that by utilising exploratory information visualisation techniques in a tool to support digital forensic investigations, gains in investigative effectiveness can be realised. Method: To test the hypothesis, this research examines three case studies, each looking at a different form of information visualisation and its implementation with a digital forensic dataset. Two of these case studies take the form of prototype tools developed by the researcher, and one utilises a tool created by a third-party research group. A pilot study was conducted on each case, with the strengths and weaknesses of each being drawn into the next case study. The culmination of these case studies is a prototype tool presenting a timeline visualisation of user behaviour on a device. This tool was subjected to an experiment in which a class of university digital forensics students answered a number of questions about a synthetic digital forensic dataset. Approximately half were given the prototype tool, named Insight, and the others a common open-source tool. The assessed metrics included how long the participants took to complete all tasks, how accurate their answers were, and how easy the participants found the tasks to complete. They were also asked for feedback at multiple points throughout the task. Results: The results showed a statistically significant increase in accuracy for one of the six tasks for the participants using the Insight prototype tool. Participants also found completing two of the six tasks significantly easier when using the prototype tool. There was no statistically significant difference between the completion times of the two participant groups, and no statistically significant differences in the accuracy of participant answers for five of the six tasks. Conclusions: The results from this body of research show evidence to suggest that there is potential for gains in investigative effectiveness when information visualisation techniques are applied to a digital forensic dataset. Specifically, in some scenarios, the investigator can draw conclusions that are more accurate than those drawn when using primarily textual tools. There is also evidence to suggest that investigators reached these conclusions significantly more easily when using a tool with a visual format. None of the scenarios placed the investigators at a significant disadvantage in terms of accuracy or usability when using the prototype visual tool rather than the textual tool. It is noted that this research did not show that the use of information visualisation techniques leads to any statistically significant difference in the time taken to complete a digital forensics investigation.
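The kind of timeline view the Insight prototype is described as providing can be sketched with a handful of events; the event data and rendering choices here are illustrative assumptions, not the thesis's implementation:

```python
# Sketch of a user-behaviour timeline visualisation for forensic events.
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical extracted events; a real dataset would come from browser
# history, file-system timestamps, registry entries, and similar sources.
events = pd.DataFrame({
    "time": pd.to_datetime([
        "2016-03-01 09:02", "2016-03-01 09:15",
        "2016-03-01 09:17", "2016-03-01 10:40",
    ]),
    "activity": ["logon", "usb_inserted", "file_copied", "browser_search"],
})

# One row per activity type, so bursts of related behaviour stand out.
rows = {a: i for i, a in enumerate(events["activity"].unique())}
fig, ax = plt.subplots(figsize=(8, 2.5))
ax.scatter(events["time"], events["activity"].map(rows))
ax.set_yticks(list(rows.values()))
ax.set_yticklabels(list(rows.keys()))
ax.set_xlabel("time")
ax.set_title("Device activity timeline")
plt.tight_layout()
plt.show()
```

Even this trivial plot illustrates the accuracy finding above: temporal proximity between, say, a USB insertion and a file copy is visible at a glance, where a textual tool would require the investigator to correlate timestamps by hand.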
48

Barone, Joshua M. "Automated Timeline Anomaly Detection." ScholarWorks@UNO, 2013. http://scholarworks.uno.edu/td/1609.

Full text
Abstract:
Digital forensics is the practice of trained investigators gathering and analyzing evidence from digital devices such as computers and smart phones. On these devices, it is possible to change the system time for a purpose other than what is intended, and currently there are no documented techniques for determining when this has occurred. This research seeks to prove out a technique for determining when the time has been changed on a forensic disk image by analyzing the log files found on the image. Out of this research a tool was created to perform this analysis in an automated fashion: TADpole, a command-line program that analyzes the log files on a disk image and determines whether a timeline anomaly has occurred.
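The core check such a tool might perform, flagging log entries whose timestamps precede those of earlier entries in an append-only log, is straightforward to sketch. The log format and detection rule below are illustrative assumptions, not TADpole's actual logic:

```python
# Sketch: detect backwards timestamps in an append-only log, a common
# symptom of the system clock having been set back.
from datetime import datetime

# Hypothetical log excerpt; real tooling would parse formats such as
# syslog or Windows event logs extracted from the disk image.
LOG_LINES = [
    "2013-05-01 10:00:00 service started",
    "2013-05-01 10:05:12 user login",
    "2013-04-28 09:00:00 user login",    # earlier than the entry above
    "2013-05-01 10:06:30 file opened",
]

def find_clock_anomalies(lines):
    """Yield (line_no, entry) pairs whose timestamps move backwards."""
    high_water = None
    for line_no, line in enumerate(lines, start=1):
        stamp = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
        if high_water is not None and stamp < high_water:
            # An append-only log should never run backwards in time,
            # so this entry suggests the clock was changed.
            yield line_no, line
        else:
            high_water = stamp

for line_no, entry in find_clock_anomalies(LOG_LINES):
    print(f"possible time change at line {line_no}: {entry}")
```

Tracking a high-water mark rather than only the previous entry means a single clock jump is reported once instead of flagging every subsequent legitimate entry.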
49

Jacobsson, Emma, and Wistad Ebba Andersson. "Digital bevisning : En definition." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-44963.

Full text
Abstract:
The digitalization of society has contributed to a more convenient lifestyle for the Swedish population. Our everyday lives involve digital technology that stores information about us and our actions on digital devices, and this collected digital information can be admissible as evidence in a legal process. Despite the increase in crimes involving IT, Sweden appears to lack an official definition of the concept of digital evidence. The purpose of this report is to propose two definitions: one for the general public and one more suitable for specific technical usage by people in the field. The report focuses on three areas which together contribute to a holistic understanding of what digital evidence is: the value of digital evidence in a court decision, how anti-computer forensics can affect its extraction, and the legitimacy of digital evidence. To explore these areas, employees within the police authority answered questions based on their professional roles. The participants work as investigators, preliminary investigation leaders, IT forensic scientists, or prosecutors. Their answers, together with the literature, contributed to the definition of the concept of digital evidence and to a basic understanding of the subject of digital evidence.
50

Alrumaithi, A. M. "Prioritisation in digital forensics : a case study of Abu Dhabi Police." Thesis, Liverpool John Moores University, 2018. http://researchonline.ljmu.ac.uk/8936/.

Full text
Abstract:
The main goal of this research is to investigate the prioritisation process in the digital forensics departments of law enforcement organisations. The research is motivated by the fact that case prioritisation plays a crucial role in achieving efficient operations in digital forensics departments. Recent years have witnessed the widespread use of digital devices in every aspect of human life around the globe, and one of these aspects is crime. Such devices have become an essential part of almost every investigation handled by the police, because of their ability to store huge amounts of data that investigators can use to solve the cases under consideration. Thus, involving digital forensics departments, though they are often over-burdened and under-resourced, is becoming compulsory for successful investigations. Increasing the effectiveness of these departments requires improving their processes, including case prioritisation. The existing literature focuses on prioritisation within the context of crime-scene triage: its main research problem is prioritising the digital devices found at a crime scene in a way that leads to successful digital forensics. The research problem in this thesis, by contrast, concerns the prioritisation of cases rather than of the digital devices belonging to a specific case. Digital forensics cases are normally prioritised on the basis of several factors, among which the influence of the officers handling the case plays one of the most important roles. This research therefore investigates how the perceptions of different individuals in a law enforcement organisation may affect case prioritisation for the digital forensics department. To address this prioritisation problem, the research proposes the use of maturity models and machine learning. A questionnaire was developed and distributed among officers in Abu Dhabi Police to measure their perception of digital forensics. The responses were divided into two sets: the first contains the responses of subjects who are experts in digital forensics, while the other contains those of the remaining subjects. Responses in the first set were averaged to produce a benchmark of the optimal questionnaire answers, and a reliability measure is proposed to summarise each subject's perception against this benchmark. Data obtained from the reliability measurement were then used in machine learning models so that the process could be automated. The results of the data analysis confirmed the severity of the problem and showed that the proposed prioritisation process can be a very effective solution, as the results provided in this thesis demonstrate.
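The benchmark-and-reliability idea can be sketched as follows; the synthetic data, closeness measure, and classifier are illustrative assumptions rather than the thesis's actual methodology:

```python
# Sketch: average expert responses into a benchmark, score each subject's
# perception by closeness to it, and feed the scores to a simple model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Questionnaire responses on a 1-5 Likert scale (rows = subjects).
expert_answers = rng.integers(4, 6, size=(10, 8))   # experts answer 4-5
other_answers = rng.integers(1, 6, size=(40, 8))    # mixed perception

# Average the expert responses into a benchmark of "optimal" answers.
benchmark = expert_answers.mean(axis=0)

def reliability(responses, benchmark):
    """Summarise each subject as mean closeness to the expert benchmark."""
    return 1.0 / (1.0 + np.abs(responses - benchmark).mean(axis=1))

scores = reliability(other_answers, benchmark)

# Synthetic labels (whether a subject's perception is considered adequate),
# used only to show how the scores could feed a classifier that automates
# the assessment for future respondents.
labels = (scores > np.median(scores)).astype(int)

model = LogisticRegression().fit(scores.reshape(-1, 1), labels)
print("predicted adequate:", model.predict(scores[:5].reshape(-1, 1)).tolist())
```

The point of the automation is the last two lines: once trained, the model can score new respondents without re-convening the expert panel.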