
Dissertations / Theses on the topic 'Digital and Computer Forensics'



Consult the top 50 dissertations / theses for your research on the topic 'Digital and Computer Forensics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Watson, Michael Charles. "Certifying Computer Forensics Skills." BYU ScholarsArchive, 2021. https://scholarsarchive.byu.edu/etd/9131.

Full text
Abstract:
Computer forensics is an ever-growing technological field of complexity and depth. Individuals must strive to keep learning and growing their skills as they help combat cybercrime throughout the world. This study attempts to establish a method of evaluating conceptual expertise in computer forensics to help indicate whether or not an individual understands the five basic phases of computer forensics: preparation, seizure of evidence, acquisition of data, analysis of data, and reporting the findings of the analysis. A survey was presented to a university class of 30 students taking a computer forensics course, and was also posted online asking computer forensics professionals to participate. Results show that novices enrolled in a computer forensics course were able to identify the phases of computer forensics more readily than professionals.
APA, Harvard, Vancouver, ISO, and other styles
2

Al Fahdi, Mahmood. "Automated digital forensics and computer crime profiling." Thesis, University of Plymouth, 2016. http://hdl.handle.net/10026.1/8090.

Full text
Abstract:
Over the past two decades, technology has developed tremendously, at an almost exponential rate. While this development has served the nation in numerous positive ways, negatives have also emerged. One such negative is that of computer crime. This criminality has grown so fast as to leave current digital forensic tools lagging behind in terms of development and capabilities to manage such increasing and sophisticated types of crime. In essence, the time taken to analyse a case is huge and increasing, and cases are not fully or properly investigated. This results in an ever-increasing number of pending and unsolved cases pertaining to computer crime. Digital forensics has become an essential tool in the fight against computer crime, providing both procedures and tools for the acquisition, examination and analysis of digital evidence. However, the use of technology is expanding at an ever-increasing rate, with the number of devices a single user might engage with increasing from a single device to 3 or more, the data capacity of those devices reaching far into the Terabytes, and the nature of the underlying technology evolving (for example, the use of cloud services). This results in an incredible challenge for forensic examiners to process and analyse cases in an efficient and effective manner. This thesis focuses upon the examination and analysis phases of the investigative process and considers whether automation of the process is possible. The investigation begins with researching the current state of the art, and illustrates a wide range of challenges that face digital forensics investigators when analysing a case. Supported by a survey of forensic researchers and practitioners, key challenges were identified and prioritised. It was found that 95% of participants believed that the number of forensic investigations would increase in the coming years, with 75% of participants believing that the time consumed in such cases would increase.
With regard to digital forensic sophistication, 95% of the participants expected a rise in the complexity level and sophistication of digital forensics. To this end, an automated intelligent system that could be used to reduce the investigator's time and cognitive load was found to be a promising solution. A series of experiments are devised around the use of Self-Organising Maps (SOMs) – a technique well known for unsupervised clustering of objects. The analysis is performed on a range of file system and application-level objects (e.g. email, internet activity) across four forensic cases. Experimental evaluations revealed that SOMs are able to successfully cluster forensic artefacts from the remaining files. Having established that SOMs are capable of clustering wanted artefacts from the case, a novel algorithm, referred to as the Automated Evidence Profiler (AEP), is proposed to encapsulate the process and provide further refinement of the artefact identification process. The algorithm achieved identification rates of 100% in two examined cases and 94% in a third. A novel architecture is proposed to support the algorithm in an operational capacity, considering standard forensic techniques such as hashing for known files, file signature analysis, and application-level analysis. This provides a mechanism that is capable of utilising the AEP with several other components that are able to filter, prioritise and visualise artefacts of interest to the investigator. The approach, known as the Automated Forensic Examiner (AFE), is capable of identifying potential evidence in a more efficient and effective manner. The approach was evaluated by a number of experts in the field, and it was unanimously agreed that the chosen research problem was one with great validity. Further to this, the experts all showed support for the Automated Forensic Examiner based on the results of the cases analysed.
APA, Harvard, Vancouver, ISO, and other styles
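The SOM-based clustering described in the abstract above can be sketched in miniature. The following is a hypothetical illustration, not the thesis's actual system or data: it trains a tiny Self-Organising Map on invented artefact feature vectors (the features named in the comments are assumptions) and assigns each artefact to its best-matching map unit.

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=500, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small Self-Organising Map on row-vector data."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))
    # Grid coordinates of each map unit, used by the neighbourhood function.
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        lr = lr0 * np.exp(-t / iters)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)    # shrinking neighbourhood
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
        dist = np.linalg.norm(coords - coords[bmu], axis=1)
        influence = np.exp(-(dist ** 2) / (2 * sigma ** 2))
        weights += lr * influence[:, None] * (x - weights)
    return weights

def map_artifacts(data, weights):
    """Assign each artefact feature vector to its best-matching SOM unit."""
    return [int(np.argmin(np.linalg.norm(weights - x, axis=1))) for x in data]

# Toy feature vectors: [normalised size, is_email, is_browser_history] (invented).
artifacts = np.array([
    [0.10, 1.0, 0.0], [0.15, 1.0, 0.0],   # email-like artefacts
    [0.80, 0.0, 1.0], [0.75, 0.0, 1.0],   # browser-history-like artefacts
])
w = train_som(artifacts)
nodes = map_artifacts(artifacts, w)
# After training, the two artefact types should land on different map units.
```

In a real pipeline the feature extraction step (turning files, email records and browser history into numeric vectors) carries most of the forensic meaning; the SOM itself is only the unsupervised grouping stage.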
3

Altiero, Roberto A. "Digital Forensics Tool Interface Visualization." NSUWorks, 2015. http://nsuworks.nova.edu/gscis_etd/24.

Full text
Abstract:
Recent trends show that digital devices are utilized with increasing frequency in most crimes committed. Investigating crime involving these devices is labor-intensive for the practitioner applying digital forensics tools that present possible evidence with results displayed in tabular lists for manual review. This research investigates how enhanced digital forensics tool interface visualization techniques can be shown to improve the investigator's cognitive capacities to discover criminal evidence more efficiently. This paper presents visualization graphs and contrasts their properties with the outputs of The Sleuth Kit (TSK) digital forensic program, exhibiting the textual-based interface to demonstrate the effectiveness of enhanced data presentation. Further demonstrated is the potential of the computer interface to present to the digital forensic practitioner an abstract, graphic view of an entire dataset of computer files. Enhanced interface design of digital forensic tools means more rapidly linking suspicious evidence to a perpetrator. Introduced in this study is a mixed methodology of ethnography and cognitive load measures. Ethnographically defined tasks, developed from interviews with digital forensics subject matter experts (SMEs), shape the context for the cognitive measures. Cognitive load testing of digital forensics first-responders utilizing both a textual-based and a visualized-based application established a quantitative mean of the mental workload during operation of the applications under test. A dependent-samples t-test on the mean workloads tested the null hypothesis of no significant difference between the operators' workloads under the two applications. Results of the study indicate a significant value, affirming the hypothesis that a visualized application would reduce the cognitive workload of the first-responder analyst.
With the supported hypothesis, this work contributes to the body of knowledge by validating a method of measurement and by providing empirical evidence that the use of the visualized digital forensics interface will provide a more efficient performance by the analyst, saving labor costs and compressing time required for the discovery phase of a digital investigation.
APA, Harvard, Vancouver, ISO, and other styles
4

Fei, Bennie Kar Leung. "Data visualisation in digital forensics." Pretoria : [s.n.], 2007. http://upetd.up.ac.za/thesis/available/etd-03072007-153241.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Etow, Tambue Ramine. "IMPACT OF ANTI-FORENSICS TECHNIQUES ON DIGITAL FORENSICS INVESTIGATION." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97116.

Full text
Abstract:
Computer crimes have become very complex in terms of investigation and prosecution. This is mainly because forensic investigations are based on artifacts left on computers and other digital devices. In recent times, perpetrators of computer crimes are keeping abreast of digital forensics dynamics and are hence capacitated to use anti-forensics measures and techniques to obfuscate the investigation processes. In cases where such techniques are employed, it becomes extremely difficult, expensive and time consuming to carry out an effective investigation. This might cause a digital forensics expert to abandon the investigation in a pessimistic manner. This project work serves to practically demonstrate how numerous anti-forensics techniques can be deployed by criminals to derail the smooth processes of digital forensic investigation, with a main focus on data hiding and encryption techniques; later, a comparative study of the effectiveness of some selected digital forensics tools in analyzing and reporting shreds of evidence will be conducted.
APA, Harvard, Vancouver, ISO, and other styles
6

Flory, Christopher M. "Digital forensics and community supervision: Making a case for field-based digital forensics training." Thesis, Purdue University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1597627.

Full text
Abstract:
In this paper I will review the literature concerning investigator digital forensics models and how they apply to field investigators. A brief history of community supervision and how offenders are supervised will be established. I will also cover the differences between community supervision standards and police standards concerning searches, evidence, standards of proof, and the difference between parole boards and courts. Currently, the burden of digital forensics for community supervision officers is placed on local or state law enforcement offices with personnel trained in forensics, which may not place a high priority on outside cases. Forensic field training for community supervision officers could ease the caseloads of outside forensic specialists and increase fiscal responsibility by increasing efficiency and public safety in the field of community supervision.
APA, Harvard, Vancouver, ISO, and other styles
7

Marziale, Lodovico. "Advanced Techniques for Improving the Efficacy of Digital Forensics Investigations." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/1027.

Full text
Abstract:
Digital forensics is the science concerned with discovering, preserving, and analyzing evidence on digital devices. The intent is to be able to determine what events have taken place, when they occurred, who performed them, and how they were performed. In order for an investigation to be effective, it must exhibit several characteristics. The results produced must be reliable, or else the theory of events based on the results will be flawed. The investigation must be comprehensive, meaning that it must analyze all targets which may contain evidence of forensic interest. Since any investigation must be performed within the constraints of available time, storage, manpower, and computation, investigative techniques must be efficient. Finally, an investigation must provide a coherent view of the events under question using the evidence gathered. Unfortunately, the set of currently available tools and techniques used in digital forensic investigations does a poor job of supporting these characteristics. Many tools contain bugs which generate inaccurate results; there are many types of devices and data for which no analysis techniques exist; most existing tools are woefully inefficient, failing to take advantage of modern hardware; and the task of aggregating data into a coherent picture of events is largely left to the investigator to perform manually. To remedy this situation, we developed a set of techniques to facilitate more effective investigations. To improve reliability, we developed the Forensic Discovery Auditing Module, a mechanism for auditing and enforcing controls on accesses to evidence. To improve comprehensiveness, we developed ramparser, a tool for deep parsing of Linux RAM images, which provides previously inaccessible data on the live state of a machine.
To improve efficiency, we developed a set of performance optimizations, and applied them to the Scalpel file carver, creating order of magnitude improvements to processing speed and storage requirements. Last, to facilitate more coherent investigations, we developed the Forensic Automated Coherence Engine, which generates a high-level view of a system from the data generated by low-level forensics tools. Together, these techniques significantly improve the effectiveness of digital forensic investigations conducted using them.
APA, Harvard, Vancouver, ISO, and other styles
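Scalpel, mentioned in the abstract above, is a header/footer file carver written in C. As a hedged illustration of the carving idea only, and not the optimized implementation the thesis describes, a minimal carver can scan a raw byte buffer for known file-format markers (JPEG start and end markers are used here):

```python
def carve_jpegs(buf, max_len=10 * 1024 * 1024):
    """Header/footer carving in the style of Scalpel: scan a raw byte
    buffer for JPEG start (FF D8 FF) and end (FF D9) markers and return
    the candidate file regions found between them."""
    SOI, EOI = b"\xff\xd8\xff", b"\xff\xd9"
    carved, pos = [], 0
    while True:
        start = buf.find(SOI, pos)
        if start == -1:
            break
        end = buf.find(EOI, start + len(SOI))
        if end == -1:
            break
        end += len(EOI)
        if end - start <= max_len:          # discard implausibly large hits
            carved.append(buf[start:end])
        pos = start + 1                     # keep scanning; hits may overlap
    return carved

# A toy "disk image": junk bytes with one JPEG-like region embedded.
image = b"\x00" * 16 + b"\xff\xd8\xff\xe0JFIFdata" + b"\xff\xd9" + b"\x00" * 8
candidates = carve_jpegs(image)   # carves one candidate region
```

The performance optimizations the thesis applies to Scalpel (multithreading, GPU offload, reduced I/O passes) sit on top of exactly this kind of marker-scanning loop, which is why the scan itself is the hot path worth optimizing.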
8

Samsuddin, Samsuddin Wira Bin. "Digital forensics curriculum for undergraduate and master graduate students." [Cedar City, Utah] : Southern Utah University, 2009. http://unicorn.li.suu.edu/ScholarArchive/ForensicScience/SamsuddinWira.pdf.

Full text
Abstract:
Thesis (M.S.)--Southern Utah University, 2009. Title from PDF title page. "Thesis presented to the faculty of the Graduate School of Southern Utah University in partial fulfillment of the requirements for the degree of Master of Science in Forensic Science, Computer Forensics Emphasis." Manghui Tu, Advisor. Includes bibliographical references (p. 83-88).
APA, Harvard, Vancouver, ISO, and other styles
9

Schatz, Bradley Lawrence. "Digital evidence : representation and assurance." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16507/1/Bradley_Schatz_Thesis.pdf.

Full text
Abstract:
The field of digital forensics is concerned with finding and presenting evidence sourced from digital devices, such as computers and mobile phones. The complexity of such digital evidence is constantly increasing, as is the volume of data which might contain evidence. Current approaches to interpreting and assuring digital evidence rely implicitly on the use of tools and representations made by experts in addressing the concerns of juries and courts. Current forensics tools are best characterised as not easily verifiable, lacking in ease of interoperability, and burdensome on human process. The tool-centric focus of current digital forensics practice impedes access to and transparency of the information represented within digital evidence as much as it assists, by nature of the tight binding between a particular tool and the information that it conveys. We hypothesise that a general and formal representational approach will benefit digital forensics by enabling higher degrees of machine interpretation, facilitating improvements in tool interoperability and validation. Additionally, such an approach will increase human readability. This dissertation summarises research which examines at a fundamental level the nature of digital evidence and digital investigation, in order that improved techniques which address investigation efficiency and assurance of evidence might be identified. The work follows three themes related to this: representation, analysis techniques, and information assurance. The first set of results describes the application of a general purpose representational formalism towards representing diverse information implicit in event based evidence, as well as domain knowledge, and investigator hypotheses. This representational approach is used as the foundation of a novel analysis technique which uses a knowledge based approach to correlate related events into higher level events, which correspond to situations of forensic interest.
The second set of results explores how digital forensic acquisition tools scale and interoperate, while assuring evidence quality. An improved architecture is proposed for storing digital evidence, analysis results and investigation documentation in a manner that supports arbitrary composition into a larger corpus of evidence. The final set of results focus on assuring the reliability of evidence. In particular, these results focus on assuring that timestamps, which are pervasive in digital evidence, can be reliably interpreted to a real world time. Empirical results are presented which demonstrate how simple assumptions cannot be made about computer clock behaviour. A novel analysis technique for inferring the temporal behaviour of a computer clock is proposed and evaluated.
APA, Harvard, Vancouver, ISO, and other styles
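The final theme above, reliably interpreting timestamps against real-world time, can be illustrated with a simple linear clock model. This sketch is hypothetical and is not the thesis's actual inference technique: it assumes we already hold pairs of (reference time, machine clock time) observations, for example recovered from synchronised server logs, and fits the clock's skew and offset by least squares so that machine timestamps can be mapped back to reference time.

```python
import numpy as np

def fit_clock(ref_times, clock_times):
    """Least-squares fit of the model clock = skew * ref + offset.
    Returns the estimated (skew, offset) of the machine clock."""
    A = np.vstack([ref_times, np.ones(len(ref_times))]).T
    skew, offset = np.linalg.lstsq(A, clock_times, rcond=None)[0]
    return skew, offset

def to_real_time(clock_t, skew, offset):
    """Invert the model to map a machine timestamp back to reference time."""
    return (clock_t - offset) / skew

# Synthetic observations: the machine clock runs 1% fast, started 120 s ahead.
ref = np.array([0.0, 1000.0, 2000.0, 3000.0])
clk = 1.01 * ref + 120.0
skew, offset = fit_clock(ref, clk)
real = to_real_time(1.01 * 2500.0 + 120.0, skew, offset)  # recovers ~2500.0
```

As the abstract notes, real clocks do not behave this simply (they are reset, drift non-linearly, and change with time zones), which is precisely why empirical study of clock behaviour is needed before timestamps can be trusted.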
10

Schatz, Bradley Lawrence. "Digital evidence : representation and assurance." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16507/.

Full text
Abstract:
The field of digital forensics is concerned with finding and presenting evidence sourced from digital devices, such as computers and mobile phones. The complexity of such digital evidence is constantly increasing, as is the volume of data which might contain evidence. Current approaches to interpreting and assuring digital evidence rely implicitly on the use of tools and representations made by experts in addressing the concerns of juries and courts. Current forensics tools are best characterised as not easily verifiable, lacking in ease of interoperability, and burdensome on human process. The tool-centric focus of current digital forensics practice impedes access to and transparency of the information represented within digital evidence as much as it assists, by nature of the tight binding between a particular tool and the information that it conveys. We hypothesise that a general and formal representational approach will benefit digital forensics by enabling higher degrees of machine interpretation, facilitating improvements in tool interoperability and validation. Additionally, such an approach will increase human readability. This dissertation summarises research which examines at a fundamental level the nature of digital evidence and digital investigation, in order that improved techniques which address investigation efficiency and assurance of evidence might be identified. The work follows three themes related to this: representation, analysis techniques, and information assurance. The first set of results describes the application of a general purpose representational formalism towards representing diverse information implicit in event based evidence, as well as domain knowledge, and investigator hypotheses. This representational approach is used as the foundation of a novel analysis technique which uses a knowledge based approach to correlate related events into higher level events, which correspond to situations of forensic interest.
The second set of results explores how digital forensic acquisition tools scale and interoperate, while assuring evidence quality. An improved architecture is proposed for storing digital evidence, analysis results and investigation documentation in a manner that supports arbitrary composition into a larger corpus of evidence. The final set of results focus on assuring the reliability of evidence. In particular, these results focus on assuring that timestamps, which are pervasive in digital evidence, can be reliably interpreted to a real world time. Empirical results are presented which demonstrate how simple assumptions cannot be made about computer clock behaviour. A novel analysis technique for inferring the temporal behaviour of a computer clock is proposed and evaluated.
APA, Harvard, Vancouver, ISO, and other styles
11

Montasari, Reza. "The Comprehensive Digital Forensic Investigation Process Model (CDFIPM) for digital forensic practice." Thesis, University of Derby, 2016. http://hdl.handle.net/10545/620799.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Dontula, Varun. "Data Exploration Interface for Digital Forensics." ScholarWorks@UNO, 2011. http://scholarworks.uno.edu/td/1373.

Full text
Abstract:
The fast capacity growth of cheap storage devices presents an ever-growing problem of scale for digital forensic investigations. One aspect of the scale problem in the forensic process is the need for new approaches to visually presenting and analyzing large amounts of data. The current generation of tools universally employs three basic GUI components—trees, tables, and viewers—to present all relevant information. This approach is not scalable, as increasing the size of the input data leads to a proportional increase in the amount of data presented to the analyst. We present an alternative approach, which leverages data visualization techniques to provide a more intuitive interface for exploring the forensic target. We use tree visualization techniques to give the analyst both a high-level view of the file system and an efficient means to drill down into the details. Further, we provide means to search for keywords and to filter the data by time period.
APA, Harvard, Vancouver, ISO, and other styles
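The tree-based drill-down described above rests on aggregating file metadata up a directory hierarchy, so that the root of the visualization summarizes the whole target and each level refines it. As a minimal, hypothetical sketch (the paths and sizes are invented, and this is not the thesis's interface):

```python
def build_tree(files):
    """Fold (path, size) pairs into a nested dict. Directory nodes carry
    aggregate sizes, so a tree visualisation can show a high-level
    overview at the root and let the analyst drill down into children."""
    root = {"name": "/", "size": 0, "children": {}}
    for path, size in files:
        node = root
        node["size"] += size                 # every file counts at the root
        for part in path.strip("/").split("/"):
            node = node["children"].setdefault(
                part, {"name": part, "size": 0, "children": {}})
            node["size"] += size             # ...and at each ancestor
    return root

# Invented forensic target: two user files and one system log.
files = [("/home/u/mail.pst", 300), ("/home/u/pic.jpg", 50),
         ("/var/log/sys.log", 20)]
tree = build_tree(files)
# tree["size"] aggregates everything; each subtree aggregates its files.
```

A treemap or sunburst renderer can then draw each node with area proportional to its aggregate size, which is the kind of abstract whole-dataset view the abstract argues for over flat tabular lists.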
13

Bade, Hans, and Oscar Hedlund. "Anti-Forensik : Anti-forensiska metoder på mobila enheter." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-37701.

Full text
Abstract:
Mobile phones have become essential for the extraction of digital artifacts in forensic investigations. Android's Linux-based operating system brings greater potential for anti-forensic methods, which means that knowledge of anti-forensics is essential to today's IT forensic investigators. In this study, the effect of anti-forensics on Android-based mobile devices is highlighted, and today's anti-forensic attack methods against forensic tools are surveyed. By experiment, it is shown how to prevent a forensic tool from extracting data by using a simple script.
APA, Harvard, Vancouver, ISO, and other styles
14

Barone, Joshua M. "Automated Timeline Anomaly Detection." ScholarWorks@UNO, 2013. http://scholarworks.uno.edu/td/1609.

Full text
Abstract:
Digital forensics is the practice of trained investigators gathering and analyzing evidence from digital devices such as computers and smart phones. On these digital devices, it is possible to change the time on the device for a purpose other than what is intended. Currently there are no documented techniques to determine when this occurs. This research seeks to prove out a technique for determining when the time has been changed on a forensic disk image by analyzing the log files found on the image. Out of this research a tool is created to perform this analysis in an automated fashion. This tool is TADpole, a command-line program that analyzes the log files on a disk image and determines whether a timeline anomaly has occurred.
APA, Harvard, Vancouver, ISO, and other styles
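TADpole's core idea, detecting that a system clock was changed from evidence in log files, can be approximated by scanning a sequential log for timestamps that jump backwards. The sketch below is a deliberate simplification and not TADpole's actual algorithm; the log lines and format are invented:

```python
from datetime import datetime

def find_time_anomalies(log_lines, fmt="%Y-%m-%d %H:%M:%S"):
    """Flag points where a sequentially written log's timestamps move
    backwards, a possible sign that the system clock was changed."""
    anomalies, prev = [], None
    for i, line in enumerate(log_lines):
        ts = datetime.strptime(line[:19], fmt)   # timestamp prefix of the line
        if prev is not None and ts < prev:
            anomalies.append((i, prev, ts))      # (line index, before, after)
        prev = ts
    return anomalies

log = [
    "2023-05-01 10:00:00 service started",
    "2023-05-01 10:05:00 user login",
    "2023-05-01 09:00:00 user login",    # clock rolled back between entries
    "2023-05-01 09:10:00 service stopped",
]
anomalies = find_time_anomalies(log)     # flags the backwards jump
```

A real tool must also rule out benign causes of backwards timestamps, such as daylight-saving transitions, NTP corrections, and logs that are not strictly append-ordered, before declaring an anomaly.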
15

Alrumaithi, A. M. "Prioritisation in digital forensics : a case study of Abu Dhabi Police." Thesis, Liverpool John Moores University, 2018. http://researchonline.ljmu.ac.uk/8936/.

Full text
Abstract:
The main goal of this research is to investigate the prioritisation process in digital forensics departments in law enforcement organizations. This research is motivated by the fact that case prioritisation plays a crucial role in achieving efficient operations in digital forensics departments. Recent years have witnessed the widespread use of digital devices in every aspect of human life, around the globe. One of these aspects is crime. These devices have become an essential part of every investigation in almost all cases handled by police. The reason behind their importance lies in their ability to store huge amounts of data that can be utilized by investigators to solve cases under consideration. Thus, involving digital forensics departments, though often over-burdened and under-resourced, is becoming compulsory for achieving successful investigations. Increasing the effectiveness of these departments requires improving their processes, including case prioritisation. Existing literature focuses on the prioritisation process within the context of crime scene triage. The main research problem in the literature is prioritising the digital devices found at a crime scene in a way that leads to successful digital forensics. On the other hand, the research problem in this thesis focuses on the prioritisation of cases rather than of the digital devices belonging to a specific case. Normally, digital forensics cases are prioritised based on several factors, where the influence of the officers handling the case plays one of the most important roles. Therefore, this research investigates how the perceptions of different individuals in a law enforcement organization may affect case prioritisation for the digital forensics department. To address this prioritisation problem, the research proposes the use of maturity models and machine learning. A questionnaire was developed and distributed among officers in Abu Dhabi Police.
The main goal of this questionnaire was to measure perceptions regarding digital forensics among employees in Abu Dhabi Police. Responses of the subjects were divided into two sets. The first set represents responses of subjects who are experts in digital forensics, while the other set includes the remaining subjects. Responses in the first set were averaged to produce a benchmark of the optimal questionnaire answers. Then, a reliability measure is proposed to summarize each subject's perception. Data obtained from the reliability measurement were used in machine learning models, so that the process is automated. Results of the data analysis confirmed the severity of the problem, for which the proposed prioritisation process can be a very effective solution, as seen in the results provided in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
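The benchmark-and-reliability scheme described above can be sketched numerically: expert answers are averaged into a benchmark, and each subject is scored by closeness to it. The scoring formula below, 1 / (1 + RMSE), is a hypothetical stand-in; the thesis does not specify this exact measure, and the Likert-scale answers are invented.

```python
import math

def benchmark(expert_responses):
    """Average the experts' answers per question to form the
    optimal-answer benchmark described in the abstract."""
    n = len(expert_responses)
    return [sum(col) / n for col in zip(*expert_responses)]

def reliability(subject, bench):
    """Summarise a subject's perception as closeness to the benchmark:
    1 / (1 + RMSE), so 1.0 means perfect agreement with the experts."""
    rmse = math.sqrt(
        sum((s - b) ** 2 for s, b in zip(subject, bench)) / len(bench))
    return 1.0 / (1.0 + rmse)

experts = [[5, 4, 5], [5, 5, 4], [4, 5, 5]]   # invented expert answers
bench = benchmark(experts)
close = reliability([5, 4, 5], bench)          # answers like the experts
far = reliability([1, 2, 1], bench)            # answers unlike the experts
# A subject close to the expert consensus scores higher than a distant one.
```

Scores like these give each subject a single numeric perception summary, which is the kind of feature a downstream machine learning model can consume.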
16

Kruger, Jaco-Louis. "Digital forensic readiness for IOT devices." Diss., University of Pretoria, 2019. http://hdl.handle.net/2263/73385.

Full text
Abstract:
The Internet of Things (IoT) has evolved to be an important part of modern society. IoT devices can be found in several environments such as smart homes, transportation, the health sector and smart cities, and they even facilitate automation in organisations. The increasing dependence on IoT devices increases the possibility of security incidents in the physical or cyber environment. Traditional methods of digital forensic (DF) investigations are not always applicable to IoT devices due to their limited data processing resources. A possible solution for conducting forensic investigations on IoT devices is to utilise a proactive approach known as digital forensic readiness (DFR). This dissertation firstly aims to conduct a thorough review of the available literature in the current body of knowledge to identify a clear process that can be followed to implement DFR tailored for IoT devices. This dissertation then formulates requirements for DFR in IoT based on existing forensic techniques. The requirements for DFR in IoT give rise to the development of a model for DFR in IoT, which is then implemented in a prototype for IoT devices. The prototype is subsequently tested and evaluated on IoT devices that conduct proactive DFR in a simulation of a smart home system. Finally, the dissertation illustrates the feasibility of the DFR processes for IoT and serves as a basis for future research with regard to DFR in IoT. This dissertation will impact future research with regard to developing a standard for DFR in IoT. Dissertation (MSc)--University of Pretoria, 2019. Computer Science. MSc. Unrestricted.
APA, Harvard, Vancouver, ISO, and other styles
17

Marrington, Andrew Daniel. "Computer profiling for forensic purposes." Thesis, Queensland University of Technology, 2009. https://eprints.qut.edu.au/31048/1/Andrew_Marrington_Thesis.pdf.

Full text
Abstract:
Computer forensics is the process of gathering and analysing evidence from computer systems to aid in the investigation of a crime. Typically, such investigations are undertaken by human forensic examiners using purpose-built software to discover evidence from a computer disk. This process is a manual one, and the time it takes for a forensic examiner to conduct such an investigation is proportional to the storage capacity of the computer's disk drives. The heterogeneity and complexity of various data formats stored on modern computer systems compounds the problems posed by the sheer volume of data. The decision to undertake a computer forensic examination of a computer system is a decision to commit significant quantities of a human examiner's time. Where there is no prior knowledge of the information contained on a computer system, this commitment of time and energy occurs with little idea of the potential benefit to the investigation. The key contribution of this research is the design and development of an automated process to describe a computer system and its activity for the purposes of a computer forensic investigation. The term proposed for this process is computer profiling. A model of a computer system and its activity has been developed over the course of this research. Using this model a computer system, which is the subject of investigation, can be automatically described in terms useful to a forensic investigator. The computer profiling process is resilient to attempts to disguise malicious computer activity. This resilience is achieved by detecting inconsistencies in the information used to infer the apparent activity of the computer. The practicality of the computer profiling process has been demonstrated by a proof-of-concept software implementation. The model and the prototype implementation utilising the model were tested with data from real computer systems.
The resilience of the process to attempts to disguise malicious activity has also been demonstrated with practical experiments conducted with the same prototype software implementation.
APA, Harvard, Vancouver, ISO, and other styles
18

Kessler, Gary Craig. "Judges' Awareness, Understanding, and Application of Digital Evidence." NSUWorks, 2010. http://nsuworks.nova.edu/gscis_etd/196.

Full text
Abstract:
As digital evidence grows in both volume and importance in criminal and civil courts, judges need to fairly and justly evaluate the merits of the offered evidence. To do so, judges need a general understanding of the underlying technologies and applications from which digital evidence is derived. Due to the relative newness of the computer forensics field, there have been few studies on the use of digital forensic evidence and none about judges' relationship with digital evidence. This study addressed judges' awareness, knowledge, and perceptions of digital evidence, using grounded theory methods. The interaction of judges with digital evidence has a social aspect that makes a study of this relationship well suited to grounded theory. This study gathered data via a written survey distributed to judges in the American Bar Association and National Judicial College, followed by interviews with judges from Massachusetts and Vermont. The results indicated that judges generally recognize the importance of evidence derived from digital sources, although they are not necessarily aware of all such sources. They believe that digital evidence needs to be authenticated just like any type of evidence and that it is the role of attorneys rather than of judges to mount challenges to that evidence, as appropriate. Judges are appropriately wary of digital evidence, recognizing how easy it is to alter or misinterpret such evidence. Less technically aware judges appear even more wary of digital evidence than their more knowledgeable peers. Judges recognize that they need additional training in computer and Internet technology, as well as in the computer forensics process and digital evidence, citing a lack of availability of such training. This training would enable judges to better understand the arguments presented by lawyers, testimony offered by technical witnesses, and judicial opinions forming the basis of decisional law. A framework for such training is provided in this report.
This study is the first in the U.S. to analyze judges and digital forensics, thus opening up a new avenue of research. It is the second time that grounded theory has been employed in a digital forensics study, demonstrating the applicability of that methodology to this discipline.
APA, Harvard, Vancouver, ISO, and other styles
19

Necaise, Nathan Joseph. "Empirical analysis of disk sector prefixes for digital forensics." Master's thesis, Mississippi State : Mississippi State University, 2007. http://library.msstate.edu/etd/show.asp?etd=etd-03282007-151218.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Hewling, Moniphia Orlease. "Digital forensics : an integrated approach for the investigation of cyber/computer related crimes." Thesis, University of Bedfordshire, 2013. http://hdl.handle.net/10547/326231.

Full text
Abstract:
Digital forensics has become a predominant field in recent times and courts have had to deal with an influx of related cases over the past decade. As computer/cyber-related criminal attacks become more predominant in today’s technologically driven society, the need for, and use of, digital evidence in courts has increased. There is an urgent need to hold perpetrators of such crimes accountable and to prosecute them successfully. The process used to acquire this digital evidence (to be used in cases in courts) is digital forensics. The procedures currently used in the digital forensic process were developed focusing on particular areas of the digital evidence acquisition process. This has resulted in very little regard being given to the core components of the digital forensics field, for example the legal and ethical aspects, along with other integral aspects of investigations as a whole. These core facets are important for a number of reasons, including the fact that other forensic sciences have included them, and that to survive as a true forensics discipline digital forensics must ensure that they are accounted for. This is because digital forensics, like other forensic disciplines, must ensure that the evidence (digital evidence) produced from the process is able to withstand the rigors of a courtroom. Digital forensics is a new and developing field, still in its infancy when compared to traditional forensics fields such as botany or anthropology. Over the years, development in the field has been tool-centred, driven by commercial developers of the tools used in the digital investigative process. This, along with having no set standards to guide digital forensics practitioners operating in the field, has led to issues regarding the reliability, verifiability and consistency of digital evidence when presented in court cases.
Additionally, some developers have neglected the fact that the mere mention of the word forensics suggests courts of law, and thus legal practitioners will be intimately involved. Such omissions have resulted in the digital evidence acquired for use in various investigations facing major challenges when presented in a number of cases. Mitigation of such issues is possible with the development of a standard set of methodologies flexible enough to accommodate the intricacies of all fields to be considered when dealing with digital evidence. This thesis addresses issues regarding digital forensics frameworks, methods, methodologies and standards for acquiring digital evidence, using the grounded theory approach. Data was gathered using literature surveys, questionnaires and interviews conducted electronically. Collecting data by electronic means proved useful where data had to be collected from different jurisdictions worldwide. Initial surveys indicated that there were no existing standards in place and that the terms models/frameworks and methodologies were used interchangeably. A framework and methodology have been developed to address the identified issues and represent the major contribution of this research. The dissertation outlines solutions to the identified issues and presents the 2IR Framework of standards which governs the 2IR Methodology, supported by a mobile application and a curriculum of studies. These designs were developed using an integrated approach incorporating all four core facets of the digital forensics field. This research lays the foundation for a single integrated approach to digital forensics and can be further developed to ensure the robustness of the processes and procedures used by digital forensics practitioners worldwide.
APA, Harvard, Vancouver, ISO, and other styles
21

Hales, Gavin. "Assisting digital forensic analysis via exploratory information visualisation." Thesis, Abertay University, 2016. https://rke.abertay.ac.uk/en/studentTheses/774128b9-957e-4a05-aa74-dbeefebb8113.

Full text
Abstract:
Background: Digital forensics is a rapidly expanding field, due to the continuing advances in computer technology and increases in the data storage capabilities of devices. However, the tools supporting digital forensics investigations have not kept pace with this evolution, often leaving the investigator to analyse large volumes of textual data and rely heavily on their own intuition and experience. Aim: This research proposes that, given the ability of information visualisation to provide an end user with an intuitive way to rapidly analyse large volumes of complex data, such approaches could be applied to digital forensics datasets. Such methods are investigated, supported by a review of literature regarding the use of such techniques in other fields. The hypothesis of this research is that by utilising exploratory information visualisation techniques in the form of a tool to support digital forensic investigations, gains in investigative effectiveness can be realised. Method: To test the hypothesis, this research examines three different case studies which look at different forms of information visualisation and their implementation with a digital forensic dataset. Two of these case studies take the form of prototype tools developed by the researcher, and one case study utilises a tool created by a third-party research group. A pilot study by the researcher is conducted on these cases, with the strengths and weaknesses of each being drawn into the next case study. The culmination of these case studies is a prototype tool, developed to present a timeline visualisation of the user behaviour on a device. This tool was subjected to an experiment involving a class of university digital forensics students who were given a number of questions about a synthetic digital forensic dataset. Approximately half were given the prototype tool, named Insight, to use, and the others were given a common open-source tool.
The assessed metrics included: how long the participants took to complete all tasks, how accurate their answers to the tasks were, and how easy the participants found the tasks to complete. They were also asked for their feedback at multiple points throughout the task. Results: The results showed that there was a statistically significant increase in accuracy for one of the six tasks for the participants using the Insight prototype tool. Participants also found completing two of the six tasks significantly easier when using the prototype tool. There was no statistically significant difference between the completion times of the two participant groups, and no statistically significant differences in the accuracy of participant answers for five of the six tasks. Conclusions: The results from this body of research show that there is potential for gains in investigative effectiveness when information visualisation techniques are applied to a digital forensic dataset. Specifically, in some scenarios, the investigator can draw conclusions which are more accurate than those drawn when using primarily textual tools. There is also evidence to suggest that the investigators reached these conclusions significantly more easily when using a tool with a visual format. None of the scenarios led to the investigators being at a significant disadvantage in terms of accuracy or usability when using the prototype visual tool over the textual tool. It is noted that this research did not show that the use of information visualisation techniques leads to any statistically significant difference in the time taken to complete a digital forensics investigation.
APA, Harvard, Vancouver, ISO, and other styles
22

Brand, Murray. "Analysis avoidance techniques of malicious software." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2010. https://ro.ecu.edu.au/theses/138.

Full text
Abstract:
Anti-Virus (AV) software generally employs signature matching and heuristics to detect the presence of malicious software (malware). The generation of signatures and determination of heuristics is dependent upon an AV analyst having successfully determined the nature of the malware, not only for recognition purposes, but also for the determination of infected files and startup mechanisms that need to be removed as part of the disinfection process. If a specimen of malware has not previously been extensively analyzed, it is unlikely to be detected by AV software. In addition, malware is becoming increasingly profit driven and more likely to incorporate stealth and deception techniques to avoid detection and analysis, so as to remain on infected systems for a myriad of nefarious purposes. Malware extends beyond the commonly thought-of virus or worm to customized malware that has been developed for specific and targeted miscreant purposes. Such customized malware is highly unlikely to be detected by AV software because it will not have been previously analyzed and a signature will not exist. Analysis in such a case will have to be conducted by a digital forensics analyst to determine the functionality of the malware. Malware can employ a plethora of techniques to hinder the analysis process conducted by AV and digital forensics analysts. The purpose of this research has been to answer three research questions directly related to the employment of these techniques: 1. What techniques can malware use to avoid being analyzed? 2. How can the use of these techniques be detected? 3. How can the use of these techniques be mitigated?
APA, Harvard, Vancouver, ISO, and other styles
23

SOLINAS, FABRIZIO. "Technical and legal perspectives on forensics scenario." Doctoral thesis, Università degli Studi di Cagliari, 2014. http://hdl.handle.net/11584/266504.

Full text
Abstract:
The dissertation concerns digital forensics. The expression digital forensics (sometimes called digital forensic science) denotes the science that studies the identification, storage, protection, retrieval, documentation, use, and every other form of computer data processing so that it can be evaluated in a legal trial. Digital forensics is a branch of forensic science: it represents the extension of theories, principles and procedures that are typical and important elements of forensic science, computer science and new technologies. From this conceptual viewpoint, the key consideration is that forensic science studies the legal value of specific events in order to identify possible sources of evidence. The branches of forensic science are: physiological sciences, social sciences, forensic criminalistics and digital forensics. Moreover, digital forensics includes several categories relating to the investigation of various types of devices, media or artefacts. These categories are: - computer forensics: the aim is to explain the current state of a digital artefact, such as a computer system, storage medium or electronic document; - mobile device forensics: the aim is to recover digital evidence or data from a mobile device, such as images, call logs, SMS logs and so on; - network forensics: the aim is the monitoring and analysis of network traffic (local, WAN/Internet, UMTS, etc.) to detect intrusions and, more generally, to find network evidence; - forensic data analysis: the aim is to examine structured data to discover evidence, usually related to financial crime; - database forensics: the aim concerns databases and their metadata. The origin and historical development of the discipline of study and research of digital forensics are closely related to progress in information and communication technology in the modern era.
In parallel with the changes in society due to new technologies and, in particular, the advent of the computer and electronic networks, there has been a change in the mode of collection, management and analysis of evidence. Indeed, in addition to the more traditional, natural and physical elements, procedures have come to include further evidence which, although equally capable of identifying an occurrence, is inextricably tied to a computer, a computer network or other electronic means. The birth of computer forensics can be traced back to 1984, when the FBI and other American investigative agencies began to use software for the extraction and analysis of data on personal computers. At the beginning of the 1980s, CART (Computer Analysis and Response Team) was created within the FBI, with the express purpose of seeking so-called digital evidence. This term denotes all information stored or transmitted in digital form that may have probative value. While the term evidence, more precisely, indicates the judicial nature of the digital data, the term forensic emphasizes the procedural nature of the matter, literally, "to be presented to the Court". Digital forensics has a huge variety of applications, the most common of which are related to crime or cybercrime. Cybercrime is a growing problem for governments, businesses and private individuals. - Governments: security of the country (terrorism, espionage, etc.) or social problems (child pornography, child trafficking and so on). - Businesses: purely economic problems, for example industrial espionage. - Private individuals: personal safety and possessions, for example phishing and identity theft. Many techniques used in digital forensics are not formally defined, and the relation between the technical procedure and the law is frequently not taken into consideration.
From this conceptual perspective, the research work intends to define and optimize the procedures and methodologies of digital forensics in relation to Italian regulation, testing and analysing common software and defining best practice where none is defined. The research questions are: 1. The problem of cybercrime is becoming increasingly significant for governments, businesses and citizens. - In relation to governments, cybercrime involves problems concerning national security, such as terrorism and espionage, and social questions, such as trafficking in children and child pornography. - In relation to businesses, cybercrime entails problems concerning mainly economic issues, such as industrial espionage. - In relation to citizens, cybercrime involves problems concerning personal security, such as identity theft and fraud. 2. Many techniques used within digital forensics are not formally defined. 3. The relations between procedures and legislation are not always applied and taken into consideration.
APA, Harvard, Vancouver, ISO, and other styles
24

Homem, Irvin. "Towards Automation in Digital Investigations : Seeking Efficiency in Digital Forensics in Mobile and Cloud Environments." Licentiate thesis, Stockholms universitet, Institutionen för data- och systemvetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-130742.

Full text
Abstract:
Cybercrime and related malicious activity in our increasingly digital world has become more prevalent and sophisticated, evading traditional security mechanisms. Digital forensics has been proposed to help investigate, understand and eventually mitigate such attacks. The practice of digital forensics, however, is still fraught with various challenges. Some of the most prominent of these challenges include the increasing amounts of data and the diversity of digital evidence sources appearing in digital investigations. Mobile devices and cloud infrastructures are an interesting specimen, as they inherently exhibit these challenging circumstances and are becoming more prevalent in digital investigations today. Additionally, they embody further characteristics such as large volumes of data from multiple sources, dynamic sharing of resources, limited individual device capabilities and the presence of sensitive data. This combined set of circumstances makes digital investigations in mobile and cloud environments particularly challenging. This is not aided by the fact that digital forensics today still involves manual, time-consuming tasks within the processes of identifying evidence, performing evidence acquisition and correlating multiple diverse sources of evidence in the analysis phase. Furthermore, the industry-standard tools developed are largely evidence-oriented, have limited support for evidence integration and only automate certain precursory tasks, such as indexing and text searching. In this study, efficiency, in the form of reducing the time and human labour expended, is sought in digital investigations in highly networked environments through the automation of certain activities in the digital forensic process. To this end, requirements are outlined and an architecture is designed for an automated system that performs digital forensics in highly networked mobile and cloud environments.
Part of the remote evidence acquisition activity of this architecture is built and tested on several mobile devices in terms of speed and reliability. A method for integrating multiple diverse evidence sources in an automated manner, supporting correlation and automated reasoning is developed and tested. Finally the proposed architecture is reviewed and enhancements proposed in order to further automate the architecture by introducing decentralization particularly within the storage and processing functionality. This decentralization also improves machine to machine communication supporting several digital investigation processes enabled by the architecture through harnessing the properties of various peer-to-peer overlays. Remote evidence acquisition helps to improve the efficiency (time and effort involved) in digital investigations by removing the need for proximity to the evidence. Experiments show that a single TCP connection client-server paradigm does not offer the required scalability and reliability for remote evidence acquisition and that a multi-TCP connection paradigm is required. The automated integration, correlation and reasoning on multiple diverse evidence sources demonstrated in the experiments improves speed and reduces the human effort needed in the analysis phase by removing the need for time-consuming manual correlation. Finally, informed by published scientific literature, the proposed enhancements for further decentralizing the Live Evidence Information Aggregator (LEIA) architecture offer a platform for increased machine-to-machine communication thereby enabling automation and reducing the need for manual human intervention.
APA, Harvard, Vancouver, ISO, and other styles
25

AlMarri, Saeed. "A structured approach to malware detection and analysis in digital forensics investigation." Thesis, University of Bedfordshire, 2017. http://hdl.handle.net/10547/622529.

Full text
Abstract:
Within the World Wide Web (WWW), malware is considered one of the most serious threats to system security, with complex system issues caused by malware and spam. Networks and systems can be accessed and compromised by various types of malware, such as viruses, worms, Trojans, botnets and rootkits, which compromise systems through coordinated attacks. Malware often uses anti-forensic techniques to avoid detection and investigation. Moreover, the results of investigating such attacks are often ineffective and can create barriers to obtaining clear evidence, due to the lack of sufficient tools and the immaturity of forensics methodology. This research addressed various complexities faced by investigators in the detection and analysis of malware. In this thesis, the author identified the need for a new approach towards malware detection that focuses on a robust framework, and proposed a solution based on an extensive literature review and market research analysis. The literature review focussed on the different trials and techniques in malware detection to identify the parameters for developing a solution design, while market research was carried out to understand the precise nature of the current problem. The author termed the new approach and the development of the new framework the triple-tier centralised online real-time environment (tri-CORE) malware analysis (TCMA). The tiers correspond to three distinct phases of detection and analysis, where the entire research pattern is divided into three different domains. The tiers are the malware acquisition function, detection and analysis, and the database operational function. This framework design will contribute to the field of computer forensics by making the investigative process more effective and efficient. By integrating a hybrid method for malware detection, the limitations associated with both static and dynamic methods are eliminated.
This aids forensics experts in carrying out quick investigatory processes to detect the behaviour of the malware and its related elements. The proposed framework will help to ensure system confidentiality, integrity, availability and accountability. The current research also focussed on a prototype (artefact) that was developed in support of a different approach to digital forensics and malware detection methods. As such, a new Toolkit was designed and implemented, based on a simple architectural structure and built from open-source software, which can help investigators develop the skills to respond critically to current cyber incidents and analyses.
APA, Harvard, Vancouver, ISO, and other styles
26

Koen, Renico. "The development of an open-source forensics platform." Diss., Pretoria : [s.n.], 2009. http://upetd.up.ac.za/thesis/available/etd-02172009-014722/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Barbosa, Akio Nogueira. "Método para ranqueamento e triagem de computadores aplicado à perícia de informática." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-14062016-081553/.

Full text
Abstract:
One of the most common tasks for a forensic expert working in information technology is to search for traces of interest in the contents of data storage devices (DADs). These traces most often consist of keywords, and during the time required to duplicate a DAD the expert is practically unable to interact with the data it contains. This thesis therefore verifies the hypothesis that it is possible, at the collection stage, to scan raw data for keywords simultaneously with the duplication of the DAD, without significantly impacting the duplication time. The main objective is to propose a method that identifies, at the end of the collection stage, the DADs with the greatest chance of containing traces of interest for a particular investigation, based on the number of keyword occurrences found by a scanning mechanism operating at the raw-data level. From these results a triage of the DADs is performed, and from the triage a ranking process indicates which DADs should be examined first at the analysis stage. The experimental results showed that it is possible and feasible to apply the method without increasing the duplication time and with a good level of precision. In many cases, application of the method contributes to reducing the number of DADs that must be analyzed, helping to reduce the human effort required.
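The single-pass "duplicate while scanning" idea described above can be sketched in a few lines. This is an illustrative reconstruction, not the author's implementation: the chunk size, the SHA-256 integrity hash and the keyword list are assumptions, and an overlapping tail from the previous chunk is kept so that keywords spanning chunk boundaries are still counted.

```python
import hashlib

def duplicate_and_scan(src, dst, keywords, chunk_size=1 << 20):
    """Copy a raw image from src to dst in one pass while counting
    keyword occurrences in the raw byte stream and hashing the copy."""
    counts = {kw: 0 for kw in keywords}
    overlap = max(len(kw) for kw in keywords) - 1
    sha = hashlib.sha256()
    tail = b""
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)       # duplication
        sha.update(chunk)      # integrity hash of the copy
        window = tail + chunk  # include bytes carried over from the previous chunk
        for kw in keywords:
            # matches wholly inside the tail were already counted last pass
            counts[kw] += window.count(kw) - tail.count(kw)
        tail = window[-overlap:] if overlap else b""
    return counts, sha.hexdigest()
```

Ranking the DADs then amounts to sorting them by total keyword hits, e.g. `sorted(devices, key=lambda d: sum(d.counts.values()), reverse=True)`, with the highest-scoring devices examined first in the analysis stage.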
APA, Harvard, Vancouver, ISO, and other styles
28

Sylve, Joseph T. "Android Memory Capture and Applications for Security and Privacy." ScholarWorks@UNO, 2011. http://scholarworks.uno.edu/td/1400.

Full text
Abstract:
The Android operating system is quickly becoming the most popular platform for mobile devices. As Android’s use increases, so does the need for both forensic and privacy tools designed for the platform. This thesis presents the first methodology and toolset for acquiring full physical memory images from Android devices, a proposed methodology for forensically securing both volatile and non-volatile storage, and details of a vulnerability discovered by the author that allows the bypass of the Android security model and enables applications to acquire arbitrary permissions.
APA, Harvard, Vancouver, ISO, and other styles
29

Sanyamahwe, Tendai. "Digital forensic model for computer networks." Thesis, University of Fort Hare, 2011. http://hdl.handle.net/10353/d1000968.

Full text
Abstract:
The Internet has become important since information is now stored in digital form and is transported both within and between organisations in large amounts through computer networks. Nevertheless, there are those individuals or groups of people who utilise the Internet to harm other businesses because they can remain relatively anonymous. To prosecute such criminals, forensic practitioners have to follow a well-defined procedure so that the responsible cyber-criminals can be convicted in a court of law. Log files provide significant digital evidence in computer networks when tracing cyber-criminals. Network log mining is an evolution of typical digital forensics utilising evidence from network devices such as firewalls, switches and routers. Network log mining is a process supported by presiding South African laws such as the Computer Evidence Act, 57 of 1983; the Electronic Communications and Transactions (ECT) Act, 25 of 2002; and the Electronic Communications Act, 36 of 2005. International laws and regulations supporting network log mining include the Sarbanes-Oxley Act, the Foreign Corrupt Practices Act (FCPA) and the Bribery Act. A digital forensic model for computer networks focusing on network log mining has been developed based on the literature reviewed and critical thought. The development of the model followed the Design Science methodology. However, this research project argues that there are some important aspects which are not fully addressed by South African presiding legislation supporting digital forensic investigations. With that in mind, this research project proposes some Forensic Investigation Precautions. These precautions were developed as part of the proposed model. The Diffusion of Innovations (DOI) Theory is the framework underpinning the development of the model and how it can be assimilated into the community.
The model was sent to IT experts for validation; this provided the qualitative element and the primary data of this research project. From these experts, this study found that the proposed model is unique and comprehensive and adds new knowledge to the field of Information Technology. A paper was also produced from this research project.
APA, Harvard, Vancouver, ISO, and other styles
30

Hjerpe, David, and Henrik Bengtsson. "Digital forensics - Performing virtual primary memory extraction in cloud environments using VMI." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16735.

Full text
Abstract:
Infrastructure as a Service and memory forensics are two subjects which have recently gained increasing amounts of attention. Combining these topics poses new challenges when performing forensic investigations. Forensics targeting virtual machines in a cloud environment is problematic since the devices are virtual, and memory forensics is a newer branch of forensics which is hard to perform and not well documented. It is, however, an area of utmost importance, since virtual machines may be targets of, or participate in, suspicious activity to the same extent as physical machines. Should such activity require an investigation to be conducted, some data which could be used as evidence may only be found in the primary memory. This thesis aims to further examine memory forensics in cloud environments, expand the academic field of these subjects and help cloud hosting organisations. The objective of this thesis was to study whether Virtual Machine Introspection is a valid technique for acquiring forensic evidence from the virtual primary memory of a virtual machine. Virtual Machine Introspection is a method of monitoring and analysing a guest via the hypervisor. In order to verify whether Virtual Machine Introspection is a valid forensic technique, the first task was to attempt to extract data from primary memory which had been acquired using Virtual Machine Introspection. Once extracted, the integrity of the data had to be authenticated. This was done by comparing a hash sum of a file located on a guest with a hash sum of the extracted data. The experiment showed that the two hashes were an exact match. Next, the solidity of the extracted data was tested by changing the memory of a guest while acquiring the memory via Virtual Machine Introspection. This showed that the solidity is heavily compromised because the memory acquisition process used was too slow. The final task was to compare Virtual Machine Introspection to acquiring the physical memory of the host.
By setting up two virtual machines and examining the primary memory of the host, data from both machines was found, whereas Virtual Machine Introspection targets only one machine, giving it an advantage regarding privacy.
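The integrity check used in the experiment above, comparing a hash sum of a file on the guest with a hash sum of the bytes extracted via Virtual Machine Introspection, can be illustrated with a minimal sketch. The function names and the idea of carving the reference bytes out of a raw capture held in memory are illustrative assumptions; the thesis worked with real VMI captures rather than an in-memory byte string.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hash sum used to authenticate the extracted data."""
    return hashlib.sha256(data).hexdigest()

def verify_extraction(memory_capture: bytes, reference_file: bytes) -> bool:
    """Locate the reference file's bytes in the captured memory and
    compare hash sums; a match authenticates the extraction."""
    idx = memory_capture.find(reference_file)
    if idx < 0:
        return False  # file contents not present in the capture
    carved = memory_capture[idx:idx + len(reference_file)]
    return sha256_hex(carved) == sha256_hex(reference_file)
```

A mismatch (or an absent file) indicates that the capture was altered mid-acquisition, which is exactly the solidity problem the thesis observed when guest memory changed during a slow acquisition.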
APA, Harvard, Vancouver, ISO, and other styles
31

Oskarsson, Tim. "Digital incursion: Breaching the android lock screen and liberating data." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-44939.

Full text
Abstract:
Android is the most used operating system in the world; because of this, the probability of an Android device being acquired in an investigation is high. To begin to extract data from an Android device you first need to gain access to it, and mechanisms like full system encryption can make this very difficult. In this paper, the advantages and disadvantages of different methods of gaining access to, and extracting data from, an Android device with an unlocked bootloader are discussed. Many users unlock the bootloader of their Android device to gain a much greater level of control over it, while Android forensics on a device without an unlocked bootloader is very limited. It is therefore interesting to study how data can be extracted from an Android device that does not have this limitation. A literature study is done on previous related research to gather methods for gaining access and extracting data. The methods collected are then tested by performing experiments on a OnePlus 3 running Android 9 and a OnePlus 8 running Android 11. This research found that it is possible to perform a brute-force attack within a reasonable time against a PIN of length 4-5 or a pattern of length 4-6 on the lock screen of an Android device, and that the attack can be optimised into a dictionary attack by using public lists of the most-used PIN codes. A list of all possible pattern combinations, sorted and optimised for a dictionary attack, is generated based on statistics of pattern starting location and length. A proof of concept is made by creating a copy of a fingerprint with common, cheap materials to gain access through the fingerprint sensor. A device image was extracted using a root shell through Android Debug Bridge and common command-line tools. Memory forensics was performed using Frida and was able to extract usernames, passwords, and emails from Google Chrome and Gmail.
The custom recovery image TWRP was used to boot the device, gain root access, and was able to extract a full device image with common command-line tools. The results of the TWRP backup feature is also analysed. The results of the data extraction is then analysed manually and with Autopsy.
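The pattern-dictionary idea in the abstract above can be sketched in a few lines: Android patterns join dots on a 3x3 grid, and a move that passes over an unvisited dot is illegal. The following enumerator is a simplification for illustration; the thesis additionally orders candidates by starting-point and length statistics, which this sketch does not.

```python
# Dot crossed when moving directly between two non-adjacent dots (dots 0-8).
# Android forbids jumping over a dot that has not been visited yet.
CROSS = {(0, 2): 1, (0, 6): 3, (0, 8): 4, (2, 6): 4,
         (2, 8): 5, (6, 8): 7, (1, 7): 4, (3, 5): 4}
CROSS.update({(b, a): m for (a, b), m in CROSS.items()})

def valid_patterns(min_len=4, max_len=6):
    """Depth-first enumeration of all valid unlock patterns."""
    results = []

    def extend(path):
        if len(path) >= min_len:
            results.append(tuple(path))
        if len(path) == max_len:
            return
        for nxt in range(9):
            if nxt in path:
                continue                      # each dot is used at most once
            mid = CROSS.get((path[-1], nxt))
            if mid is not None and mid not in path:
                continue                      # would skip an unvisited dot
            path.append(nxt)
            extend(path)
            path.pop()

    for start in range(9):
        extend([start])
    return results

candidates = sorted(valid_patterns(), key=len)   # shortest patterns first
print(len(candidates))   # 34792 patterns of length 4-6
```

Sorting the 34,792 candidates shortest-first is a crude dictionary ordering; a real attack would weight it with the usage statistics the thesis describes.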
APA, Harvard, Vancouver, ISO, and other styles
32

Börjesson, Holme, and Filiph Lindskog. "Går det att köpa personuppgifter på bilskroten? : Ett arbete om digital forensik på begagnade bildelar." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-42369.

Full text
Abstract:
In modern vehicles, data from the user is often stored when a mobile phone or other device is paired over a Bluetooth or USB connection. Where this data contains personal information, it may be of interest in an investigation and worth protecting from a privacy perspective. What happens to this data when the car is scrapped? A scrapped car is dismantled, and the parts that can be resold, anything from shock absorbers, wheels and steering wheels to electronic components and infotainment units, are sold by the dismantling company. In this report, personal data was extracted from three such used infotainment units purchased from car dismantlers. The most successful method was to remove the relevant storage chip from the infotainment unit's circuit board and extract its data through a direct connection.
In all cases, the information was structured in a familiar file system that could be mounted, and personal data was extracted from all three units examined. The result shows that there are deficiencies in the handling of personal data when a car is scrapped.
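A first step after a chip-off dump like the one above is recognising the file system before mounting it. The probe below is a hypothetical, simplified magic-byte check for illustration; in practice, tools such as `file` or `blkid` do this identification.

```python
def detect_filesystem(image: bytes) -> str:
    """Very rough file-system guess from magic bytes in a raw dump."""
    # ext2/3/4: superblock starts at byte 1024, magic 0xEF53 at offset 56
    if len(image) >= 1082 and image[1080:1082] == b"\x53\xef":
        return "ext2/3/4"
    # NTFS: OEM ID "NTFS" at byte 3 of the boot sector
    if image[3:7] == b"NTFS":
        return "NTFS"
    # FAT: boot-sector signature 0x55AA plus a jump instruction at byte 0
    if len(image) >= 512 and image[510:512] == b"\x55\xaa" and image[0] in (0xEB, 0xE9):
        return "FAT (probable)"
    return "unknown"

# Synthetic example: an empty 4 KiB "dump" with an ext magic planted in it
dump = bytearray(4096)
dump[1080:1082] = b"\x53\xef"
print(detect_filesystem(bytes(dump)))   # ext2/3/4
```

Once the file system is identified, the image can be mounted read-only so the evidence is not altered.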
APA, Harvard, Vancouver, ISO, and other styles
33

Andersson, Victor. "Standards and methodologies for evaluating digital forensics tools : Developing and testing a new methodology." Thesis, Högskolan i Halmstad, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-37485.

Full text
Abstract:
Standards play a big role in many professions, and most aspects of law enforcement and forensic investigation are no different. Despite that, there are none for evaluating and choosing forensic tools. The lack of an international standard for evaluating forensic tools has a clear negative impact on the digital forensics community, as it lowers the value of tool tests and evaluations and hinders both the reproducibility and verification of their results. Most tool evaluations are performed with custom forensic images and measure metrics that are not scientifically motivated but made up based on the evaluator's personal preferences. By examining current standards and related work in the field, a new methodology is proposed that builds on scientific principles and the strengths of the existing literature. The methodology is then tested in a practical experiment. The result of the paper is a solid foundation on which a new standard can be built.
APA, Harvard, Vancouver, ISO, and other styles
34

Jacobsson, Emma, and Wistad Ebba Andersson. "Digital bevisning : En definition." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-44963.

Full text
Abstract:
The digitalization of society has contributed to a more convenient lifestyle for the Swedish population. Our everyday life includes digital technology that saves information about us and our actions on digital devices, and this collected information can be admissible as evidence in a legal process. Despite the increase in crimes involving IT, Sweden seems to lack an official definition of the concept of digital evidence. The purpose of this report is to propose two definitions: one for the general public and one suited to specific technical usage by people in the field. The report focuses on three areas that together give a holistic view of what digital evidence is: the value of digital evidence in a court decision, how anti-computer forensics can affect its extraction, and the legitimacy of digital evidence. To explore these areas, employees within the Swedish Police Authority answered questions based on their professional roles; the participants work as investigators, preliminary investigation leaders, digital forensic scientists, and prosecutors. Together with the literature, their answers have contributed to the definition of the concept of digital evidence and to a basic understanding of the subject.
APA, Harvard, Vancouver, ISO, and other styles
35

Årnes, Andre. "Risk, Privacy, and Security in Computer Networks." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1725.

Full text
Abstract:
With an increasingly digitally connected society comes complexity, uncertainty, and risk. Network monitoring, incident management, and digital forensics are of increasing importance with the escalation of cybercrime and other network-supported serious crimes, and new laws and regulations governing electronic communications, cybercrime, and data retention are continuously being proposed, requiring new methods and tools.

This thesis introduces a novel approach to real-time network risk assessment based on hidden Markov models that represent the likelihood of transitions between security states. The method measures risk as a composition over individual hosts, providing a precise, fine-grained model for assessing risk and supporting decisions in incident response. The approach has been integrated with an existing framework for distributed, large-scale intrusion detection, and the results of the risk assessment are applied to prioritize the alerts produced by the intrusion detection sensors. Using this implementation, the approach is evaluated on both simulated and real-world data.

Network monitoring can encompass large networks and process enormous amounts of data, and its ubiquity can represent a great threat to the privacy and confidentiality of network users. Existing measures for anonymization and pseudonymization are analyzed with respect to the trade-off between performing meaningful data analysis and protecting the identities of the users. The results demonstrate that most existing solutions for pseudonymization are vulnerable to a range of attacks. As a solution, remedies for strengthening the schemes are proposed, and a method for unlinkable transaction pseudonyms is considered.

Finally, a novel method for performing digital forensic reconstructions in a virtual security testbed is proposed. Based on a hypothesis of the security incident in question, the testbed is configured with the appropriate operating systems, services, and exploits. Attacks are formulated as event chains and replayed on the testbed, and the effects of each event are analyzed in order to support or refute the hypothesis. The purpose of the approach is to facilitate reconstruction experiments in digital forensics. Two examples are given: one overview example based on the Trojan defense and one detailed example of a multi-step attack. Although a reconstruction can neither prove a hypothesis with absolute certainty nor exclude the correctness of other hypotheses, a standardized environment combined with event reconstruction and testing can lend credibility to an investigation and can be a valuable asset in court.
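The risk model described above can be illustrated in a few lines: each host is a hidden Markov model over security states, IDS alerts are observations, and risk is the expected cost of the current state distribution. The states, probabilities, and costs below are invented toy numbers, not values from the thesis.

```python
import numpy as np

# Security states for one host: Good, Attacked, Compromised (illustrative)
STATES = ["G", "A", "C"]
COST = np.array([0.0, 5.0, 20.0])           # cost of being in each state

P = np.array([[0.95, 0.04, 0.01],            # state transition probabilities
              [0.30, 0.60, 0.10],
              [0.05, 0.15, 0.80]])

# P(observation | state); observations: 0 = no alert, 1 = low, 2 = high
Q = np.array([[0.80, 0.15, 0.05],
              [0.40, 0.40, 0.20],
              [0.10, 0.30, 0.60]])

def update(belief, obs):
    """One forward step of the HMM: predict, then weight by the observation."""
    predicted = belief @ P
    weighted = predicted * Q[:, obs]
    return weighted / weighted.sum()

def risk(belief):
    """Risk = expected cost of the current state distribution."""
    return float(belief @ COST)

belief = np.array([1.0, 0.0, 0.0])           # start in the Good state
for obs in [0, 2, 2]:                        # no alert, then two high alerts
    belief = update(belief, obs)
print(round(risk(belief), 2))                # risk has risen sharply
```

Summing this expected cost over all hosts gives a network-wide risk figure that can be used, as in the thesis, to prioritize alerts.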
APA, Harvard, Vancouver, ISO, and other styles
36

Shakir, Amer, Muhammad Hammad, and Muhammad Kamran. "Comparative Analysis & Study of Android/iOS MobileForensics Tools." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-44797.

Full text
Abstract:
This report draws a comparison between two commercial mobile forensics and recovery tools, Magnet AXIOM and MOBILedit. A thorough look at previous studies helped establish which aspects of data extraction must be compared and which areas are most important to focus on. The work examines how the data extracted by one tool compares with the other and provides comprehensive extractions under different scenarios, circumstances, and aspects. The performance of both tools is compared against various benchmarks and criteria. The study establishes that MOBILedit outperformed Magnet AXIOM on more data extraction and recovery aspects and is the comparatively better tool.
APA, Harvard, Vancouver, ISO, and other styles
37

Olsson, Jens. "Digital Evidence with Emphasis on Time." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3774.

Full text
Abstract:
Computer forensics is mainly about investigating crimes in which computers have been involved, and many tools are available to aid the investigator in this task. We have created a prototype of a completely new type of tool in which all evidence is indexed by its time attribute and plotted on a timeline. We believed this would make it easier and more intuitive to find coherent evidence and faster for the investigator to work with. We performed a user test in which a group of people evaluated our prototype against a modern commercial computer forensic tool, and the results are much better than we expected. They show that users completed the task much faster and with more correct results, and that the participants experienced the prototype as more intuitive to use and found it easier to identify evidence that was coherent in time.
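The prototype's central idea, indexing every artefact by its time attribute and merging sources into one timeline, reduces to a sort over heterogeneous records. A minimal sketch (artefact names and timestamps are invented):

```python
from datetime import datetime

# Artefacts from different sources: (timestamp, source, description)
artefacts = [
    (datetime(2008, 3, 1, 14, 2, 11), "filesystem", "report.doc modified"),
    (datetime(2008, 3, 1, 13, 58, 40), "browser",    "visited webmail login page"),
    (datetime(2008, 3, 1, 14, 5, 3),  "event log",  "USB device attached"),
]

def timeline(items):
    """Merge artefacts from all sources into one time-ordered view."""
    return sorted(items, key=lambda item: item[0])

for ts, source, what in timeline(artefacts):
    print(f"{ts:%Y-%m-%d %H:%M:%S}  [{source:10}]  {what}")
```

The interleaving across sources is what lets an investigator spot events that are coherent in time, which was the prototype's main advantage in the user test.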
APA, Harvard, Vancouver, ISO, and other styles
38

Alfaize, Najah Abdulaziz. "The impact of culture and religion on digital forensics : the study of the role of digital evidence in the legal process in Saudi Arabia." Thesis, De Montfort University, 2015. http://hdl.handle.net/2086/13124.

Full text
Abstract:
This work contributes to the multi-disciplinary community of researchers in computer science, information technology and computer forensics working together with legal enforcement professionals involved in digital forensic investigations. It is focused on the relationship between scientific approaches underpinning digital forensics and the Islamic law underpinning legal enforcement. Saudi Arabia (KSA) is studied as an example of an Islamic country that has adopted international guidelines, such as ACPO, in its legal enforcement procedures. The relationship between Islamic law and scientific ACPO guidelines is examined in detail through the practices of digital forensic practitioners in the process of discovery, preparation and presentation of digital evidence for use in Islamic courts in KSA. In this context, the influence of religion and culture on the role and status of digital evidence throughout the entire legal process has been the main focus of this research. Similar studies in the literature confirm that culture and religion are significant factors in the relationship between law, legal enforcement procedure and digital evidence. Islamic societies, however, have not been extensively studied from this perspective, and this study aims to address issues that arise at both professional and personal levels. Therefore the research questions that this study aims to answer are: in what way and to what extent Islamic religion and Saudi culture affect the status of digital evidence in the KSA legal process and what principles the practitioners have to observe in the way they treat digital evidence in judicial proceedings. The methodology is based on a mixed-method approach where the pilot questionnaire identified legal professionals who come into contact with digital evidence, their educational and professional profiles. 
Qualitative methods included case studies, interviews and documentary evidence to discover how their beliefs and attitudes influence their trust in digital evidence. The findings show that a KSA judge would trust witnesses more than digital evidence, due to the influence of tradition, which regards justice and law as arising from the relationship between Man and God. Digital evidence, as it arises from the scientific method, is acceptable, but there is an underlying lack of trust in its authenticity, reliability and credibility. In the eyes of the legal enforcement professionals working in all areas of the KSA legal process, acceptance of digital evidence in the KSA judicial system can best be improved if the knowledge, education and skills of digital forensics specialists are also improved, so that they can be trusted as expert witnesses. This further shows the significance of KSA laws, regulations and the education of digital forensic experts as the primary means for establishing trust in digital evidence. Further research following from this study will focus on comparative studies of other Islamic and non-Islamic legal systems as they adopt and adapt western guidelines such as ACPO to their religion, culture and legal systems.
APA, Harvard, Vancouver, ISO, and other styles
39

Cervantes, Milagros. "Success factors and challenges in digital forensics for law enforcement : A Systematic Literature Review." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-20154.

Full text
Abstract:
Context: The widespread use of communication and digital technology in society has affected the number of devices requiring analysis in criminal investigations. Additionally, the increase in storage volume, the diversity of digital devices, and the use of cloud environments introduce further complexity into the digital forensic domain. Objective: This work aims to supply a taxonomy of the main challenges and success factors in the digital forensic domain in law enforcement. Method: The chosen method is a systematic literature review of studies on success factors and challenges in digital forensics for law enforcement. The candidate studies were 1,428 peer-reviewed scientific articles published between 2015 and 2021, retrieved from five digital databases following a systematic process. From those candidates, twenty were selected as primary studies due to their relevance to the topic; after backward searching, eight further studies were included. A total of twenty-eight primary studies were analyzed by applying thematic coding. Furthermore, a survey of digital forensic practitioners from the Swedish Police was conducted to triangulate the results of the systematic literature review.
APA, Harvard, Vancouver, ISO, and other styles
40

Karresand, Martin. "Completing the Picture : Fragments and Back Again." Licentiate thesis, Linköping : Department of Computer and Information Science, Linköpings universitet, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

VACIAGO, GIUSEPPE EMILIANO. "Digital evidence: profili tecnico-giuridiche e garanzie dell'imputato." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2011. http://hdl.handle.net/10281/20472.

Full text
Abstract:
The key areas of interest dealt with in this PhD thesis are digital forensics best practices in the context of cybercrime investigations. Over the coming years, a crucial issue in dealing with cybercrime will be the delicate balance that must be found in order to protect the fundamental rights of "digital citizens". The work analyses the context of digital investigation in Europe and in the United States and identifies the technical and legal solutions that enable a more satisfactory balance between the legitimate demands of cybercrime prevention and respect for the fundamental rights of the individual.
APA, Harvard, Vancouver, ISO, and other styles
42

Shanmugam, Karthikeyan. "Validating digital forensic evidence." Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/7651.

Full text
Abstract:
This dissertation focuses on the forensic validation of computer evidence. It is a burgeoning field, by necessity, and there have been significant advances in the detection and gathering of evidence related to electronic crimes. What makes the computer forensics field similar to other forensic fields is that considerable emphasis is placed on the validity of the digital evidence. It is not just the methods used to collect the evidence that are a concern; another problem is that perpetrators of digital crimes may engage in what is called anti-forensics, in which digital forensic techniques are deliberately thwarted and evidence corrupted by those under investigation. In traditional forensics the link between evidence and a perpetrator's actions is often straightforward: a fingerprint on an object indicates that someone has touched the object. Anti-forensic activity would be the equivalent of having the ability to change the nature of the fingerprint before, or during, the investigation, making the forensic evidence collected invalid or less reliable. This thesis reviews the existing security models and digital forensics, paying particular attention to anti-forensic activity that affects the validity of data collected as digital evidence. It builds on the current models in this field and suggests a tentative first-step model to manage and detect the possibility of anti-forensic activity. The model is concerned with stopping anti-forensic activity, and thus is not a forensic model in the normal sense; it is what will be called a "meta-forensic" model, an approach intended to stop attempts to invalidate digital forensic evidence. This thesis proposes a formal procedure and guides forensic examiners to look at evidence in a meta-forensic way.
APA, Harvard, Vancouver, ISO, and other styles
43

Ware, Scott. "HFS Plus File System Exposition and Forensics." Master's thesis, University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5559.

Full text
Abstract:
The Macintosh Hierarchical File System Plus, HFS+, commonly referred to as Mac OS Extended, was introduced in 1998 with Mac OS 8.1. HFS+ is an update to HFS, the Mac OS Standard format, that offers more efficient use of disk space, implements international-friendly file names, adds future support for named forks, and facilitates booting on non-Mac operating systems through different partition schemes. The HFS+ file system is efficient yet complex. It makes use of B-trees to implement the key data structures for maintaining metadata about folders, files, and data. What happens within HFS+ at volume format, or when folders, files, and data are created, moved, or deleted, is largely a mystery to those who are not programmers: the vast majority of information on the subject is relegated to books, papers, and online content that direct the reader to C code, libraries, and include files. For readers who cannot interpret the complex C or Perl implementations, this is not an adequate basis for understanding the internals and how they work. The basic concepts presented in this research facilitate a better understanding of the HFS+ file system and journal, as the changes resulting from adding and deleting files or folders are applied in a controlled, easy-to-follow process. The primary tool used to examine the file system changes is a proprietary command-line interface (CLI) tool called fileXray, a custom implementation of the HFS+ file system that can examine file-system, metadata, and data-level information not available in other tools. Apple's command-line tool Terminal, the WinHex graphical editor, The Sleuth Kit command-line tools, and DiffFork 1.1.9 help to document and illustrate the file system changes.
The processes used to document the pristine and changed versions of the file system are kept very similar in each experiment, so that the output files are identical except for the actual change; keeping the processes the same enables baseline comparisons using a diff tool like DiffFork. Side-by-side and line-by-line comparisons of the allocation, extents overflow, catalog, and attributes files help identify where the changes occurred. The target device in the experiments is a two-gigabyte USB thumb drive formatted with a GUID Partition Table (GPT). Where practical, HFS+ special files and data structures are manually parsed, documented, and illustrated.
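To make the structures concrete: the HFS+ volume header is 512 bytes of big-endian data starting at byte 1024 of the volume, and its leading fields can be parsed directly. A minimal sketch over a synthetic buffer (field offsets follow the published HFS+ volume format; the planted values are invented):

```python
import struct

def parse_hfsplus_header(dump: bytes) -> dict:
    """Parse a few leading fields of an HFS+ volume header.

    The header is 512 bytes, big-endian, starting at byte 1024."""
    hdr = dump[1024:1024 + 512]
    signature, version = struct.unpack_from(">2sH", hdr, 0)
    if signature not in (b"H+", b"HX"):          # HX = case-sensitive HFSX
        raise ValueError("no HFS+/HFSX signature found")
    # fileCount, folderCount, blockSize, totalBlocks sit at offsets 32..47
    file_count, folder_count, block_size, total_blocks = \
        struct.unpack_from(">4I", hdr, 32)
    return {"signature": signature.decode(), "version": version,
            "fileCount": file_count, "folderCount": folder_count,
            "blockSize": block_size, "totalBlocks": total_blocks}

# Synthetic volume: plant a plausible header in an otherwise empty buffer
vol = bytearray(2048)
struct.pack_into(">2sH", vol, 1024, b"H+", 4)            # signature, version
struct.pack_into(">4I", vol, 1024 + 32, 12, 3, 4096, 524288)
print(parse_hfsplus_header(bytes(vol)))
```

Parsing a real dump the same way (e.g. the first 2 KiB of the thumb drive's HFS+ partition) gives the starting point for locating the allocation, extents overflow, catalog, and attributes files compared in the experiments.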
APA, Harvard, Vancouver, ISO, and other styles
44

Bonomi, Mattia. "Facial-based Analysis Tools: Engagement Measurements and Forensics Applications." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/271342.

Full text
Abstract:
Recent advancements in technology have made it easy to acquire and spread multi-dimensional multimedia content, e.g. videos, which in many cases depict human faces. From such videos, valuable information describing intrinsic characteristics of the recorded user can be retrieved: features extracted from the facial patch are relevant descriptors that allow for the measurement of the subject's emotional status or the identification of synthetic characters. One emerging challenge is the development of contactless approaches based on face analysis that measure the emotional status of the subject without placing sensors that limit or bias the experience. This is of particular interest for Quality of Experience (QoE) measurement, the measurement of user emotional status when subjected to multimedia content, since it captures the overall acceptability of the content as perceived by the end user. Measuring the impact of given content on the user has implications for both content producers and end users. For this reason, we pursue QoE assessment of a user watching multimedia stimuli, i.e. 3D movies, through the analysis of facial features acquired by contactless means. More specifically, the user's heart rate (HR) was retrieved using computer vision techniques applied to a facial recording of the subject and then analysed to compute the level of engagement. We show that the proposed framework is effective for long video sequences and robust to facial movements and illumination changes, and we validate it on a dataset of 64 sequences in which users observe 3D movies selected to induce variations in emotional status.
On the one hand, understanding the interaction between the user's perception of the content and their cognitive-emotional state offers many opportunities to content producers, who may influence people's emotional status according to needs driven by political, social, or business interests. On the other hand, the end user must be able to trust the authenticity of the content being watched: advancements in computer rendering have allowed fake subjects to spread in videos. As a second challenge, we therefore target the identification of CG characters in videos through two different approaches. First, we exploit the observation that fake characters present no pulse-rate signal, while a human pulse is expressed as a near-sinusoidal signal. Applying computer vision techniques to a facial video allows contactless estimation of the subject's HR, so signals that lack strong sinusoidality identify virtual humans. The proposed pipeline allows fully automated discrimination, validated on a dataset of 104 videos. Second, we use facial spatio-temporal texture dynamics that reveal the artefacts introduced by computer rendering when a manipulation, e.g. face swapping, is created in videos depicting human faces. To do so, we consider multiple temporal video segments on which multi-dimensional (spatial and temporal) texture features are estimated, using Local Derivative Patterns on Three Orthogonal Planes (LDP-TOP); a binary decision over the joint analysis of these features strengthens the classification accuracy. Experimental analyses on state-of-the-art datasets of manipulated videos show the discriminative power of such descriptors in separating real and manipulated sequences and identifying the creation method used.
The main finding of this thesis is the relevance of facial features in describing intrinsic characteristics of humans, which can be used to retrieve significant information such as the physiological response to multimedia stimuli or the authenticity of the human being itself. The proposed approaches also returned good results on benchmark datasets, demonstrating real advancement in this research field. In addition, these methods can be extended to different practical applications, from autonomous-driving safety checks to the identification of spoofing attacks, from medical check-ups during sports to measuring users' engagement when watching advertising. We therefore encourage further investigation in this direction, to improve the robustness of the methods and allow their application to increasingly challenging scenarios.
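The first detection idea above, that a real face carries a near-sinusoidal pulse signal while a rendered one does not, can be sketched with a simple spectral-concentration measure. This is illustrative only: the thesis's actual pipeline first extracts the pulse signal from video, which is skipped here in favour of synthetic signals.

```python
import numpy as np

def spectral_concentration(signal):
    """Share of spectral power in the single dominant frequency bin.

    Near-sinusoidal signals (a real pulse) score high; flat noise scores low."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    power[0] = 0.0                       # ignore any residual DC component
    return power.max() / power.sum()

rng = np.random.default_rng(0)
fs = 30.0                                # webcam-like frame rate
t = np.arange(0, 10, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)  # ~72 bpm
noise = rng.standard_normal(t.size)      # stand-in for a CG face's "signal"
print(spectral_concentration(pulse) > spectral_concentration(noise))  # True
```

Thresholding such a concentration score is one simple way to turn "lack of strong sinusoidality" into an automated real-versus-virtual decision.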
APA, Harvard, Vancouver, ISO, and other styles
45

Bonomi, Mattia. "Facial-based Analysis Tools: Engagement Measurements and Forensics Applications." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/271342.

Full text
Abstract:
The last advancements in technology leads to an easy acquisition and spreading of multi-dimensional multimedia content, e.g. videos, which in many cases depict human faces. From such videos, valuable information describing the intrinsic characteristic of the recorded user can be retrieved: the features extracted from the facial patch are relevant descriptors that allow for the measurement of subject's emotional status or the identification of synthetic characters. One of the emerging challenges is the development of contactless approaches based on face analysis aiming at measuring the emotional status of the subject without placing sensors that limit or bias his experience. This raises even more interest in the context of Quality of Experience (QoE) measurement, or the measurement of user emotional status when subjected to a multimedia content, since it allows for retrieving the overall acceptability of the content as perceived by the end user. Measuring the impact of a given content to the user can have many implications from both the content producer and the end-user perspectives. For this reason, we pursue the QoE assessment of a user watching multimedia stimuli, i.e. 3D-movies, through the analysis of his facial features acquired by means of contactless approaches. More specifically, the user's Heart Rate (HR) was retrieved by using computer vision techniques applied to the facial recording of the subject and then analysed in order to compute the level of engagement. We show that the proposed framework is effective for long video sequences, being robust to facial movements and illumination changes. We validate it on a dataset of 64 sequences where users observe 3D movies selected to induce variations in users' emotional status. 
From one hand understanding the interaction between the user's perception of the content and his cognitive-emotional aspects leads to many opportunities to content producers, which may influence people's emotional statuses according to needs that can be driven by political, social, or business interests. On the other hand, the end-user must be aware of the authenticity of the content being watched: advancements in computer renderings allowed for the spreading of fake subjects in videos. Because of this, as a second challenge we target the identification of CG characters in videos by applying two different approaches. We firstly exploit the idea that fake characters do not present any pulse rate signal, while humans' pulse rate is expressed by a sinusoidal signal. The application of computer vision techniques on a facial video allows for the contactless estimation of the subject's HR, thus leading to the identification of signals that lack of a strong sinusoidality, which represent virtual humans. The proposed pipeline allows for a fully automated discrimination, validated on a dataset consisting of 104 videos. Secondly, we make use of facial spatio-temporal texture dynamics that reveal the artefacts introduced by computer renderings techniques when creating a manipulation, e.g. face swapping, on videos depicting human faces. To do so, we consider multiple temporal video segments on which we estimated multi-dimensional (spatial and temporal) texture features. A binary decision of the joint analysis of such features is applied to strengthen the classification accuracy. This is achieved through the use of Local Derivative Patterns on Three Orthogonal Planes (LDP-TOP). Experimental analyses on state-of-the-art datasets of manipulated videos show the discriminative power of such descriptors in separating real and manipulated sequences and identifying the creation method used. 
The main finding of this thesis is the relevance of facial features in describing intrinsic characteristics of humans. These can be used to retrieve significant information such as the physiological response to multimedia stimuli or the authenticity of the human being itself. The application of the proposed approaches to benchmark datasets also returned good results, demonstrating real advancements in this research field. In addition, these methods can be extended to different practical applications, from autonomous-driving safety checks to the identification of spoofing attacks, and from medical check-ups during sports to the measurement of users' engagement when watching advertising. We therefore encourage further investigation in this direction, in order to improve the robustness of the methods and thus allow their application to increasingly challenging scenarios.
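The sinusoidality test that the abstract describes for separating real and CG faces can be illustrated with a minimal sketch: given a pre-extracted photoplethysmographic trace (the actual signal extraction from the facial patch is not shown here), measure how concentrated the spectral energy is inside the plausible heart-rate band. The thresholds, band limits, and synthetic signals below are illustrative assumptions, not the thesis's actual parameters.

```python
import math
import random

def dft_magnitudes(signal):
    """Naive DFT; returns the magnitude of each frequency bin up to Nyquist."""
    n = len(signal)
    mags = []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def sinusoidality(signal, fps, low_hz=0.7, high_hz=3.0):
    """Fraction of spectral energy held by the strongest bin inside the
    plausible heart-rate band (42-180 bpm); high for pulse-like signals."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    mags = dft_magnitudes(centered)
    band = [k for k in range(1, len(mags)) if low_hz <= k * fps / n <= high_hz]
    total = sum(m * m for m in mags[1:]) or 1.0
    peak = max(mags[k] ** 2 for k in band)
    return peak / total

fps, seconds = 30, 10
t = [i / fps for i in range(fps * seconds)]
random.seed(0)
human = [math.sin(2 * math.pi * 1.2 * ti) + 0.3 * random.gauss(0, 1) for ti in t]  # ~72 bpm pulse plus noise
cg = [random.gauss(0, 1) for _ in t]  # no pulse component, white noise only

print(sinusoidality(human, fps) > sinusoidality(cg, fps))  # True
```

A real pipeline would extract the trace from the facial region across frames and compare the score against a learned threshold; the point here is only that a pulse-bearing signal concentrates its band energy in one bin while a synthetic face does not.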
APA, Harvard, Vancouver, ISO, and other styles
46

Henriksson, Jonas. "Automated Cross-Border Mutual Legal Assistance in Digital Forensics (AUTOMLA) : A global realized Enterprise Architecture." Thesis, Högskolan i Halmstad, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-44824.

Full text
Abstract:
Organized cybercrime has no borders in cyberspace. This paper suggests a state-of-the-art architected solution for a global Automated cross-border mutual legal assistance system within Digital Forensics (AUTOMLA). The enterprise framework, with a technical viewpoint, enables international collaboration between sovereign countries' Fusion Centers. The evaluation concludes with a user interface built in React and Apollo middleware with schema support, linked to the graph database Neo4j. GraphQL is the preferred application protocol over REST. Fusion Center APIs are deployed as federated gateways, and business functions are implemented as serverless PaaS services. Modeling forensics intuitively in graphs and semantic networks enables causality and inference. All suggested elements of AUTOMLA together form an internationally agreed collaborative platform: the solution for fast cross-border crime investigations. AUTOMLA, deployed on the Internet, is subject to threats; risks are mitigated in a design guided by security frameworks. The recommended development method is agile, distributed between autonomous teams.
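The graph modelling that the abstract attributes to semantic networks can be sketched in a few lines: entities as nodes, observed relations as edges, and a breadth-first traversal as a crude stand-in for the causality/inference step. The node names below are invented for illustration; a real deployment would query Neo4j through GraphQL rather than a Python dictionary.

```python
from collections import deque

# Hypothetical forensic graph: who/what is linked to what.
edges = {
    "suspect_A": ["device_X"],
    "device_X": ["ip_1.2.3.4"],
    "ip_1.2.3.4": ["fraud_case_17"],
}

def reachable(graph, start):
    """All nodes reachable from `start` via breadth-first traversal,
    i.e. everything the starting entity is transitively connected to."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print("fraud_case_17" in reachable(edges, "suspect_A"))  # True
```

The transitive link from suspect to case, invisible in any single edge, is exactly the kind of inference that motivates a graph store over a relational one for cross-border investigations.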
APA, Harvard, Vancouver, ISO, and other styles
47

Nelson, Alexander J. "Software signature derivation from sequential digital forensic analysis." Thesis, University of California, Santa Cruz, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10140317.

Full text
Abstract:
Hierarchical storage system namespaces are notorious for their immense size, which is a significant hindrance to any computer inspection. File systems for computers start with tens of thousands of files, and the Registries of Windows computers start with hundreds of thousands of cells. An analysis of a storage system, whether for digital forensics or for locating old data, depends on being able to reduce the namespaces down to the features of interest. Typically, having such large volumes to analyze is seen as a challenge to identifying relevant content. However, if the origins of files can be identified (particularly dividing between software and human origins), large counts of files become a boon to profiling how a computer has been used. It becomes possible to identify software that has influenced the computer's state, which gives an important overview of storage system contents not available to date.
In this work, I apply document search to observed changes in a class of forensic artifact, cell names of the Windows Registry, to identify the effects of software on storage systems. Using the search model, a system's Registry becomes a query for matching software signatures. To derive signatures, file system differential analysis is extended from between two storage system states to many sequences of states. The workflow that creates these signatures is an example of analytics on data lineage, from branching data histories. The signatures independently indicate past presence or usage of software, based on the consistent creation of measurably distinct artifacts. A signature search engine is demonstrated against a machine with a selected set of applications installed and executed. The search engine that is optimal for that machine is then turned against a separate corpus of machines whose present applications were identified by several non-Registry forensic artifact sources, including the file systems, memory, and network captures. The signature search engine corroborates those findings, using only the Windows Registry.
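The core idea of treating a machine's Registry as a query against software signatures can be sketched as a simple set-overlap scorer. The signature sets, cell paths, and threshold below are invented placeholders; the thesis derives its real signatures from sequential differential analysis of storage-system states and uses a full document-search model rather than this plain overlap.

```python
# Hypothetical signatures: Registry cell paths consistently created
# by each application across observed install/run sequences.
SIGNATURES = {
    "ExampleBrowser": {
        r"HKLM\Software\ExampleBrowser\Version",
        r"HKCU\Software\ExampleBrowser\Profile",
    },
    "ExampleEditor": {
        r"HKCU\Software\ExampleEditor\RecentFiles",
        r"HKCU\Software\ExampleEditor\Settings",
    },
}

def match_signatures(observed_cells, signatures, threshold=0.5):
    """Treat the machine's observed Registry cells as a query and rank
    each signature by the fraction of its cells found on the machine."""
    hits = {}
    for app, cells in signatures.items():
        score = len(cells & observed_cells) / len(cells)
        if score >= threshold:
            hits[app] = score
    return hits

observed = {
    r"HKLM\Software\ExampleBrowser\Version",
    r"HKCU\Software\ExampleBrowser\Profile",
    r"HKCU\Software\Unrelated\Key",
}
print(match_signatures(observed, SIGNATURES))  # {'ExampleBrowser': 1.0}
```

Only the browser's cells are present, so only it scores above threshold; extra unrelated cells on the machine do not penalise the match, which mirrors why large namespaces become a boon rather than a burden under this model.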
APA, Harvard, Vancouver, ISO, and other styles
48

Blid, Emma, and Patrick Massler. "Den IT-forensiska utvinningen i molnet : En kartläggning över den IT-forensiska utvinningen i samband med molntjänster samt vilka möjligheter och svårigheter den möter." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-34872.

Full text
Abstract:
It is becoming more common to save data online rather than on physical storage media. This brings many opportunities for the user, but also causes new problems, especially within crime investigations. The problems in combining digital forensics and cloud services fall into two main categories: legal issues and technical issues. The legal issues primarily concern the fact that the server storing the data, and its owner, are typically based in a different nation than the one in which the suspected crime is investigated. Most legal issues may seem easy to solve through changes in law, but are more extensive than that, as both the consequences for the cloud suppliers and the benefits for the justice system must be taken into consideration and carefully weighed. The technical issues often already have solutions. However, many of these cannot be considered realistic, since the required storage space, and the costs it entails, are not proportional to what could be achieved. Most technical solutions also give rise to new issues in the form of ethical dilemmas, as they require extended storage of personal information. Storing information, and possibly needing to investigate information associated with a person who is not suspected of the crime, intrudes on personal integrity. The cloud, however, also offers opportunities, of which the foremost for digital forensics is what is called Digital Forensics as a Service. This means that the cloud's resources are used to solve resource-heavy problems that would have been significantly more time-consuming to handle locally, and that the opportunities for cooperation and specialist expertise increase, in order to facilitate digital forensic work and make it more efficient.
APA, Harvard, Vancouver, ISO, and other styles
49

Roux, Brian. "Application of Digital Forensic Science to Electronic Discovery in Civil Litigation." ScholarWorks@UNO, 2012. http://scholarworks.uno.edu/td/1554.

Full text
Abstract:
Following changes to the Federal Rules of Civil Procedure in 2006 dealing with the role of Electronically Stored Information, digital forensics is becoming necessary to the discovery process in civil litigation. The development of case law interpreting the rule changes since their enactment defines how digital forensics can be applied to the discovery process, the scope of discovery, and the duties imposed on parties. Herein, pertinent cases are examined to determine what trends exist and how they affect the field. These observations buttress case studies involving discovery failures in large corporate contexts, along with insights into the technical reasons those discovery failures occurred and continue to occur. The state of the art in the legal industry for handling Electronically Stored Information is slow, inefficient, and extremely expensive. These failings exacerbate discovery failures by making the discovery process more burdensome than necessary. In addressing this problem, weaknesses of existing approaches are identified, and new tools are presented which cure these defects. By drawing on open source libraries, components, and other support, the presented tools exceed the performance of existing solutions by between one and two orders of magnitude. The transparent standards embodied in the open source movement allow for clearer defensibility of the sufficiency of discovery practice, whereas existing approaches entail difficult-to-verify closed source solutions. Legacy industry practices of numbering documents with Bates numbers inhibit efficient parallel and distributed processing of electronic data into paginated forms.
The failures inherent in legacy numbering systems are identified, and a new system is provided which eliminates these inhibitors while simultaneously better modeling the nature of electronic data, which does not lend itself to pagination; such non-paginated data includes databases and other file types which are machine readable but not human readable in format. In toto, this dissertation provides a broad treatment of digital forensics applied to electronic discovery, an analysis of current failures in the industry, and a suite of tools which address the weaknesses, problems, and failures identified.
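Why sequential Bates numbers inhibit parallel processing, and how a composite scheme avoids this, can be shown in a short sketch. The abstract does not specify the thesis's actual numbering system; the identifier format below is a generic batch-scoped scheme invented for illustration.

```python
def composite_id(batch, seq):
    """Batch-scoped identifier: unlike a single global Bates counter,
    each worker numbers its own batch, so no shared counter serialises
    the pipeline, yet identifiers stay globally unique and sortable."""
    return f"PROD01-B{batch:03d}-{seq:06d}"

def assign_ids(batches):
    """Simulate parallel workers: each batch is numbered in isolation.
    (Run sequentially here; the point is that no step depends on the
    count of documents in any other batch.)"""
    out = {}
    for batch_no, docs in enumerate(batches):
        for seq, doc in enumerate(docs):
            out[doc] = composite_id(batch_no, seq)
    return out

ids = assign_ids([["a.pdf", "b.docx"], ["mail.pst"]])
print(ids["mail.pst"])  # PROD01-B001-000000
```

With a global Bates counter, worker two cannot number `mail.pst` until worker one finishes; with the composite scheme, both batches proceed independently and the IDs can still be collated deterministically afterwards.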
APA, Harvard, Vancouver, ISO, and other styles
50

Liljekvist, Erika, and Oscar Hedlund. "Uncovering Signal : Simplifying Forensic Investigations of the Signal Application." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-44835.

Full text
Abstract:
The increasing availability of easy-to-use end-to-end encrypted messaging applications has made it possible for more people to conduct their conversations privately. Criminals have taken advantage of this, and it has proven to make digital forensic investigations more difficult, as methods of decrypting the data are needed. In this thesis, data from iOS and Windows devices is extracted and analysed, with focus on the application Signal. Even though the Signal application is also compatible with other operating systems, such as Android, these are outside the scope of this thesis. The results of this thesis provide access to data stored in the encrypted application Signal without the need for expensive analysis tools. This is done by developing and publishing the first open-source script for decryption and parsing of the Signal database. The script is available for anyone at https://github.com/decryptSignal/decryptSignal.
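Decrypting an application database of this kind generally starts from key derivation: a recovered passphrase or key material is stretched into the raw database key. The sketch below shows the generic PBKDF2 pattern used by encrypted database formats such as SQLCipher; the passphrase, salt, iteration count, and hash choice are illustrative assumptions, not the actual parameters used by Signal or by the linked script.

```python
import hashlib

def derive_key(passphrase: bytes, salt: bytes, iterations: int = 64000) -> bytes:
    """Stretch a passphrase into a 32-byte database key with PBKDF2-HMAC.
    Parameters here are illustrative, not Signal's actual scheme."""
    return hashlib.pbkdf2_hmac("sha512", passphrase, salt, iterations, dklen=32)

# Hypothetical inputs: in practice the passphrase/salt would be recovered
# from the device during the forensic extraction.
key = derive_key(b"example-passphrase", b"\x00" * 16)
print(len(key))  # 32
```

The derived key would then be handed to the database engine's decryption routine; the hard forensic work that the thesis addresses is locating the key material on the device in the first place.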
APA, Harvard, Vancouver, ISO, and other styles