
Dissertations / Theses on the topic 'Digital forensic analysis'

Consult the top 50 dissertations / theses for your research on the topic 'Digital forensic analysis.'

1

Hales, Gavin. "Assisting digital forensic analysis via exploratory information visualisation." Thesis, Abertay University, 2016. https://rke.abertay.ac.uk/en/studentTheses/774128b9-957e-4a05-aa74-dbeefebb8113.

Abstract:
Background: Digital forensics is a rapidly expanding field, due to the continuing advances in computer technology and increases in the data storage capabilities of devices. However, the tools supporting digital forensics investigations have not kept pace with this evolution, often leaving the investigator to analyse large volumes of textual data and rely heavily on their own intuition and experience. Aim: This research proposes that, given the ability of information visualisation to provide an end user with an intuitive way to rapidly analyse large volumes of complex data, such approaches could be applied to digital forensics datasets. Such methods will be investigated, supported by a review of literature regarding the use of such techniques in other fields. The hypothesis of this body of research is that by utilising exploratory information visualisation techniques in the form of a tool to support digital forensic investigations, gains in investigative effectiveness can be realised. Method: To test the hypothesis, this research examines three different case studies which look at different forms of information visualisation and their implementation with a digital forensic dataset. Two of these case studies take the form of prototype tools developed by the researcher, and one case study utilises a tool created by a third-party research group. A pilot study by the researcher is conducted on these cases, with the strengths and weaknesses of each being drawn into the next case study. The culmination of these case studies is a prototype tool presenting a timeline visualisation of user behaviour on a device. This tool was subjected to an experiment involving a class of university digital forensics students who were given a number of questions about a synthetic digital forensic dataset. Approximately half were given the prototype tool, named Insight, to use, and the others a common open-source tool. The assessed metrics included: how long the participants took to complete all tasks, how accurate their answers to the tasks were, and how easy the participants found the tasks to complete. They were also asked for their feedback at multiple points throughout the task. Results: The results showed a statistically significant increase in accuracy for one of the six tasks for the participants using the Insight prototype tool. Participants also found completing two of the six tasks significantly easier when using the prototype tool. There was no statistically significant difference between the completion times of the two participant groups, and no statistically significant differences in the accuracy of participant answers for five of the six tasks. Conclusions: The results from this body of research provide evidence to suggest the potential for gains in investigative effectiveness when information visualisation techniques are applied to a digital forensic dataset. Specifically, in some scenarios, the investigator can draw conclusions which are more accurate than those drawn when using primarily textual tools. There is also evidence to suggest that the investigators reached these conclusions significantly more easily when using a tool with a visual format. None of the scenarios led to the investigators being at a significant disadvantage in terms of accuracy or usability when using the prototype visual tool over the textual tool.
It is noted that this research did not show that the use of information visualisation techniques leads to any statistically significant difference in the time taken to complete a digital forensics investigation.
2

Nelson, Alexander J. "Software signature derivation from sequential digital forensic analysis." Thesis, University of California, Santa Cruz, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10140317.

Abstract:

Hierarchical storage system namespaces are notorious for their immense size, which is a significant hindrance for any computer inspection. File systems for computers start with tens of thousands of files, and the Registries of Windows computers start with hundreds of thousands of cells. An analysis of a storage system, whether for digital forensics or locating old data, depends on being able to reduce the namespaces down to the features of interest. Typically, having such large volumes to analyze is seen as a challenge to identifying relevant content. However, if the origins of files can be identified—particularly dividing between software and human origins—large counts of files become a boon to profiling how a computer has been used. It becomes possible to identify software that has influenced the computer's state, which gives an important overview of storage system contents not available to date.

In this work, I apply document search to observed changes in a class of forensic artifact, cell names of the Windows Registry, to identify effects of software on storage systems. Using the search model, a system's Registry becomes a query for matching software signatures. To derive signatures, file system differential analysis is extended from between two storage system states to many sequences of states. The workflow that creates these signatures is an example of analytics on data lineage, from branching data histories. The signatures independently indicate past presence or usage of software, based on consistent creation of measurably distinct artifacts. A signature search engine is demonstrated against a machine with a selected set of applications installed and executed. The optimal search engine according to that machine is then turned against a separate corpus of machines with a set of present applications identified by several non-Registry forensic artifact sources, including the file systems, memory, and network captures. The signature search engine corroborates those findings, using only the Windows Registry.
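As an illustration of the sequential differential idea summarised in this abstract, here is a minimal sketch (not the author's implementation; all identifiers are hypothetical) that derives a signature as the cell names consistently created across sequential snapshot diffs, then treats a machine's Registry as a query against it:

```python
# Hypothetical sketch of sequential differential analysis for software
# signatures. Each snapshot is assumed to be reduced to a set of Windows
# Registry cell names observed in that storage-system state.

def diff(before: set[str], after: set[str]) -> set[str]:
    """Cell names created between two sequential snapshots."""
    return after - before

def derive_signature(snapshots: list[set[str]]) -> set[str]:
    """Keep only the cell names that appear in every sequential diff,
    i.e. the consistently created, measurably distinct artifacts."""
    diffs = [diff(a, b) for a, b in zip(snapshots, snapshots[1:])]
    return set.intersection(*diffs) if diffs else set()

def indicates_presence(registry: set[str], signature: set[str],
                       threshold: float = 0.8) -> bool:
    """Report past presence or usage of the software if enough of the
    signature's cells are found in the queried Registry."""
    if not signature:
        return False
    return len(registry & signature) / len(signature) >= threshold
```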

3

Ovens, Kenneth Martin. "Digital forensic analysis of communication applications on Apple iOS devices." Thesis, Glasgow Caledonian University, 2017. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.743916.

4

Naru, Amar Singh. "Digital image analysis of the human bite mark." Thesis, University of Hertfordshire, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338566.

5

Necaise, Nathan Joseph. "Empirical analysis of disk sector prefixes for digital forensics." Master's thesis, Mississippi State : Mississippi State University, 2007. http://library.msstate.edu/etd/show.asp?etd=etd-03282007-151218.

6

Lidström, Robbin, and Elfving Elfving. "An exploratory forensic analysis of the Xbox One S All Digital." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-42511.

Abstract:
Gaming consoles’ relevance to the field of digital forensics has steadily been growing since their presence in society has increased. Given how gaming platforms, such as the Xbox One, are produced for commercial interest, they are likely to be secured by use of proprietary knowledge to safeguard personal data. The means by which information is secured is unknown, thus displaying the need for investigations to determine what information can be extracted from hard drive disk images and whether any of it is personally identifiable data. Furthermore, predecessors to the Xbox One were successfully modified by users, allowing unsigned code to be run; however, this is currently not possible on the Xbox One. In addition, due to the generational aspect of game consoles, proper digital forensic methodology needs to be developed specifically adapted to the Xbox One. An exploratory approach was pursued to allow for the scope to remain dynamic, letting information found point to additional avenues of investigation and research. No personally identifiable information was found, yet the analysis of selected files allowed for hypotheses concerning their intended purpose. Through file analysis, encryption was found to be in use on the console. Moreover, the Master File Table was demonstrated as a significant extension to the foundation of console forensics methodology. Lastly, it was established that the Xbox One successfully prevents the running of unsigned code, showing a significant improvement compared to its predecessors.
7

Chabot, Yoan. "Construction, enrichment and semantic analysis of timelines : application to digital forensics." Thesis, Dijon, 2015. http://www.theses.fr/2015DIJOS037/document.

Abstract:
Having a clear view of events that occurred over time is a difficult objective to achieve in digital investigations (DI). Event reconstruction, which allows investigators to build and to understand the timeline of an incident, is one of the most important steps of a DI process. The complete understanding of an incident and its circumstances requires, on the one hand, associating each piece of information with its meaning and, on the other hand, identifying semantic relationships between these fragments. This complex task requires the exploration of a large and heterogeneous amount of information found on the crime scene. Investigators therefore encounter cognitive overload problems when processing this data, causing them to make mistakes or omit information that could have a high added value for the progress of the investigation. In addition, any result produced by the reconstruction process must meet several legal requirements to be admissible at trial, including the ability to explain how the results were produced. To help investigators deal with these problems, this thesis introduces a semantic-based approach called SADFC. The main objective of this approach is to provide investigators with tools to help them find the meaning of the entities composing the crime scene and understand the relationships linking these entities, while respecting the legal requirements. To achieve this goal, SADFC is composed of two elements. First, SADFC is based on theoretical foundations, ensuring the credibility of the results produced by the tools via a formal and rigorous definition of the processes used. The approach then proposes an architecture centered on an ontology to model and structure the knowledge inherent to an incident and to assist the investigator in the analysis of this knowledge. The relevance and the effectiveness of this architecture are demonstrated through a case study describing a fictitious investigation.
8

Wang, Mengmeng, and 王萌萌. "Temporal analysis on HFS+ and across file systems in digital forensic investigation." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hub.hku.hk/bib/B50900122.

Abstract:
In computer forensics, digital evidence related to time is both important and complex. The rules governing changes in the time attributes of digital evidence, such as files or folders, can be used to analyze certain user behaviors like data access, modification or transfer. However, the format of time information and the rules it follows for user actions differ between file systems, and even between versions of operating systems using the same file system. Some research on temporal analysis has already been done on the NTFS and FAT file systems, while there are few resources that describe temporal analysis on the Hierarchical File System Plus (HFS+), the default file system on Apple computers. Moreover, removable devices like USB disks are used frequently; transferring files and folders between different devices with different file systems and operating systems happens more and more often, so changes of times across different file systems are also crucial in digital forensics and investigations. In this research, the changes in time attributes of files and folders resulting from user actions on the HFS+ file system and across file systems are analyzed, and rules of time are generated by inductive reasoning to help reconstruct crime scenes in the digital forensic investigation. Since inductive reasoning is not definitely true compared with deductive reasoning, experiments are performed to validate the rules. The usage of the rules is demonstrated by analyzing a case in detail. The methods proposed here are efficient, practical and easy to put into practice in real scenarios.
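The rule-based temporal reasoning this abstract describes can be pictured with a short sketch. The two rules below are invented for illustration (they are not the rules derived in the thesis): each rule states how an action should change a file's timestamps, and observed before/after values are tested against candidate actions.

```python
# Illustrative rule-based action inference from file timestamps.
# The rules here are hypothetical examples, not the thesis's rule set.
from dataclasses import dataclass

@dataclass
class Timestamps:
    created: float
    modified: float
    accessed: float

def rule_copy(before: Timestamps, after: Timestamps) -> bool:
    # Example rule: a copy receives a fresh creation time but keeps the
    # source file's modification time.
    return after.created > before.created and after.modified == before.modified

def rule_edit(before: Timestamps, after: Timestamps) -> bool:
    # Example rule: an in-place edit updates the modification time only.
    return after.created == before.created and after.modified > before.modified

RULES = {"copy": rule_copy, "edit": rule_edit}

def candidate_actions(before: Timestamps, after: Timestamps) -> list[str]:
    """User actions whose time rules are consistent with the observed
    change; useful when reconstructing events from timestamps."""
    return [name for name, rule in RULES.items() if rule(before, after)]
```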
Computer Science
Master of Philosophy
9

Shakir, Amer, Muhammad Hammad, and Muhammad Kamran. "Comparative Analysis & Study of Android/iOS Mobile Forensics Tools." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-44797.

Abstract:
This report aims to draw a comparison between two commercial mobile forensics and recovery tools, Magnet AXIOM and MOBILedit. A thorough look at previously conducted studies was helpful in determining which aspects of the data extractions must be compared and which areas are the most important to focus upon. This work focuses on how the data extracted with one tool compares with the other, and provides comprehensive extraction results based on different scenarios, circumstances, and aspects. The performance of both tools is compared against various benchmarks and criteria. This study establishes that MOBILedit outperformed Magnet AXIOM on more data extraction and recovery aspects, making it the comparatively better tool.
10

Thompson, Marcus A. "An exploratory forensic acquisition and analysis of digital evidence on the Amazon Kindle." Thesis, Purdue University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1565358.

Abstract:

The Amazon Kindle is becoming a popular e-book reader. This popularity will lead criminals to use the Kindle as an accessory to their crime. Very few Kindle publications in the digital forensics domain exist at the time of this writing. Various blogs on the Internet currently provide some of the foundation for Kindle forensics. For this research each fifth generation Kindle was populated with various types of files a typical user may introduce using one method, the USB interface. The Kindle was forensically imaged with AccessData's Forensic Toolkit Imager before and after each Kindle was populated. Each file was deleted through the USB interface. Files were retrieved and recovered through the USB interface before and after file deletion. These two sets of files were compared to the original set of files. All files retrieved before deletion matched their original counterpart. Not all files recovered after deletion matched their original counterpart. These steps and procedures followed a similar methodology developed by Leshney (2008) for virtual machines.

11

Brooks, Elizabeth M. "An appraisal of the use of numerical features in the forensic examination of hair." University of Canberra. School of Health Sciences, 2007. http://erl.canberra.edu.au./public/adt-AUC20080624.144159.

Abstract:
The advent of nuclear DNA (nuDNA) analysis altered the way forensic biology was both practised and viewed by forensic biologists, police, the legal system and the general public. The ability of nuDNA to individualise the analysis of evidence and attach a statistical frequency ratio to the result created an expectation that numerical objectivity should be part of all forensic analysis. There are few scientists who would disagree with both the need for and desirability of objective measures of their results. Forensic hair examiners are no exception, as indicated by numerous scientific publications specifically discussing means of objectively assessing hair and its characteristics. While mitochondrial DNA offers a partially objective measure of hair, the analysis is destructive of the sample. A method that objectively supports the hair analysts' microscopic findings and is non-destructive would be beneficial to forensic hair examination. This project attempted to develop an objective measure of hair analysis by combining traditional light microscopic comparative techniques with high-end digital imaging and image analysis. Where objectivity equals an empirical set of numbers that can be manipulated for statistical significance, the comparative biological sciences such as histology, anthropology and forensic hair examination struggle. Forensic hair examiners have long acknowledged the difficulty, even inability, of assigning numerical values to the features that characterise one hair as being different from another. A human scalp hair is a "morphological" unit that is not readily split into component parts, nor do those parts readily lend themselves to numerical values. There have been at least nine separate studies which favourably compare the specificity of microscopic hair examinations. The challenge this study addressed was to appraise the use of numerical features in forensic hair examination, with particular emphasis on those features currently resisting numerical evaluation; specifically, colour and pigmentary characteristics. The techniques used were based on obtaining high-quality digital images and using the pixels inherent in the images to obtain numerical values for features such as colour and pigmentation. The project sample was taken from the telogen scalp hairs obtained from the hairbrushes of ten nominally brown-haired Caucasians, both male and female. The focus was twofold:
- Compare colour analysis of hair images from brown-haired Caucasians within three standard, internationally recognised colour models, namely the Red-Green-Blue (RGB) colour model, the CIE XYZ Tristimulus (1931) colour model, and the CIE L*a*b* (1976) colour model.
- Using the same sets of digital images, undertake pattern recognition analysis both intra- and inter-individual hair samples.
Discriminant analysis of the mean colour values collected for each of the inherent colour variables in the three colour models (red, green, blue; X, Y, Z; and L*, a*, b*) indicated that the RGB colour model gave the least separation of brown-haired individuals; CIE XYZ and CIE L*a*b* separated several individuals for all of their samples, and several other individuals were mostly separated, with only one of their own samples overlapping with another. Pattern analysis used a small area that represented the overall pigment patterning observed along the length of the hair shaft. This area was extracted from the digital image within V++ Digital Optics image analysis software. The extracted pattern piece was then compared with other sample images within the same hair and four other hairs from the same individual. Pattern extracts were also compared between different persons' hair samples. The comparisons generated a set of numerical values based on the pixel number on the "x" axis of the whole image and the average difference between the extracted pattern image and the whole image. Analysis of this data resulted in log distributions when persons were matched with themselves. It was also possible to refer an unknown pattern extract to this distribution and, based on probabilities, predict whether or not the unknown sample fell within any of the known samples' distributions.
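To make the colour models compared above concrete, the sketch below (a generic illustration, not the study's V++ Digital Optics workflow) computes a hair patch's mean colour in RGB and converts it to CIE XYZ and CIE L*a*b* with the standard sRGB/D65 formulas:

```python
# Sketch of the colour-model comparison: convert a hair patch's mean colour
# from RGB to CIE XYZ and CIE L*a*b*, yielding the per-sample colour
# variables that a discriminant analysis would operate on.
import numpy as np

def srgb_to_xyz(rgb):
    rgb = np.asarray(rgb, dtype=float) / 255.0
    # Undo sRGB gamma, then apply the standard sRGB -> XYZ matrix (D65).
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    return m @ linear

def xyz_to_lab(xyz, white=(0.9505, 1.0, 1.0890)):  # D65 reference white
    t = np.asarray(xyz) / np.asarray(white)
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return 116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])

# Mean colour of a (H, W, 3) uint8 image patch of a hair shaft:
patch = np.full((10, 10, 3), (120, 90, 60), dtype=np.uint8)  # nominal brown
mean_rgb = patch.reshape(-1, 3).mean(axis=0)
xyz = srgb_to_xyz(mean_rgb)
print("RGB:", mean_rgb, "XYZ:", xyz, "L*a*b*:", xyz_to_lab(xyz))
```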
12

Liljekvist, Erika, and Oscar Hedlund. "Uncovering Signal : Simplifying Forensic Investigations of the Signal Application." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-44835.

Abstract:
The increasing availability of easy-to-use end-to-end encrypted messaging applications has made it possible for more people to conduct their conversations privately. This is something that criminals have taken advantage of, and it has proven to make digital forensic investigations more difficult as methods of decrypting the data are needed. In this thesis, data from iOS and Windows devices is extracted and analysed, with focus on the application Signal. Even though other operating systems are compatible with the Signal application, such as Android, they are outside the scope of this thesis. The results of this thesis provide access to data stored in the encrypted application Signal without the need for expensive analysis tools. This is done by developing and publishing the first open-source script for decryption and parsing of the Signal database. The script is available for anyone at https://github.com/decryptSignal/decryptSignal.
13

Svenblad, Tobias. "An Analysis of Using Blockchains for Processing and Storing Digital Evidence." Thesis, Högskolan Dalarna, Datateknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:du-27855.

Abstract:
A review of digital forensics today shows that it can be exposed to threats jeopardizing the integrity of digital evidence. There are several techniques to counter this risk, one of which involves the use of blockchains. Blockchains use an advanced system to keep the data within them persistent and transparent, which makes them a natural candidate for everything integrity-sensitive. Several blockchain techniques and infrastructures are described in this study, based on previous studies and other literature. Interviews and experiments made possible a comparison between traditional digital forensic methodologies and blockchains in later chapters. The results showed that blockchains could be the answer to securing digital evidence integrity. However, there is still a lot more work to be done before blockchains are ready to be implemented in production systems. The results of the blockchain analysis are presented such that they can be used as an aid to further research, either theoretical or practical, into digital evidence blockchains.
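The integrity property the abstract appeals to can be shown with a toy hash chain in Python (a minimal sketch, not any production blockchain): each block commits to an evidence item's hash and to the previous block, so later tampering breaks the chain.

```python
# Toy hash-chain ledger for digital evidence integrity.
import hashlib, json, time

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, evidence: bytes, note: str) -> None:
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "evidence_sha256": hashlib.sha256(evidence).hexdigest(),
        "note": note,
        "prev": block_hash(chain[-1]) if chain else "0" * 64,
    }
    chain.append(block)

def verify(chain: list) -> bool:
    """Recompute the links; False means a block was altered after the fact."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list[dict] = []
add_block(chain, b"disk image bytes ...", "acquired disk image")
add_block(chain, b"memory dump bytes ...", "acquired memory dump")
print(verify(chain))  # True until any earlier block is modified
```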
14

Carvalho, Tiago José de 1985. "Illumination inconsistency sleuthing for exposing fauxtography and uncovering composition telltales in digital images." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275519.

Abstract:
Advisors: Anderson de Rezende Rocha, Hélio Pedrini
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação
Once taken for granted as genuine, photographs are no longer considered a piece of truth. With the advance of digital image processing and computer graphics techniques, it has become easier than ever to manipulate images and forge new realities within minutes. Unfortunately, most of the time these modifications seek to deceive viewers, change opinions or even affect how people perceive reality. Therefore, it is paramount to devise and deploy efficient and effective detection techniques. Of all types of image forgeries, composition images are especially interesting. This type of forgery uses parts of two or more images to construct a new reality from scenes that never happened. Among the different telltales investigated for detecting image compositions, image-illumination inconsistencies are considered the most promising, since a perfect light match in a forged image is still difficult to achieve. This thesis builds upon the hypothesis that image illumination inconsistencies are strong and powerful evidence of image composition, and presents four original and effective approaches to detect image forgeries. The first method explores eye specular highlight telltales to estimate the light source and viewer positions in an image. The second and third approaches explore metamerism, whereby the colors of two objects may appear to match under one light source but appear completely different under another one. Finally, the last approach relies on the user's interaction to specify 3-D normals of suspect objects in an image, from which the 3-D light source position can be estimated. Together, these approaches bring important contributions to the forensic community and will certainly be a strong tool against image forgeries.
Doctorate
Computer Science
Doctor of Computer Science
15

Bača, Michal. "Forenzní analýza webového prohlížeče." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-233196.

Abstract:
This term project deals with the digital forensic analysis of web browsers. The first chapters describe the process of forensic analysis, the methodologies and procedures used, and the issues involved in analysing web browsers; in particular, the types of information a browser stores and the security problems that come with that. A theoretical and practical procedure for obtaining data from the Mozilla Firefox web browser was based on the theoretical foundations described in the previous chapters. Common Linux tools and an application created as part of this thesis are used in the process of collecting digital evidence. A digital forensic analysis of the web browser was performed following the suggested procedure.
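As one concrete example of the collection step described above, the sketch below reads browsing history from a copy of Firefox's places.sqlite database. The table and column names reflect current Firefox profiles but may vary across versions, so treat the query as an assumption:

```python
# Illustrative sketch: read browsing history from a copy of Firefox's
# places.sqlite. Work on a forensic copy, never the live profile.
import sqlite3
from datetime import datetime, timezone

def firefox_history(db_copy_path: str):
    con = sqlite3.connect(db_copy_path)
    try:
        rows = con.execute(
            "SELECT url, title, visit_count, last_visit_date "
            "FROM moz_places WHERE last_visit_date IS NOT NULL "
            "ORDER BY last_visit_date DESC"
        ).fetchall()
    finally:
        con.close()
    for url, title, visits, last_us in rows:
        # last_visit_date is stored as microseconds since the Unix epoch.
        ts = datetime.fromtimestamp(last_us / 1_000_000, tz=timezone.utc)
        yield ts, visits, title, url

# for ts, visits, title, url in firefox_history("evidence/places.sqlite"):
#     print(ts.isoformat(), visits, url)
```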
16

Jones, Eric Douglas. "Forensic Investigation of Stamped Markings Using a Large-Chamber Scanning Electron Microscope and Computer Analysis for Depth Determination." TopSCHOLAR®, 2013. http://digitalcommons.wku.edu/theses/1237.

Abstract:
All firearms within the United States are required by the Gun Control Act to be physically marked with a serial number that is at least 0.003" deep and 1/16" high. The purpose of a serial number is to make each firearm uniquely identifiable and traceable. Intentional removal of a serial number is a criminal offense and is used to hide the identity and movements of the involved criminal parties. The current standard for firearm serial number restoration is chemical etching, which is time- and labor-intensive as well as destructive to the physical evidence (the firearm). It is hypothesized that a new technique that is accurate, precise, and time-efficient would greatly aid law enforcement agencies in pursuing criminals. This thesis focuses on using a large-chamber scanning electron microscope to take secondary electron (SE) images of a stamped metal plate and analyzing them with the MIRA MX 7 UE image processing software for purposes of depth determination. An experimental peak luminance value of 77 (in pixel values) was correlated to the known depth (273 μm) at the bottom of the sample character. The results show that it is potentially possible to determine an unknown depth from an SEM image using luminance values obtained in the MIRA analysis.
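The luminance-to-depth correlation can be illustrated with a toy calibration fit. Only the (77, 273 μm) pair comes from this abstract; the other calibration points are invented for the example:

```python
# Hypothetical sketch: fit a calibration curve on reference points of known
# depth, then estimate unknown depths from SE-image peak luminance.
import numpy as np

# (peak luminance in pixel values, known depth in micrometres)
calibration = np.array([
    [20.0, 60.0],    # illustrative point
    [45.0, 150.0],   # illustrative point
    [77.0, 273.0],   # pair reported in the abstract
])
coeffs = np.polyfit(calibration[:, 0], calibration[:, 1], deg=1)

def depth_um(peak_luminance: float) -> float:
    """Estimated stamping depth for a measured peak luminance value."""
    return float(np.polyval(coeffs, peak_luminance))

print(depth_um(77.0))  # close to 273 with this toy calibration
```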
17

Nordin, Anton, and Felix Liffner. "Forensiska Artefakter hos Mobila Applikationer : Utvinning och Analys av Applikationen Snapchat." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-40207.

Abstract:
Today's smartphones and tablets use different applications and software for all sorts of purposes: communication, entertainment, fitness, sharing images with each other, keeping up to date with the news and lots of different daily tasks. With the heavy usage of all these apps, it is no wonder that it comes with a few issues. Private data is stored in large quantities both on the local device and on the app creators' servers. It is no wonder, then, that applications have appeared advertising user secrecy and transient storage of user data. One of these applications is Snapchat, with over 500 million downloads on the Google Play store at the time of writing. Snapchat is a communication application with the niche feature that the images and messages sent disappear once opened or after 24 hours have passed. With the illusion of privacy behind Snapchat's niche, it has become a breeding ground for criminal activity. The niche itself translates to a troublesome hurdle for law enforcement trying to retrieve evidence from the devices of Snapchat users. This paper aims to investigate these issues and present a methodology to retrieve potential evidence from a device using Snapchat to send images and messages, by performing a physical acquisition on a test device and analyzing it to find artifacts pertaining to Snapchat and the test data that was created. The method is performed on a Samsung Galaxy S4 with Android 5.0.1 running Snapchat version 10.52.3.0. Test data such as different images and messages was created, and retrieval was attempted at three points in time: first, right after data creation; second, after a restart and 24 hours after the data was created; and third, after 48 hours had passed and with the Snapchat user logged out at the time of acquisition. The acquisition resulted in the extraction of several sent images and a full text conversation between the experimental device and another party. A full video which was uploaded by the receiving user could be extracted even though the experimental device never actually viewed the video. The second acquisition, made when 24 hours had passed, gave the same results as the first one. This means that time, at least up to a day after the initial creation of the data, did not have any effect on the evidence. However, when the Snapchat user was logged out from the application, the data was unobtainable and had disappeared. Presumably Snapchat has a function which deletes personal data about the user on logout. This function might become a hurdle in law enforcement investigations where the application Snapchat is involved.
18

Martinka, Jan. "Digitální forenzní věda a její aplikace při forenzním auditu účetnictví." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-193055.

Abstract:
This thesis aims to describe a process framework suitable for conducting digital forensics investigation projects in support of forensic audits. Existing digital forensics investigation frameworks were compared against a set of criteria. The new framework described here is the result of combining and enhancing those frameworks that suited the characteristics of forensic audit. The thesis also discusses digital forensics methods for fraud examination and risk assessment as part of an external audit.
19

Lara, Alexandre Guilherme de. "Metodologia para análise na perícia criminal de microvestígios forenses: fios de cabelo." Universidade Tecnológica Federal do Paraná, 2016. http://repositorio.utfpr.edu.br/jspui/handle/1/2006.

Abstract:
This dissertation aimed to develop a methodology for the forensic analysis of hairs. A very common trace to be found at certain crime scenes (DOREA, 2010), hair can be helpful in providing information regarding the perpetrator or the way the crime was committed (ROBERTSON, 1999). Since hair is a slowly degrading fiber (ROBBINS, 2012), it can be analyzed long after the fact, becoming even more relevant at crime scenes processed after elapsed time. The equipment used for this study is commonly found in forensic labs, allowing broad use of the methodology. The study developed methods for the preparation and analysis of hairs by optical microscopy, identifying the characteristics of forensic interest that can be studied. Hair samples were used for cross-sectional and longitudinal analyses, by destructive and non-destructive methods, with permanent and non-permanent slide preparations. As a result, a low-cost methodology was obtained for forensic application to traces collected from crime scenes, from weapons used in crimes or from suspects, in order to combat crime through the probative power of physical evidence. It was found that macroscopic and microscopic characteristics of hairs can be identified which reveal attributes of the donor and, in some cases, help in understanding the dynamics of the crime. The speed of the method also allows information to be obtained within short time frames during the preliminary or investigative phase.
20

Al, Mutawa Noora Ahmad Khurshid. "Integrating behavioural analysis within the digital forensics investigation process." Thesis, University of Central Lancashire, 2018. http://clok.uclan.ac.uk/25412/.

Abstract:
This programme of research focused on incorporating Behavioural Analysis (BA) within the digital forensics investigation process. A review of previously developed digital forensics investigation models indicated a lack of sufficient consideration of the behavioural and motivational dimensions of offending, and the way in which digital evidence can be used to address these issues during the investigation process. This programme of research aimed to build on previous work by scientific researchers and investigators by developing a digital forensics investigation model which incorporates greater consideration of the behavioural and motivational implications of case-related digital evidence based on current theoretical understandings of these aspects of offending from forensic psychology. This can aid with understanding of the crime events and reconstruction, and lead to the development of more detailed models and guidelines for examining computer-facilitated interpersonal crimes. The first study employed an abductive approach to forensically analyse individual cases (real cases obtained from the Dubai Police archives) applying BA to the online Sexually Exploitative Imagery of Children (SEIC) and cyberstalking. Its aim was to investigate what BA could contribute to the digital forensics investigation of cases within these crime categories. It identified five benefits: (1) providing focus, speed and investigative directions, (2) inferring victim/offender behaviours, (3) inferring offender motivation(s), (4) identifying potential victims, and (5) eliminating suspects. This was followed by a survey study empirically examining the perceptions of national and international digital forensics practitioners regarding the use and utility of BA during the process of investigating SEIC and cyberstalking cases. The results indicated that while the majority believed that BA has potential to contribute to many aspects of digital forensics investigations, their daily investigative activities involved a limited use of this technique. The implications of the study were outlined, and emphasised the need to design a digital forensics investigation model that provides guiding steps and illustrations on how to utilise BA in digital forensics investigations. Based on the findings from the conducted studies, a digital forensics investigation model that incorporates aspects of BA was designed. It aimed to provide a pragmatic, structured, multidisciplinary approach to performing a post mortem examination, analysis, and interpretation of the content of the digital devices associated with computer-facilitated interpersonal crimes. Two comprehensive case studies were also used to illustrate the investigative importance of the model in investigating computer-facilitated interpersonal crimes.
21

Belvisi, Nicole Mariah Sharon. "Document Forensics Through Textual Analysis." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-40157.

Abstract:
This project aims to give a brief overview of the area of research called authorship analysis, with the main focus on authorship attribution and its existing methods. The second objective of this project is to test whether one of the main approaches in the field can still be applied successfully to today's new ways of communicating. The study uses multiple stylometric features to establish the authorship of a text, as well as a model based on TF-IDF.
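A minimal sketch of the TF-IDF approach mentioned above (with toy corpora, not the project's actual feature set) attributes a disputed text to the known author whose writing is most similar:

```python
# Minimal TF-IDF authorship attribution: vectorize known writings and a
# disputed text, then attribute by cosine similarity (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known = {
    "author_a": "I reckon the server fell over again, same as last week.",
    "author_b": "The incident response procedure was initiated immediately.",
}
disputed = "Server fell over again, I reckon."

# Character n-grams are a common stylometric choice for short texts.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
matrix = vectorizer.fit_transform(list(known.values()) + [disputed])

# Last row is the disputed text; compare it against each known author.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
best = max(zip(known, scores), key=lambda pair: pair[1])
print(best)  # most likely author and the similarity score
```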
22

Stelly, Christopher D. "A Domain Specific Language for Digital Forensics and Incident Response Analysis." ScholarWorks@UNO, 2019. https://scholarworks.uno.edu/td/2706.

Abstract:
One of the longstanding conceptual problems in digital forensics is the dichotomy between the need for verifiable and reproducible forensic investigations, and the lack of practical mechanisms to accomplish them. With nearly four decades of professional digital forensic practice, investigator notes are still the primary source of reproducibility information, and much of it is tied to the functions of specific, often proprietary, tools. The lack of a formal means of specification for digital forensic operations results in three major problems. Specifically, there is a critical lack of: a) standardized and automated means to scientifically verify accuracy of digital forensic tools; b) methods to reliably reproduce forensic computations (their results); and c) framework for inter-operability among forensic tools. Additionally, there is no standardized means for communicating software requirements between users, researchers and developers, resulting in a mismatch in expectations. Combined with the exponential growth in data volume and complexity of applications and systems to be investigated, all of these concerns result in major case backlogs and inherently reduce the reliability of the digital forensic analyses. This work proposes a new approach to the specification of forensic computations, such that the above concerns can be addressed on a scientific basis with a new domain specific language (DSL) called nugget. DSLs are specialized languages that aim to address the concerns of particular domains by providing practical abstractions. Successful DSLs, such as SQL, can transform an application domain by providing a standardized way for users to communicate what they need without specifying how the computation should be performed. This is the first effort to build a DSL for (digital) forensic computations with the following research goals: 1) provide an intuitive formal specification language that covers core types of forensic computations and common data types; 2) provide a mechanism to extend the language that can incorporate arbitrary computations; 3) provide a prototype execution environment that allows the fully automatic execution of the computation; 4) provide a complete, formal, and auditable log of computations that can be used to reproduce an investigation; 5) demonstrate cloud-ready processing that can match the growth in data volumes and complexity.
23

AlMarri, Saeed. "A structured approach to malware detection and analysis in digital forensics investigation." Thesis, University of Bedfordshire, 2017. http://hdl.handle.net/10547/622529.

Abstract:
Within the World Wide Web (WWW), malware is considered one of the most serious threats to system security with complex system issues caused by malware and spam. Networks and systems can be accessed and compromised by various types of malware, such as viruses, worms, Trojans, botnet and rootkits, which compromise systems through coordinated attacks. Malware often uses anti-forensic techniques to avoid detection and investigation. Moreover, the results of investigating such attacks are often ineffective and can create barriers for obtaining clear evidence due to the lack of sufficient tools and the immaturity of forensics methodology. This research addressed various complexities faced by investigators in the detection and analysis of malware. In this thesis, the author identified the need for a new approach towards malware detection that focuses on a robust framework, and proposed a solution based on an extensive literature review and market research analysis. The literature review focussed on the different trials and techniques in malware detection to identify the parameters for developing a solution design, while market research was carried out to understand the precise nature of the current problem. The author termed the new approaches and development of the new framework the triple-tier centralised online real-time environment (tri-CORE) malware analysis (TCMA). The tiers come from three distinctive phases of detection and analysis where the entire research pattern is divided into three different domains. The tiers are the malware acquisition function, detection and analysis, and the database operational function. This framework design will contribute to the field of computer forensics by making the investigative process more effective and efficient. By integrating a hybrid method for malware detection, associated limitations with both static and dynamic methods are eliminated. This aids forensics experts with carrying out quick, investigatory processes to detect the behaviour of the malware and its related elements. The proposed framework will help to ensure system confidentiality, integrity, availability and accountability. The current research also focussed on a prototype (artefact) that was developed in favour of a different approach in digital forensics and malware detection methods. As such, a new Toolkit was designed and implemented, which is based on a simple architectural structure and built from open source software that can help investigators develop the skills to critically respond to current cyber incidents and analyses.
24

Homem, Irvin. "Towards Automation in Digital Investigations : Seeking Efficiency in Digital Forensics in Mobile and Cloud Environments." Licentiate thesis, Stockholms universitet, Institutionen för data- och systemvetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-130742.

Abstract:
Cybercrime and related malicious activity in our increasingly digital world has become more prevalent and sophisticated, evading traditional security mechanisms. Digital forensics has been proposed to help investigate, understand and eventually mitigate such attacks. The practice of digital forensics, however, is still fraught with various challenges. Some of the most prominent of these challenges include the increasing amounts of data and the diversity of digital evidence sources appearing in digital investigations. Mobile devices and cloud infrastructures are an interesting specimen, as they inherently exhibit these challenging circumstances and are becoming more prevalent in digital investigations today. Additionally they embody further characteristics such as large volumes of data from multiple sources, dynamic sharing of resources, limited individual device capabilities and the presence of sensitive data. These combined set of circumstances make digital investigations in mobile and cloud environments particularly challenging. This is not aided by the fact that digital forensics today still involves manual, time consuming tasks within the processes of identifying evidence, performing evidence acquisition and correlating multiple diverse sources of evidence in the analysis phase. Furthermore, industry standard tools developed are largely evidence-oriented, have limited support for evidence integration and only automate certain precursory tasks, such as indexing and text searching. In this study, efficiency, in the form of reducing the time and human labour effort expended, is sought after in digital investigations in highly networked environments through the automation of certain activities in the digital forensic process. To this end requirements are outlined and an architecture designed for an automated system that performs digital forensics in highly networked mobile and cloud environments. Part of the remote evidence acquisition activity of this architecture is built and tested on several mobile devices in terms of speed and reliability. A method for integrating multiple diverse evidence sources in an automated manner, supporting correlation and automated reasoning is developed and tested. Finally the proposed architecture is reviewed and enhancements proposed in order to further automate the architecture by introducing decentralization particularly within the storage and processing functionality. This decentralization also improves machine to machine communication supporting several digital investigation processes enabled by the architecture through harnessing the properties of various peer-to-peer overlays. Remote evidence acquisition helps to improve the efficiency (time and effort involved) in digital investigations by removing the need for proximity to the evidence. Experiments show that a single TCP connection client-server paradigm does not offer the required scalability and reliability for remote evidence acquisition and that a multi-TCP connection paradigm is required. The automated integration, correlation and reasoning on multiple diverse evidence sources demonstrated in the experiments improves speed and reduces the human effort needed in the analysis phase by removing the need for time-consuming manual correlation. 
Finally, informed by published scientific literature, the proposed enhancements for further decentralizing the Live Evidence Information Aggregator (LEIA) architecture offer a platform for increased machine-to-machine communication thereby enabling automation and reducing the need for manual human intervention.
25

Sylve, Joseph T. "Towards Real-Time Volatile Memory Forensics: Frameworks, Methods, and Analysis." ScholarWorks@UNO, 2017. http://scholarworks.uno.edu/td/2359.

Abstract:
Memory forensics (or memory analysis) is a relatively new approach to digital forensics that deals exclusively with the acquisition and analysis of volatile system memory. Because each function performed by an operating system must utilize system memory, analysis of this memory can often lead to a treasure trove of useful information for forensic analysts and incident responders. Today’s forensic investigators are often subject to large case backlogs, and incident responders must be able to quickly identify the source and cause of security breaches. In both these cases time is a critical factor. Unfortunately, today’s memory analysis tools can take many minutes or even hours to perform even simple analysis tasks. This problem will only become more prevalent as RAM prices continue to drop and systems with very large amounts of RAM become more common. Due to the volatile nature of data resident in system RAM it is also desirable for investigators to be able to access non-volatile copies of system RAM that may exist on a device’s hard drive. Such copies are often created by operating systems when a system is being suspended and placed into a power safe mode. This dissertation presents work on improving the speed of memory analysis and the access to non-volatile copies of system RAM. Specifically, we propose a novel memory analysis framework that can provide access to valuable artifacts orders of magnitude faster than existing tools. We also propose two new analysis techniques that can provide faster and more resilient access to important forensic artifacts. Further, we present the first analysis of the hibernation file format used in modern versions of Windows. This work allows access to evidence in non-volatile copies of system RAM that were not previously able to be analyzed. Finally, we propose future enhancements to our memory analysis framework that should address limitations with the current design. Taken together, this dissertation represents substantial work towards advancing the field of memory forensics.
26

Silva, Ewerton Almeida 1988. "Identificação de manipulações de cópia-colagem em imagens digitais." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275632.

Abstract:
Advisor: Anderson de Rezende Rocha
Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação
Resumo: Neste trabalho, nós investigamos duas abordagens para detecção de manipulações de Cópia-colagem (Copy-move Forgery) em imagens digitais. A primeira abordagem é baseada no algoritmo PatchMatch Generalizado [4], cuja proposta é encontrar correspondências de patches (blocos de pixels de tamanho definido) em uma ou mais imagens. A nossa abordagem consiste na aplicação do PatchMatch Generalizado em uma dada imagem com o propósito de encontrar, para cada patch desta, um conjunto de patches similares com base nas distâncias de seus histogramas. Em seguida, nós verificamos as correspondências de cada patch para decidir se eles são segmentos de uma região duplicada. A segunda abordagem, que consiste em nossa principal contribuição, é baseada em um processo de Votação e Análise Multiescala da imagem. Dada uma imagem suspeita, extraímos pontos de interesse robustos a operações de escala e rotação, encontramos correspondências entre eles e os agrupamos em regiões com base em certas restrições geométricas, tais como a distância física e a inclinação da reta que os liga. Após a aplicação das restrições geométricas, criamos uma pirâmide multiescala que representará o espaço de escalas da imagem. Nós examinamos, em cada imagem, os grupos criados usando um descritor robusto a rotações, redimensionamentos e compressões. Este processo diminui o domínio de busca de regiões duplicadas e gera um mapa de detecção para cada escala. A decisão final é dada a partir de uma votação entre todos os mapas, na qual um segmento é considerado duplicado se este assim o é na maioria das escalas. Nós validamos ambos os métodos em uma base de imagens que construímos. A base _e composta por 108 clonagens originais e com elevado grau de realismo. Comparamos os métodos propostos com outros do estado da arte nessa mesma base de imagens
Abstract: In this work, we investigate two approaches to Copy-move Forgery detection in digital images. The first approach relies on the Generalized PatchMatch algorithm [4], which aims at finding patch correspondences in one or more images. Our approach consists of applying the Generalized PatchMatch algorithm to a given image in order to obtain, for each of its patches, a set of similar patches based on their histogram distances. Next, we check the correspondences of each patch to decide whether or not they are portions of a duplicated region. Our second approach is based on a Voting and Multiscale Analysis process of an image. Given a suspicious image, we extract interest points robust to scale and rotation transformations, and we find possible correspondences among them. Next, we group the corresponding points into regions considering geometric constraints, such as the physical distance and the inclination of the line between points of interest. After that, we construct a multiscale pyramid to represent the image scale-space. In each image, we examine the created groups using a descriptor robust to rotation, scaling and compression. This process decreases the search space of duplicated regions and yields a detection map for each scale. The final decision depends on a voting among all the detection maps, in which a pixel is considered part of a manipulation if it is marked as such in the majority of the pyramid scales. We validate both methods using a dataset we have built, comprising 108 original and highly realistic clonings. We compare the proposed methods to others from the state of the art on this cloning dataset.
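To make the first method's core idea concrete, the sketch below is a much-simplified, brute-force stand-in for the PatchMatch-based search: it compares normalized patch histograms and reports pairs that are nearly identical but spatially distant. Patch size, stride and thresholds are illustrative assumptions.

```python
import numpy as np

def copy_move_candidates(img: np.ndarray, patch: int = 16, stride: int = 8,
                         bins: int = 32, dist_thresh: float = 0.05,
                         min_offset: int = 24):
    """Flag pairs of patches with near-identical intensity histograms.

    img is a 2-D grayscale array with values in [0, 255]. This is a
    brute-force stand-in for the PatchMatch-based search in the abstract.
    """
    coords, hists = [], []
    for y in range(0, img.shape[0] - patch + 1, stride):
        for x in range(0, img.shape[1] - patch + 1, stride):
            h, _ = np.histogram(img[y:y + patch, x:x + patch],
                                bins=bins, range=(0, 255))
            coords.append((y, x))
            hists.append(h / h.sum())          # normalized histogram
    hists = np.asarray(hists)
    pairs = []
    for i in range(len(coords)):
        d = np.abs(hists[i + 1:] - hists[i]).sum(axis=1)   # L1 distance
        for j in np.nonzero(d < dist_thresh)[0] + i + 1:
            dy = coords[j][0] - coords[i][0]
            dx = coords[j][1] - coords[i][1]
            if dy * dy + dx * dx >= min_offset ** 2:  # skip near neighbours
                pairs.append((coords[i], coords[j]))
    return pairs
```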
Master's degree
Computer Science
Master in Computer Science
APA, Harvard, Vancouver, ISO, and other styles
27

Bonomi, Mattia. "Facial-based Analysis Tools: Engagement Measurements and Forensics Applications." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/271342.

Full text
Abstract:
The latest advancements in technology have led to the easy acquisition and spreading of multi-dimensional multimedia content, e.g. videos, which in many cases depict human faces. From such videos, valuable information describing the intrinsic characteristics of the recorded user can be retrieved: the features extracted from the facial patch are relevant descriptors that allow for the measurement of a subject's emotional status or the identification of synthetic characters. One of the emerging challenges is the development of contactless approaches based on face analysis that aim to measure the emotional status of the subject without placing sensors that limit or bias his experience. This raises even more interest in the context of Quality of Experience (QoE) measurement, i.e. the measurement of the user's emotional status when subjected to a multimedia content, since it allows for retrieving the overall acceptability of the content as perceived by the end user. Measuring the impact of a given content on the user can have many implications from both the content producer's and the end user's perspectives. For this reason, we pursue the QoE assessment of a user watching multimedia stimuli, i.e. 3D movies, through the analysis of his facial features acquired by means of contactless approaches. More specifically, the user's Heart Rate (HR) was retrieved using computer vision techniques applied to the facial recording of the subject and then analysed in order to compute the level of engagement. We show that the proposed framework is effective for long video sequences, being robust to facial movements and illumination changes. We validate it on a dataset of 64 sequences in which users observe 3D movies selected to induce variations in their emotional status. On the one hand, understanding the interaction between the user's perception of the content and his cognitive-emotional aspects offers many opportunities to content producers, who may influence people's emotional statuses according to needs that can be driven by political, social, or business interests. On the other hand, the end user must be aware of the authenticity of the content being watched: advancements in computer rendering have allowed the spreading of fake subjects in videos. Because of this, as a second challenge we target the identification of CG characters in videos by applying two different approaches. We first exploit the idea that fake characters do not present any pulse rate signal, while a human's pulse rate is expressed by a sinusoidal signal. The application of computer vision techniques to a facial video allows for the contactless estimation of the subject's HR, thus leading to the identification of signals that lack strong sinusoidality, which represent virtual humans. The proposed pipeline allows for fully automated discrimination, validated on a dataset consisting of 104 videos. Secondly, we make use of facial spatio-temporal texture dynamics that reveal the artefacts introduced by computer rendering techniques when creating a manipulation, e.g. face swapping, in videos depicting human faces. To do so, we consider multiple temporal video segments on which we estimate multi-dimensional (spatial and temporal) texture features. A binary decision based on the joint analysis of such features is applied to strengthen the classification accuracy. This is achieved through the use of Local Derivative Patterns on Three Orthogonal Planes (LDP-TOP).
Experimental analyses on state-of-the-art datasets of manipulated videos show the discriminative power of such descriptors in separating real and manipulated sequences and identifying the creation method used. The main finding of this thesis is the relevance of facial features in describing intrinsic characteristics of humans. These can be used to retrieve significant information like the physiological response to multimedia stimuli or the authenticity of the human being itself. The application of the proposed approaches on benchmark datasets also returned good results, thus demonstrating real advancements in this research field. In addition, these methods can be extended to different practical applications, from autonomous driving safety checks to the identification of spoofing attacks, and from medical check-ups during sports to users' engagement measurement when watching advertising. Because of this, we encourage further investigations in this direction, in order to improve the robustness of the methods and thus allow for their application to increasingly challenging scenarios.
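To make the contactless HR estimation step concrete, here is a minimal sketch of the classic rPPG idea: take the mean green-channel intensity of the facial region over time and pick the dominant frequency in a plausible pulse band. It is not the thesis's exact pipeline, which also handles facial movement and illumination changes.

```python
import numpy as np

def estimate_heart_rate(green_means: np.ndarray, fps: float) -> float:
    """Estimate pulse rate (BPM) from the mean green-channel intensity of
    a facial region over time, assuming the classic rPPG observation that
    blood volume changes modulate skin colour."""
    signal = green_means - green_means.mean()      # remove DC component
    signal = signal * np.hanning(len(signal))      # taper to reduce leakage
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)         # roughly 42-240 BPM
    peak = freqs[band][np.argmax(spectrum[band])]
    return float(peak * 60.0)

# Synthetic check: a 1.2 Hz (72 BPM) signal sampled at 30 fps for 20 s
t = np.arange(0, 20, 1 / 30)
fake = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.1, t.size)
print(round(estimate_heart_rate(fake, fps=30)))  # ~72
```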
APA, Harvard, Vancouver, ISO, and other styles
28

Furuseth, Andreas Grytting. "Digital Forensics: Methods and tools for retrieval and analysis of security credentials and hidden data." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9264.

Full text
Abstract:

This master's thesis proposes digital forensic methods for the retrieval and analysis of steganography during a digital investigation. The proposed methods are examined using scenarios. From the examination of steganography and these cases, it is concluded that the recommended methods can be automated and increase the chances that an investigator will detect steganography.
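The abstract does not spell out the detection methods, so as one classical example of automatable steganalysis, the sketch below computes the chi-square "pair of values" statistic often used against LSB embedding (in the style of Westfeld and Pfitzmann). It is a generic illustration, not the method recommended in the thesis.

```python
import numpy as np

def lsb_pair_chi_square(pixels: np.ndarray) -> float:
    """Chi-square statistic of the pair-of-values test: LSB embedding of
    random bits tends to equalize the counts of values 2k and 2k+1, so a
    statistic close to zero is suspicious for a natural image."""
    counts = np.bincount(pixels.ravel(), minlength=256).astype(float)
    stat = 0.0
    for k in range(128):
        even, odd = counts[2 * k], counts[2 * k + 1]
        expected = (even + odd) / 2.0
        if expected > 0:
            stat += ((even - expected) ** 2 + (odd - expected) ** 2) / expected
    return stat  # compare against a chi-square threshold with 127 dof
```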

APA, Harvard, Vancouver, ISO, and other styles
29

Cunha, Lucas Marques da. "Reconhecimento de adulterações em imagens digitais: uma abordagem passiva." Universidade Federal da Paraíba, 2016. http://tede.biblioteca.ufpb.br:8080/handle/tede/9270.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
The creation and marketing of image editing software have allowed ordinary people to perform any kind of manipulation in digital images. In a judicial context, where authenticity and data integrity are crucial, techniques that can ensure such attributes are needed. Forensic analysis of digital images aims to use computational scientific methods, such as analysis of the device sensor and of JPEG (Joint Photographic Experts Group) artifacts, to recognize the presence or absence of such attributes. This work presents a passive approach to recognizing tampering in digital images with and without JPEG compression, using two techniques. The first is based on analysis of the 4-neighborhood of each pixel, which may be classified as interpolated or not. From this analysis, we obtain information about the CFA (Color Filter Array) pattern, so as to investigate the authenticity and integrity of images with low or no compression according to the misclassification of pixels. The second is based on analysis of inconsistencies in the BAG (Block Artifact Grid) pattern of highly compressed images produced by tampering techniques such as composition and cloning; the image's BAG is the demarcation of its JPEG blocks. Furthermore, segmentation techniques were defined for precise localization of the tampered region. The method selects one of the two techniques according to the image's compression ratio. The analysis is performed in terms of accuracy, sensitivity, specificity, and precision. Accuracy rates ranged from 85.1% to 95.4%, with precision rates between 41.7% and 74.3%. Sensitivity rates ranged from 32.3% to 82.2%, and specificity from 85.9% to 99.2%, on an image database composed of 960 images interpolated by different algorithms and tampered with by composition and cloning. Methods described in the literature have a limited scope regarding the image formats tested and the way they gauge their effectiveness. The proposed approach differs from these techniques by presenting a wider scope in the mentioned respects, covering images with and without compression, and by assessing efficiency with metrics able to test the hypotheses raised during the research.
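A minimal sketch of the first technique's core signal: pixels produced by CFA demosaicing are well predicted by their 4-neighborhood, so unexpectedly large prediction residuals in an otherwise interpolated lattice can flag tampering. The thresholding and lattice analysis of the actual method are omitted here.

```python
import numpy as np

def interpolation_residual(channel: np.ndarray) -> np.ndarray:
    """Residual between each pixel and the mean of its 4-neighbourhood.

    Pixels synthesised by CFA demosaicing tend to be well predicted by
    their neighbours (small residual); tampered regions often break the
    periodic CFA pattern. A simplified stand-in for the full method.
    """
    c = channel.astype(float)
    pred = np.zeros_like(c)
    pred[1:-1, 1:-1] = (c[:-2, 1:-1] + c[2:, 1:-1] +
                        c[1:-1, :-2] + c[1:-1, 2:]) / 4.0
    return np.abs(c - pred)

# Pixels with a residual below a small threshold could be labelled
# "interpolated"; deviations from the expected CFA lattice flag tampering.
```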
APA, Harvard, Vancouver, ISO, and other styles
30

Lucente, Leonardo. "Progettazione e Sviluppo di una Web Application per l’Analisi di Reti Sociali in ambito Forense." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22981/.

Full text
Abstract:
The objective of this work is to design and develop a web application that can be used to conduct social analyses by processing data from different sources, namely Facebook, Twitter and mailboxes. To make these analyses possible, we focused our attention not only on the visual construction of a social network, but also on the diffusion of content over time and space and on its textual extraction. We started by defining the concept of a social network and went on to highlight how social network analysis can support forensic analysis, presenting some techniques specific to this field. Finally, we compared some forensic analysis tools currently on the market. This overview allowed us to define our field of interest and the motivations behind the project, which led to the development of a web application able to analyse and visually represent the input data. We define the architecture used and present a real use case using data from my own social channels.
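A minimal sketch of the kind of network construction such an application performs, using the networkx library. The message records are a hypothetical simplification of what the source-specific parsers (Facebook, Twitter, mailbox) would produce.

```python
import networkx as nx

# Hypothetical simplified records: (sender, recipient) pairs produced by
# source-specific parsing of social and mailbox exports.
messages = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
            ("dave", "alice"), ("dave", "bob")]

G = nx.Graph()
for sender, recipient in messages:
    # accumulate an edge weight per exchanged message
    w = G.get_edge_data(sender, recipient, {"weight": 0})["weight"]
    G.add_edge(sender, recipient, weight=w + 1)

# Centrality highlights the actors most embedded in the network,
# a typical starting point for forensic social network analysis.
for node, score in sorted(nx.degree_centrality(G).items(),
                          key=lambda kv: -kv[1]):
    print(node, round(score, 2))
```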
APA, Harvard, Vancouver, ISO, and other styles
31

Canovas, Thorsell Roberto. "Integritet av IT-forensiska verktyg för automatisk analys." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-20166.

Full text
Abstract:
Cybercrime is on the rise in society, and the Swedish Police is facing new challenges in identifying criminals. More tools and services are becoming automated, and this also applies to the tools that the Swedish Police uses. One of the challenges is the enormous amount of data that must be processed and analyzed during investigations. The tools used are almost always third-party programs, and IT forensics examiners need to rely on the organizations that make the software. This study aims to evaluate two different tools with respect to how they identify and present artifacts. The study is conducted in collaboration with the Police Authority at the Regional IT Crime Center West – Skövde, and hopes to bring new insights and knowledge about the tools on which the comparison is based and, with the help of that knowledge, to assess the integrity of the tools. The result of the study is that the tools present data with preserved integrity.
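A comparison of this kind can be approximated by hashing every artifact each tool exports and diffing the results; the sketch below shows that idea. It assumes each tool can export its recovered artifacts to a directory tree, which may not match how every commercial forensic tool actually reports results.

```python
import hashlib
from pathlib import Path

def artifact_digests(root: str) -> dict[str, str]:
    """Map relative file paths under an export directory to SHA-256 digests."""
    digests = {}
    base = Path(root)
    for p in sorted(base.rglob("*")):
        if p.is_file():
            digests[str(p.relative_to(base))] = \
                hashlib.sha256(p.read_bytes()).hexdigest()
    return digests

def compare_exports(tool_a: str, tool_b: str) -> None:
    """Report artifacts only one tool recovered, or that differ in content."""
    a, b = artifact_digests(tool_a), artifact_digests(tool_b)
    for path in sorted(set(a) | set(b)):
        if path not in a:
            print(f"only tool B recovered: {path}")
        elif path not in b:
            print(f"only tool A recovered: {path}")
        elif a[path] != b[path]:
            print(f"content differs: {path}")
```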
APA, Harvard, Vancouver, ISO, and other styles
32

Federici, Corrado <1965>. "The twofold role of Cloud Computing in Digital Forensics: target of investigations and helping hand to evidence analysis." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6694/.

Full text
Abstract:
This PhD thesis discusses the impact of Cloud Computing infrastructures on Digital Forensics in the twofold role of target of investigations and helping hand to investigators. The Cloud offers cheap and almost limitless computing power and storage space, which can be leveraged to commit either new or old crimes and to host related traces. Conversely, the Cloud can help forensic examiners find clues better and earlier than traditional analysis applications, thanks to its dramatically improved evidence processing capabilities. In both cases, a new arsenal of software tools needs to be made available. The development of this novel weaponry, and its technical and legal implications from the point of view of the repeatability of technical assessments, is discussed throughout the following pages and constitutes the unprecedented contribution of this work.
APA, Harvard, Vancouver, ISO, and other styles
33

Qaisi, Ahmed Abdulrheem Jerribi. "Network Forensics and Log Files Analysis : A Novel Approach to Building a Digital Evidence Bag and Its Own Processing Tool." Thesis, University of Canterbury. Computer Science and Software Engineering, 2011. http://hdl.handle.net/10092/5999.

Full text
Abstract:
Intrusion Detection System (IDS) tools are deployed within networks to monitor data that is transmitted to particular destinations, such as MySQL or Oracle databases or log files. The data is normally dumped to these destinations without a forensically standard structure. When digital evidence is needed, forensic specialists are required to analyse a very large volume of data. Even though forensic tools can be utilised, most of this process has to be done manually, consuming time and resources. In this research, we aim to address this issue by combining several existing tools to archive the original IDS data into a new container (Digital Evidence Bag) that has a structure based upon standard forensic processes. The aim is to develop a method to improve the current IDS database function in a forensic manner. This database will be optimised for future forensic analysis. Since evidence validity is always an issue, a secondary aim of this research is to develop a new monitoring scheme, to provide the necessary evidence to prove that an attacker had surveyed the network prior to the attack. To achieve this, we will set up a network that will be monitored by multiple IDSs. Open-source tools will be used to carry out input validation attacks against the network, including SQL injection. We will design a new tool to obtain the original data in order to store it within the proposed DEB. This tool will collect the data from the several databases of the different IDSs. We will assume that the IDS itself has not been compromised.
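As a sketch of the Digital Evidence Bag idea described above, wrapping raw IDS records in a container with integrity metadata, the following code stores records alongside a manifest carrying a SHA-256 digest and chain-of-custody fields. The layout and field names are illustrative assumptions, not the thesis's actual DEB format.

```python
import hashlib
import json
import time
from pathlib import Path

def add_to_evidence_bag(bag_dir: str, source: str, records: list[str],
                        collector: str) -> str:
    """Store raw IDS records in a simple evidence-bag layout: the payload
    plus a manifest holding a SHA-256 digest and chain-of-custody notes."""
    bag = Path(bag_dir)
    bag.mkdir(parents=True, exist_ok=True)
    payload = ("\n".join(records) + "\n").encode()
    digest = hashlib.sha256(payload).hexdigest()
    (bag / f"{digest[:12]}.log").write_bytes(payload)
    manifest = {
        "source": source,            # e.g. which IDS database the rows came from
        "collected_by": collector,
        "collected_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "sha256": digest,
        "record_count": len(records),
    }
    (bag / f"{digest[:12]}.manifest.json").write_text(
        json.dumps(manifest, indent=2))
    return digest
```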
APA, Harvard, Vancouver, ISO, and other styles
34

Taubmann, Benjamin [author], Hans P. [academic supervisor] Reiser, and Nuno [academic supervisor] Santos. "Improving Digital Forensics and Incident Analysis in Production Environments by Using Virtual Machine Introspection / Benjamin Taubmann ; Hans P. Reiser, Nuno Santos." Passau : Universität Passau, 2020. http://d-nb.info/1216330506/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Gorbov, Sergey. "Practical Application of Fast Disk Analysis for Selective Data Acquisition." ScholarWorks@UNO, 2016. http://scholarworks.uno.edu/td/2230.

Full text
Abstract:
Using a forensic imager to produce a copy of the storage is a common practice. Due to the large volumes of modern disks, imaging may impose severe time overhead, which ultimately delays the investigation process. We propose automated disk analysis techniques that precisely identify regions on the disk that contain data. We also developed a high-performance imager that produces AFFv3 images at rates exceeding 300 MB/s. Using multiple disk analysis strategies, we can analyze a disk within a few minutes and yet reduce the imaging time by many hours. Partial AFFv3 images produced by our imager can be analyzed by existing digital forensics tools, which makes our approach easy to incorporate into the workflow of practicing forensic investigators. The proposed approach proves feasible in forensic environments where time is a critical constraint, as it provides a significant performance boost, which facilitates faster investigation turnaround times and reduces case backlogs.
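One simple strategy in the spirit of the abstract, bounding imaging work by first finding the regions that actually contain data, is to skip all-zero blocks; the sketch below yields the non-zero extents of a raw image. The thesis's analysis strategies are more precise (e.g. file-system aware), so treat this purely as an illustration.

```python
def nonzero_extents(image_path: str, block_size: int = 1 << 20):
    """Yield (offset, length) runs of blocks containing any non-zero byte."""
    zero = bytes(block_size)
    run_start = None
    offset = 0
    with open(image_path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            is_data = block != zero[:len(block)]
            if is_data and run_start is None:
                run_start = offset          # a data run begins here
            elif not is_data and run_start is not None:
                yield (run_start, offset - run_start)
                run_start = None
            offset += len(block)
    if run_start is not None:
        yield (run_start, offset - run_start)

# An imager could then acquire only these extents and record the gaps
# as known-zero, rather than reading the whole device end to end.
```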
APA, Harvard, Vancouver, ISO, and other styles
36

Ekfeldt, Jonas. "Om informationstekniskt bevis." Doctoral thesis, Stockholms universitet, Juridiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-125286.

Full text
Abstract:
Information technology evidence consists of a mix of representations of various applications of digital electronic equipment, and can be brought to the fore in all contexts that result in legal decisions. The occurrence of such evidence in legal proceedings, and in other legal decision-making, is a phenomenon previously not researched within legal science in Sweden. The thesis examines some of the consequences resulting from the occurrence of information technology evidence within Swedish practical legal and judicial decision-making. The thesis has three main focal points. The first consists of a broad identification of legal problems that information technology evidence entails. The second focal point examines the legal terminology associated with information technology evidence. The third focal point consists of identifying sources of error pertaining to information technology evidence from the adjudicator’s point of view. The examination utilizes a Swedish legal viewpoint from the perspective of public trust in the courts. Conclusions include a number of legal problems in several areas, primarily in regard to the knowledge of the adjudicator, the qualification of different means of evidence and the consequences of representational evidence upon its evaluation. In order to properly evaluate information technology evidence, judges are – to a greater extent than for other types of evidence – in need of (objective) knowledge supplementary to that provided by parties and their witnesses and experts. Furthermore, the current Swedish evidence terminology has been identified as a complex of problems in and of itself. The thesis includes suggestions on certain additions to this terminology. Several sources of error have been identified as being attributable to different procedures associated with the handling of information technology evidence, in particular in relation to computer forensic investigations. There is a general need for future research focused on matters regarding both standards of proof for and evaluation of information technology evidence. In addition, a need for deeper legal scientific studies aimed at evidence theory has been identified, inter alia regarding the extent to which frequency theories are applicable with respect to information technology evidence. The need for related further discussion of emerging areas such as negative evidence and predictive evidence is foreseen.
APA, Harvard, Vancouver, ISO, and other styles
37

Bouchaud, François. "Analyse forensique des écosystèmes intelligents communicants de l'internet des objets." Thesis, Lille, 2021. http://www.theses.fr/2021LILUI014.

Full text
Abstract:
With the development of the Internet of Things, searching for data in a digital environment is an increasingly difficult task for the forensic investigator. It is a real challenge, especially given the heterogeneity of the connected objects. There is a lack of standardization in communication architectures and data management policies, accompanied by dependencies between connected ecosystems, especially through hidden links and fragmented information. In this thesis, we suggest adjusting the traditional approach of digital investigation to the constraints of the Internet of Things. We develop methodologies and tools to understand and analyze the connected environment. We assume that the crime scene is a connected whole and not an aggregate of independent digital objects. It contains key data for understanding and contextualizing a past event or phenomenon as evidence for the criminal trial. Digital forensics is considered to be the “application of science to the identification, collection, examination, and analysis of data while preserving the integrity of the information and maintaining a strict chain of custody for the data” (National Institute of Standards and Technology). Faced with a crime scene, the investigator seeks to understand the criminal event. He examines the data stored in the physical medium and/or in a remote part of the cloud. Our work develops a process of rapid identification of the phenomenon according to four phases: detection, localization, object recognition and information cross-checking. It is enriched with radio signature search tools: a single sensor and a multi-sensor mesh network. This approach is built around the problem of apprehending a multiform connected environment, containing devices that are not always visible or identifiable during a field approach. We integrate into our study the strategy of equipment collection. The challenge lies in the ability to extract one or more connected objects, without compromising the stored data, and to place them in a controlled and secure environment. The object is maintained in a state that guarantees the non-alteration or loss of data. The study includes a first phase of understanding the physical environment and its dependencies. It seeks to determine the mechanisms of information migration to online platforms and to isolate groups of objects by intelligently breaking the connections. Devices are extracted, then packaged and sealed according to their technical characteristics and the connected infrastructure. We then deepen the exploitation of the information collected, using forensic methods. The data is analyzed along temporal, spatial and contextual axes. We also propose a classification and a prioritization of the connected structure according to the characteristics of the desired data. The work gives a reading of the life cycle of the data within the Internet of Things infrastructure. In a prospective approach, we deepen the question of the fine identification of a connected object according to its hardware and software characteristics. The acoustic signature of electronics appears to be a relevant physical property in the study of equipment. This feature completes our range of tools for the identification of connected objects.
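For the radio-based localization phase, a common first-order model is log-distance path loss; the sketch below converts an RSSI reading to an approximate distance. The reference power and path-loss exponent are illustrative assumptions that require per-site calibration, and the thesis's sensor network is considerably more sophisticated than a single reading.

```python
def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -40.0,
                     path_loss_exponent: float = 2.5) -> float:
    """Log-distance path-loss model: d = 10 ** ((P0 - RSSI) / (10 * n)),
    where P0 is the RSSI measured at 1 m and n the path-loss exponent.
    Parameter values here are purely illustrative; indoor environments
    need per-site calibration."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# A reading of -65 dBm with these parameters suggests roughly 10 m:
print(round(rssi_to_distance(-65.0), 1))
```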
APA, Harvard, Vancouver, ISO, and other styles
38

(11218029), Herschel R. Bowling. "A Forensic Analysis of Microsoft Teams." Thesis, 2021.

Find full text
Abstract:
Digital forensic investigators have a duty to understand the relevant components of the cases that they work. However, with the constant evolution of technologies, and the release of new platforms and programs, it is impossible for an investigator to be familiar with every application they encounter. It can also be difficult to know how forensic tools handle certain applications. This is why forensic researchers study and document new and emerging technologies, platforms, and applications, so that investigators have resources to utilize whenever they encounter an unfamiliar element in a case.

In 2017, Microsoft released a new communication platform, Microsoft Teams (Koenigsbauer, 2017). Due to the application’s relatively young age, there has not been any significant forensic research relating to Microsoft Teams. As of April 2021, this platform had 145 million daily active users (Wright, 2021), nearly double the number of daily users at the same time in 2020 (Zaveri, 2020). This rapid growth is attributed in part to the need to work from home due to the COVID-19 virus (Zaveri, 2020). Given the size of its user base, it seems likely that forensic investigators will encounter cases where Microsoft Teams is a relevant component but may not have the knowledge required to efficiently investigate the platform.

To help fill this gap, an analysis of data stored at rest by Microsoft Teams was conducted, both on the Windows 10 operating system and on mobile operating systems such as iOS and Android. Basic functionality such as messaging, sharing files, participating in video conferences, and other functionality that Teams provides was exercised in an isolated testing environment. These devices were analyzed with both automated forensic tools and non-automated investigation. Specifically, Cellebrite UFED was used for the mobile devices and Magnet AXIOM for the Windows device. Manual, non-automated investigation recovered, at least partially, the majority of artifacts across all three devices. In this study, the forensic tools used did not recover many of the artifacts that were found with manual investigation. These discovered artifacts, and the results of the tools, are documented in the hopes of aiding future investigations.
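A first manual step in a study like this is to inventory the files an application leaves at rest, with sizes, timestamps and hashes that later tool output can be checked against. The sketch below does so for a directory tree; the Teams path shown is the location used by the classic desktop client and should be verified for the platform and version under study.

```python
import csv
import hashlib
from datetime import datetime, timezone
from pathlib import Path

# Assumed location for the classic Windows desktop client; actual Teams
# data paths vary by client version and platform.
TEAMS_DIR = Path.home() / "AppData/Roaming/Microsoft/Teams"

def inventory(root: Path, out_csv: str) -> None:
    """Record path, size, modification time and SHA-256 for every file,
    so manual findings can be cross-checked against tool output."""
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["path", "size", "mtime_utc", "sha256"])
        for p in root.rglob("*"):
            if p.is_file():
                mtime = datetime.fromtimestamp(p.stat().st_mtime,
                                               tz=timezone.utc)
                writer.writerow([str(p), p.stat().st_size,
                                 mtime.isoformat(),
                                 hashlib.sha256(p.read_bytes()).hexdigest()])
```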

APA, Harvard, Vancouver, ISO, and other styles
39

(11205891), Tanvi Milind Gandhi. "Forensic Analysis of GroupMe on Android and iOS Smartphones." Thesis, 2021.

Find full text
Abstract:
The growing popularity of instant messaging has led to the conception of several new applications over the span of the past decade. This has opened up an attack surface for cybercriminals to target susceptible app users. GroupMe is a free IM app widely used by students and so far, no comprehensive forensic analysis has been performed to aid forensic practitioners in recovering evidence from GroupMe on smartphones. This research performs a detailed analysis of the digital artifacts left by the app on Android and iOS devices. This was achieved by installing the app on two mobile phones (Samsung Galaxy S7 Edge and iPhone 6), and identifying each artifact created by performing a series of actions in the app ranging from sending texts, to sharing images and documents, along with their location. Using Cellebrite UFED and Magnet AXIOM, a significant number of artifacts were accurately recovered mainly from the “GroupMe.sqlite” and “GroupMe.sqlite-wal” databases. Out of the 335 artifacts populated on the iPhone, 317 were correctly recovered by both UFED and AXIOM, resulting in an accuracy of 94.62%. No GroupMe related artifacts could be recovered from the Android device. This was due to several physical imaging and rooting limitations imposed by the Samsung SM-935A model, which was used during the study.
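Since the recovered artifacts live in SQLite databases, a manual examination can start with Python's sqlite3 module, as sketched below. The table and column names in the query are hypothetical placeholders; the real schema must be enumerated first, and the -wal file should be preserved next to the database or recent records may be missed.

```python
import sqlite3

def dump_messages(db_path: str) -> None:
    """Walk a recovered GroupMe SQLite database. The table and column
    names below are placeholders: enumerate the real, app-version
    dependent schema before querying."""
    con = sqlite3.connect(db_path)
    try:
        # 1. Discover the actual schema first.
        tables = [r[0] for r in con.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
        print("tables:", tables)
        # 2. Hypothetical query; adjust names to the discovered schema.
        try:
            for row in con.execute(
                    "SELECT sender, created_at, text FROM messages "
                    "ORDER BY created_at"):
                print(row)
        except sqlite3.OperationalError as exc:
            print("adjust the query to the real schema:", exc)
    finally:
        con.close()

# Analyse a copy that includes GroupMe.sqlite-wal alongside the database,
# or the most recent messages held only in the write-ahead log are lost.
```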
APA, Harvard, Vancouver, ISO, and other styles
40

Adedayo, Oluwasola Mary. "Reconstruction in Database Forensics." Thesis, 2015. http://hdl.handle.net/2263/43777.

Full text
Abstract:
The increasing use of databases for the storage of critical and sensitive information in many organizations has led to an increase in the rate at which databases are exploited in computer crimes. Databases are often manipulated to facilitate crimes and as such are usually of interest during many investigations, as useful information relevant to the investigation can be found therein. The branch of digital forensics that deals with the identification, preservation, analysis and presentation of digital evidence from databases is known as database forensics. Despite the large amount of information that can be retrieved from databases and the amount of research that has been done on various aspects of databases, database security and digital forensics in general, very little has been done on database forensics. Databases have also been excluded from traditional digital investigations until very recently. This can be attributed to the inherent complexities of databases and the lack of knowledge on how the information contained in a database can be retrieved, especially in cases where such information has been modified or existed in the past. This thesis addresses one major part of the challenge in database forensics: the reconstruction of the information stored in the database at some earlier time. The dimensions involved in a database forensics analysis problem are identified, and the thesis focuses on one of these dimensions. Concepts such as the relational algebra log and the inverse relational algebra are introduced as tools in the definition of a theoretical framework that can be used for database forensics. The thesis provides an algorithm for database reconstruction and outlines the correctness proof of the algorithm. Various techniques for a complete regeneration of deleted or lost data during a database forensics analysis are also described. Due to the importance of having adequate logs in order to use the algorithm, specifications of an ideal log configuration for an effective reconstruction process are given, taking into consideration the various dimensions of the database forensics problem space. Throughout the thesis, practical situations that illustrate the application of the algorithms and techniques described are given. The thesis provides a scientific approach that can be used for handling database forensics analysis practice and research, particularly in the aspect of reconstructing the data in a database. It also adds to the field of digital forensics by providing insights into database forensics reconstruction.
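A toy version of the reconstruction idea, applying the inverse of each logged operation in reverse order, can be written in a few lines: the inverse of an insert is a delete and vice versa, provided the log captures the affected tuples. This is only a sketch of the concept, not the thesis's relational algebra log formalism.

```python
def reconstruct_before(rows: set, log: list) -> set:
    """Roll a table (a set of tuples) back to the state before the log,
    by applying the inverse of each logged operation in reverse order."""
    state = set(rows)
    for op, tup in reversed(log):
        if op == "insert":
            state.discard(tup)      # undo an insert with a delete
        elif op == "delete":
            state.add(tup)          # undo a delete with an insert
        elif op == "update":
            old, new = tup
            state.discard(new)      # undo an update by restoring
            state.add(old)          # the previous tuple value
    return state

log = [("insert", ("alice", 100)),
       ("update", (("alice", 100), ("alice", 250))),
       ("delete", ("bob", 40))]
now = {("alice", 250)}
print(reconstruct_before(now, log))  # {('bob', 40)} -- the pre-log state
```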
Thesis (PhD)--University of Pretoria, 2015.
Computer Science
PhD
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
41

Wu, Bo-Lin, and 吳柏霖. "A Study on VoIP Attack Analysis and Digital Evidence Forensic Mechanism." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/48828100897889091952.

Full text
Abstract:
Master's thesis
Fo Guang University
Department of Informatics
Academic year 98
From ancient beacon-fire communication and pigeon post to modern mobile communication, each new communication technology has brought its own risks. In the case of pigeon post, the most critical risk was that the pigeons might be shot by hunters during delivery, and the ways of eavesdropping on telephones are undoubtedly numerous. People nowadays benefit from the popularization of communication, and the choice of communication tools has become varied – from the common fixed-line telephone to the mobile phone, invented to meet the need for mobility. VoIP, in turn, was developed to lower the cost of communication; nevertheless, this new technology has become a convenient crime tool and a threat to public security because of its strong anonymity, mobility and low cost. Thus, much emphasis should be placed on information and communication security; otherwise, great damage may be caused to individuals and even to national security. The Research, Development and Evaluation Commission once held an online poll on the “Top Ten Complaints”, and the prevalence of phone and internet fraud was among the results, showing how much such fraud troubles people. This research therefore first analyzes the crime patterns of phone and internet fraud and collects the security threats against VoIP. It then analyzes these threats one by one and proposes corresponding countermeasures. Furthermore, this research proposes VoIP Digital Evidence Forensics Standard Operating Procedures (DEFSOP) to assist police organizations, and sets up an experimental platform capable of handling phone calls, attacks and forensics in order to test and verify the research. Consequently, this thesis discusses how criminals use communication technology to commit crimes, including the crime facts, crime techniques and investigation methods of internet phone fraud. The analysis then focuses on the security of internet telephony, organizing methods for preventing attacks on internet phone calls and points to note when deploying an internet phone system; at the same time, the importance of digital evidence and digital forensics is pointed out. Finally, this research combines the discussion of digital evidence mechanisms and internet telephony to integrate the VoIP DEFSOP, for crime investigators and anyone interested in digital evidence forensics to consult.
APA, Harvard, Vancouver, ISO, and other styles
42

Yi, Jun. "User Interface Design And Forensic Analysis For DIORAMA, Decision Support System For Mass Casualty Incidents." 2015. https://scholarworks.umass.edu/masters_theses_2/308.

Full text
Abstract:
In this thesis we introduce the user interface design and the forensic analysis tool for the DIORAMA system. With an Android device, DIORAMA provides emergency personnel the ability to collect information in real time and to track and manage resources. It allows responders and commanders to manage multiple incidents simultaneously. This thesis also describes the implementations of the commander app and the responder app, as well as two different communication strategies used in DIORAMA. Several trials and simulated mass casualty incidents were conducted to test the functionality and performance of the DIORAMA system. All responders that participated in the trials were very satisfied with it. As a result, the DIORAMA system significantly reduced evacuation times, by up to 43% when compared to paper-based triage systems.
APA, Harvard, Vancouver, ISO, and other styles
43

Kellerman, Lizan. "Evaluating the effectiveness of Benford's law as an investigative tool for forensic accountants / Lizan Kellerman." Thesis, 2014. http://hdl.handle.net/10394/11729.

Full text
Abstract:
“Some numbers really are more popular than others.” Mark J. Nigrini (1998a:15) The above idea appears to defy common sense. In a random sequence of numbers drawn from a company’s financial books, every digit from 1 to 9 seems to have a one-in-nine chance of being the leading digit. But, according to a mathematical formula over 60 years old that is making its way into the field of accounting, certain numbers are actually more popular than others (Nigrini, 1998a:15). Accounting numbers usually follow a mathematical law, named Benford’s Law, whose result is so unpredictable that fraudsters and manipulators, as a rule, do not succeed in observing it. With this knowledge, the forensic accountant is empowered to detect irregularities, anomalies, errors or fraud that may be present in a financial data set. The main objective of this study was to evaluate the effectiveness of Benford’s Law as a tool for forensic accountants. The empirical research used data from Company X to test the hypothesis that, in the context of financial fraud investigations, a significant difference between the actual and expected frequencies of Benford’s Law could be an indication of an error, fraud or irregularity. The effectiveness of Benford’s Law was evaluated according to findings from the literature review and the empirical study. The results indicated that a Benford’s Law analysis was efficient in identifying the target groups in the data set that needed further investigation, as their numbers did not match Benford’s Law.
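The law itself is compact: the expected share of leading digit d is log10(1 + 1/d), about 30.1% for 1 down to 4.6% for 9. The sketch below computes the expected profile and an observed profile from a list of amounts; the statistical test an investigator would then apply (e.g. a chi-square comparison) is left out.

```python
import math

def benford_expected(digit: int) -> float:
    """Benford's Law: P(d) = log10(1 + 1/d) for leading digit d in 1..9."""
    return math.log10(1 + 1 / digit)

def leading_digit_profile(amounts) -> dict[int, float]:
    """Observed relative frequency of each leading digit 1..9."""
    counts = {d: 0 for d in range(1, 10)}
    for a in amounts:
        s = str(abs(a)).lstrip("0.")   # drop sign, leading zeros and point
        if s and s[0].isdigit() and s[0] != "0":
            counts[int(s[0])] += 1
    total = sum(counts.values()) or 1
    return {d: c / total for d, c in counts.items()}

# Digits whose observed share deviates strongly from the expected share
# are candidates for follow-up by the forensic accountant.
for d in range(1, 10):
    print(d, round(benford_expected(d), 3))
```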
MCom (Forensic Accountancy), North-West University, Potchefstroom Campus, 2014
APA, Harvard, Vancouver, ISO, and other styles
44

Fei, B. K. L. (Bennie Kar Leung). "Data visualisation in digital forensics." Diss., 2007. http://hdl.handle.net/2263/22996.

Full text
Abstract:
As digital crimes have risen, so has the need for digital forensics. Numerous state-of-the-art tools have been developed to assist digital investigators conduct proper investigations into digital crimes. However, digital investigations are becoming increasingly complex and time consuming due to the amount of data involved, and digital investigators can find themselves unable to conduct them in an appropriately efficient and effective manner. This situation has prompted the need for new tools capable of handling such large, complex investigations. Data mining is one such potential tool. It is still relatively unexplored from a digital forensics perspective, but the purpose of data mining is to discover new knowledge from data where the dimensionality, complexity or volume of data is prohibitively large for manual analysis. This study assesses the self-organising map (SOM), a neural network model and data mining technique that could potentially offer tremendous benefits to digital forensics. The focus of this study is to demonstrate how the SOM can help digital investigators to make better decisions and conduct the forensic analysis process more efficiently and effectively during a digital investigation. The SOM’s visualisation capabilities can not only be used to reveal interesting patterns, but can also serve as a platform for further, interactive analysis.
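For readers unfamiliar with the technique, a self-organising map can be trained with a very small loop: each sample pulls its best-matching unit, and that unit's grid neighbours, towards it, with a decaying learning rate and neighbourhood radius. The sketch below is a minimal illustration, not the implementation evaluated in the dissertation.

```python
import numpy as np

def train_som(data: np.ndarray, grid: int = 10, epochs: int = 20,
              lr0: float = 0.5, sigma0: float = 3.0, seed: int = 0):
    """Train a tiny self-organising map over `data` (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    weights = rng.random((grid, grid, data.shape[1]))
    yy, xx = np.mgrid[0:grid, 0:grid]
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr = lr0 * (1 - t)                 # decaying learning rate
            sigma = sigma0 * (1 - t) + 0.5     # decaying neighbourhood radius
            # best-matching unit for this sample
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood pull on the 2-D grid
            g = np.exp(-((yy - by) ** 2 + (xx - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
            step += 1
    return weights  # a U-matrix of this grid can then be visualised
```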
Dissertation (MSc (Computer Science))--University of Pretoria, 2007.
Computer Science
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
45

WANG, PO-KAI, and 王柏凱. "Research on Correlation Analysis of Logs from Digital Forensics Tools." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/jk77zx.

Full text
Abstract:
Master's thesis
Chinese Culture University
Department of Information Management
Academic year 106
Digital forensics encompasses the investigation of computer crime in order to trace criminals. A digital forensic examination has many phases, and its difficulty lies in the fact that examiners have to use distinct tools for different purposes. An examination therefore often requires a number of tools, whose execution is complicated, and investigators must interpret the logs generated by each tool as the basis for the next step. Current tools, however, lack integration, and their outputs are hard to comprehend. In the face of increasingly complex cyber attacks, the demand for digital forensics in business organizations has been increasing rapidly. Digital forensics requires expert experience and a great deal of time; how to collect log information and analyze suspicious computers has therefore become a major challenge. This study collected information from victims' computers through various perspectives and tools. The proposed system performs analysis along five dimensions: abnormal network connections, abnormal browsing records, abnormal system changes, abnormal file checking, and abnormal program checking. Finally, this research provides an integrated forensic tool, the digital forensic system, which analyzes a victim's computer and is expected to carry out digital forensic work quickly and correctly. Key Words: digital forensics, incident investigation tools, information security
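One concrete way to correlate outputs from several tools, in the spirit of the five-dimension analysis above, is to normalise their log timestamps and cluster events that occur close together in time; clusters spanning multiple sources are candidates for follow-up. The sketch below assumes timestamps have already been normalised to ISO 8601, and the 60-second window is an illustrative choice.

```python
from datetime import datetime, timedelta

def correlate(events: list[tuple[str, str, str]], window_s: int = 60):
    """Merge events from several tools and group those whose timestamps
    fall within `window_s` seconds of each other. Each event is an
    (iso_timestamp, source, description) tuple."""
    parsed = sorted((datetime.fromisoformat(ts), src, desc)
                    for ts, src, desc in events)
    clusters, current = [], []
    for ev in parsed:
        if current and ev[0] - current[-1][0] > timedelta(seconds=window_s):
            clusters.append(current)
            current = []
        current.append(ev)
    if current:
        clusters.append(current)
    # Clusters touching several sources (network, browser, file system...)
    # are the interesting ones for the examiner.
    return [c for c in clusters if len({src for _, src, _ in c}) > 1]
```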
APA, Harvard, Vancouver, ISO, and other styles
46

Calé, Joana Cardoso. "Human fingerprint dating in forensic analysis." Master's thesis, 2020. http://hdl.handle.net/10451/47632.

Full text
Abstract:
Master's thesis in Biochemistry, Universidade de Lisboa, Faculdade de Ciências, 2021
Fingerprints consist of ridges and valleys that form a pattern that is unique to the individual. Metabolites from sweat and other secretions, as well as chemical residues present on a fingertip, leave behind not only an individual-specific pattern, a latent fingerprint, but also a wealth of information that may be exploited for forensic purposes [1]. Latent fingerprints have provided evidentiary value in forensic investigations for over a century, mainly through pattern comparison as a means of identification. As science and technology have progressed, so has the collection and interpretation of evidence associated with latent fingerprints [1]. The latest developments focus on the assessment of the chemical and biochemical information contained in a fingerprint, with several studies having been initiated into the chemical composition of a fingerprint [2]. Simply visualizing the fingerprint and running the image through a database in search of a match may lead to no match being actually found, ending any further use of the fingerprint as a means of identification [3]. However, the fingerprint analysis procedure does not have to end with a database search, and that is what we propose in this work: to extend the information retrieval from these latent fingerprints. The work described here focuses on exploring the knowledge which can be obtained from the chemical information provided by a fingerprint, particularly the composition of a fingerprint at the time of deposition as well as the chemical changes that occur over time. This is highly relevant, as the age dating of fingerprints could have a significant impact in forensic science, since it has the potential to facilitate the judicial process by assessing the relevance of a fingerprint found at a crime scene. Moreover, there is at present no method capable of reliably predicting the age of a fingerprint [4]. However, mass spectrometry has become an area of increasing interest when it comes to the study of fingerprints, due to the massive amount of chemical information that can be extracted regarding both the fingerprint and the donor [4]. Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was the chosen analytical method, for it allows the precise determination of a molecule's chemical formula from its measured mass, as well as greatly facilitating database searches due to its extreme mass accuracy. In our work, by knowing the elementary composition of the small molecules found in fingerprints, it is possible to identify them. Chemical compound extraction from non-porous, inert surfaces was optimized and used to identify the compounds present in fingermark residue before and after an incubation period of either a week or a month. Besides confirming variations in classes of compounds that had previously been described in the literature, it was also possible to specifically identify not only those compounds but also compounds that had not been described in former research studies. These compounds' time variations should be further analysed to confirm their consistency and their potential as target compounds for fingerprint dating. Although the results presented and the conclusions reached are preliminary, it is possible to show that a high-resolution technique such as FT-ICR mass spectrometry may come to play a pivotal role in the forensic analysis and dating of human fingerprints in the near future.
It is intended that these results will motivate further study into how the small molecules found in fingerprints change with time, and contribute to what could be the future of fingerprint analysis from a forensic perspective.
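The technical core of this abstract is that extreme mass accuracy lets a measured monoisotopic mass be assigned to an elemental formula. As a rough illustration of that idea only (the candidate compounds, the 1 ppm tolerance and the neutral-mass matching below are illustrative assumptions, not values or procedures taken from the dissertation), a minimal sketch in Python:

# Minimal sketch: assigning an elemental formula to a measured monoisotopic
# mass by matching it against a candidate list within a ppm tolerance.
# The candidate list and tolerance are illustrative assumptions.

MONOISOTOPIC = {"C": 12.0, "H": 1.0078250319, "O": 15.9949146221}

def formula_mass(formula):
    """Monoisotopic mass of a formula given as, e.g., {'C': 6, 'H': 10, 'O': 4}."""
    return sum(MONOISOTOPIC[el] * n for el, n in formula.items())

# Hypothetical candidates; adipic acid (C6H10O4) is named in the abstract.
CANDIDATES = {
    "adipic acid": {"C": 6, "H": 10, "O": 4},
    "squalene":    {"C": 30, "H": 50},
    "cholesterol": {"C": 27, "H": 46, "O": 1},
}

def assign(measured_mass, tolerance_ppm=1.0):
    """Return candidates whose theoretical mass matches within the tolerance."""
    hits = []
    for name, formula in CANDIDATES.items():
        theoretical = formula_mass(formula)
        error_ppm = abs(measured_mass - theoretical) / theoretical * 1e6
        if error_ppm <= tolerance_ppm:
            hits.append((name, round(theoretical, 4), round(error_ppm, 2)))
    return hits

# A neutral monoisotopic mass near 146.0579 Da matches adipic acid.
print(assign(146.0579))

A real FT-ICR workflow would match ionised species (e.g. [M+H]+ adducts) rather than neutral masses, but the principle of sub-ppm matching is the same.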
APA, Harvard, Vancouver, ISO, and other styles
47

Baril, David-Emmanuel. "La transformation des enquêtes policières due à l’influence des technologies : perspective d’une unité policière spécialisée en analyse judiciaire informatique." Thèse, 2014. http://hdl.handle.net/1866/11888.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Louis, Anita Lily. "Unsupervised discovery of relations for analysis of textual data in digital forensics." Diss., 2010. http://hdl.handle.net/2263/27479.

Full text
Abstract:
This dissertation addresses the problem of analysing digital data in digital forensics. It shows that text mining methods can be adapted and applied to digital forensics to help analysts analyse data more quickly, efficiently and accurately to reveal truly useful information. Investigators who wish to utilise digital evidence must examine and organise the data to piece together the events and facts of a crime. The difficulty with finding relevant information quickly using current tools and methods is that these tools rely heavily on background knowledge for query terms and do not fully utilise the content of the data. A novel framework in which to perform evidence discovery is proposed in order to reduce the quantity of data to be analysed, aid the analysts' exploration of the data and enhance the intelligibility of the presentation of the data. The framework combines information extraction techniques with visual exploration techniques to provide a novel approach to performing evidence discovery, in the form of an evidence discovery system. By utilising unrestricted, unsupervised information extraction techniques, the investigator does not need input queries or keywords for searching, enabling the investigator to analyse portions of the data that may not have been identified by keyword searches. The evidence discovery system produces text graphs of the most important concepts and associations extracted from the full text to establish ties between the concepts and provide an overview and general representation of the text. Through an interactive visual interface the investigator can explore the data to identify suspects, events and the relations between suspects. Two models are proposed for performing the relation extraction process of the evidence discovery framework. The first model takes a statistical approach to discovering relations based on co-occurrences of complex concepts. The second model takes a linguistic approach, using named entity extraction and information extraction patterns. A preliminary study was performed to assess the usefulness of a text mining approach to digital forensics as against the traditional information retrieval approach. It was concluded that the novel approach to text analysis for evidence discovery presented in this dissertation is viable and promising. The preliminary experiment showed that the results obtained from the evidence discovery system, using either of the relation extraction models, are sensible and useful. The approach advocated in this dissertation can therefore be successfully applied to the analysis of textual data for digital forensics.
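The first relation extraction model is described as statistical, based on co-occurrences of concepts. As a hedged sketch of that general idea (the use of pointwise mutual information as the association measure, and the toy concepts, are illustrative assumptions, not the dissertation's actual model), in Python:

# Minimal sketch: scoring relations between extracted concepts by how often
# they co-occur across documents, using PMI as an illustrative measure.
import math
from collections import Counter
from itertools import combinations

def cooccurrence_relations(documents, min_count=2):
    """documents: list of concept sets, one per document or text segment."""
    concept_counts = Counter()
    pair_counts = Counter()
    for concepts in documents:
        concept_counts.update(concepts)
        pair_counts.update(frozenset(p) for p in combinations(sorted(concepts), 2))
    n = len(documents)
    relations = []
    for pair, joint in pair_counts.items():
        if joint < min_count:
            continue  # ignore pairs seen together too rarely to be meaningful
        a, b = tuple(pair)
        # Pointwise mutual information over document-level co-occurrence.
        pmi = math.log((joint / n) / ((concept_counts[a] / n) * (concept_counts[b] / n)))
        relations.append((a, b, pmi))
    return sorted(relations, key=lambda r: -r[2])

# Toy example: concepts already extracted from four "documents".
docs = [{"suspect", "meeting", "warehouse"},
        {"suspect", "warehouse"},
        {"invoice", "payment"},
        {"invoice", "payment", "suspect"}]
for a, b, score in cooccurrence_relations(docs):
    print(f"{a} -- {b}: {score:.2f}")

The scored pairs would then become the edges of the text graphs the abstract describes, with the visual interface letting the investigator explore them.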
Dissertation (MSc)--University of Pretoria, 2010.
Computer Science
APA, Harvard, Vancouver, ISO, and other styles
49

Bartusiak, Emily R. "An Adversarial Approach to Spliced Forgery Detection and Localization in Satellite Imagery." Thesis, 2019.

Find full text
Abstract:
The widespread availability of image editing tools and improvements in image processing techniques make image manipulation feasible for the general population. Oftentimes, easy-to-use yet sophisticated image editing tools produce results that contain modifications imperceptible to the human observer. Distribution of forged images can have drastic ramifications, especially when coupled with the speed and vastness of the Internet. Therefore, verifying image integrity poses an immense and important challenge to the digital forensic community. Satellite images specifically can be modified in a number of ways, such as inserting objects into an image to hide existing scenes and structures. In this thesis, we describe the use of a Conditional Generative Adversarial Network (cGAN) to identify the presence of such spliced forgeries within satellite images. Additionally, we identify their locations and shapes. Trained on pristine and falsified images, our method achieves high success on these detection and localization objectives.
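The abstract describes a conditional GAN that both detects spliced regions and localises their shapes. A minimal sketch of that general setup, assuming a pix2pix-style arrangement in which a generator maps an image to a forgery mask and a discriminator judges (image, mask) pairs; the architecture, layer sizes and training loop below are illustrative assumptions, not the thesis's actual network:

# Minimal cGAN sketch for splice localisation (PyTorch); sizes are illustrative.
import torch
import torch.nn as nn

class MaskGenerator(nn.Module):
    """Image -> per-pixel forgery mask (1 = spliced, 0 = pristine)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class PairDiscriminator(nn.Module):
    """(image, mask) pair -> probability that the mask is the true one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )
    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

# One adversarial training step on a toy batch of 64x64 images.
G, D = MaskGenerator(), PairDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

images = torch.rand(8, 3, 64, 64)              # stand-in for satellite images
true_masks = torch.rand(8, 1, 64, 64).round()  # stand-in ground-truth masks

# Discriminator: real pairs -> 1, generated pairs -> 0.
fake_masks = G(images).detach()
loss_d = bce(D(images, true_masks), torch.ones(8, 1)) + \
         bce(D(images, fake_masks), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator: fool the discriminator (pix2pix-style setups usually add a
# pixel-wise loss against the true mask as well).
loss_g = bce(D(images, G(images)), torch.ones(8, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()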
APA, Harvard, Vancouver, ISO, and other styles
50

Fang, Yen-Fei, and 方彥霏. "Study on Constructing Digital Evidence Forensics Standard Operating Procedures for Mobile Device - Extraction and Analysis of Digital Evidence on Smartphone." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/qrh62j.

Full text
Abstract:
Master's thesis
National Ilan University
In-service Master's Program in Multimedia Network Communication and Digital Learning
Academic year 104 (2015)
As Internet technology improves, mobile communication protocols mature and smartphones diversify and grow in popularity, the way people use their phones has changed. A mobile phone is no longer just a telephone: a smartphone can communicate through messaging software, browse the web, carry out transactions and store personal information (such as photos and notes), behaving much like a computer. This convenience deepens people's dependence on their phones, and phones have even become tools of crime. Like computers, smartphones hold a great many electromagnetic records, and these records are digital evidence of forensic value. Traditional forensic methods and equipment are therefore no longer sufficient to gather the digital evidence held in a mobile phone. The selection and use of forensic tools for digital evidence is essential professional knowledge that every examiner must have, and the collection, analysis and extraction of digital evidence must follow a standard digital forensics process in order to enhance its credibility and effectiveness. Building on Professor I-Lon Lin's Digital Evidence Forensics Standard Operating Procedure (DEFSOP), this study constructs a Digital Evidence Forensics Standard Operating Procedure for Mobile Devices (DEFSOP for Mobile Devices), compares it against ISO 27037:2012 and ISO 27041:2015 to verify its rigour and usability, and demonstrates its integrity and effectiveness through worked examples. In mobile phone forensics, a number of software tools are available for data extraction, and when several tools are used in combination, extracting the necessary evidence is not the main problem; the harder question is how to properly manage all of this digital evidence so that it yields maximum effect. This study selected ADB, provided for Android, and the Cellebrite UFED as forensic tools; the digital evidence extracted by each was classified, its availability and validity cross-analysed, and the results compared to reconstruct the facts of the crime. The contributions of this study are threefold. First, it proposes a prototype DEFSOP for Mobile Devices and validates its four-phase theory (concept, preparation, operation and report) through three real cases, providing standard operating procedures for incident investigators and forensic officers to follow. Second, it compares the advantages and disadvantages, operation and results of the forensic tools ADB and UFED, providing a reference for investigators when choosing forensic tools. Third, with respect to incident-handling standards, this is the first domestic master's thesis to adopt the international standard ISO 27041:2015 and to test it with the Cellebrite UFED forensic tool.
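The abstract's ADB-based extraction can be pictured with a short sketch. The adb subcommands used here (devices, shell getprop, pull) are standard ADB commands; the source path, output directory and SHA-256 manifest are illustrative of general chain-of-custody practice, not the exact procedure of the thesis. A minimal Python wrapper:

# Minimal sketch of an ADB logical extraction: pull files from an attached
# Android device and hash them so the evidence copy can be re-verified later.
import hashlib
import subprocess
from pathlib import Path

def adb(*args):
    """Run an adb command and return its stdout as text."""
    return subprocess.run(["adb", *args], capture_output=True,
                          text=True, check=True).stdout

def extract(source="/sdcard/DCIM", dest="evidence"):
    print(adb("devices"))                                       # confirm device
    print(adb("shell", "getprop", "ro.build.version.release"))  # record OS version
    Path(dest).mkdir(exist_ok=True)
    adb("pull", source, dest)                                   # logical file copy
    # Hash every extracted file so integrity can be demonstrated in court.
    with open(Path(dest) / "manifest.sha256", "w") as manifest:
        for f in sorted(Path(dest).rglob("*")):
            if f.is_file() and f.name != "manifest.sha256":
                digest = hashlib.sha256(f.read_bytes()).hexdigest()
                manifest.write(f"{digest}  {f}\n")

if __name__ == "__main__":
    extract()

A physical or tool-assisted extraction with Cellebrite UFED follows a vendor-specific workflow and is not shown here.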
APA, Harvard, Vancouver, ISO, and other styles
