To see the other types of publications on this topic, follow the link: Fake News detection.

Dissertations / Theses on the topic 'Fake News detection'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 27 dissertations / theses for your research on the topic 'Fake News detection.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Nordberg, Pontus. "Automatic fake news detection." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-18512.

Full text
Abstract:
Due to the large increase in the proliferation of "fake news" in recent years, it has become a widely discussed menace in the online world. In conjunction with this popularity, research into ways to limit its spread has also increased. This paper looks at the current research in this area to see what automatic fake news detection methods exist and are being developed, and how they can help online users protect themselves against fake news. A systematic literature review is conducted to answer this question, with the different detection methods discussed in the literature divided into categories. The consensus that emerges from the collective research is also used to identify common elements between categories that are important to fake news detection: notably the relation between headlines and article content, the importance of high-quality datasets, the use of emotional words, and the circulation of fake news in social media groups.
APA, Harvard, Vancouver, ISO, and other styles
2

O'Brien, Nicole (Nicole J. ). "Machine learning for detection of fake news." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119727.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 55-56).
Recent political events have led to an increase in the popularity and spread of fake news. As demonstrated by the widespread effects of the large onset of fake news, humans are inconsistent, if not outright poor, detectors of fake news. Consequently, efforts have been made to automate the process of fake news detection. The most popular of such attempts include "blacklists" of unreliable sources and authors. While these tools are useful, in order to create a more complete end-to-end solution, we need to account for the more difficult cases where reliable sources and authors release fake news. As such, the goal of this project was to create a tool for detecting the language patterns that characterize fake and real news through the use of machine learning and natural language processing techniques. The results of this project demonstrate that machine learning can be useful for this task. We have built a model that catches many intuitive indications of real and fake news, as well as an application that aids in the visualization of the classification decision.
by Nicole O'Brien.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
3

Asresu, Yohannes. "Defining fake news for algorithmic deception detection purposes." Thesis, Uppsala universitet, Institutionen för informatik och media, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-390393.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

RAJ, CHAHAT. "CONVOLUTIONAL NEURAL NETWORKS FOR MULTIMODAL FAKE NEWS DETECTION." Thesis, DELHI TECHNOLOGICAL UNIVERSITY, 2021. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18816.

Full text
Abstract:
An upsurge of false information revolves around the internet. Social media and websites are flooded with unverified news posts comprising text, images, audio, and videos, so a system is required that detects fake content in multiple data modalities. We have seen a considerable amount of research on classification techniques for textual fake news detection, while frameworks dedicated to visual fake news detection are very few. We explored the state-of-the-art methods using deep networks such as CNNs and RNNs for multi-modal online information credibility analysis; they show rapid improvement in classification tasks without requiring pre-processing. To aid the ongoing research on fake news detection using CNN models, we build textual and visual modules to analyze their performance over multi-modal datasets. We exploit latent features present inside text and images using layers of convolutions. We examine how well these convolutional neural networks perform classification when provided with only latent features, and analyze what type of images need to be fed in to perform efficient fake news detection. We propose a multi-modal Coupled ConvNet architecture that fuses both data modules and efficiently classifies online news based on its textual and visual content. We then offer a comparative analysis of the results of all the models over three datasets. The proposed architecture outperforms various state-of-the-art methods for fake news detection with considerably high accuracies.
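As a hedged illustration of the general idea behind fusing textual and visual modules, the sketch below concatenates a text feature vector and an image feature vector before a linear classification head. The dimensions and random weights are assumptions for illustration only and do not reproduce the thesis's Coupled ConvNet architecture.

```python
# Illustrative late-fusion sketch: concatenate text and image features,
# then score the fused vector with a (random, untrained) linear head.
import numpy as np

rng = np.random.default_rng(1)
text_features = rng.normal(size=128)    # stand-in for a text ConvNet's output
image_features = rng.normal(size=256)   # stand-in for an image ConvNet's output

fused = np.concatenate([text_features, image_features])  # simple concat fusion

W, b = rng.normal(size=fused.shape[0]), 0.0  # hypothetical classifier weights
logit = W @ fused + b
prob_fake = 1.0 / (1.0 + np.exp(-logit))     # sigmoid -> probability of "fake"
print(fused.shape, prob_fake)
```

Real architectures would learn these weights end-to-end; the point here is only the fusion step itself.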
APA, Harvard, Vancouver, ISO, and other styles
5

Kurasinski, Lukas. "Machine Learning explainability in text classification for Fake News detection." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20058.

Full text
Abstract:
Fake news detection has gained interest in recent years. This has led researchers to look for models that can classify text for fake news detection. While new models are developed, researchers mostly focus on the accuracy of a model. Little research has been done on the explainability of Neural Network (NN) models constructed for text classification and fake news detection. When trying to add a level of explainability to a Neural Network model, a lot of different aspects have to be taken into consideration. Text length, pre-processing, and complexity play an important role in achieving successful classification, and the model's architecture has to be taken into consideration as well. All these aspects are analyzed in this thesis. In this work, an analysis of attention weights is performed to give an insight into NN reasoning about texts. Visualizations are used to show how two models, a Bidirectional Long Short-Term Memory Convolutional Neural Network (BiDir-LSTM-CNN) and Bidirectional Encoder Representations from Transformers (BERT), distribute their attention while training and classifying texts. In addition, statistical data is gathered to deepen the analysis. After the analysis, it is concluded that explainability can positively influence the decisions made while constructing a NN model for text classification and fake news detection. Although explainability is useful, it is not a definitive answer to the problem. Architects should test and experiment with different solutions to be successful in effective model construction.
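The kind of per-token attention inspection described above can be illustrated with a minimal softmax-normalization sketch. The tokens and raw scores below are invented for illustration; this is not the thesis's BERT or BiDir-LSTM-CNN code.

```python
# Minimal sketch: normalize hypothetical attention logits over tokens
# and rank tokens by weight, as an attention visualization would.
import numpy as np

tokens = ["shocking", "truth", "about", "the", "report"]
raw_scores = np.array([2.1, 1.4, 0.2, 0.1, 0.9])  # made-up attention logits

attn = np.exp(raw_scores) / np.exp(raw_scores).sum()  # softmax: weights sum to 1

# Highest-weight tokens first, mirroring what a heatmap would highlight.
for tok, w in sorted(zip(tokens, attn), key=lambda p: -p[1]):
    print(f"{tok:10s} {w:.3f}")
```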
APA, Harvard, Vancouver, ISO, and other styles
6

Ghanem, Bilal Hisham Hasan. "On the Detection of False Information: From Rumors to Fake News." Doctoral thesis, Universitat Politècnica de València, 2021. http://hdl.handle.net/10251/158570.

Full text
Abstract:
[EN] In recent years, the development of social media and online news agencies has brought several challenges and threats to the Web. These threats have attracted the attention of the Natural Language Processing (NLP) research community as they are polluting online social media platforms. One example of these threats is false information, in which false, inaccurate, or deceptive information is spread and shared by online users. False information is not limited to verifiable information; it also involves information that is used for harmful purposes. Moreover, one of the challenges researchers face is the massive number of users on social media platforms, where detecting false information spreaders is not an easy job. Previous work proposed for limiting or studying the issue of detecting false information has focused on understanding the language of false information from a linguistic perspective. In the case of verifiable information, approaches have been proposed in a monolingual setting. Moreover, detecting the sources or the spreaders of false information in social media has not been investigated much. In this thesis we study false information from several aspects. First, since previous work focused on studying false information in a monolingual setting, in this thesis we study false information in a cross-lingual one. We propose different cross-lingual approaches and we compare them to a set of monolingual baselines. Also, we provide systematic studies of the evaluation results of our approaches for better understanding. Second, we noticed that the role of affective information was not investigated in depth. Therefore, the second part of our research work studies the role of affective information in false information and shows how the authors of false content use it to manipulate the reader. 
Here, we investigate several types of false information to understand the correlation between affective information and each type (Propaganda, Hoax, Clickbait, Rumor, and Satire). Last but not least, in an attempt to limit its spread, we also address the problem of detecting false information spreaders in social media. In this research direction, we focus on exploiting several text-based features extracted from the online profile messages of those spreaders. We study different feature sets that have the potential to help distinguish false information spreaders from fact checkers.
Ghanem, BHH. (2020). On the Detection of False Information: From Rumors to Fake News [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/158570
APA, Harvard, Vancouver, ISO, and other styles
7

Wan, Zhibin, and Huatai Xu. "Performance comparison of different machine learning models in detecting fake news." Thesis, Högskolan Dalarna, Institutionen för information och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:du-37576.

Full text
Abstract:
The phenomenon of fake news has a significant impact on our social life, especially in the political world. Fake news detection is an emerging area of research. The sharing of information on the Web, primarily through Web-based online media, is increasing, and the ability to identify, evaluate, and process this information is of great importance. Disinformation is being generated on the Internet, whether deliberately or unintentionally, and it affects an ever larger segment of society that is blinded by technology. This paper illustrates models and methods for detecting fake news from news articles with the help of machine learning and natural language processing. We study and compare three different feature extraction techniques and seven different machine learning classification techniques. Feature engineering methods such as TF, TF-IDF, and Word2Vec are used to generate feature vectors in this work, and several machine learning classification algorithms are trained to classify news as false or true. The best algorithm is selected by comparing accuracy, F1 score, and related metrics. We perform two different sets of experiments and finally obtain the combination of fake news detection models that performs best in different situations.
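The kind of feature-extraction-plus-classifier comparison the abstract describes can be sketched minimally with scikit-learn, as below. The toy texts, labels, and the two classifiers chosen are assumptions for illustration, not the thesis's actual dataset or its full set of seven models.

```python
# Hedged sketch: TF-IDF features feeding two classifiers, with training
# accuracy reported for each (on a tiny illustrative corpus).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

texts = ["shocking secret cure doctors hate",
         "council approves new budget for schools",
         "aliens endorse miracle diet pill",
         "report details quarterly inflation figures"]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = real (invented labels)

vec = TfidfVectorizer()
X = vec.fit_transform(texts)

# Train each classifier and score it on the same (tiny) training set.
for clf in (MultinomialNB(), LogisticRegression(max_iter=1000)):
    clf.fit(X, labels)
    preds = clf.predict(X)
    print(type(clf).__name__, accuracy_score(labels, preds))
```

A real comparison would of course use held-out test data and cross-validation rather than training-set accuracy.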
APA, Harvard, Vancouver, ISO, and other styles
8

Frimodig, Matilda, and Sivertsson Tom Lanhed. "A Comparative study of Knowledge Graph Embedding Models for use in Fake News Detection." Thesis, Malmö universitet, Institutionen för datavetenskap och medieteknik (DVMT), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-43228.

Full text
Abstract:
During the past few years, online misinformation, generally referred to as fake news, has been identified as an increasingly dangerous threat. As the spread of misinformation online has increased, fake news detection has become an active line of research. One approach is to use knowledge graphs for automated fake news detection. While large-scale knowledge graphs are openly available, they are rarely up to date and often miss the relevant information needed for the task of fake news detection. Creating new knowledge graphs from online sources is one way to obtain the missing information, but extracting information from unstructured text is far from straightforward. Using Natural Language Processing techniques, we developed a pre-processing pipeline for extracting information from text for the purpose of creating knowledge graphs. In order to classify news as fake or not with the use of knowledge graphs, these need to be converted into a machine-understandable format called knowledge graph embeddings. These embeddings also allow new information to be inferred or classified based on the information already in the knowledge graph. Only one knowledge graph embedding model has previously been used for fake news detection, while several new models have recently been developed. We compare the performance of three embedding models, each relying on a different fundamental architecture, in the specific context of fake news detection: the geometric model TransE, the tensor decomposition model ComplEx, and the deep learning model ConvKB. The results of this study show that, of the three models, ConvKB performs best. However, aspects other than performance need to be considered, so these results do not necessarily mean that a deep learning approach is the most suitable for real-world fake news detection.
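As a hedged illustration of one of the compared models, the TransE scoring idea, where a triple (head, relation, tail) is plausible when head + relation ≈ tail in embedding space, can be sketched as below. The entities, relation, and random embeddings are assumptions for illustration, not the thesis's trained model.

```python
# TransE scoring sketch: lower distance between (head + relation) and tail
# means a more plausible triple. Embeddings here are random, for shape only.
import numpy as np

rng = np.random.default_rng(0)
dim = 16
entities = {"BBC": rng.normal(size=dim), "UK": rng.normal(size=dim)}
relations = {"based_in": rng.normal(size=dim)}

def transe_score(head, rel, tail, norm=1):
    # L1 (or L2) distance; training would minimize this for true triples.
    return np.linalg.norm(entities[head] + relations[rel] - entities[tail], ord=norm)

score = transe_score("BBC", "based_in", "UK")
print(score)
```

In a trained model, scores like this would rank candidate triples, e.g. to check whether a claim's subject-relation-object is supported by the graph.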
APA, Harvard, Vancouver, ISO, and other styles
9

Shell, Joshua L. "Bots and Political Discourse: System Requirements and Proposed Methods of Bot Detection and Political Affiliation via Browser Plugin." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1592136507505369.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Abdallah, Abdallah Sabry. "Investigation of New Techniques for Face detection." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/33191.

Full text
Abstract:
The task of detecting human faces within either a still image or a video frame is one of the most popular object detection problems. For the last twenty years researchers have shown great interest in this problem because it is an essential pre-processing stage for computing systems that process human faces as input data. Example applications include face recognition systems, vision systems for autonomous robots, human-computer interaction (HCI) systems, surveillance systems, biometric-based authentication systems, video transmission and video compression systems, and content-based image retrieval systems. In this thesis, non-traditional methods are investigated for detecting human faces within color images or video frames. The attempted methods are chosen such that the required computing power and memory consumption are adequate for real-time hardware implementation. First, a standard color image database is introduced in order to accomplish fair evaluation and benchmarking of face detection and skin segmentation approaches. Next, a new pre-processing scheme based on skin segmentation is presented to prepare the input image for feature extraction; it requires relatively low computing power and memory. Then, several feature extraction techniques are evaluated. This thesis introduces feature extraction based on the Two-Dimensional Discrete Cosine Transform (2D-DCT), the Two-Dimensional Discrete Wavelet Transform (2D-DWT), geometrical moment invariants, and edge detection. It also attempts to construct a hybrid feature vector by fusing 2D-DCT coefficients with edge information, as well as 2D-DWT coefficients with geometrical moments. A self-organizing map (SOM) based classifier is used in all the experiments to distinguish between facial and non-facial samples. Two strategies are tried to make the final decision from the output of a single SOM or multiple SOMs. 
Finally, an FPGA-based framework that implements the presented techniques is presented, along with a partial implementation. Every presented technique has been evaluated consistently using the same dataset. The experiments show very promising results. The highest detection rate of 89.2% was obtained when using a fusion between DCT coefficients and edge information to construct the feature vector. The second-highest rate of 88.7% was achieved by using a fusion between DWT coefficients and geometrical moments. Finally, the third-highest rate of 85.2% was obtained by calculating the moments of edges.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
11

Svärd, Mikael, and Philip Rumman. "COMBATING DISINFORMATION : Detecting fake news with linguistic models and classification algorithms." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209755.

Full text
Abstract:
The purpose of this study is to examine the possibility of accurately distinguishing fabricated news from authentic news stories using Naive Bayes classification algorithms, through a comparative study of two different machine learning classification algorithms. The work also contains an overview of how linguistic text analytics can be utilized for detection purposes, and an attempt to extract interesting information was made using word frequencies. It also discusses how different actors and parties in business and government are affected by, and handle, deception caused by fake news articles. The study further tries to ascertain what collective steps could be taken towards a functioning solution to combat fake news. The results were inconclusive, and the simple Naive Bayes algorithms used did not yield fully satisfactory results. Word frequencies alone did not give enough information for detection. They were, however, found to be potentially useful as part of a larger set of algorithms and strategies for handling misinformation.
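The Naive Bayes, word-frequency setup the abstract describes can be sketched roughly as follows. The training texts and labels below are invented for illustration and are not the study's data.

```python
# Hedged sketch: Multinomial Naive Bayes over raw word-frequency features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = ["you won a free prize click now",
               "parliament debates the annual budget",
               "miracle pill melts fat overnight",
               "central bank publishes interest rate decision"]
train_labels = ["fake", "real", "fake", "real"]

vectorizer = CountVectorizer()               # word-frequency (bag-of-words) features
X_train = vectorizer.fit_transform(train_texts)

model = MultinomialNB().fit(X_train, train_labels)

# Classify a new, unseen snippet built from "fake"-leaning vocabulary.
X_new = vectorizer.transform(["free miracle prize overnight"])
print(model.predict(X_new))  # -> ['fake']
```

On this toy vocabulary the fake-class likelihood dominates, which is exactly why, as the abstract notes, word frequencies alone are too shallow for real-world detection.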
APA, Harvard, Vancouver, ISO, and other styles
12

LI, Songyu. "A New Hands-free Face to Face Video Communication Method : Profile based frontal face video reconstruction." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-152457.

Full text
Abstract:
This thesis proposes a method to reconstruct a frontal facial video based on encoding done with the facial profile of another video sequence. The reconstructed facial video will have similar facial expression changes as the changes in the profile video. First, the profiles for both the reference video and the test video are captured by edge detection. Then, asymmetrical principal component analysis is used to model the correspondence between the profile and the frontal face. This allows encoding from a profile and decoding of the frontal face of another video. Another solution is to use dynamic time warping to match the profiles and select the best-matching corresponding frontal face frame for reconstruction. With this method, we can reconstruct the test frontal video so that it has similar changes in facial expression as the reference video. To improve the quality of the resulting video, Local Linear Embedding is used to give it a smoother transition between frames.
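The dynamic time warping matching step mentioned above can be sketched in pure NumPy as below. The toy 1-D sequences stand in for the profile contour signals the thesis aligns; this is a generic DTW, not the thesis's implementation.

```python
# Classic dynamic time warping via dynamic programming: D[i, j] holds the
# minimal cumulative alignment cost of prefixes a[:i] and b[:j].
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: match, insertion, deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A stretched copy of a sequence aligns at zero cost.
print(dtw_distance([0, 1, 2, 1], [0, 1, 1, 2, 1]))  # -> 0.0
```

In the thesis's setting, the lowest-cost alignment would pick out the reference frame whose profile best matches each test-profile frame.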
APA, Harvard, Vancouver, ISO, and other styles
13

Gómez, Castellà Cristina 1985. "Improving detection capabilities of doping agents by identification of new phase I and phase II metabolites by LC-MS/MS." Doctoral thesis, Universitat Pompeu Fabra, 2014. http://hdl.handle.net/10803/132539.

Full text
Abstract:
Metabolic studies of doping agents have traditionally been performed using gas chromatography coupled to mass spectrometry (GC-MS). In recent years, liquid chromatography coupled to mass spectrometry (LC-MS) has been shown to offer new possibilities for metabolic studies. The objective of this thesis was to study the metabolism (phase I and phase II) of different doping agents by LC-MS/MS in order to improve the detection capabilities for the studied substances. For mesocarb, a thermolabile compound, the parent drug and 19 metabolites were detected in urine, including mono-, di- and tri-hydroxylated metabolites excreted free or conjugated with glucuronic acid and sulphate. For toremifene, an anti-estrogenic drug with poor electron ionization properties, the parent drug and 20 metabolites were detected in urine, and the structure of most of the metabolites was proposed. Anabolic androgenic steroid (AAS) metabolites conjugated with sulphate were investigated to improve the retrospectivity of the detection of these compounds. A study of the hydrolysis and MS/MS behaviour of sulphate metabolites of AAS was performed, and sulphate-conjugated metabolites of boldenone, methyltestosterone and metandienone were studied. Boldenone sulphate and epiboldenone sulphate were identified as boldenone metabolites in humans; they can be used as markers of exogenous boldenone administration. For methyltestosterone, three new sulphate metabolites were detected and their structures proposed. One of them, 17β-methyl-5α-androstan-3α,17α-diol 3α-sulphate, was detected in urine up to 21 days after methyltestosterone administration, improving the retrospectivity of detection threefold with respect to other previously reported long-term metabolites. Several new sulphate metabolites were detected after metandienone administration. 
One of them was characterized as 18-nor-17β-hydroxymethyl-17α-methylandrost-1,4,13-triene-3-one conjugated with sulphate, and it was detected up to 26 days after administration, improving the retrospectivity of the detection with respect to other long-term metabolites described. The usefulness of LC-MS/MS for the detection and characterization of metabolites of doping agents has been demonstrated, especially for the study of new phase II metabolites and for metabolic studies of compounds where GC-MS shows relevant limitations.
APA, Harvard, Vancouver, ISO, and other styles
14

CHEN, PO-HONG, and 陳柏宏. "Text Analysis and Detection on Fake News." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/bv337x.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Department of Computer Science and Information Engineering
106
In general, the features of fake news are almost the same as those of real news, so it is not easy to identify them. In this paper, we propose a fake news detection system using a deep learning model. First, news articles are preprocessed and analyzed based on different training models. Then, an ensemble learning model combining four different models, called embedding LSTM, depth LSTM, LIWC CNN, and N-gram CNN, is proposed for fake news detection. Besides, to achieve high accuracy in detecting fake news, the optimized weights of the ensemble learning model are determined using the Self-Adaptive Harmony Search (SAHS) algorithm. In the experiments, we verify that the proposed model is superior to state-of-the-art methods, with the highest accuracy of 99.4%. Furthermore, we also investigate the cross-domain intangibility issue and achieve the highest accuracy of 72.3%. Finally, we believe there is still room for improving the ensemble learning model in addressing the cross-domain intangibility issue.
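The weighted ensemble described in this abstract can be illustrated with a minimal sketch. This is not the thesis's implementation; the probabilities and weights below are hypothetical, standing in for the outputs of the four base models and for weights an optimizer such as SAHS might produce.

```python
def ensemble_predict(model_probs, weights):
    """Weighted soft voting: combine per-model fake-news probabilities.

    model_probs: list of probabilities (one per base model) that an
    article is fake; weights: non-negative ensemble weights, e.g. tuned
    by an optimizer such as Self-Adaptive Harmony Search.
    """
    total = sum(weights)
    score = sum(w * p for w, p in zip(weights, model_probs)) / total
    return score, score >= 0.5  # fused probability and fake/real decision

# Hypothetical example: four base models voting on one article.
probs = [0.9, 0.8, 0.4, 0.7]        # per-model P(fake)
weights = [0.35, 0.30, 0.10, 0.25]  # optimized ensemble weights
score, is_fake = ensemble_predict(probs, weights)
```

The optimizer's job is then to search the weight space so that the fused decision maximizes accuracy on held-out data.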
APA, Harvard, Vancouver, ISO, and other styles
15

Moura, Ricardo Ribeiro Sanfins. "Automated Fake News detection using computational Forensic Linguistics." Master's thesis, 2021. https://hdl.handle.net/10216/135505.

Full text
Abstract:
In our society, everyone has access to the internet and can post anything about any topic at any time. Despite its many advantages, this possibility brought along a serious problem: Fake News. Fake News is news that is not real, in that it does not follow journalism principles; instead, it tries to mimic the look and feel of real news with the intent to disinform the reader. However, what makes Fake News a real problem is the influence it can have on our society. Lay people are attracted to this kind of news and often give it more attention than truthful accounts. Despite the development of systems to detect Fake News, most are based on fact-checking methods, which are unfit when the news's truth is distorted, exaggerated, or even placed out of context. We aim to detect Portuguese Fake News using machine learning techniques with a Forensic Linguistics approach. Contrary to previous approaches, ours builds upon linguistic and stylistic analysis methods that have been tried and tested in Forensic Linguistic analysis. After collecting the corpus from multiple sources, we formulated the task as a text classification problem and demonstrated the proposed classifier's capability for detecting Fake News. The reported results are promising, achieving high accuracy on the test data.
APA, Harvard, Vancouver, ISO, and other styles
17

Chau, Ying-Hung, and 周瑩紅. "Detection of Fake News Using BERT with Sentiment Analysis." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/8gwvn5.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Information Management
107
Fake news has become a hot-button issue and has received tremendous attention since the 2016 U.S. presidential election. Although 'fake news' is an old problem that has existed for centuries, today's technology makes the spread of misinformation easier than ever. The internet and social media are the great enablers of the rise of fake news in recent years. The spread of fake news will continue to cause negative impacts on individuals and society. Since most fake news revolves around politics, this research is focused on political news. A more worrying trend is that fake news can be generated automatically by machines. Early this year, a company named OpenAI created an artificial intelligence system called GPT-2 that is capable of generating coherent sentences, fiction, and even fake news when given just a block of text. Based on the above observations, detecting political fake news by text content is the main focus of this paper. Bidirectional Encoder Representations from Transformers (BERT), released by Google AI last year, is one of the hottest open-source techniques for natural language processing (NLP). It has achieved great performance on a variety of NLP tasks, including question answering, classification, and others. As BERT is a crucial technique for NLP tasks, this thesis proposes a fake news detection algorithm based on the BERT pre-trained language model. The experimental analysis shows that BERT delivers competitive performance for fake news detection on FakeNewsNet compared to other models. According to previous studies, negative and subjective terms are often used in fake news. Sentiment analysis is therefore considered an important element for classifying news types in this research. It is shown that sentiment analysis can strengthen the BERT model for detecting fake news.
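The idea of strengthening a language-model classifier with a sentiment cue can be sketched in a few lines. This is only an illustration, not the thesis's method: the lexicon is a toy stand-in for a real sentiment resource, and the fusion weight and BERT probability are hypothetical values.

```python
NEGATIVE_WORDS = {"disaster", "corrupt", "shocking", "lie", "outrage"}  # toy lexicon

def sentiment_score(text):
    # Fraction of tokens drawn from a negative/subjective lexicon.
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t.strip(".,!?") in NEGATIVE_WORDS for t in tokens) / len(tokens)

def combine(bert_prob_fake, sent, alpha=0.8):
    # Late fusion: shift the language-model probability by the sentiment cue.
    return alpha * bert_prob_fake + (1 - alpha) * sent

headline = "Shocking lie exposed in corrupt deal!"
s = sentiment_score(headline)
fused = combine(0.6, s)  # 0.6 is a hypothetical BERT P(fake)
```

In practice the sentiment signal would come from a trained sentiment model rather than a word list, but the fusion step has the same shape.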
APA, Harvard, Vancouver, ISO, and other styles
18

RAVISH. "AN EFFECTIVE OPTIMIZED FAKE NEWS DETECTION SYSTEM BASED ON MACHINE LEARNING TECHNIQUES." Thesis, 2022. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19166.

Full text
Abstract:
Fake news generates misleading information that can be difficult to discover. It promotes dishonesty about, for example, a country's situation, or overstates the cost of specific government tasks, eroding democracy in particular places, as in the Arab Spring. Organisations such as the "House of Representatives and the Background check project" try to address problems such as publisher accountability, but as they depend exclusively on human detection, their coverage is small. This is neither sustainable nor practicable in a world where billions of items are removed or uploaded every second. In this publication, we propose a strategy employing Multi-SVM (MSVM) to identify bogus news with higher, more dependable accuracy, using multi-layer PCA for feature selection. Principal component analysis reduces the dimensionality of a dataset with a large number of correlated variables while retaining the largest variation in the real data. The key characteristics are picked using the firefly optimisation algorithm. Several experiments have been undertaken to increase the standard firefly algorithm's competency and adjust it to the nature of the problem. A good aspect of the suggested strategy is that it tunes the algorithms to achieve an accuracy of 99.64 percent while reducing execution time by 20 percent. Therefore, it delivers superior results on fake news detection performance measurements.
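The firefly algorithm mentioned above is a population-based optimizer in which brighter (lower-cost) fireflies attract the others. A minimal sketch follows; it is a generic illustration on a toy objective, not the tuned variant from this thesis, and all parameter values are assumptions.

```python
import math
import random

def firefly_minimize(f, dim, n=15, iters=60, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Minimal firefly algorithm: each firefly moves toward brighter ones."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:  # j is brighter, so i moves toward it
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attractiveness decays with distance
                    pop[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
                    cost[i] = f(pop[i])
    best = min(range(n), key=lambda i: cost[i])
    return pop[best], cost[best]

# Toy objective: sphere function, optimum at the origin.
x, fx = firefly_minimize(lambda v: sum(t * t for t in v), dim=2)
```

For feature selection, the same loop would score candidate feature subsets (e.g. by classifier accuracy) instead of a continuous toy function.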
APA, Harvard, Vancouver, ISO, and other styles
19

Bondielli, Alessandro. "Combining natural language processing and machine learning for profiling and fake news detection." Doctoral thesis, 2021. http://hdl.handle.net/2158/1244287.

Full text
Abstract:
In recent years, Natural Language Processing (NLP) and Text Mining have become an ever-growing field of research, also due to the advancements of Deep Learning and Language Models that allow tackling several interesting and novel problems in different application domains. Traditional text mining techniques mostly relied on structured data to design machine learning algorithms. Nonetheless, a growing number of online platforms contain a lot of unstructured information that represents great value for both Industry, especially in the context of Industry 4.0, and Public Administration services, e.g. for smart cities. This holds especially true in the context of social media, where the production of user-generated data is rapidly growing. Such data can be exploited with great benefit for several purposes, including profiling, information extraction, and classification. User-generated texts can in fact provide crucial insight into users' interests, skills, and mindset, and can enable the comprehension of wider phenomena such as how information is spread through the internet. The goal of the present work is twofold. Firstly, several case studies are provided to demonstrate how a mixture of NLP and Text Mining approaches, and in particular the notion of distributional semantics, can be successfully exploited to model different kinds of profiles purely based on the provided unstructured textual information. First, city areas are profiled from newspaper articles by means of word embeddings and clustering, categorizing them based on their tags. Second, experiments are performed using distributional representations (aka embeddings) of entire sequences of texts. Several techniques, including traditional methods and Language Models, aimed at profiling professional figures based on their résumés are proposed and evaluated.
Secondly, such key concepts and insights are applied to the challenging and open task of fake news detection and fact-checking, in order to build models capable of distinguishing between trustworthy and not trustworthy information. The proposed method exploits the semantic similarity of texts. An architecture exploiting state-of-the-art language models for semantic textual similarity and classification is proposed to perform fact-checking. The approach is evaluated against real world data containing fake news. To collect and label the data, a methodology is proposed that is able to include both real/fake news and a ground truth. The framework has been exploited to face the problems of data collection and annotation of fake news, also by exploiting fact-checking techniques. In light of the obtained results, advantages and shortcomings of approaches based on distributional text embeddings are discussed, as is the effectiveness of the proposed system for detecting fake news exploiting factually correct information. The proposed method is shown to be a viable alternative to perform fake news detection with respect to a traditional classification-based approach.
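The core move in similarity-based fact-checking is to compare a claim against reference texts and keep the closest match. A minimal sketch is below; it uses bag-of-words vectors as a stand-in for the sentence embeddings a language model would produce, and all the example sentences are invented for illustration.

```python
import math
from collections import Counter

def cosine(u, v):
    # Cosine similarity between two sparse count vectors (dicts).
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def similarity(claim, evidence):
    # Bag-of-words stand-in for embedding-based semantic textual similarity.
    return cosine(Counter(claim.lower().split()), Counter(evidence.lower().split()))

claim = "the vaccine causes severe illness"
trusted = "clinical trials show the vaccine is safe"
fake = "report claims the vaccine causes severe illness in adults"
# The claim should score higher against the matching (fake) source
# than against the unrelated trusted one.
```

A real system would replace the count vectors with dense embeddings and feed the retrieved evidence into a classifier, as the thesis describes.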
APA, Harvard, Vancouver, ISO, and other styles
20

Barros, Maria Francisca de Sousa e. Alvim Lima de. "Fake news: characterization of different individual profiles in relation to different news topics." Master's thesis, 2022. http://hdl.handle.net/10362/133068.

Full text
Abstract:
Project Work presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management
The existence of fake news is an extremely topical concern which calls into question the veracity of broadcast information. Since nowadays the search for and production of news is mainly done online, the costs of content production are low and the content's reach and speed of propagation are very high. These factors facilitate the dissemination of fake news on social platforms that are not specialized means of communication, namely online social networks. Therefore, this study aims to characterize different profiles of Portuguese individuals based on their susceptibility to several news topics. The attainment of the mentioned profiles will be a valuable contribution to information management and will allow the future definition of measures to mitigate the propagation of fake news on social platforms. To achieve this, a critical literature review was conducted, accompanied by a survey analyzing how academic background and the topic of the news pieces influence individuals' accuracy in identifying false news. This dissertation intends to understand whether anyone is immune to fake news, or whether individuals can be more or less immune depending on the topic.
APA, Harvard, Vancouver, ISO, and other styles
21

"Hidden Fear: Evaluating the Effectiveness of Messages on Social Media." Master's thesis, 2020. http://hdl.handle.net/2286/R.I.57340.

Full text
Abstract:
The development of the internet provided new means for people to communicate effectively and share their ideas. Recent years have seen a decline in the consumption of newspapers and traditional broadcast media in favour of online social media. Social media has been introduced as a new way of increasing democratic discussion of political and social matters. Among social media, Twitter is widely used by politicians, government officials, communities, and parties to make announcements and reach their followers, which greatly increases the medium's reach. The usage of social media during social and political campaigns has been the subject of many social science studies, including the Occupy Wall Street movement, the Arab Spring, the United States (US) election, and more recently the Brexit campaign. The widespread usage of social media in this space and the active participation of people in discussions on social media made this communication channel a suitable place for spreading propaganda to alter public opinion. An interesting feature of Twitter is the ease with which bots can be programmed to operate on the platform. Social media bots are automated agents engineered to emulate the activity of a human being by tweeting specific content, replying to users, and magnifying certain topics by retweeting them. A network of these bots is called a botnet, describing the collaboration of connected computers running programs that communicate across multiple devices to perform some task. In this thesis, I study how bots can influence opinion, find which parameters play a role in shrinking or coalescing communities, and finally logically prove the effectiveness of each of the hypotheses.
Dissertation/Thesis
Masters Thesis Computer Science 2020
APA, Harvard, Vancouver, ISO, and other styles
22

Elhaddad, Mohamed Kamel Abdelsalam. "Web mining for social network analysis." Thesis, 2021. http://hdl.handle.net/1828/13219.

Full text
Abstract:
Undoubtedly, the rapid development of information systems and the widespread use of electronic means and social networks have played a significant role in accelerating the pace of events worldwide, such as in the 2012 Gaza conflict (the 8-day war), in the pro-secessionist rebellion in the 2013-2014 conflict in Eastern Ukraine, in the 2016 US Presidential elections, and in conjunction with the COVID-19 pandemic outbreak since the beginning of 2020. As the amount of data shared daily on various social networking platforms in different languages grows quickly, techniques to carry out automatic classification of this huge amount of data in a timely and correct manner are needed. Of the many social networking platforms, Twitter is one of the most used by netizens. It allows its users to communicate, share their opinions, and express their emotions (sentiments) in the form of short blogs easily and at no cost. Moreover, unlike other social networking platforms, Twitter allows research institutions to access its public and historical data, upon request and under control. Therefore, many organizations at different levels (e.g., governmental, commercial) are seeking to benefit from the analysis and classification of the shared tweets to serve many application domains, for example, sentiment analysis to evaluate and determine users' polarity from the content of their shared text, and misleading information detection to ensure the legitimacy and the credibility of the shared information. To attain this objective, one can apply numerous data representation, preprocessing, and natural language processing techniques, and machine/deep learning algorithms. There are several challenges and limitations with existing approaches, including issues with the management of tweets in multiple languages, the determination of what features the feature vector should include, and the assignment of representative and descriptive weights to these features for different mining tasks.
Besides, there are limitations in existing performance evaluation metrics to fully assess the developed classification systems. In this dissertation, two novel frameworks are introduced: the first is to efficiently analyze and classify bilingual (Arabic and English) textual content of social networks, while the second is for evaluating the performance of binary classification algorithms. The first framework is designed with: (1) an approach to handle Arabic and English tweets, which can be extended to cover data written in more languages and from other social networking platforms; (2) effective data preparation and preprocessing techniques; (3) a novel feature selection technique that allows utilizing different types of features (content-dependent, context-dependent, and domain-dependent); and (4) a novel feature extraction technique to assign weights to the linguistic features based on how representative they are in the classes they belong to. The proposed framework is employed in performing sentiment analysis and misleading information detection. The performance of this framework is compared to state-of-the-art classification approaches utilizing 11 benchmark datasets comprising both Arabic and English textual content, demonstrating considerable improvement across all performance evaluation metrics. Then, this framework is utilized in a real-life case study to detect misleading information surrounding the spread of COVID-19. In the second framework, a new multidimensional classification assessment score (MCAS) is introduced. MCAS can determine how good a classification algorithm is when dealing with binary classification problems. It takes into consideration the effect of misclassification errors on the probability of correct detection of instances from both classes. Moreover, it is valid regardless of the size of the dataset and whether the dataset has a balanced or unbalanced distribution of its instances over the classes.
An empirical and practical analysis is conducted on both synthetic and real-life datasets to compare the behaviour of the proposed metric against those commonly used. The analysis reveals that the new measure can distinguish the performance of different classification techniques. Furthermore, it allows performing a class-based assessment of classification algorithms, to assess the ability of the classification algorithm when dealing with data from each class separately. This is useful if correctly classifying instances from one class is more important than instances from the other class, such as in COVID-19 testing, where the detection of positive patients is much more important than that of negative ones.
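The motivation for a class-based score can be sketched from a binary confusion matrix. Note this is an illustration of the general idea, not the MCAS formula itself, which is defined in the dissertation; the per-class rates and their geometric mean below are standard stand-ins, and the counts are invented.

```python
import math

def class_based_scores(tp, fn, fp, tn):
    """Per-class detection rates from a binary confusion matrix.

    Returns the recall of each class and their geometric mean, an
    imbalance-robust summary in the spirit of a class-based assessment
    (the actual MCAS score is defined differently in the dissertation).
    """
    pos_rate = tp / (tp + fn) if tp + fn else 0.0  # sensitivity
    neg_rate = tn / (tn + fp) if tn + fp else 0.0  # specificity
    return pos_rate, neg_rate, math.sqrt(pos_rate * neg_rate)

# Unbalanced data: overall accuracy looks high while the positive
# (e.g. COVID-19-positive) class is badly detected.
sens, spec, gmean = class_based_scores(tp=30, fn=70, fp=10, tn=890)
acc = (30 + 890) / 1000
```

Here accuracy is 0.92, yet only 30% of positive instances are detected, which is exactly the failure mode a class-based metric is meant to expose.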
Graduate
APA, Harvard, Vancouver, ISO, and other styles
23

Ahmed, Hadeer. "Detecting opinion spam and fake news using n-gram analysis and semantic similarity." Thesis, 2017. https://dspace.library.uvic.ca//handle/1828/8796.

Full text
Abstract:
In recent years, deceptive content such as fake news and fake reviews, also known as opinion spam, has increasingly become a dangerous prospect for online users. Fake reviews affect consumers and stores alike. Furthermore, the problem of fake news gained attention in 2016, especially in the aftermath of the last US presidential election. Fake reviews and fake news are closely related phenomena, as both consist of writing and spreading false information or beliefs. The opinion spam problem was formulated for the first time only a few years ago, but it has quickly become a growing research area due to the abundance of user-generated content. It is now easy for anyone to write fake reviews or fake news on the web. The biggest challenge is the lack of an efficient way to tell the difference between a real review and a fake one; even humans are often unable to tell the difference. In this thesis, we have developed an n-gram model to automatically detect fake content, with a focus on fake reviews and fake news. We studied and compared two different feature extraction techniques and six machine learning classification techniques. Furthermore, we investigated the impact of keystroke features on the accuracy of the n-gram model. We also applied semantic similarity metrics to detect near-duplicated content. Experimental evaluation of the proposed approach using existing public datasets and a newly introduced fake news dataset indicates improved performance compared to the state of the art.
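The n-gram features underlying such a model are simply counts of contiguous token sequences. A minimal sketch of the extraction step, on an invented example document:

```python
from collections import Counter

def word_ngrams(text, n):
    # Contiguous word n-grams: the raw features behind an n-gram
    # detection model (counts would typically be TF-IDF-weighted).
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

doc = "breaking news breaking news shocks readers"
unigrams = word_ngrams(doc, 1)
bigrams = word_ngrams(doc, 2)
```

Each document becomes a sparse count vector over the n-gram vocabulary, which is then fed to an ordinary classifier.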
Graduate
APA, Harvard, Vancouver, ISO, and other styles
24

Tsai, Chung-Chih, and 蔡忠志. "A New Feature Set for Face Detection." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/25044260112501168862.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Institute of Information Systems and Applications
93
Viola and Jones introduced a fast face detection system using a cascaded structure that can achieve a high detection rate and a low false positive rate. Their system uses integral images to compute feature values. This thesis introduces two new types of integral images, called triangle integral images, and two corresponding features, named triangle features. It also proposes a method to lower the training error by modifying Discrete AdaBoost. As a result, using triangle features can decrease the number of features; this research achieves a lower false positive rate while using fewer features.
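The standard (rectangular) integral image that this thesis extends can be sketched in a few lines: after one pass over the image, the sum of any axis-aligned rectangle costs four table lookups. This illustrates the classic construction, not the triangle variants proposed in the thesis.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    # Sum of any rectangle in O(1) via inclusion-exclusion.
    total = ii[bottom][right]
    if top:
        total -= ii[top - 1][right]
    if left:
        total -= ii[bottom][left - 1]
    if top and left:
        total += ii[top - 1][left - 1]
    return total

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
```

The triangle integral images of the thesis play the same role for triangular regions, so triangle feature values are likewise computed in constant time.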
APA, Harvard, Vancouver, ISO, and other styles
25

Fu, Jing-Tong, and 傅靖桐. "New Face Detection Method Based on Multi-Scale Histograms." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/67401742207564243666.

Full text
Abstract:
Master's thesis
Asia University
Department of Computer Science and Information Engineering
104
As technology improves every day, the smart society has become an inexorable trend. With the development of face detection and face recognition, the quality of human life has been dramatically improved. Face detection and face recognition have been used in many applications, such as intelligent video surveillance, access control systems, and people counting. Therefore, the applications of face detection and face recognition play important roles in our daily lives. This thesis presents a new face detection method based on multi-scale histograms. The proposed method adopts a texture descriptor with a coarse-to-fine structure, representing the characteristics of face texture by constructing multi-scale histograms. The proposed method achieves detection precision similar to the LBP method, as validated by experimental results, while being more efficient in computational cost: it is close to ten times faster than the LBP method when the block size is set to 4×4.
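A multi-scale histogram descriptor of the coarse-to-fine kind described here can be sketched as concatenated block histograms computed at several block sizes. This is a generic illustration, not the thesis's exact descriptor; the bin count, block sizes, and the synthetic 4×4 image are assumptions.

```python
def block_histogram(img, y0, x0, size, bins=4, max_val=256):
    # Intensity histogram of one size x size block.
    hist = [0] * bins
    for y in range(y0, y0 + size):
        for x in range(x0, x0 + size):
            hist[img[y][x] * bins // max_val] += 1
    return hist

def multi_scale_histograms(img, sizes=(4, 2)):
    """Coarse-to-fine descriptor: concatenated block histograms at several scales."""
    desc = []
    n = len(img)
    for s in sizes:  # coarse blocks first, then finer ones
        for y in range(0, n, s):
            for x in range(0, n, s):
                desc.extend(block_histogram(img, y, x, s))
    return desc

# Synthetic 4x4 grayscale patch with intensities 0, 16, ..., 240.
img = [[(y * 4 + x) * 16 for x in range(4)] for y in range(4)]
desc = multi_scale_histograms(img)
```

The coarse level captures the overall intensity distribution while the fine level localizes it, which is what allows a cheap histogram descriptor to approximate the discriminative power of denser texture codes.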
APA, Harvard, Vancouver, ISO, and other styles
26

Serre, Thomas, Lior Wolf, and Tomaso Poggio. "A new biologically motivated framework for robust object recognition." 2004. http://hdl.handle.net/1721.1/30504.

Full text
Abstract:
In this paper, we introduce a novel set of features for robust object recognition, which exhibits outstanding performance on a variety of object categories while being capable of learning from only a few training examples. Each element of this set is a complex feature obtained by combining position- and scale-tolerant edge-detectors over neighboring positions and multiple orientations. Our system, motivated by a quantitative model of visual cortex, outperforms state-of-the-art systems on a variety of object image datasets from different groups. We also show that our system is able to learn from very few examples with no prior category knowledge. The success of the approach is also a suggestive plausibility proof for a class of feed-forward models of object recognition in cortex. Finally, we conjecture the existence of a universal overcomplete dictionary of features that could handle the recognition of all object categories.
APA, Harvard, Vancouver, ISO, and other styles
27

Barczak, Andre Luis Chautard. "Feature-based rapid object detection : from feature extraction to parallelisation : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Sciences at Massey University, Auckland, New Zealand." 2007. http://hdl.handle.net/10179/742.

Full text
Abstract:
This thesis studies rapid object detection, focusing on feature-based methods. Firstly, modifications to the training and detection of the Viola-Jones method are made to improve performance and overcome some of its current limitations, such as rotation, occlusion, and articulation. New classifiers, produced by training and by converting existing classifiers, are tested in face detection and hand detection. Secondly, the nature of invariant features is discussed in terms of computational complexity, discriminative power, and invariance to rotation and scaling. A new feature extraction method called Concentric Discs Moment Invariants (CDMI) is developed based on moment invariants and summed-area tables. The dimensionality of this feature set can be increased by using additional concentric discs, rather than higher-order moments. The CDMI set has useful properties, such as speed, rotation invariance, and scaling invariance, and rapid contrast stretching can be easily implemented. The results of face detection experiments show a clear improvement in accuracy and performance of the CDMI method compared to the standard moment invariants method. Both the CDMI and its variant, using central moments from concentric squares, are used to assess the strength of the method applied to handwritten digit recognition. Finally, the parallelisation of the detection algorithm is discussed. A new model for the specific case of the Viola-Jones method is proposed and tested experimentally. This model takes advantage of the structure of classifiers and of the multi-resolution approach associated with the detection method. The model shows that high speedups can be achieved by broadcasting frames and carrying out the computation of one or more cascades in each node.
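The moment invariants that CDMI builds on start from raw and central image moments. A minimal sketch of those building blocks is below, on an invented 3×3 image; the concentric-disc sampling and summed-area-table acceleration that define CDMI itself are not shown.

```python
def raw_moment(img, p, q):
    # m_pq = sum over pixels of x^p * y^q * intensity.
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img)
               for x, v in enumerate(row))

def centroid(img):
    # Intensity centroid (x_bar, y_bar) from first-order moments.
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00

def central_moment(img, p, q):
    # Moment about the centroid: invariant to translation.
    cx, cy = centroid(img)
    return sum(((x - cx) ** p) * ((y - cy) ** q) * v
               for y, row in enumerate(img)
               for x, v in enumerate(row))

img = [[0, 1, 0],
       [1, 2, 1],
       [0, 1, 0]]
cx, cy = centroid(img)
```

Rotation and scale invariance are then obtained by normalizing and combining such moments; CDMI instead grows the descriptor by evaluating them over concentric discs.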
APA, Harvard, Vancouver, ISO, and other styles
