Academic literature on the topic 'Disinformative data'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Disinformative data.'

Journal articles on the topic "Disinformative data"

1

Kauffeldt, A., S. Halldin, A. Rodhe, C. Y. Xu, and I. K. Westerberg. "Disinformative data in large-scale hydrological modelling." Hydrology and Earth System Sciences 17, no. 7 (July 22, 2013): 2845–57. http://dx.doi.org/10.5194/hess-17-2845-2013.

Abstract:
Abstract. Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) between climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent model simulations.
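As a rough illustration of the pre-modelling screening this abstract describes, the sketch below flags basins whose long-term water balance looks physically implausible (runoff coefficients above one, losses exceeding the potential-evaporation limit). The column names, units, example values and thresholds are assumptions made for illustration; they do not reproduce the paper's datasets or exact criteria.

```python
# Illustrative pre-modelling screening of basin water balances, loosely following
# the checks described in the abstract. Column names, units and thresholds are
# assumed for illustration only; the paper's datasets and criteria differ in detail.
import pandas as pd

def screen_water_balance(df: pd.DataFrame) -> pd.DataFrame:
    """Flag basins whose long-term water balance looks disinformative.

    Expects mean annual precipitation `p`, potential evaporation `pet`,
    and discharge `q`, all in mm/year per basin.
    """
    out = df.copy()
    out["runoff_coeff"] = out["q"] / out["p"]
    # Runoff coefficient > 1: more water leaves the basin than falls as rain,
    # often a sign of precipitation undercatch (e.g. snow) or basin-area errors.
    out["flag_runoff_gt_1"] = out["runoff_coeff"] > 1.0
    # Long-term losses (P - Q) exceeding potential evaporation violate the PE limit.
    out["flag_exceeds_pe_limit"] = (out["p"] - out["q"]) > out["pet"]
    return out

basins = pd.DataFrame(
    {"basin": ["A", "B", "C"], "p": [800.0, 450.0, 1200.0],
     "pet": [600.0, 900.0, 700.0], "q": [300.0, 500.0, 400.0]},
).set_index("basin")
print(screen_water_balance(basins))
```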
2

Kauffeldt, A., S. Halldin, A. Rodhe, C. Y. Xu, and I. K. Westerberg. "Disinformative data in large-scale hydrological modelling." Hydrology and Earth System Sciences Discussions 10, no. 1 (January 14, 2013): 487–517. http://dx.doi.org/10.5194/hessd-10-487-2013.

Abstract:
Abstract. Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aims at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between different hydrographic datasets, and between climate data (precipitation and potential evaporation), and discharge data was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that: (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the energy limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. These results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent simulations.
3

Beven, K., P. J. Smith, and A. Wood. "On the colour and spin of epistemic error (and what we might do about it)." Hydrology and Earth System Sciences 15, no. 10 (October 13, 2011): 3123–33. http://dx.doi.org/10.5194/hess-15-3123-2011.

Abstract:
Abstract. Disinformation as a result of epistemic error is an issue in hydrological modelling. In particular the way in which the colour in model residuals resulting from epistemic errors should be expected to be non-stationary means that it is difficult to justify the spin that the structure of residuals can be properly represented by statistical likelihood functions. To do so would be to greatly overestimate the information content in a set of calibration data and increase the possibility of both Type I and Type II errors. Some principles of trying to identify periods of disinformative data prior to evaluation of a model structure of interest, are discussed. An example demonstrates the effect on the estimated parameter values of a hydrological model.
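The abstract's point that coloured (autocorrelated) residuals carry less information than a standard likelihood assumes can be illustrated with a generic statistical sketch. The AR(1) simulation and the effective-sample-size approximation below are textbook devices chosen for illustration only; they are not the paper's analysis, and they do not address the further issue of non-stationarity that the paper emphasises.

```python
# Generic illustration (not from the paper): autocorrelated ("coloured") residuals
# contain fewer independent pieces of information than their raw count suggests.
# For an AR(1) process with lag-1 correlation rho, a common approximation is
#   n_eff = n * (1 - rho) / (1 + rho).
import numpy as np

rng = np.random.default_rng(42)

def ar1_series(n: int, rho: float) -> np.ndarray:
    """Simulate AR(1) residuals with lag-1 autocorrelation rho."""
    e = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + e[t]
    return x

n = 1000
for rho in (0.0, 0.5, 0.9, 0.99):
    resid = ar1_series(n, rho)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    n_eff = n * (1 - r1) / (1 + r1)
    print(f"rho={rho:.2f}  estimated lag-1 corr={r1:.2f}  effective n ~ {n_eff:.0f}")
```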
4

Beven, K., P. J. Smith, and A. Wood. "On the colour and spin of epistemic error (and what we might do about it)." Hydrology and Earth System Sciences Discussions 8, no. 3 (May 30, 2011): 5355–86. http://dx.doi.org/10.5194/hessd-8-5355-2011.

Abstract:
Abstract. Disinformation as a result of epistemic error is an issue in hydrological modelling. In particular the way in which the colour in model residuals resulting from epistemic errors should be expected to be non-stationary means that it is difficult to justify the spin that the structure of residuals can be properly represented by statistical likelihood functions. To do so would be to greatly overestimate the information content in a set of calibration data and increase the possibility of both Type I and Type II errors. Some principles of trying to identify periods of disinformative data prior to evaluation of a model structure of interest, are discussed. An example demonstrates the effect on the estimated parameter values of a hydrological model.
5

Almeida, S., N. Le Vine, N. McIntyre, T. Wagener, and W. Buytaert. "Accounting for dependencies in regionalized signatures for predictions in ungauged catchments." Hydrology and Earth System Sciences Discussions 12, no. 6 (June 10, 2015): 5389–426. http://dx.doi.org/10.5194/hessd-12-5389-2015.

Abstract:
Abstract. A recurrent problem in hydrology is the absence of streamflow data to calibrate rainfall-runoff models. A commonly used approach in such circumstances conditions model parameters on regionalized response signatures. While several different signatures are often available to be included in this process, an outstanding challenge is the selection of signatures that provide useful and complementary information. Different signatures do not necessarily provide independent information, and this has led to signatures being omitted or included on a subjective basis. This paper presents a method that accounts for the inter-signature error correlation structure so that regional information is neither neglected nor double-counted when multiple signatures are included. Using 84 catchments from the MOPEX database, observed signatures are regressed against physical and climatic catchment attributes. The derived relationships are then utilized to assess the joint probability distribution of the signature regionalization errors that is subsequently used in a Bayesian procedure to condition a rainfall-runoff model. The results show that the consideration of the inter-signature error structure may improve predictions when the error correlations are strong. However, other uncertainties such as model structure and observational error may outweigh the importance of these correlations. Further, these other uncertainties cause some signatures to appear repeatedly to be disinformative.
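A minimal sketch of the general idea, assuming a simple two-signature Gaussian error model: simulated signatures are scored against regionalised ones with an explicit error correlation rather than being treated as independent. The signature names, numbers and covariance are invented for illustration and do not reproduce the paper's regressions or Bayesian procedure.

```python
# Minimal sketch (assumed values, not the paper's data): evaluating simulated
# signatures against regionalised ones with a multivariate Gaussian error model,
# so that correlated signatures are neither ignored nor double-counted.
import numpy as np
from scipy.stats import multivariate_normal

# Regionalised signature estimates (e.g. runoff ratio, baseflow index) and the
# covariance of their regionalisation errors, here with strong correlation.
regionalised = np.array([0.45, 0.60])
sigma = np.array([0.05, 0.08])
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])
cov = np.outer(sigma, sigma) * corr

def log_likelihood(simulated: np.ndarray, independent: bool = False) -> float:
    """Log-likelihood of simulated signatures given the regionalised ones."""
    c = np.diag(sigma**2) if independent else cov
    return multivariate_normal(mean=regionalised, cov=c).logpdf(simulated)

sim = np.array([0.50, 0.68])  # signatures from one candidate parameter set
print("correlated errors :", log_likelihood(sim))
print("independent errors:", log_likelihood(sim, independent=True))
```

When the error correlation is strong, the two scores diverge, which is the situation in which the paper finds that accounting for the correlation structure matters most.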
6

Ahmad, Norita, Nash Milic, and Mohammed Ibahrine. "Data and Disinformation." Computer 54, no. 7 (July 2021): 105–10. http://dx.doi.org/10.1109/mc.2021.3074261.

7

Bader, Max. "Disinformation in Elections." Security and Human Rights 29, no. 1-4 (December 12, 2018): 24–35. http://dx.doi.org/10.1163/18750230-02901006.

Abstract:
In recent years there has been increasing attention to the potentially disruptive influence of disinformation on elections. The most common forms of disinformation in elections include the dissemination of ‘fake news’ in order to discredit opponents or to influence the voting process, the falsification or manipulation of polling data, and the use of fake election monitoring and observation. This article presents an overview of the phenomenon of disinformation in elections in both democratic and undemocratic environments, and discusses measures to reduce its scope and negative impact.
8

Colborne, Adrienne, and Michael Smit. "Characterizing Disinformation Risk to Open Data in the Post-Truth Era." Journal of Data and Information Quality 12, no. 3 (July 29, 2020): 1–13. http://dx.doi.org/10.1145/3328747.

9

Berliner, David C. "Educational Reform in an Era of Disinformation." Education Policy Analysis Archives 1 (February 2, 1993): 2. http://dx.doi.org/10.14507/epaa.v1n2.1993.

Abstract:
Data which suggest the failure of America's schools to educate its youth well do not survive careful scrutiny. School reforms based on these questionable data are wrongheaded and potentially destructive of quality education. Reforms of the kind proposed by those who have started from an assumption that America's schools have failed will exacerbate the differences between the "have" and the "have-not" school districts.
10

Chung, Chung Joo, Minjeong Kim, and Han Woo Park. "Big Data Analysis and Modeling of Disinformation Consumption and Diffusion on YouTube." Discourse and Policy in Social Science 12, no. 2 (October 31, 2019): 105–38. http://dx.doi.org/10.22417/dpss.2019.10.12.2.105.


Dissertations / Theses on the topic "Disinformative data"

1

Kauffeldt, Anna. "Disinformative and Uncertain Data in Global Hydrology : Challenges for Modelling and Regionalisation." Doctoral thesis, Uppsala universitet, Luft-, vatten och landskapslära, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-236864.

Abstract:
Water is essential for human well-being and healthy ecosystems, but population growth and changes in climate and land-use are putting increased stress on water resources in many regions. To ensure water security, knowledge about the spatiotemporal distribution of these resources is of great importance. However, estimates of global water resources are constrained by limitations in availability and quality of data. This thesis explores the quality of both observational and modelled data, gives an overview of models used for large-scale hydrological modelling, and explores the possibilities to deal with the scarcity of data by prediction of flow-duration curves. The evaluation of the quality of observational data for large-scale hydrological modelling was based on both hydrographic data, and model forcing and evaluation data for basins worldwide. The results showed that a GIS polygon dataset outperformed all gridded hydrographic products analysed in terms of representation of basin areas. Through a screening methodology based on the long-term water-balance equation it was shown that as many as 8–43% of the basins analysed displayed inconsistencies between forcing (precipitation and potential evaporation) and evaluation (discharge) data depending on how datasets were combined. These data could prove disinformative in hydrological model inference and analysis. The quality of key hydrological variables from a numerical weather prediction model was assessed by benchmarking against observational datasets and by analysis of the internal land-surface water budgets of several different model setups. Long-term imbalances were found between precipitation and evaporation on the global scale and between precipitation, evaporation and runoff on both cell and basin scales. These imbalances were mainly attributed to the data assimilation system in which soil moisture is used as a nudge factor to improve weather forecasts. Regionalisation, i.e. transfer of information from data-rich areas to data-sparse areas, is a necessity in hydrology because of a lack of observed data in many areas. In this thesis, the possibility to predict flow-duration curves in ungauged basins was explored by testing several different methodologies including machine learning. The results were mixed, with some well predicted curves, but many predicted curves exhibited large biases and several methods resulted in unrealistic curves.
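As a hedged illustration of the regionalisation theme in this thesis (predicting flow-duration curves in ungauged basins from catchment attributes), the sketch below uses synthetic data and a random-forest regressor; the attributes, the model choice and the monotonicity check are assumptions for illustration only, not the thesis's actual setup.

```python
# Illustrative regionalisation sketch with synthetic data: predicting flow-duration
# curve (FDC) quantiles in "ungauged" basins from catchment attributes. Attributes,
# model choice and data are assumptions, not the thesis's methods or results.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_basins = 200
# Synthetic catchment attributes: area (km2), mean precipitation (mm), forest fraction.
X = np.column_stack([
    rng.uniform(50, 5000, n_basins),
    rng.uniform(300, 2000, n_basins),
    rng.uniform(0.0, 1.0, n_basins),
])
# Synthetic FDC quantiles (Q5, Q50, Q95 in mm/day), loosely tied to the attributes.
q50 = 0.002 * X[:, 1] + rng.normal(0, 0.1, n_basins)
y = np.column_stack([3.0 * q50, q50, 0.2 * q50]).clip(min=0.01)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
# A predicted FDC should be monotone (Q5 >= Q50 >= Q95); checking this is one
# simple plausibility test (the thesis reports that several methods produced
# unrealistic, e.g. non-monotonic or negative, curves).
monotone = np.mean((pred[:, 0] >= pred[:, 1]) & (pred[:, 1] >= pred[:, 2]))
print(f"share of 'ungauged' test basins with monotone predicted curves: {monotone:.2f}")
```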
2

Beridzishvili, Jumber. "When the state cannot deal with online content : Reviewing user-driven solutions that counter political disinformation on Facebook." Thesis, Malmö universitet, Malmö högskola, Institutionen för globala politiska studier (GPS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-18502.

Abstract:
The damage online disinformation has done to the world’s democracies has been critical, yet states fail to handle online content harms. Because it is exempt from legal liability for hosted content, Facebook, used by a third of the world’s population, operates ‘duty-free’ along with other social media companies. Concerned with solutions, studies have raised the idea that social resistance could be one of the most effective ways of combating disinformation. However, how exactly we resist remains an unsettled subject. Are there any socially driven processes against disinformation happening out there? This paper aimed to identify such processes in order to give a boost to theory-building around the topic. Two central evidence cases were developed: the #IAmHere digital movement fighting disinformation and the innovative tool ‘Who is Who’ for distinguishing fake accounts. Based on the findings, I argue that efforts by even a very small part of society can have a significant impact on defeating online disinformation. This is because digital activism has distinctive strengths for shaping online political discourse around disinformation. Tools such as ‘Who is Who’, on the other hand, build social resilience against the issue and also help digital activists report disinformation content en masse. User-driven solutions hold significant potential for further research. Keywords: online disinformation; algorithms; digital activism; user-driven solutions.
3

Icard, Benjamin. "Lying, deception and strategic omission : definition and evaluation." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEE001/document.

Abstract:
This thesis aims at improving the definition and evaluation of deceptive strategies that can manipulate information. Using conceptual, formal and experimental resources, I analyze three deceptive strategies, some of which are standard cases of deception, in particular lies, and others non-standard cases of deception, in particular misleading inferences and strategic omissions. Firstly, I consider definitional aspects. I deal with the definition of lying, and present new empirical data supporting the traditional account of the notion (called the ‘subjective definition’), contradicting recent claims in favour of a falsity clause (leading to an ‘objective definition’). Next, I analyze non-standard cases of deception through the categories of misleading defaults and omissions of information. I use qualitative belief revision to examine a puzzle due to R. Smullyan about the possibility of triggering a default inference to deceive an addressee by omission. Secondly, I consider evaluative aspects. I take the perspective of military intelligence data processing to offer a typology of informational messages based on the descriptive dimensions of truth (for message contents) and honesty (for message sources). I also propose a numerical procedure to evaluate these messages based on the evaluative dimensions of credibility (for truth) and reliability (for honesty). Quantitative plausibility models are used to capture degrees of prior credibility of messages, and dynamic rules are defined to update these degrees depending on the reliability of the source.
4

"Understanding Disinformation: Learning with Weak Social Supervision." Doctoral diss., 2020. http://hdl.handle.net/2286/R.I.62707.

Abstract:
abstract: Social media has become an important means of user-centered information sharing and communications in a gamut of domains, including news consumption, entertainment, marketing, public relations, and many more. The low cost, easy access, and rapid dissemination of information on social media draws a large audience but also exacerbate the wide propagation of disinformation including fake news, i.e., news with intentionally false information. Disinformation on social media is growing fast in volume and can have detrimental societal effects. Despite the importance of this problem, our understanding of disinformation in social media is still limited. Recent advancements of computational approaches on detecting disinformation and fake news have shown some early promising results. Novel challenges are still abundant due to its complexity, diversity, dynamics, multi-modality, and costs of fact-checking or annotation. Social media data opens the door to interdisciplinary research and allows one to collectively study large-scale human behaviors otherwise impossible. For example, user engagements over information such as news articles, including posting about, commenting on, or recommending the news on social media, contain abundant rich information. Since social media data is big, incomplete, noisy, unstructured, with abundant social relations, solely relying on user engagements can be sensitive to noisy user feedback. To alleviate the problem of limited labeled data, it is important to combine contents and this new (but weak) type of information as supervision signals, i.e., weak social supervision, to advance fake news detection. The goal of this dissertation is to understand disinformation by proposing and exploiting weak social supervision for learning with little labeled data and effectively detect disinformation via innovative research and novel computational methods. In particular, I investigate learning with weak social supervision for understanding disinformation with the following computational tasks: bringing the heterogeneous social context as auxiliary information for effective fake news detection; discovering explanations of fake news from social media for explainable fake news detection; modeling multi-source of weak social supervision for early fake news detection; and transferring knowledge across domains with adversarial machine learning for cross-domain fake news detection. The findings of the dissertation significantly expand the boundaries of disinformation research and establish a novel paradigm of learning with weak social supervision that has important implications in broad applications in social media.
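A toy sketch of the general idea of weak social supervision, i.e., turning noisy engagement signals into provisional labels when annotated articles are scarce. The signals, thresholds and abstention rule below are invented for illustration and are not the dissertation's actual methods.

```python
# Toy sketch of "weak social supervision": deriving noisy labels from engagement
# signals when annotated articles are scarce. The signals, thresholds and the
# abstention rule are illustrative assumptions, not the dissertation's methods.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Engagement:
    text: str
    share_burst: float          # how quickly the piece was reshared after posting (0-1)
    low_cred_user_ratio: float  # fraction of engaging users with low credibility scores

def weak_label(e: Engagement) -> Optional[int]:
    """Return 1 (likely fake), 0 (likely real), or None (abstain)."""
    if e.low_cred_user_ratio > 0.6 and e.share_burst > 0.8:
        return 1
    if e.low_cred_user_ratio < 0.2 and e.share_burst < 0.3:
        return 0
    return None  # signal too weak; leave the item unlabeled

items = [
    Engagement("Miracle cure shared by anonymous accounts", 0.9, 0.7),
    Engagement("Council publishes annual budget report", 0.1, 0.1),
    Engagement("Celebrity rumour of unclear origin", 0.5, 0.4),
]
print([(e.text, weak_label(e)) for e in items])
# The weakly labeled items could then augment a small hand-annotated set
# when training a content-based classifier.
```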
5

Higgins, Stefan. "Imagining information: the uses of storytelling." Thesis, 2020. http://hdl.handle.net/1828/12555.

Abstract:
This thesis investigates a cultural logic of information. In a world saturated with information, how is representation defined, and what kinds of boundaries does it consequently set up for establishing what can be known? I argue that a cultural logic of information articulates a common cultural definition for representation: information is understood as either a “true” representation of reality, or a substitute for reality itself. As a result, information comes to be conflated with knowledge. But, in contrast to calls (scholarly and otherwise) to police the boundaries of information, I argue 1) that information is exceedingly difficult to separate, in kind, from storytelling, because 2) the provision of information almost always entails scrambles for narrative representation, which 3) are always staged in the terms of genre. The function of these conclusions is the constant undermining of this cultural logic. I examine the intersection of a variety of cultural and theoretical objects, including: Fox News and “Make America Great Again”; scientific modelling of climate change; Claude Shannon’s mathematical theory of communication; Karl Ove Knausgaard’s My Struggle; YouTube “lifestyle” communities; and the documentary “The Act of Killing.” I suggest that a methodology that accounts for the imbrication of information and storytelling better accounts for the vicissitudes of, and ideological struggles over, these cultural phenomena. It does so, in particular, by engaging with the subjective experience of information, and assessing how subjects imagine their relations to information and to networks. The purpose of this argument is to intervene in conversations about the articulation of life in control societies.

Books on the topic "Disinformative data"

1

Rogers, Richard, and Sabine Niederer, eds. The Politics of Social Media Manipulation. Amsterdam: Amsterdam University Press, 2020. http://dx.doi.org/10.5117/9789463724838.

Abstract:
Disinformation and so-called fake news are contemporary phenomena with rich histories. Disinformation, or the willful introduction of false information for the purposes of causing harm, recalls infamous foreign interference operations in national media systems. Outcries over fake news, or dubious stories with the trappings of news, have coincided with the introduction of new media technologies that disrupt the publication, distribution and consumption of news -- from the so-called rumour-mongering broadsheets centuries ago to the blogosphere recently. Designating a news organization as fake, or der Lügenpresse, has a darker history, associated with authoritarian regimes or populist bombast diminishing the reputation of 'elite media' and the value of inconvenient truths. In a series of empirical studies, using digital methods and data journalism, the authors inquire into the extent to which social media have enabled the penetration of foreign disinformation operations, the widespread publication and spread of dubious content as well as extreme commentators with considerable followings attacking mainstream media as fake.
2

Neagu, Marin. Istoria literaturii române în date. Târgoviște [Romania]: Editura Bibliotheca, 2001.

3

Viral data in SOA: An enterprise pandemic. Upper Saddle River, NJ: IBM Press/Pearson, 2010.

4

Wassermann, Selma. Teaching in the Age of Disinformation: Don't Confuse Me with the Data, My Mind Is Made Up! Rowman & Littlefield Publishers, Incorporated, 2018.

5

Woolley, Samuel C., and Philip N. Howard. Introduction. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190931407.003.0001.

Abstract:
Computational propaganda is an emergent form of political manipulation that occurs over the Internet. The term describes the assemblage of social media platforms, autonomous agents, algorithms, and big data tasked with manipulating public opinion. Our research shows that this new mode of interrupting and influencing communication is on the rise around the globe. Advances in computing technology, especially around social automation, machine learning, and artificial intelligence, mean that computational propaganda is becoming more sophisticated and harder to track. This introduction explores the foundations of computational propaganda. It describes the key role of automated manipulation of algorithms in recent efforts to control political communication worldwide. We discuss the social data science of political communication and build upon the argument that algorithms and other computational tools now play an important political role in news consumption, issue awareness, and cultural understanding. We unpack key findings of the nine country case studies that follow—exploring the role of computational propaganda during events from local and national elections in Brazil to the ongoing security crisis between Ukraine and Russia. Our methodology in this work has been purposefully mixed, using quantitative analysis of data from several social media platforms and qualitative work that includes interviews with the people who design and deploy political bots and disinformation campaigns. Finally, we highlight original evidence about how this manipulation and amplification of disinformation is produced, managed, and circulated by political operatives and governments, and describe paths for both democratic intervention and future research in this space.
6

Woolley, Samuel C., and Philip N. Howard, eds. Computational Propaganda. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190931407.001.0001.

Abstract:
Computational propaganda is an emergent form of political manipulation that occurs over the Internet. The term describes the assemblage of social media platforms, autonomous agents, algorithms, and big data tasked with the manipulation of public opinion. Our research shows that this new mode of interrupting and influencing communication is on the rise around the globe. Advances in computing technology, especially around social automation, machine learning, and artificial intelligence mean that computational propaganda is becoming more sophisticated and harder to track at an alarming rate. This introduction explores the foundations of computational propaganda. It describes the key role that automated manipulation of algorithms plays in recent efforts to control political communication worldwide. We discuss the social data science of political communication and build upon the argument that algorithms and other computational tools now play an important political role in areas like news consumption, issue awareness, and cultural understanding. We unpack the key findings of the nine country case studies that follow—exploring the role of computational propaganda during events from local and national elections in Brazil to the ongoing security crisis between Ukraine and Russia. Our methodology in this work has been purposefully mixed, we make use of quantitative analysis of data from several social media platforms and qualitative work that includes interviews with the people who design and deploy political bots and disinformation campaigns. Finally, we highlight original evidence about how this manipulation and amplification of disinformation is produced, managed, and circulated by political operatives and governments and describe paths for both democratic intervention and future research in this space.
7

Benkler, Yochai, Robert Faris, and Hal Roberts. Can the Internet Survive Democracy? Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190923624.003.0012.

Abstract:
This chapter examines whether the internet can—or cannot—contribute to democratization, and under what conditions. This chapter discusses five major failure modes that limit the benefits of decentralized digitally-mediated collective action. The first is the failure to convert from a moment’s surge of decentralized passion into a longer-term, sustained effort with competence to engage political institutions systematically over time. The second is the failure to sustain the decentralized openness in the transition to more structured political organization. The third failure mode of the internet and democracy refers to the power of well-organized, data-informed central powers to move millions of people from the center out, instead of the other way around. The fourth failure mode is that precisely what makes decentralized networks so effective at circumventing established forms of control can also make them the vehicles of repressive mobs. The final failure mode is the susceptibility to disinformation and propaganda.
8

Graham, Mark, and William H. Dutton, eds. Society and the Internet. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198843498.001.0001.

Abstract:
How is society being reshaped by the continued diffusion and increasing centrality of the Internet in everyday life and work? Society and the Internet provides key readings for students, scholars, and those interested in understanding the interactions of the Internet and society. This multidisciplinary collection of theoretically and empirically anchored chapters addresses the big questions about one of the most significant technological transformations of this century, through a diversity of data, methods, theories, and approaches. Drawing from a range of disciplinary perspectives, Internet research can address core questions about equality, voice, knowledge, participation, and power. By learning from the past and continuing to look toward the future, it can provide a better understanding of what the ever-changing configurations of technology and society mean, both for the everyday life of individuals and for the continued development of society at large. This second edition presents new and original contributions examining the escalating concerns around social media, disinformation, big data, and privacy. Following a foreword by Manuel Castells, the editors introduce some of the key issues in Internet Studies. The chapters then offer the latest research, in five focused sections: The Internet in Everyday Life; Digital Rights and Human Rights; Networked Ideas, Politics, and Governance; then Networked Businesses, Industries, and Economics; and finally, Technological and Regulatory Histories and Futures. This book is a valuable resource not only for students and researchers, but for anyone seeking a critical examination of the economic, social, and political factors shaping the Internet and its impact on society.
9

Taking Stock of Regional Democratic Trends in Asia and the Pacific Before and During the COVID-19 Pandemic. International Institute for Democracy and Electoral Assistance, 2020. http://dx.doi.org/10.31752/idea.2020.70.

Abstract:
This GSoD In Focus Special Brief provides an overview of the state of democracy in Asia and the Pacific at the end of 2019, prior to the outbreak of the pandemic, and assesses some of the preliminary impacts that the pandemic has had on democracy in the region in 2020. Key facts and findings include:
• Prior to the outbreak of the COVID-19 pandemic, countries across Asia and the Pacific faced a range of democratic challenges. Chief among these were continuing political fragility, violent conflict, recurrent military interference in the political sphere, enduring hybridity, deepening autocratization, creeping ethnonationalism, advancing populist leadership, democratic backsliding, shrinking civic space, the spread of disinformation, and weakened checks and balances. The crisis conditions engendered by the pandemic risk further entrenching and/or intensifying the negative democratic trends observable in the region prior to the COVID-19 outbreak.
• Across the region, governments have been using the conditions created by the pandemic to expand executive power and restrict individual rights. Aspects of democratic practice that have been significantly impacted by anti-pandemic measures include the exercise of fundamental rights (notably freedom of assembly and free speech). Some countries have also seen deepened religious polarization and discrimination. Women, vulnerable groups, and ethnic and religious minorities have been disproportionately affected by the pandemic and discriminated against in the enforcement of lockdowns. There have been disruptions of electoral processes, increased state surveillance in some countries, and increased influence of the military. This is particularly concerning in new, fragile or backsliding democracies, which risk further eroding their already fragile democratic bases.
• As in other regions, however, the pandemic has also led to a range of innovations and changes in the way democratic actors, such as parliaments, political parties, electoral commissions, civil society organizations and courts, conduct their work. In a number of countries, for example, government ministries, electoral commissions, legislators, health officials and civil society have developed innovative new online tools for keeping the public informed about national efforts to combat the pandemic. And some legislatures are figuring out new ways to hold government to account in the absence of real-time parliamentary meetings.
• The consideration of political regime type in debates around ways of containing the pandemic also assumes particular relevance in Asia and the Pacific, a region that houses high-performing democracies, such as New Zealand and the Republic of Korea (South Korea), a mid-range performer (Taiwan), and also non-democratic regimes, such as China, Singapore and Viet Nam—all of which have, as of December 2020, among the lowest per capita deaths from COVID-19 in the world. While these countries have all so far managed to contain the virus with fewer fatalities than in the rest of the world, the authoritarian regimes have done so at a high human rights cost, whereas the democracies have done so while adhering to democratic principles, proving that the pandemic can effectively be fought through democratic means and does not necessarily require a trade-off between public health and democracy.
• The massive disruption induced by the pandemic can be an unparalleled opportunity for democratic learning, change and renovation in the region.
Strengthening democratic institutions and processes across the region needs to go hand in hand with curbing the pandemic. Rebuilding societies and economic structures in its aftermath will likewise require strong, sustainable and healthy democracies, capable of tackling the gargantuan challenges ahead. The review of the state of democracy during the COVID-19 pandemic in 2020 uses qualitative analysis and data of events and trends in the region collected through International IDEA’s Global Monitor of COVID-19’s Impact on Democracy and Human Rights, an initiative co-funded by the European Union.

Book chapters on the topic "Disinformative data"

1

Mintal, Jozef Michal, Michal Kalman, and Karol Fabián. "Hide and Seek in Slovakia: Utilizing Tracking Code Data to Uncover Untrustworthy Website Networks." In Disinformation in Open Online Media, 101–11. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87031-7_7.

2

Luber, Mattias, Christoph Weisser, Benjamin Säfken, Alexander Silbersdorff, Thomas Kneib, and Krisztina Kis-Katos. "Identifying Topical Shifts in Twitter Streams: An Integration of Non-negative Matrix Factorisation, Sentiment Analysis and Structural Break Models for Large Scale Data." In Disinformation in Open Online Media, 33–49. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87031-7_3.

3

Röchert, Daniel, German Neubaum, and Stefan Stieglitz. "Identifying Political Sentiments on YouTube: A Systematic Comparison Regarding the Accuracy of Recurrent Neural Network and Machine Learning Models." In Disinformation in Open Online Media, 107–21. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61841-4_8.

Abstract:
Abstract Since social media have increasingly become forums to exchange personal opinions, more and more approaches have been suggested to analyze those sentiments automatically. Neural networks and traditional machine learning methods allow individual adaption by training the data, tailoring the algorithm to the particular topic that is discussed. Still, a great number of methodological combinations involving algorithms (e.g., recurrent neural networks (RNN)), techniques (e.g., word2vec), and methods (e.g., Skip-Gram) are possible. This work offers a systematic comparison of sentiment analytical approaches using different word embeddings with RNN architectures and traditional machine learning techniques. Using German comments of controversial political discussions on YouTube, this study uses metrics such as F1-score, precision and recall to compare the quality of performance of different approaches. First results show that deep neural networks outperform multiclass prediction with small datasets in contrast to traditional machine learning models with word embeddings.
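A small sketch of how such a comparison is typically scored with precision, recall and F1. The labels and predictions below are made up and do not come from the chapter's German YouTube data or its models.

```python
# Small scoring sketch for comparing classifiers as in the chapter's evaluation:
# precision, recall and F1 on a three-class sentiment task. The labels and
# predictions below are invented; they are not the chapter's YouTube dataset.
from sklearn.metrics import classification_report, f1_score

labels = ["neg", "neu", "pos"]
y_true = ["neg", "neg", "neu", "pos", "pos", "neu", "neg", "pos"]
y_rnn  = ["neg", "neu", "neu", "pos", "pos", "neu", "neg", "pos"]  # e.g. an RNN's output
y_svm  = ["neg", "neg", "pos", "pos", "neu", "neu", "neg", "neg"]  # e.g. an SVM's output

for name, y_pred in [("RNN", y_rnn), ("SVM", y_svm)]:
    print(name, "macro-F1:", round(f1_score(y_true, y_pred, average="macro", labels=labels), 3))
    print(classification_report(y_true, y_pred, labels=labels, zero_division=0))
```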
4

Nagasako, Tomoko. "A Consideration of the Case Study of Disinformation and Its Legal Problems." In Human-Centric Computing in a Data-Driven Society, 262–76. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62803-1_21.

5

Spicer, Robert N. "Lies, Damn Lies, Alternative Facts, Fake News, Propaganda, Pinocchios, Pants on Fire, Disinformation, Misinformation, Post-Truth, Data, and Statistics." In Free Speech and False Speech, 1–31. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-69820-5_1.

6

Westlund, Oscar, and Alfred Hermida. "Data journalism and misinformation." In The Routledge Companion to Media Disinformation and Populism, 142–50. Routledge, 2021. http://dx.doi.org/10.4324/9781003004431-16.

7

Sippitt, Amy. "Full Fact." In Data in Society, 359–64. Policy Press, 2019. http://dx.doi.org/10.1332/policypress/9781447348214.003.0029.

Abstract:
The UK is a fortunate country with high levels of education, well-developed public and civil society institutions, and some highly trusted media. Nevertheless, there is evidence that the public is substantially misinformed on key issues of public debate, and leading figures have pointed to consistent issues involving the inaccurate use of facts in public debate. Full Fact is the UK’s independent, nonpartisan, factchecking charity. We aim to stop the spread of specific bits of inaccurate information and to secure systemic changes that help make misinformation rarer and less harmful. In this piece we discuss the state of misinformation and disinformation in the UK, the role that we think factchecking has in tackling it, and the research we are eager to learn from to inform our work.
8

Barela, Steven J., and Jérôme Duberry. "Understanding Disinformation Operations in the Twenty-First Century." In Defending Democracies, 41–72. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780197556979.003.0003.

Abstract:
Remarkable developments in digital technologies have provided the conditions for a dramatic rise in state-sponsored disinformation operations crossing international borders. Spreading dezinformatsiya has a long history, but today it is done with a volume and accuracy that has left the targeted societies deeply destabilized as facts and events become sharply contested among citizens. This chapter is a descriptive work illustrating the essential components of this activity and draws three important conclusions. First, because disinformation aims to twist the truth in subtle ways when key facts remain secret and unavailable, exposing an operation becomes a tedious and difficult task. Second, the new digital world has opened the door to omnipresent operations that occur below the threshold of armed conflict and are accelerated exponentially by big data warehousing and algorithms that allow individualized targeting during an election cycle. Third, when disinformation operations disrupt the flow of information during a political campaign, the candidates involved and the process itself emerge with a dangerously eroded legitimacy. With a view to fill in critical missing data, the chapter ends with a clarion call to allow access for social scientists to study in detail of what is happening in the opaque public square of online social media wherever more political understanding is being fashioned.
9

Rogers, Richard, and Sabine Niederer. "Conclusions." In The Politics of Social Media Manipulation. Amsterdam: Amsterdam University Press, 2020. http://dx.doi.org/10.5117/9789463724838_ch08.

Abstract:
To what extent do (foreign) disinformation and so-called fake news resonate in political spaces online within social media around the 2019 provincial elections and the European parliamentary elections in the Netherlands? We found no foreign disinformation, fake advocacy groups or imposter news organizations, but we did take notice of a polarised media landscape, where problematic information, including extreme content, is engaged with (liked, shared, retweeted, etc.) or returned in search engines when querying political parties, political leaders as well as social issues. The study ultimately recommends media training as well as disengagement with extreme content, together with a call for continued access to social media platform data for media monitoring purposes.
10

Vitale, Maria Prosperina, Maria Carmela Catone, Ilaria Primerano, and Giuseppe Giordano. "Unveiling Network Data Patterns in Social Media." In Handbook of Research on Advanced Research Methodologies for a Digital Society, 571–88. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-8473-6.ch033.

Abstract:
The present study focuses on the usefulness of social network analysis in unveiling network patterns in social media. Specifically, the propagation and consumption of information on Twitter through network analysis tools are investigated to discover the presence of specific conversational patterns in the derived online data. The choosing of Twitter is motivated by the fact that it induces the definition of relationships between users by following communication flows on specific topics of interest and identifying key profiles who influence debates in the digital space. Further lines of research are discussed regarding the tools for discovering the spread of fake news. Considerable disinformation can be generated on social networks, offering a complex picture of informational disorientation in the digital society.
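A minimal sketch, assuming an invented edge list, of how retweet relations can be turned into a directed graph to surface influential profiles; the centrality measures shown are common choices in social network analysis, not necessarily those used in the chapter.

```python
# Minimal sketch: turning retweet relations into a directed graph to surface
# influential profiles, as in typical SNA workflows. The edge list is invented;
# the chapter's actual Twitter data and measures may differ.
import networkx as nx

# (retweeter, original_author) pairs
retweets = [
    ("alice", "newsbot"), ("bob", "newsbot"), ("carol", "newsbot"),
    ("dave", "alice"), ("erin", "alice"), ("bob", "carol"),
]
G = nx.DiGraph()
G.add_edges_from(retweets)

# In-degree here counts how often an account's posts are retweeted by others.
indegree = sorted(G.in_degree(), key=lambda kv: kv[1], reverse=True)
print("most retweeted accounts:", indegree[:3])
print("betweenness:", nx.betweenness_centrality(G))
```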

Conference papers on the topic "Disinformative data"

1

Lemieux, Victoria, and Tyler D. Smith. "Leveraging Archival Theory to Develop A Taxonomy of Online Disinformation." In 2018 IEEE International Conference on Big Data (Big Data). IEEE, 2018. http://dx.doi.org/10.1109/bigdata.2018.8622391.

2

Isle, Brian, and Tyler Smith. "Real World Examples Suggest a Path to Automated Mitigation of Disinformation." In 2018 IEEE International Conference on Big Data (Big Data). IEEE, 2018. http://dx.doi.org/10.1109/bigdata.2018.8622153.

3

Li, Yichuan, Bohan Jiang, Kai Shu, and Huan Liu. "Toward A Multilingual and Multimodal Data Repository for COVID-19 Disinformation." In 2020 IEEE International Conference on Big Data (Big Data). IEEE, 2020. http://dx.doi.org/10.1109/bigdata50022.2020.9378472.

4

Nakov, Preslav, and Giovanni Da San Martino. "Fake News, Disinformation, Propaganda, Media Bias, and Flattening the Curve of the COVID-19 Infodemic." In KDD '21: The 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3447548.3470790.

5

Dupuis, Marc J., and Andrew Williams. "The Spread of Disinformation on the Web: An Examination of Memes on Social Networking." In 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). IEEE, 2019. http://dx.doi.org/10.1109/smartworld-uic-atc-scalcom-iop-sci.2019.00256.

6

Tayyar Madabushi, Harish, Elena Kochkina, and Michael Castelle. "Cost-Sensitive BERT for Generalisable Sentence Classification on Imbalanced Data." In Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-5018.

7

Tosyalı, Hikmet. "Political Communication in the Digital Age: Algorithms and Bots." In Communication and Technology Congress. Istanbul Aydın University, 2021. http://dx.doi.org/10.17932/ctcspc.21/ctc21.004.

Abstract:
Technology is one factor that has formed the basis for change in the media throughout history. Analog data and information shared by verbal, visual or written methods are now stored, processed, reproduced and shared in digital format due to developments in information technologies. Social media, an important part of the digital media system, has meanwhile become an important medium for political communication studies due to its prevalence and big data. As political actors better understand the value of data sets covering millions of users, their interest in social media has also increased. However, this growing interest has also brought concerns such as digital profiling, informatics surveillance, systematic disinformation, and privacy violations. The efforts of governments and technology companies to create a structure similar to the gatekeeping of traditional media by taking social media under control have long been discussed. In recent years, some of these discussions have centered on (ro)bot accounts, because online social networks no longer just connect people: machines talk and interact with people, and machines even do this with other machines. Automatic posts made by bot accounts through algorithms to imitate people’s behavior on social media are liked, reposted or commented on by people and other bots. Bots that share political content are also used by political actors worldwide, especially during election periods. Politicians use political bots to appear more popular on social media, disrupt their rivals’ communication strategies, and manipulate public opinion. This study aimed to reveal the effects of bots on political communication. After explaining the concepts of propaganda, algorithm, bot and computational propaganda, it discusses how political bots could affect the public sphere and elections in the light of current political communication literature.
