Journal articles on the topic 'Disinformative data'

Consult the top 50 journal articles for your research on the topic 'Disinformative data.'

1

Kauffeldt, A., S. Halldin, A. Rodhe, C. Y. Xu, and I. K. Westerberg. "Disinformative data in large-scale hydrological modelling." Hydrology and Earth System Sciences 17, no. 7 (July 22, 2013): 2845–57. http://dx.doi.org/10.5194/hess-17-2845-2013.

Abstract:
Abstract. Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i) basin areas for different hydrographic datasets, and (ii) between climate data (precipitation and potential evaporation) and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent model simulations.
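
The water-balance screening described in this abstract can be illustrated with a short sketch: compute long-term runoff coefficients (Q/P) and compare losses (P − Q) against the potential-evaporation limit, flagging basins that cannot close the water balance. This is a minimal illustration of the idea under assumed inputs, not the authors' code; the column names, values, and thresholds are invented for the example.

```python
import pandas as pd

# Hypothetical long-term averages per basin (mm/year); column names are assumptions.
basins = pd.DataFrame({
    "basin_id": ["A", "B", "C"],
    "precip": [600.0, 450.0, 1200.0],       # P
    "pot_evap": [500.0, 700.0, 900.0],      # PET
    "discharge": [250.0, 480.0, 400.0],     # Q, area-normalised
})

# Runoff coefficient Q/P: values above 1 imply more water leaving than arriving,
# often a sign of precipitation undercatch or area/flow-network inconsistencies.
basins["runoff_coeff"] = basins["discharge"] / basins["precip"]

# Losses P - Q that exceed the potential-evaporation limit also break water-balance closure.
basins["losses"] = basins["precip"] - basins["discharge"]

flags = basins[(basins["runoff_coeff"] > 1.0) | (basins["losses"] > basins["pot_evap"])]
print(flags[["basin_id", "runoff_coeff", "losses", "pot_evap"]])
```
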
2

Kauffeldt, A., S. Halldin, A. Rodhe, C. Y. Xu, and I. K. Westerberg. "Disinformative data in large-scale hydrological modelling." Hydrology and Earth System Sciences Discussions 10, no. 1 (January 14, 2013): 487–517. http://dx.doi.org/10.5194/hessd-10-487-2013.

Abstract:
Abstract. Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aims at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between different hydrographic datasets, and between climate data (precipitation and potential evaporation), and discharge data was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that: (i) most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii) basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii) the occurrence of basins exhibiting losses exceeding the energy limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. These results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling should also increase our chances to draw robust conclusions from subsequent simulations.
3

Beven, K., P. J. Smith, and A. Wood. "On the colour and spin of epistemic error (and what we might do about it)." Hydrology and Earth System Sciences 15, no. 10 (October 13, 2011): 3123–33. http://dx.doi.org/10.5194/hess-15-3123-2011.

Abstract:
Abstract. Disinformation as a result of epistemic error is an issue in hydrological modelling. In particular, the way in which the colour in model residuals resulting from epistemic errors should be expected to be non-stationary means that it is difficult to justify the spin that the structure of residuals can be properly represented by statistical likelihood functions. To do so would be to greatly overestimate the information content in a set of calibration data and increase the possibility of both Type I and Type II errors. Some principles of trying to identify periods of disinformative data prior to evaluation of a model structure of interest are discussed. An example demonstrates the effect on the estimated parameter values of a hydrological model.
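
A minimal sketch of the kind of pre-evaluation screening the abstract refers to: flag events whose runoff coefficient is physically implausible before the data are used to evaluate a model. The function, inputs, and thresholds below are illustrative assumptions, not the method from the paper.

```python
import numpy as np

def flag_disinformative_events(event_precip, event_flow, rc_max=1.0, rc_min=0.0):
    """Flag events whose runoff coefficient falls outside a plausible range.

    A simple illustration of screening for disinformative periods before model
    evaluation; the thresholds are assumptions, not values from the paper.
    """
    p = np.asarray(event_precip, dtype=float)
    q = np.asarray(event_flow, dtype=float)
    rc = np.divide(q, p, out=np.full_like(q, np.nan), where=p > 0)
    return (rc > rc_max) | (rc < rc_min) | np.isnan(rc)

# Example: the third event yields more flow than rainfall and is flagged.
print(flag_disinformative_events([50.0, 30.0, 10.0], [20.0, 12.0, 15.0]))
```
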
4

Beven, K., P. J. Smith, and A. Wood. "On the colour and spin of epistemic error (and what we might do about it)." Hydrology and Earth System Sciences Discussions 8, no. 3 (May 30, 2011): 5355–86. http://dx.doi.org/10.5194/hessd-8-5355-2011.

Abstract:
Abstract. Disinformation as a result of epistemic error is an issue in hydrological modelling. In particular, the way in which the colour in model residuals resulting from epistemic errors should be expected to be non-stationary means that it is difficult to justify the spin that the structure of residuals can be properly represented by statistical likelihood functions. To do so would be to greatly overestimate the information content in a set of calibration data and increase the possibility of both Type I and Type II errors. Some principles of trying to identify periods of disinformative data prior to evaluation of a model structure of interest are discussed. An example demonstrates the effect on the estimated parameter values of a hydrological model.
5

Almeida, S., N. Le Vine, N. McIntyre, T. Wagener, and W. Buytaert. "Accounting for dependencies in regionalized signatures for predictions in ungauged catchments." Hydrology and Earth System Sciences Discussions 12, no. 6 (June 10, 2015): 5389–426. http://dx.doi.org/10.5194/hessd-12-5389-2015.

Abstract:
Abstract. A recurrent problem in hydrology is the absence of streamflow data to calibrate rainfall-runoff models. A commonly used approach in such circumstances conditions model parameters on regionalized response signatures. While several different signatures are often available to be included in this process, an outstanding challenge is the selection of signatures that provide useful and complementary information. Different signatures do not necessarily provide independent information, and this has led to signatures being omitted or included on a subjective basis. This paper presents a method that accounts for the inter-signature error correlation structure so that regional information is neither neglected nor double-counted when multiple signatures are included. Using 84 catchments from the MOPEX database, observed signatures are regressed against physical and climatic catchment attributes. The derived relationships are then utilized to assess the joint probability distribution of the signature regionalization errors that is subsequently used in a Bayesian procedure to condition a rainfall-runoff model. The results show that the consideration of the inter-signature error structure may improve predictions when the error correlations are strong. However, other uncertainties such as model structure and observational error may outweigh the importance of these correlations. Further, these other uncertainties cause some signatures to appear repeatedly to be disinformative.
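
A rough sketch of the regionalization step the abstract describes: regress each signature on catchment attributes and estimate the joint covariance of the regionalization errors, which is what a Bayesian conditioning step would use instead of treating signatures as independent. The data here are random placeholders and the code is illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder data: rows are gauged catchments, columns are attributes / signatures.
rng = np.random.default_rng(0)
attributes = rng.normal(size=(84, 4))   # e.g. area, slope, aridity, forest cover (assumed)
signatures = rng.normal(size=(84, 3))   # e.g. runoff ratio, baseflow index, FDC slope (assumed)

# Regress each signature on catchment attributes (one model per signature).
models = [LinearRegression().fit(attributes, signatures[:, j]) for j in range(signatures.shape[1])]
predicted = np.column_stack([m.predict(attributes) for m in models])

# Regionalization errors are generally correlated across signatures; their joint
# covariance is the quantity a Bayesian conditioning step would exploit.
errors = signatures - predicted
error_cov = np.cov(errors, rowvar=False)
print(np.round(error_cov, 3))
```
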
6

Ahmad, Norita, Nash Milic, and Mohammed Ibahrine. "Data and Disinformation." Computer 54, no. 7 (July 2021): 105–10. http://dx.doi.org/10.1109/mc.2021.3074261.

7

Bader, Max. "Disinformation in Elections." Security and Human Rights 29, no. 1-4 (December 12, 2018): 24–35. http://dx.doi.org/10.1163/18750230-02901006.

Abstract:
In recent years there has been increasing attention to the potentially disruptive influence of disinformation on elections. The most common forms of disinformation in elections include the dissemination of ‘fake news’ in order to discredit opponents or to influence the voting process, the falsification or manipulation of polling data, and the use of fake election monitoring and observation. This article presents an overview of the phenomenon of disinformation in elections in both democratic and undemocratic environments, and discusses measures to reduce its scope and negative impact.
8

Colborne, Adrienne, and Michael Smit. "Characterizing Disinformation Risk to Open Data in the Post-Truth Era." Journal of Data and Information Quality 12, no. 3 (July 29, 2020): 1–13. http://dx.doi.org/10.1145/3328747.

9

Berliner, David C. "Educational Reform in an Era of Disinformation." education policy analysis archives 1 (February 2, 1993): 2. http://dx.doi.org/10.14507/epaa.v1n2.1993.

Abstract:
Data which suggest the failure of America's schools to educate its youth well do not survive careful scrutiny. School reforms based on these questionable data are wrongheaded and potentially destructive of quality education. Reforms of the kind proposed by those who have started from an assumption that America's schools have failed will exacerbate the differences between the "have" and the "have-not" school districts.
10

Chung, Chung Joo, Minjeong Kim, and Han Woo Park. "Big Data Analysis and Modeling of Disinformation Consumption and Diffusion on YouTube." Discourse and Policy in Social Science 12, no. 2 (October 31, 2019): 105–38. http://dx.doi.org/10.22417/dpss.2019.10.12.2.105.

11

Gradoń, Kacper T., Janusz A. Hołyst, Wesley R. Moy, Julian Sienkiewicz, and Krzysztof Suchecki. "Countering misinformation: A multidisciplinary approach." Big Data & Society 8, no. 1 (January 2021): 205395172110138. http://dx.doi.org/10.1177/20539517211013848.

Abstract:
The article explores the concept of infodemics during the COVID-19 pandemic, focusing on the propagation of false or inaccurate information proliferating worldwide throughout the SARS-CoV-2 health crisis. We provide an overview of disinformation, misinformation and malinformation and discuss the notion of “fake news”, and highlight the threats these phenomena bear for health policies and national and international security. We discuss the mis-/disinformation as a significant challenge to the public health, intelligence, and policymaking communities and highlight the necessity to design measures enabling the prevention, interdiction, and mitigation of such threats. We then present an overview of selected opportunities for applying technology to study and combat disinformation, outlining several approaches currently being used to understand, describe, and model the phenomena of misinformation and disinformation. We focus specifically on complex networks, machine learning, data- and text-mining methods in misinformation detection, sentiment analysis, and agent-based models of misinformation spreading and the detection of misinformation sources in the network. We conclude with the set of recommendations supporting the World Health Organization’s initiative on infodemiology. We support the implementation of integrated preventive procedures and internationalization of infodemic management. We also endorse the application of the cross-disciplinary methodology of Crime Science discipline, supplemented by Big Data analysis and related information technologies to prevent, disrupt, and detect mis- and disinformation efficiently.
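
One of the modelling approaches mentioned in this abstract, agent-based models of misinformation spreading on networks, can be sketched with a toy independent-cascade simulation. The graph, seed node, and sharing probability below are invented for illustration and are far simpler than the models discussed in the article.

```python
import random
import networkx as nx

def simulate_spread(graph, seeds, p_share=0.2, seed=42):
    """Tiny independent-cascade sketch of misinformation spreading: each newly
    'infected' node gets one chance to pass the item to each of its neighbours.
    Illustrative only; agent-based models in the literature are far richer.
    """
    rng = random.Random(seed)
    infected = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for nb in graph.neighbors(node):
                if nb not in infected and rng.random() < p_share:
                    infected.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return infected

# A scale-free toy network seeded from a single account.
g = nx.barabasi_albert_graph(200, 2, seed=1)
reached = simulate_spread(g, seeds=[0])
print(f"{len(reached)} of {g.number_of_nodes()} nodes reached")
```
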
12

Hashemi, Mahdi. "A Data-Driven Framework for Coding the Intent and Extent of Political Tweeting, Disinformation, and Extremism." Information 12, no. 4 (March 31, 2021): 148. http://dx.doi.org/10.3390/info12040148.

Abstract:
Disinformation campaigns on online social networks (OSNs) in recent years have underscored democracy's vulnerability to such operations and the importance of identifying such operations and dissecting their methods, intents, and sources. This paper is another milestone in a line of research on political disinformation, propaganda, and extremism on OSNs. A total of 40,000 original Tweets (not re-Tweets or Replies) related to the U.S. 2020 presidential election are collected. The intent, focus, and political affiliation of these political Tweets are determined through multiple discussions and revisions. There are three political affiliations: rightist, leftist, and neutral. A total of 171 different classes of intent or focus are defined for Tweets. A total of 25% of Tweets were left out while defining these classes. The purpose is to assure that the defined classes would be able to cover the intent and focus of unseen Tweets (Tweets that were not used to determine and define these classes) and no new classes would be required. This paper provides these classes, their definition and size, and example Tweets from them. If any information is included in a Tweet, its factuality is verified through valid news sources and articles. If any opinion is included in a Tweet, it is determined whether or not it is extreme, through multiple discussions and revisions. This paper provides analytics with regard to the political affiliation and intent of Tweets. The results show that disinformation and extreme opinions are more common among rightist Tweets than leftist Tweets. Additionally, the Coronavirus pandemic is the topic of almost half of the Tweets, where 25.43% of Tweets express their unhappiness with how Republicans have handled this pandemic.
13

Chadwick, Andrew, Cristian Vaccari, and Ben O’Loughlin. "Do tabloids poison the well of social media? Explaining democratically dysfunctional news sharing." New Media & Society 20, no. 11 (April 20, 2018): 4255–74. http://dx.doi.org/10.1177/1461444818769689.

Abstract:
The use of social media for sharing political information and the status of news as an essential raw material for good citizenship are both generating increasing public concern. We add to the debates about misinformation, disinformation, and “fake news” using a new theoretical framework and a unique research design integrating survey data and analysis of observed news sharing behaviors on social media. Using a media-as-resources perspective, we theorize that there are elective affinities between tabloid news and misinformation and disinformation behaviors on social media. Integrating four data sets we constructed during the 2017 UK election campaign—individual-level data on news sharing ( N = 1,525,748 tweets), website data ( N = 17,989 web domains), news article data ( N = 641 articles), and data from a custom survey of Twitter users ( N = 1313 respondents)—we find that sharing tabloid news on social media is a significant predictor of democratically dysfunctional misinformation and disinformation behaviors. We explain the consequences of this finding for the civic culture of social media and the direction of future scholarship on fake news.
14

Wilson, Steven Lloyd, and Charles Wiysonge. "Social media and vaccine hesitancy." BMJ Global Health 5, no. 10 (October 2020): e004206. http://dx.doi.org/10.1136/bmjgh-2020-004206.

Abstract:
Background: Understanding the threat posed by anti-vaccination efforts on social media is critically important with the forthcoming need for worldwide COVID-19 vaccination programs. We globally evaluate the effect of social media and online foreign disinformation campaigns on vaccination rates and attitudes towards vaccine safety. Methods: We use a large-n cross-country regression framework to evaluate the effect of social media on vaccine hesitancy globally. To do so, we operationalize social media usage in two dimensions: the use of it by the public to organize action (using Digital Society Project indicators), and the level of negatively oriented discourse about vaccines on social media (using a data set of all geocoded tweets in the world from 2018-2019). In addition, we measure the level of foreign-sourced coordinated disinformation operations on social media in each country (using Digital Society Project indicators). The outcome of vaccine hesitancy is measured in two ways. First, we use polls of what proportion of the public per country feels vaccines are unsafe (using Wellcome Global Monitor indicators for 137 countries). Second, we use annual data of actual vaccination rates from the WHO for 166 countries. Results: We found the use of social media to organise offline action to be highly predictive of the belief that vaccinations are unsafe, with such beliefs mounting as more organisation occurs on social media. In addition, the prevalence of foreign disinformation is highly statistically and substantively significant in predicting a drop in mean vaccination coverage over time. A 1-point shift upwards in the 5-point disinformation scale is associated with a 2-percentage-point drop in mean vaccination coverage year over year. We also found support for the connection of foreign disinformation with negative social media activity about vaccination. The substantive effect of foreign disinformation is to increase the number of negative vaccine tweets by 15% for the median country. Conclusion: There is a significant relationship between organisation on social media and public doubts of vaccine safety. In addition, there is a substantial relationship between foreign disinformation campaigns and declining vaccination coverage.
15

Andrei, Andreia Gabriela, Adriana Zait, Claudia Stoian, Oana Tugulea, and Adriana Manolica. "Citizen engagement in the “post-truth era”." Kybernetes 49, no. 5 (July 29, 2019): 1429–43. http://dx.doi.org/10.1108/k-03-2019-0178.

Abstract:
Purpose The purpose of this study is to analyze citizen engagement and to explain the underlying mechanism that makes well-intended people to act as disinformation amplifiers in the online space. The study offers new insights to be used by knowledge management for improving society’s potential to downsize the impact of disinformation that puts both knowledge system and social trust (ST) under high pressure. Design/methodology/approach The study proposes an integrative research model to explain how ST and conspiracy mentality (CM) are influencing citizen engagement in public life through different forms of action that is specific to offline or online spaces. The research model and its nine hypotheses are tested based on a survey for data collection and partial least squares method for data analysis. Findings The study finds that both online and offline actions are mediating the positive effect of ST on citizen engagement. Yet, CM has a high impact on online actions, and it exerts a significant indirect influence on citizen engagement in this manner. Originality/value Revealing the mediator role of online actions in the relationship between CM and civic engagement, the paper brings novel insights on disinformation spreading. The study explains how citizen engagement can sometimes be turned against social well-being because those prone to belief in conspiracies are the perfect targets of deceivers seeking for disinformation amplifiers in the online environment.
16

KOCOŃ, Paweł. "MILITARY DISINFORMATION OPERATIONS VIEWED FROM THE PERSPECTIVE OF SYMBOLIC INTERACTIONISM." Scientific Journal of the Military University of Land Forces 166, no. 4 (October 1, 2012): 21–31. http://dx.doi.org/10.5604/01.3001.0002.3519.

Abstract:
The main aim of this article is to present the connections between military disinformation and symbolic interactionism. The article may also be seen as the beginning of a discussion concerning the diversity of research examining the interactions taking place during war. The understanding of such notions as military disinformation operations and symbolic interactionism covers various analytical methods, different techniques of data gathering and coding, as well as multiple facts and opinions. Therefore, this is where the practical value of the issues discussed in this article comes from.
17

Kholit, Noviar Jamaal, and Muhamad Nastain. "Mapping of data communication networks on social media." INJECT (Interdisciplinary Journal of Communication) 5, no. 2 (January 27, 2021): 143–62. http://dx.doi.org/10.18326/inject.v5i2.143-162.

Abstract:
Information technology is developing very fast, and this has an impact on real changes in every element of life. Besides transforming the information media industry, developments in information technology have brought changes to public spaces, with easy access and an increasingly massive pattern of information distribution. Ease of access does not always have a positive side; there is also a negative side, namely shifting communication patterns through the spread of false information or disinformation that invites public upheaval. This research uses the case study method, one way to investigate contemporary phenomena in the context of real life, where the boundaries between the phenomenon and the context are not clearly visible. Through the Social Network Analyzer (SNA) theoretical approach, this research identifies three communication network patterns, namely a centralized network, a decentralized network and a distributed network.
18

Jussila, Jari, Anu Helena Suominen, Atte Partanen, and Tapani Honkanen. "Text Analysis Methods for Misinformation–Related Research on Finnish Language Twitter." Future Internet 13, no. 6 (June 17, 2021): 157. http://dx.doi.org/10.3390/fi13060157.

Abstract:
The dissemination of disinformation and fabricated content on social media is growing. Yet little is known of what the functional Twitter data analysis methods are for languages (such as Finnish) that include word formation with endings and word stems together with derivation and compounding. Furthermore, there is a need to understand which themes linked with misinformation—and the concepts related to it—manifest in different countries and language areas in Twitter discourse. To address this issue, this study explores misinformation and its related concepts: disinformation, fake news, and propaganda in Finnish language tweets. We utilized (1) word cloud clustering, (2) topic modeling, and (3) word count analysis and clustering to detect and analyze misinformation-related concepts and themes connected to those concepts in Finnish language Twitter discussions. Our results are two-fold: (1) those concerning the functional data analysis methods and (2) those about the themes connected in discourse to the misinformation-related concepts. We noticed that each utilized method individually has critical limitations, especially all the automated analysis methods processing for the Finnish language, yet when combined they bring value to the analysis. Moreover, we discovered that politics, both internal and external, are prominent in the Twitter discussions in connection with misinformation and its related concepts of disinformation, fake news, and propaganda.
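
A small sketch of the topic-modelling step mentioned in the abstract, using a bag-of-words representation and LDA. The example tweets are invented English placeholders; the paper's point is precisely that Finnish morphology needs language-specific preprocessing that this toy example omits.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder tweets; real Finnish text would need stemming/lemmatisation first.
tweets = [
    "fake news spreads fast on social media",
    "propaganda and disinformation in political discussion",
    "election disinformation and fake news about politics",
    "vaccine misinformation shared on social media",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(tweets)

# Two topics is an arbitrary choice for the toy corpus.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-4:][::-1]]
    print(f"topic {i}: {top}")
```
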
19

Wolverton, Colleen, and David Stevens. "The impact of personality in recognizing disinformation." Online Information Review 44, no. 1 (December 19, 2019): 181–91. http://dx.doi.org/10.1108/oir-04-2019-0115.

Abstract:
Purpose The purpose of this paper is to investigate and quantify the effects of personality traits, as defined by the five-factor model (FFM) on an individual’s ability to detect fake news. The findings of this study are increasingly important because of the proliferation of social media news stories and the exposure of organizational stakeholders and business decision makers to a tremendous amount of information, including information that is not correct (a.k.a. disinformation). Design/methodology/approach The data were collected utilizing the snowball sampling methodology. Students in an Management Information Systems course completed the survey. Since a diverse sample was sought, survey participants were instructed to recruit another individual from a different generation. The survey questions of the FFM identify particular personality traits in respondents. Survey respondents were given a collection of nine news stories, five of which were false and four that were true. The number of correctly identified stories was recorded, and the effect of personality traits on the ability of survey respondents to identify fake news was calculated using eta-squared and the effect size index. Findings Each of the five factors in the FFM demonstrated an effect on an individual’s ability to detect disinformation. In fact, every single variable studied had at least a small effect size index, with one exception: gender, which had basically no effect. Therefore, each variable studied (with the exception of gender) explained a portion of the variability in the number of correctly identified false news stories. Specifically, this quantitative research demonstrates that individuals with the following personality traits are better able to identify disinformation: closed to experience or cautious, introverted, disagreeable or unsympathetic, unconscientious or undirected and emotionally stable. Originality/value There is scant research on an individual’s ability to detect false news stories, although some research has been conducted on the ability to detect phishing (a type of social engineering attack to obtain funds or personal information from the person being deceived). The results of this study enable corporations to determine which of their customers, investors and other stakeholders are most likely to be deceived by disinformation. With this information, they can better prepare for and combat the impacts of misinformation on their organization, and thereby avoid the negative financial impacts that result.
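
The effect-size measure reported in the study, eta-squared, is the between-group share of total variance and is straightforward to compute. The sketch below uses invented scores, not the survey data.

```python
import numpy as np

def eta_squared(groups):
    """Eta-squared: between-group sum of squares over total sum of squares.

    Illustrates the effect-size measure the study reports; the scores below
    are invented, not the survey data.
    """
    all_scores = np.concatenate(groups)
    grand_mean = all_scores.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_total = ((all_scores - grand_mean) ** 2).sum()
    return ss_between / ss_total

# E.g. number of correctly identified stories (out of 9) for two personality groups.
introverts = [7, 6, 8, 7, 6]
extraverts = [5, 6, 5, 4, 6]
print(round(eta_squared([introverts, extraverts]), 3))
```
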
20

Onuch, Olga, Emma Mateo, and Julian G. Waller. "Mobilization, Mass Perceptions, and (Dis)information: “New” and “Old” Media Consumption Patterns and Protest." Social Media + Society 7, no. 2 (April 2021): 205630512199965. http://dx.doi.org/10.1177/2056305121999656.

Abstract:
When people join in moments of mass protest, what role do different media sources play in their mobilization? Do the same media sources align with positive views of mass mobilizations among the public in their aftermath? And, what is the relationship between media consumption patterns and believing disinformation about protest events? Addressing these questions helps us to better understand not only what brings crowds onto the streets, but also what shapes perceptions of, and disinformation about mass mobilization among the wider population. Employing original data from a nationally representative panel survey in Ukraine ( Hale, Colton, Onuch, & Kravets, 2014 ) conducted shortly after the 2013–2014 EuroMaidan mobilization, we examine patterns of media consumption among both participants and non-participants, as well as protest supporters and non-supporters. We also explore variation in media consumption among those who believe and reject disinformation about the EuroMaidan. We test hypotheses, prominent in current protest literature, related to the influence of “new” (social media and online news) and “old” media (television) on protest behavior and attitudes. Making use of the significance of 2014 Ukraine as a testing ground for Russian disinformation tactics, we also specifically test for consumption of Russian-owned television. Our findings indicate that frequent consumption of “old” media, specifically Russian-owned television, is significantly associated with both mobilization in and positive perceptions of protest and is a better predictor of believing “fake news” than consuming “new” media sources.
21

Bonsu, Kwadwo Osei. "Weighted Accuracy Algorithmic Approach in Counteracting Fake News and Disinformation." Economic and Regional Studies / Studia Ekonomiczne i Regionalne 14, no. 1 (March 1, 2021): 99–107. http://dx.doi.org/10.2478/ers-2021-0007.

Abstract:
Abstract. Subject and purpose of work: Fake news and disinformation are polluting the information environment. Hence, this paper proposes a methodology for fake news detection through the combined weighted accuracies of seven machine learning algorithms. Materials and methods: This paper uses natural language processing to analyze the text content of a list of news samples and then predicts whether they are FAKE or REAL. Results: The weighted-accuracy algorithmic approach has been shown to reduce overfitting. It was revealed that the individual performance of the different algorithms improved after the data were extracted from the news outlet websites and 'quality' data were filtered by the constraint mechanism developed in the experiment. Conclusions: This model differs from existing mechanisms in that it automates the algorithm selection process and at the same time takes into account the performance of all the algorithms used, including the less performing ones, thereby increasing the mean accuracy of all the algorithm accuracies.
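
The combination step described above can be sketched generically: weight each classifier's FAKE/REAL vote by its measured accuracy and take the weighted majority. This is an illustration of a weighted-accuracy ensemble under assumed inputs, not the paper's exact weighting mechanism or its seven algorithms.

```python
import numpy as np

def weighted_vote(predictions, accuracies):
    """Combine binary FAKE/REAL predictions from several classifiers,
    weighting each vote by that classifier's validation accuracy.

    Generic illustration of a weighted-accuracy ensemble; inputs are invented.
    """
    preds = np.asarray(predictions, dtype=float)   # shape (n_models, n_samples), 1 = FAKE
    weights = np.asarray(accuracies, dtype=float)
    weights = weights / weights.sum()
    scores = weights @ preds                        # weighted fraction voting FAKE
    return (scores >= 0.5).astype(int)

# Three hypothetical classifiers, four news items.
predictions = [[1, 0, 1, 1],
               [1, 0, 0, 1],
               [0, 0, 1, 1]]
accuracies = [0.90, 0.75, 0.60]
print(weighted_vote(predictions, accuracies))       # -> [1 0 1 1]
```
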
22

Robinson, Anthony C. "Design, Dissemination, and Disinformation in Viral Maps." Abstracts of the ICA 1 (July 15, 2019): 1–2. http://dx.doi.org/10.5194/ica-abs-1-314-2019.

Abstract:
Abstract. Social media has made it possible for maps to reach massive audiences outside of traditional media sources. In some cases, social media maps are original designs crafted by users; in other cases they are modified or replicated from previous sources. It is now relatively easy for novice Internet users to create new maps or manipulate existing images, and social media provides a vehicle for these maps to become visible in ways that were simply not possible even a decade ago. In addition, traditional media sources now harvest content from social streams, and in some cases may amplify what was originally a socially-shared map.

Maps that rapidly reach popularity via social media can be considered viral maps. A key element of virality in social media is the structure of how content becomes viral. The concept of structural virality suggests that the nature of how media are shared is more important than the raw population that might see something (Goel, Anderson et al. 2016). For example, a social media user with millions of followers can broadcast their content to a large audience, but structurally viral content is media that does not require a major broadcaster in order to reach a large audience.

Previous work on viral cartography has shown how viral maps may develop conditions in which their audiences begin creating and repurposing maps in response, resulting in large collections of social media maps. For example, Robinson (Robinson 2018) showed how a viral election map resulted in hundreds of maps shared by social media users in response to the original work.

Viral maps and the maps that emerge in subsequent responses from social media users pose interesting challenges for cartographers to address. Understanding their design dimensions and the ways in which these maps are disseminated (often outside of the social media stream where they may have originated) are two key areas of potential research inquiry. Knowledge of design and dissemination in social mapping is also necessary if we wish to understand the capability of social media maps to inform or actively disinform the public. We argue that the latter topic is of utmost importance given the relative ease of making maps today versus their clear rhetorical power in public discussion and debate.

New methods are emerging to characterize the design elements of social media maps and their context on the internet. For example, proprietary machine learning services such as Google Cloud Vision and Amazon Rekognition are used for real-time detection of faces, text, sentiment, image structure, and relevant web results. While the primary use case for these services is to support image moderation on social media, to improve search results, and to support marketing activities, these methods can also be applied to the study of social media maps in support of cartographic research.

For example, we have used Google Cloud Vision to characterize the design and dissemination of a viral map created and shared by Kenneth Field, a cartographer at Esri. In March of 2018, Field tweeted an image of a dot-density map showing the 2016 United States Presidential Election results. A unique aspect of this map was its ability to show one dot for each of the more than 60 million votes cast in the 2016 election. Field's tweet was liked more than 10,000 times and retweeted over 4000 times, reaching millions of potential viewers.

Google Cloud Vision analysis of Field's map highlights a range of election and cartographic entities that it finds relevant to the original posting (Figure 1). Field's map generated website content that focused on both its meaning in terms of interpreting the 2016 election and its technical execution in terms of cartography. It could be argued that these are not terribly surprising results, but it demonstrates nevertheless that an automated routine has the power to deliver sensible contextual information about map images. Extrapolating from one map to the millions that appear each year on social media, it becomes plausible to apply machine learning methods to characterize their design and web context, even from streaming sources, as these methods are already built to support real-time analysis of streaming data.

The dissemination of a viral map can be characterized by the number of engagements via social means in both direct and indirect forms. Direct forms of engagement may include user actions to like, share, or reply directly to a social media post. Indirect types of engagement can include the number of people who saw an item in their social media feed, and the potential audience who may have the opportunity to see an item in their social media feeds. In addition, viral maps can become the focus of media attention from traditional news sources and be amplified further to their respective audiences. Finally, users may blog about a viral map or share it in private messages or group chats.

One way to understand the dissemination of a viral map is to take advantage of image analysis service capabilities to produce URLs that show full and partially matching versions of an image. Google Cloud Vision provides this capability along with its other image analysis functions. In the case of the Field dot-density map of the 2016 election, webpages that reference the exact image from Field's original tweet include media stories about his map, blog postings, e-commerce sites that sell printed versions of the map, and message forum discussions that reference the map. Partial image matching results reveal only a few sites that have derived versions of Field's original map, and all of those we reviewed were simply resampled versions of the original. Other partial image matching results included other types of dasymetric and thematic maps located on the web. For example, multiple cellular phone coverage maps are highlighted as partial matches to Field's original work (Figure 2).

We hypothesize that there is considerable potential for social media maps to be sources of disinformation. Maps remain a powerful means of communication, and it is easier than ever to create a new map or modify an existing map to convey misleading information. Future research may be able to leverage the attributes and links derived from machine learning image analysis services such as Google Cloud Vision to assess the potential for a viral map to be an agent of disinformation. For example, being able to quickly identify the original source for a map image and to characterize the constellation of websites on which it has been shared may aid users in evaluating the credibility of what they are seeing.

In November 2018, climate scientist Brian Brettschneider shared a map on Twitter that purported to show regions of the United States and their preferred Thanksgiving pie. This map went viral, drawing attention from traditional media sources as well as Twitter users with large audiences of their own, including one U.S. Senator. Many who saw this absurd map argued about its content because they incorrectly assumed it was based on real data. Brettschneider reflected on the power of creating and sharing fake viral maps in a subsequent article for Forbes (Brettschneider 2018), stating, "We cannot let maps, as a medium for communicating information, be co-opted by people with nefarious intentions. I pledge to do my part by clearly noting if a map is a parody in the future."
23

Setiawana, Anang, Achmad Nurmandi, Eko Priyo Purnomo, and Arif Muhammad. "Disinformation and Miscommunication in Government Communication in Handling COVID-19 Pandemic." Webology 18, no. 1 (April 29, 2021): 203–18. http://dx.doi.org/10.14704/web/v18i1/web18084.

Abstract:
This study explores how the Indonesian government uses websites to respond to public information needs as the COVID-19 pandemic has developed into a global crisis. The government is expected to act quickly and decisively in responding to the public's communication and information crisis. Communication becomes the most crucial part, especially when it comes to delivering the facts. The accuracy of the information provided also plays a significant role in shaping public perception of the situation. Data were gathered from the official reports of the central government and the provincial governments, and analyzed using SimilarWeb: Website Traffic. The findings showed that the Indonesian government did not have enough response tools set up in the event of a viral outbreak, was not well prepared to communicate with the international community during such an outbreak, and lacked integrated action between the central government and the regional governments in managing the response. As for the data provided by the central and regional governments, these have now been made public, showing how good they are.
24

Canare, Tristan A., Ronald U. Mendoza, Jurel K. Yap, Leonardo M. Jaminola, and Gabrielle Ann S. Mendoza. "Unpacking Presidential Satisfaction: Preliminary Insights from Survey Data on the Bottom Poor in Metro Manila." Philippine Political Science Journal 42, no. 1 (July 16, 2021): 1–29. http://dx.doi.org/10.1163/2165025x-bja10017.

Abstract:
Abstract Measures of presidential satisfaction have long been in the public’s attention, but the factors that drive them have brought about much discussion. As a contribution to the literature, this study empirically examines presidential approval data in the Philippines using a unique survey of 1200 low-income voting age residents of Metro Manila. Using individual-level data, this study unpacks the possible factors underpinning survey results on citizens’ satisfaction with leadership in the Philippines. While accounting for the personal circumstances of the respondents, this study finds evidence of bandwagoning among survey respondents; and partial evidence of personal economic conditions and disinformation possibly linked to presidential satisfaction. The findings here suggest there should be more caution in interpreting presidential satisfaction indicators.
25

Lateiner, Donald. "“Bad News” in Herodotos and Thoukydides: misinformation, disinformation, and propaganda." Journal of Ancient History 9, no. 1 (June 1, 2021): 53–99. http://dx.doi.org/10.1515/jah-2020-0005.

Abstract:
Abstract Herodotos and Thoukydides report on many occasions that kings, polis leaders, and other politicians speak and behave in ways that unintentionally announce or analyze situations incorrectly (misinformation). Elsewhere, they represent as facts knowingly false constructs or “fake news” (disinformation), or they slant data in ways that advance a cause personal or public (propaganda, true or false). Historians attempt to or claim to acquaint audiences with a truer fact situation and to identify subjects’ motives for distortion such as immediate personal advantage, community advantage, or to encourage posterity’s better (if mistaken) opinion. Such historiographical bifocalism enhances the historian’s authority with readers (as he sees through intentional or unintentional misrepresentations) as well as sets straight distorted historical records. This paper surveys two paradigmatic Hellenic historians’ texts, how they build their investigative and analytic authority, and how they encourage confidence in their truth-determining skills. The material collected confirms and assesses the frequency of persons and governments misleading their own citizens and subjects as well as rival persons and powers. Finally, it demonstrates that these two historians were aware of information loss, information control (dissemination and suppression), and information chaos.
26

Rodríguez-Virgili, Jordi, Javier Serrano-Puche, and Carmen Beatriz Fernández. "Digital Disinformation and Preventive Actions: Perceptions of Users from Argentina, Chile, and Spain." Media and Communication 9, no. 1 (March 3, 2021): 323–37. http://dx.doi.org/10.17645/mac.v9i1.3521.

Abstract:
This article explores audience perceptions of different types of disinformation, and the actions that users take to combat them, in three Spanish-speaking countries: Argentina, Chile, and Spain. Quantitative data from the Digital News Report (2018 and 2019), based on a survey of more than 2000 digital users from each country, was used for the analysis. Results show remarkable similarities among the three countries, and how digital users identically ranked the types of problematic information that concerned them most. Survey participants were most concerned by stories where facts are spun or twisted to push a particular agenda, followed by those that are completely made up for political or commercial reasons, and finally, they were least concerned by poor journalism (factual mistakes, dumbed-down stories, misleading headlines/clickbait). A general index of "Concern about disinformation" was constructed using several sociodemographic variables that might influence the perception. It showed that concern is higher among women, older users, those particularly interested in political news, and among left-wingers. Several measures are employed by users to avoid disinformation, such as checking a number of different sources to see whether a news story is reported in the same way, relying on the reputation of the news company, and/or deciding not to share a news story due to doubts regarding its accuracy. This article concludes that the perceived relevance of different types of problematic information, and preventive actions, are not homogeneous among different population segments.
27

Bastos, Marco, and Dan Mercea. "The public accountability of social platforms: lessons from a study on bots and trolls in the Brexit campaign." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, no. 2128 (August 6, 2018): 20180003. http://dx.doi.org/10.1098/rsta.2018.0003.

Abstract:
In this article, we review our study of 13 493 bot-like Twitter accounts that tweeted during the UK European Union membership referendum debate and disappeared from the platform after the ballot. We discuss the methodological challenges and lessons learned from a study that emerged in a period of increasing weaponization of social media and mounting concerns about information warfare. We address the challenges and shortcomings involved in bot detection, the extent to which disinformation campaigns on social media are effective, valid metrics for user exposure, activation and engagement in the context of disinformation campaigns, unsupervised and supervised posting protocols, along with infrastructure and ethical issues associated with social sciences research based on large-scale social media data. We argue for improving researchers' access to data associated with contentious issues and suggest that social media platforms should offer public application programming interfaces to allow researchers access to content generated on their networks. We conclude with reflections on the relevance of this research agenda to public policy. This article is part of a discussion meeting issue ‘The growing ubiquity of algorithms in society: implications, impacts and innovations'.
28

Ross, Andrew S., and Damian J. Rivers. "Discursive Deflection: Accusation of “Fake News” and the Spread of Mis- and Disinformation in the Tweets of President Trump." Social Media + Society 4, no. 2 (April 2018): 205630511877601. http://dx.doi.org/10.1177/2056305118776010.

Abstract:
Twitter is increasingly being used within the sociopolitical domain as a channel through which to circulate information and opinions. Throughout the 2016 US Presidential primaries and general election campaign, a notable feature was the prolific Twitter use of Republican candidate and then nominee, Donald Trump. This use has continued since his election victory and inauguration as President. Trump’s use of Twitter has drawn criticism due to his rhetoric in relation to various issues, including Hillary Clinton, the size of the crowd in attendance at his inauguration, the policies of the former Obama administration, and immigration and foreign policy. One of the most notable features of Trump’s Twitter use has been his repeated ridicule of the mainstream media through pejorative labels such as “fake news” and “fake media.” These labels have been deployed in an attempt to deter the public from trusting media reports, many of which are critical of Trump’s presidency, and to position himself as the only reliable source of truth. However, given the contestable nature of objective truth, it can be argued that Trump himself is a serial offender in the propagation of mis- and disinformation in the same vein that he accuses the media. This article adopts a corpus analysis of Trump’s Twitter discourse to highlight his accusations of fake news and how he operates as a serial spreader of mis- and disinformation. Our data show that Trump uses these accusations to demonstrate allegiance and as a cover for his own spreading of mis- and disinformation that is framed as truth.
29

Zhu, Peidong, Peng Xun, Yifan Hu, and Yinqiao Xiong. "Social Collective Attack Model and Procedures for Large-Scale Cyber-Physical Systems." Sensors 21, no. 3 (February 2, 2021): 991. http://dx.doi.org/10.3390/s21030991.

Abstract:
A large-scale Cyber-Physical System (CPS) such as a smart grid usually provides service to a vast number of users as a public utility. Security is one of the most vital aspects in such critical infrastructures. The existing CPS security usually considers the attack from the information domain to the physical domain, such as injecting false data to damage sensing. Social Collective Attack on CPS (SCAC) is proposed as a new kind of attack that intrudes into the social domain and manipulates the collective behavior of social users to disrupt the physical subsystem. To provide a systematic description framework for such threats, we extend MITRE ATT&CK, the most used cyber adversary behavior modeling framework, to cover social, cyber, and physical domains. We discuss how the disinformation may be constructed and eventually leads to physical system malfunction through the social-cyber-physical interfaces, and we analyze how the adversaries launch disinformation attacks to better manipulate collective behavior. Finally, simulation analysis of SCAC in a smart grid is provided to demonstrate the possibility of such an attack.
30

Guerra Chala, Bárbara, Cíntia Burille, and Lucas Moreschi Paulo. "The Protection of Consumer’s Personal Data and the Electronic Geodiscrimination Practice." Revista da Faculdade de Direito da Universidade Federal de Uberlândia 49, no. 1 (September 7, 2021): 709–31. http://dx.doi.org/10.14393/rfadir-v49n1a2021-62777.

Abstract:
The purpose of this study is to analyse the General Data Protection Law for the Protection of Personal Data from the perspective of the protection of the consumer's personal data, with a view to ascertaining the main aspects of the legislation and verifying its impacts on the practices of geopricing and geoblocking. To that end, it begins by addressing the principles of the new legislation that inform the activity of processing personal data. The main axes of the law's structure are then presented, focusing on aspects that concern the processing of consumer data. Finally, the practices of geodiscrimination are examined, in order to assess their legal treatment and how such techniques may be affected after the entry into force of the General Data Protection Law. For that, the hypothetico-deductive methodology and the bibliographic research technique were adopted. It is thus observed that the new data protection legislation adds to the protection of consumers' rights in relation to the practices of geopricing and geoblocking, insofar as the standard was designed to prevent both the disinformation of the personal data holder about the purpose of the treatment of their information and the illegitimate treatment of personal data, as well as covering the possibility of redress for the consumer who holds personal data if they experience damage.
31

Kaur, Kanchan. "A review of the fake news ecosystem in India and the need for the News Literacy project." Przegląd Politologiczny, no. 4 (December 15, 2019): 23–29. http://dx.doi.org/10.14746/pp.2019.24.4.2.

Abstract:
In India, in the last year alone, over 30 people have died due to child kidnapping rumors spread on social media, specifically WhatsApp. India’s access to the internet shot up in the recent years with the entry of Reliance Jio which made data plans affordable and therefore accessible. WhatsApp has been the most frequently downloaded application. As the country gears up for an important election, the spread of disinformation has accelerated. The right-wing ruling party has claimed that it has over 3 million people in its WhatsApp groups. A recent study by BBC has shown that in the country, most of the disinformation has been spread by the right wing. Call it propaganda, disinformation or plain fake news, false or wrong information has become a part of the political process in India. Moreover, the Indian media no longer seem to be standing up to the government; in the last few years, it has generally toed the government line. The reasons are many, including corporate ownership, regressive laws, and a complete bypass of the media by the powers. The Prime Minister has spoken only to a few selected media houses and has never been asked any tough questions in his five-year tenure. Furthermore, the media has been completely sidelined by this government by it going to the public, directly through social media. All of this has produced a very turgid and messy information situation. With the government also interfering in education, it has become all the more difficult for most educators to introduce critical thinking courses in the country, even though various efforts have been made by Google News Initiative, Facebook and BBC Schools to introduce tools to debunk false information.
32

Al-Rawi, Ahmed. "Disinformation under a networked authoritarian state: Saudi trolls’ credibility attacks against Jamal Khashoggi." Open Information Science 5, no. 1 (January 1, 2021): 140–62. http://dx.doi.org/10.1515/opis-2020-0118.

Abstract:
Abstract This paper deals with a case study that provides unique and original insight into social media credibility attacks against the Saudi journalist and activist, Jamal Khashoggi. To get the data, I searched all the state-run tweets sent by Arab trolls (78,274,588 in total), and I used Cedar, Canada’s supercomputer, to extract all the videos and images associated with references to Khashoggi. In addition, I searched Twitter’s full data archive to cross-examine some of the hashtag campaigns that were launched the day Khashoggi disappeared and afterwards. Finally, I used CrowdTangle to understand whether some of these hashtags were also used on Facebook and Instagram. I present here evidence that just a few hours after Khashoggi’s disappearance in the Saudi Consulate in Istanbul, Saudi trolls started a coordinated disinformation campaign against him to frame him as a terrorist, foreign agent for Qatar and Turkey, liar.... etc. The trolls also emphasized that the whole story of his disappearance and killing is a fabrication or a staged play orchestrated by Turkey and Qatar. The campaign also targeted his fiancée, Hatice Cengiz, alleging she was a spy, while later they cast doubt about her claims. Some of these campaigns were launched a few months after Khashoggi’s death. Theoretically, I argue that state-run disinformation campaigns need to incorporate the dimension of intended effect. In this case study, the goal is to tarnish the reputation and credibility of Khashoggi, even after he died, in an attempt to discredit his claims and political cause, influence different audiences especially the Saudi public, and potentially reduce sympathy towards him.
33

Bogdan Gabriel, ȘTEFAN. "UNDERSTANDING SPUTNIK NEWS AGENCY INTERNET TRAFFIC ANALYSIS." SERIES VII - SOCIAL SCIENCES AND LAW 13(62), no. 1 (2020): 113–24. http://dx.doi.org/10.31926/but.ssl.2020.13.62.1.12.

Abstract:
Sputnik news agency remains one of the main-channels used by Russia to conduct disinformation campaigns across its borders, affecting both Romania and the Republic of Moldova virtual communities. This research offers a practical methodological solution for measuring communication outcomes and describing audience and its behavior and it shows that, at the end of 2018, Sputnik was a peripheral news platform for the Romanian informational space and a growing threat for the Republic of Moldova, where it occupied a leading position. The evaluation was conducted with data extracted through Alexa service provided by Amazon and Gemius data - the Moldovan Audit Office of Circuits and Internet.
APA, Harvard, Vancouver, ISO, and other styles
34

Codó, Eva. "Regimenting discourse, controlling bodies: Disinformation, evaluation and moral categorization in a state bureaucratic agency." Discourse & Society 22, no. 6 (November 2011): 723–42. http://dx.doi.org/10.1177/0957926511411696.

Full text
Abstract:
This article examines, from a critical and ethnographic sociolinguistic perspective, the socio-discursive practices unfolding at the information desk of a Spanish immigration office in Barcelona. Drawing on a corpus of ethnographic materials and interactional data, the article discusses why frontline communication became constituted as it did, what practical routines and ideological considerations grounded it, and how multiple social and institutional orders intersected in the shaping of practical and symbolic gatekeeping. I claim that, through various micro-strategies of control, evaluation and moral hierarchization, the government employees at this bureaucratic agency enacted the disciplinary and exclusionary regime of the nation-state, and socialized their clients into becoming ‘good’ migrants.
APA, Harvard, Vancouver, ISO, and other styles
35

Sorabji, Richard. "FREE SPEECH ON SOCIAL MEDIA: HOW TO PROTECT OUR FREEDOMS FROM SOCIAL MEDIA THAT ARE FUNDED BY TRADE IN OUR PERSONAL DATA." Social Philosophy and Policy 37, no. 2 (2020): 209–36. http://dx.doi.org/10.1017/s0265052521000121.

Full text
Abstract:
I have argued elsewhere that in past history, freedom of speech, whether granted to few or many, was granted as bestowing some important benefit. John Stuart Mill, for example, in On Liberty, saw it as enabling us to learn from each other through discussion. By the test of benefit, I here argue that social media that are funded through trade in our personal data with advertisers, including propagandists, cannot claim to be supporting free speech. We lose our freedoms if the personal data we entrust to online social media are used to target us with information, or disinformation, tailored as persuasive to different personalities, in order to maximize revenue from advertisers or propagandists. Among the serious consequences described, particularly dangerous because of its effect on democracy, is the use of such targeted advertisements to swing voting campaigns. Control is needed both of the social media and of any political parties that pay social media for differential targeting of voters based on personality. Using UK government documents, I recommend legislation for reform and enforcement.
APA, Harvard, Vancouver, ISO, and other styles
36

Smołucha, Danuta. "Internet – the First Source of (dis)Information." Perspektywy Kultury 25, no. 2 (April 27, 2020): 97–116. http://dx.doi.org/10.35765/pk.2019.2502.08.

Full text
Abstract:
The Internet is the first medium in which controlling content has become difficult or even impossible. One reason is that Internet users – who until now were only passive recipients of media messages – have gained the ability to create and distribute their own messages. Thus, they have become active participants in a participatory culture in which it is difficult to distinguish between professional and amateur content. The boundaries between private and public domains have become blurred. The distribution of forces shaping public opinion has changed, because content comes from large media corporations and non-professional creators alike. The Internet message is characterized by instantaneous distribution, ease of editing and modification, and vagueness of authorship. These features make the Internet particularly susceptible to disinformation purposefully aimed at manipulating its users. The fact that every activity undertaken by Internet users is recorded and analysed is also conducive to manipulation attempts, as the data obtained this way are used to shape their opinions and influence their decisions. The aim of the article is to undertake a discourse on information and disinformation on the Internet in the context of the development of new digital communication tools. The article provides examples of information manipulation that could happen only in such an interactive and multimedia medium as the Internet.
APA, Harvard, Vancouver, ISO, and other styles
37

Moreno-Gil, Victoria, Xavier Ramon, and Ruth Rodríguez-Martínez. "Fact-Checking Interventions as Counteroffensives to Disinformation Growth: Standards, Values, and Practices in Latin America and Spain." Media and Communication 9, no. 1 (March 3, 2021): 251–63. http://dx.doi.org/10.17645/mac.v9i1.3443.

Full text
Abstract:
As democracy-building tools, fact-checking platforms serve as critical interventions in the fight against disinformation and polarization in the public sphere. The Duke Reporters’ Lab notes that there are 290 active fact-checking sites in 83 countries, including a wide range of initiatives in Latin America and Spain. These regions share major challenges such as limited journalistic autonomy, difficulties of accessing public data, politicization of the media, and the growing impact of disinformation. This research expands upon the findings presented in previous literature to gain further insight into the standards, values, and underlying practices embedded in Spanish and Latin American projects while identifying the specific challenges that these organizations face. In-depth interviews were conducted with decision-makers of the following independent platforms: Chequeado (Argentina), UYCheck (Uruguay), Maldita.es and Newtral (Spain), Fact Checking (Chile), Agência Lupa (Brazil), Ecuador Chequea (Ecuador), and ColombiaCheck (Colombia). This qualitative approach offers nuanced data on the volume and frequency of checks, procedures, dissemination tactics, and the perceived role of the public. Despite relying on small teams, the examined outlets’ capacity to verify facts is noteworthy. Inspired by best practices in the US and Europe and the model established by Chequeado, all the sites considered employ robust methodologies while leveraging the power of digital tools and audience participation. Interviewees identified three core challenges in fact-checking practice: difficulties in accessing public data, limited resources, and the need to reach wider audiences. Starting from these results, the article discusses the ways in which fact-checking operations could be strengthened.
APA, Harvard, Vancouver, ISO, and other styles
38

Tomaz, Raíssa Mendes, and Jerzui Mendes Torres Tomaz. "The Brazilian Presidential Election of 2018 and the relationship between technology and democracy in Latin America." Journal of Information, Communication and Ethics in Society 18, no. 4 (January 11, 2020): 497–509. http://dx.doi.org/10.1108/jices-12-2019-0134.

Full text
Abstract:
Purpose The purpose of this paper, selected by the ICIL 2019 committee in Rome, is to demonstrate the current importance of the internet in the protection of democracy in developing countries. Design/methodology/approach Using a literature review methodology, the paper compares the growing, current phenomenon of Brazilian disinformation with other contemporary phenomena related to new technologies. Findings The Brazilian elections in 2018 represent an authentic model of a post-Cambridge Analytica phase in which the myth of the sanctity of data has been broken. The big influence of the algorithmic revolution on democracies in Latin America has never been more evident. The misuse of algorithms created an artificial environment that does not put us in contact with different realities; the consequences of this conjuncture have the deepest impacts, especially in countries that rely on a deficient educational system. Social implications Besides that, the broad use of zero-rating in internet delivery in developing countries is also considered a factor in fake news dissemination. Information bubbles promote political polarization to the detriment of diversity – and diversity is par excellence one of the pillars of democracy. Originality/value In researching the impact of disinformation on underdeveloped countries, it is essential to analyze the new role of the legislator in elaborating hypercomplex laws with multi-stakeholder interests that respect the essential core of digital human rights, such as freedom of expression online.
APA, Harvard, Vancouver, ISO, and other styles
39

Acomi, Nicoleta, Luis Ochoa Siguencia, and Ovidiu Acomi. "An Appropriate Set of Skills for Limiting the Spread of Fake News." Revista Romaneasca pentru Educatie Multidimensionala 13, no. 1 (March 16, 2021): 71–80. http://dx.doi.org/10.18662/rrem/13.1/360.

Full text
Abstract:
The diversity of news distributed via social media communication channels exposes citizens to large-scale disinformation, including misleading and false information. In this context of massive social media use, and considering the EU Youth Strategy 2019-2027 with regard to democracy, there is a strong need for analytical skills. The main problem is the reduced commitment of people to evaluate social media news and to develop the proper analytical skills. This paper aims at exemplifying the utility of survey-based primary research for identifying the most appropriate analytical skills for dealing with fake news. The research method consists of establishing and distributing a questionnaire targeting various categories of people. Feedback was collected through an online survey in 2020. The questionnaire included category questions aiming at analysing responses by age, youth category, and time spent online. This approach is thought to provide data of sufficient quality and quantity to meet the objective of identifying the most appropriate analytical skills for dealing with fake news. The results of this study emphasize the views of respondents with regard to fake news, the extent to which various categories of people check news before sharing it, as well as the preferred criteria used for verifying the correctness of news from social media. Based on the analysis of the results, the authors propose a set of solutions to empower youth to evaluate fake news and to detect disinformation campaigns across social networks.
APA, Harvard, Vancouver, ISO, and other styles
40

Shaban Rafi, Muhammad. "Dialogic Content Analysis of Misinformation about COVID- 19 on Social Media in Pakistan." Linguistics and Literature Review 6, no. 2 (October 10, 2020): 131–43. http://dx.doi.org/10.32350/llr.v6i2.960.

Full text
Abstract:
This study aims to explore the most common misinformation topics about COVID-19, people's perceptions concerning disinformation, and its consequences. A purposive sample of 50 posts and thousands of comments on coronavirus was drawn from social media networking sites. Data were also collected through informal interviews with 30 participants of different demographic backgrounds. The selected data were analyzed as dialogic communicative content between the participants. The study reveals that the most common topics of coronavirus misinformation concern cures and conspiracy theories. The participants showed a mixed response towards the misinformation. The study concludes that misinformation concerning the virus has severe consequences. Hence, I recommend compulsory social media education for internet users regarding how to respond to such a crisis while abiding by internet regulations.
APA, Harvard, Vancouver, ISO, and other styles
41

Nguyen, An, and Daniel Catalan-Matamoros. "Digital Mis/Disinformation and Public Engagment with Health and Science Controversies: Fresh Perspectives from Covid-19." Media and Communication 8, no. 2 (June 25, 2020): 323–28. http://dx.doi.org/10.17645/mac.v8i2.3352.

Full text
Abstract:
Digital media, while opening a vast array of avenues for lay people to effectively engage with news, information and debates about important science and health issues, have become fertile ground for various stakeholders to spread misinformation and disinformation, stimulate uncivil discussions and engender ill-informed, dangerous public decisions. Recent developments of the Covid-19 infodemic might just be the tipping point of a process that has long been simmering in controversial areas of health and science (e.g., climate-change denial, anti-vaccination, anti-5G, Flat Earth doctrines). We bring together a wide range of fresh data and perspectives from four continents to help media scholars, journalists, science communicators, scientists, health professionals and policy-makers to better understand these developments and what can be done to mitigate their impacts on public engagement with health and science controversies.
APA, Harvard, Vancouver, ISO, and other styles
42

Carrillo, Nereida, and Marta Montagut. "Tackling online disinformation through media literacy in Spain: The project ‘Que no te la cuelen’." Catalan Journal of Communication & Cultural Studies 13, no. 1 (April 1, 2021): 149–57. http://dx.doi.org/10.1386/cjcs_00044_7.

Full text
Abstract:
Media literacy of schoolchildren is a key political goal worldwide: institutions and citizens consider media literacy training to be essential – among other aspects – to combat falsehoods and generate healthy public opinion in democratic contexts. In Spain, various media literacy projects address this phenomenon, one of which is ‘Que no te la cuelen’ (‘Don’t be fooled’, QNTLC). The project, which has been developed by the authors of this viewpoint, is implemented through theoretical–practical workshops aimed at public and private secondary pupils (academic years 2018–19, 2019–20 and 2020–21), based around training in fake news detection strategies and online fact-checking tools for students and teachers. This viewpoint describes and reflects on this initiative, conducted in 36 training sessions with schoolchildren aged 14–16 years attending schools in Madrid, Valencia and Barcelona. The workshops are based on van Dijk’s media literacy model, with a special focus on the ‘informational skills’ dimension. The amount of information available through all kinds of online platforms implies an extra effort in selecting, evaluating and sharing information, and the workshop focuses on this process through seven steps: suspect, read/listen/watch carefully, check the source, look for other reliable sources, check the data/location, be self-conscious of your bias and decide whether to share the information or not. The QNTLC sessions teach and train these skills by combining gamification strategies – online quizzes, verification challenges, ‘infoxication’ dynamics in class – with public deliberation among students. Participants’ engagement and stakeholders’ interest in the programme suggest that this kind of training is important or, at least, attracts the attention of these collectives in the Spanish context.
APA, Harvard, Vancouver, ISO, and other styles
43

Alonso, Bernardo. "Zero-Order Privacy Violations and Automated Decision-Making about Individuals." Revista de Filosofia Moderna e Contemporânea 8, no. 3 (January 31, 2021): 69–80. http://dx.doi.org/10.26512/rfmc.v8i3.34503.

Full text
Abstract:
This article presents the notion of a zero-order privacy violation as a grounding practice within a new type of human exploitation, namely data colonialism: the massive appropriation of social life through data extraction, acquiring digital “territory” and resources from which economic value can be extracted by capital (Couldry & Mejias, 2019). First, I claim that privacy violations do not depend on the nature of the agents involved. Robots read your email, and not having humans involved in the process does not make it less of a violation. The harvested data stream is better understood as a commodity when clean, well-formed, meaningful data standards are respected. It is then suggested that scenarios like the COVID-19 pandemic make a perfect case for expanding surveillance via tracking applications. Companies and governments with pre-existing tendencies to secrecy, tech-enabled authoritarianism, and austerity capitalize on disinformation strategies. Finally, remarks are made on the value of encryption and of strategic deletion as measures to reinforce privacy.
APA, Harvard, Vancouver, ISO, and other styles
44

Ognyanova, Katherine. "The Social Context of Media Trust: A Network Influence Model." Journal of Communication 69, no. 5 (October 1, 2019): 539–62. http://dx.doi.org/10.1093/joc/jqz031.

Full text
Abstract:
Concerns about the low public trust in U.S. media institutions have recently deepened amid increasing partisan polarization, large-scale digital disinformation campaigns, and frequent attacks on the press from political elites. This study explored the social factors that shape our trust in mainstream news sources. An examination of longitudinal network data from 13 residential student communities highlighted the importance of interpersonal influence on views about the media. The results show that the media trust of participants was predicted by the trust scores of their online and offline social contacts. The most robust and consistent effect comes from face-to-face interactions with politically like-minded conversation partners. Among online social ties, the analysis found effects from contact with others who distrust the media, but not from communication with people who reported high levels of media trust.
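As a toy, hedged sketch of the kind of network influence model described above (one's media trust predicted from contacts' trust), the snippet below regresses each node's trust score on the average score of its neighbours; the random data and the peer-average specification are illustrative assumptions, not the paper's estimation procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: random undirected contact network and standardized media-trust scores.
n = 100
upper = np.triu(rng.integers(0, 2, size=(n, n)), 1)
adjacency = upper + upper.T
trust = rng.normal(0.0, 1.0, size=n)

# Peer exposure: each person's average trust among their social contacts.
degree = adjacency.sum(axis=1)
peer_avg = adjacency @ trust / np.where(degree == 0, 1, degree)

# Simple OLS of own trust on peers' average trust (one illustrative "influence" coefficient).
X = np.column_stack([np.ones(n), peer_avg])
beta, *_ = np.linalg.lstsq(X, trust, rcond=None)
print(f"intercept={beta[0]:.3f}, peer-influence slope={beta[1]:.3f}")
```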
APA, Harvard, Vancouver, ISO, and other styles
45

Kovalčíková, Nad’a, and Ariane Tabatabai. "Lessons earned and lessons learned: What should be done next to counter the COVID-19 infodemic?" European View 19, no. 2 (October 2020): 154–63. http://dx.doi.org/10.1177/1781685820975878.

Full text
Abstract:
As governments and citizens around the world have struggled with the novel coronavirus, the information space has turned into a battleground. Authoritarian countries, including Russia, China and Iran, have spread disinformation on the causes of and responses to the pandemic. The over-abundance of information, also referred to as an ‘infodemic’, including manipulated information, has been both a cause and a result of the exacerbation of the public health crisis. It is further undermining trust in democratic institutions, the independent press, and facts and data, and exacerbating the rising tensions driven by economic, political and societal challenges. This article discusses the challenges democracies have faced and the measures they have adopted to counter information manipulation that impedes public health efforts. It draws seven lessons learned from the information war and offers a set of recommendations on tackling future infodemics related to public health.
APA, Harvard, Vancouver, ISO, and other styles
46

Bandala, Erick R., Jesus Rodrigo-Comino, and Mohd Talib Latif. "2021: The New Normal and the Air, Soil and Water Research Perspective." Air, Soil and Water Research 14 (January 2021): 117862212098831. http://dx.doi.org/10.1177/1178622120988318.

Full text
Abstract:
With over 64.1 million cases worldwide (by December 1, 2020) and a death toll surpassing 1.48 million, the COVID-19 pandemic has filled our day-to-day lives not only with fear and isolation but also with a significant amount of disinformation, unreliable data, and a lack of trust in the response of government officers and authorities that, sadly, can translate into loss of lives in our closest circles (colleagues, friends, family). At Air, Soil and Water Research (ASW), we believe that knowledge is the only way out of this and any other crisis faced by humankind, and our team has been working elbow-to-elbow (but online) to offer the best-quality research and scientific knowledge, which will certainly assist better decision-making and lead us along the best path to get through this very hard time.
APA, Harvard, Vancouver, ISO, and other styles
47

Puentes, Michael, Irene Arroyo Delgado, Oscar Carrillo, Carlos J. Barrios H, and Frédèric Le Mouel. "Towards Smart-City Implementation for Crisis Management in Fast-Growing and Unplanned Cities: the Colombian Scenario." Ingeniería y Ciencia 16, no. 32 (November 11, 2020): 151–69. http://dx.doi.org/10.17230/ingciencia.16.32.7.

Full text
Abstract:
Natural or human-made disasters can do huge damage in urban areas and can take lives. It is fundamental to know an event's characteristics so that prompt information is available to help affected people or to keep citizens out of the danger zone, gaining time to respond to the crisis. The Internet of Things (IoT) has a big impact on this kind of situation, because the large amount of data flowing through different devices can provide information about the situation and about the people involved in the crisis. In a disaster, one of the big problems adding to the principal crisis is disinformation; for that reason it is necessary to have available and trustworthy data in case of disaster, and to know what data the information system provides. To inform the people affected by a crisis event, previous works have processed data from sensors, social network text, or images [1],[2],[3],[4],[5],[6],[7],[8]. This paper reviews case studies in which cities implement crisis management platforms, focusing on IoT environments where applications process hybrid data to help citizens in a crisis situation.
APA, Harvard, Vancouver, ISO, and other styles
48

Lim, Elisha. "The Protestant Ethic and the Spirit of Facebook: Updating Identity Economics." Social Media + Society 6, no. 2 (April 2020): 205630512091014. http://dx.doi.org/10.1177/2056305120910144.

Full text
Abstract:
Scholars and news media generally name Facebook’s two central problems: that its data collection practices are a threat to user privacy, and that stricter regulations are required to prevent “bad actors” from spreading hate and disinformation. However, separating these two concerns—personal data collection and bad actors—overlooks the way that one generates the other. First, this article builds on critical race scholarship to examine how identity politics are historically distorted and commodified into profitable vigilance and intolerance, in what I call a transition from identity politics to personal identity economics. Facebook’s Ad Manager, for example, reveals how personal identities are itemized as advertising assets, which are cultivated through deeper, more trenchant identity politics. Second, this article theorizes about what makes such staunch, intolerant identity politics addictive. Drawing on Max Weber’s theories of the Protestant Ethic, this article explores how Facebook activism thrives on deep-rooted Christian paradigms of dogma, virtue, redemption, and piety. As dogmatic personal identity economics spread across the globe, they testify to how Facebook’s business model manufactures bad actors.
APA, Harvard, Vancouver, ISO, and other styles
49

Patkowski, Adam. "„Cicha reakcja” na zdalne ataki teleinformatyczne." Przegląd Teleinformatyczny 5 (23), no. 3 (September 30, 2017): 33–51. http://dx.doi.org/10.5604/01.3001.0012.9734.

Full text
Abstract:
A “hidden response” of an ICT security system replaces the blocking of cyberattacks with other actions that are invisible to the attacker. It is proposed to remove the attacked resources from the attacker's field of operation by replacing them with specially prepared data. This allows the attacker's actions to be investigated with much less chance of detection than by using separate honeypots. Above all, it makes it possible to conduct disinformation activities against the opponents/competitors of the system owner at the operational level. In addition, the introduction of a delayed data-recording mechanism in the ICT system increases the time limit for detecting cyberattacks before irreversible changes to information resources occur.
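A minimal sketch of the "hidden response" idea under stated assumptions (not the paper's implementation; the detection logic, names and one-hour delay are illustrative): flagged sessions silently receive decoy data instead of being blocked, and writes are committed only after a grace period so an attack can be detected before changes become irreversible.

```python
import logging
from datetime import datetime, timedelta

logging.basicConfig(level=logging.INFO)

# Hypothetical resources: real data and decoy ("specially prepared") substitutes.
REAL_STORE = {"customers": [{"id": 1, "email": "real@example.com"}]}
DECOY_STORE = {"customers": [{"id": 1, "email": "decoy@example.invalid"}]}

def handle_read(resource: str, session_id: str, flagged_sessions: set) -> list:
    """Serve decoy data to sessions flagged as hostile instead of blocking them,
    so the attacker's actions can be observed without tipping them off."""
    if session_id in flagged_sessions:
        logging.info("hidden response: decoy '%s' served to session %s", resource, session_id)
        return DECOY_STORE.get(resource, [])
    return REAL_STORE.get(resource, [])

class DelayedWriteBuffer:
    """Delayed data recording: writes become permanent only after a grace period,
    extending the window for detecting an attack before irreversible changes occur."""

    def __init__(self, delay_seconds: int = 3600):
        self.delay = timedelta(seconds=delay_seconds)
        self.pending = []  # list of (commit_time, resource, payload)

    def write(self, resource: str, payload: list) -> None:
        self.pending.append((datetime.utcnow() + self.delay, resource, payload))

    def flush(self, store: dict) -> None:
        """Commit only the entries whose grace period has elapsed."""
        now = datetime.utcnow()
        remaining = []
        for commit_time, resource, payload in self.pending:
            if commit_time <= now:
                store[resource] = payload
            else:
                remaining.append((commit_time, resource, payload))
        self.pending = remaining

# Usage: a flagged session reads decoy data; its writes stay uncommitted for an hour.
flagged = {"session-666"}
print(handle_read("customers", "session-666", flagged))
buffer = DelayedWriteBuffer()
buffer.write("customers", [{"id": 1, "email": "tampered@example.com"}])
buffer.flush(REAL_STORE)  # nothing committed yet; the delay has not elapsed
```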
APA, Harvard, Vancouver, ISO, and other styles
50

Jabardi, Mohammed, and Asaad Hadi. "Ontology Meter for Twitter Fake Accounts Detection." International Journal of Intelligent Engineering and Systems 14, no. 1 (February 28, 2021): 410–19. http://dx.doi.org/10.22266/ijies2021.0228.38.

Full text
Abstract:
One of the most popular social media platforms, Twitter is used by millions of people to share information, broadcast tweets, and follow other users. Twitter has an open application programming interface and is thus vulnerable to attack from fake accounts, which are primarily created for advertising and marketing, defaming individuals, acquiring consumer data, inflating fake blog or website traffic, sharing disinformation, online fraud, and control. Fake accounts are harmful to both users and service providers, and thus recognizing and filtering out such content on social media is essential. This study presents a new approach to detecting fake Twitter accounts using ontology and Semantic Web Rule Language (SWRL) rules. An SWRL rules-based reasoner is used with predefined rules to infer whether a profile is trusted or fake. This approach achieves a high detection accuracy of 97%. Furthermore, the ontology classifier is an interpretable model that offers straightforward, human-interpretable decision rules.
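The paper encodes its detection logic as SWRL rules over an ontology; as a rough, hedged analogue in plain Python (the features, thresholds and rule set below are invented for illustration and are not the rules or the accuracy-bearing model from the study), such profile rules might look like this:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    followers: int
    following: int
    tweets: int
    has_profile_image: bool
    account_age_days: int

def is_fake(p: Profile) -> bool:
    """Rule-based check in the spirit of SWRL inference rules: if any
    hand-written rule fires, the profile is classified as fake."""
    rules = [
        # follows many accounts but is followed by almost nobody
        p.following > 1000 and p.followers / max(p.following, 1) < 0.01,
        # empty, faceless account
        not p.has_profile_image and p.tweets == 0,
        # very new account that is already hyperactive
        p.account_age_days < 7 and p.following > 500,
    ]
    return any(rules)

# Usage with a hypothetical profile.
suspect = Profile(followers=3, following=2500, tweets=0,
                  has_profile_image=False, account_age_days=2)
print(is_fake(suspect))  # True
```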
APA, Harvard, Vancouver, ISO, and other styles