Academic literature on the topic 'Bots'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Bots.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Dissertations / Theses on the topic "Bots"

1

Svenaeus, Agaton. "Fantastic bots and where to find them." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-411330.

Full text
Abstract:
Research on bot detection on online social networks has received considerable attention in Swedish news media. Recently, however, criticism of the research field has been raised, highlighting the need to examine it and determine whether information based on flawed research has been spread. To investigate this, the study attempted to review the process of bot detection on online social networks and to evaluate the proposed criticism of current bot detection research by: conducting a literature review of bots on online social networks, conducting a literature review of methods for bot detection on online social networks, and detecting bots in three different politically associated data sets of Swedish Twitter accounts using five different bot detection methods. The results showed minor evidence that previous research may have been flawed. Still, based on the literature review of bot detection methods, it was determined that this criticism was not extensive enough to call into question the research field of bot detection on online social networks as a whole. Further, the problems highlighted in the criticism were recognized to have potentially arisen from a lack of differentiation between bot types in research. Insufficient differentiation between bot types was also acknowledged as a factor that could make it difficult to generalize the results of bot detection studies measuring the effect of bots on political opinions. Instead, the study acknowledged that good bot differentiation could potentially improve bot detection.
APA, Harvard, Vancouver, ISO, and other styles
2

Santos, Rafael Pereira dos. "Interação com Wikis por meio de Mensageiros Instantâneos." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-24022009-133201/.

Full text
Abstract:
The use of the Internet has grown significantly in recent years and has fostered the development of several web-based communication tools. Tools that let users publish their own content online, such as wikis, deserve special attention. The success achieved by wikis is due in large part to the small amount of effort required to edit pages, a feature clearly appreciated by users. In order to make the editing of wikis even faster, this work proposes integrating instant messaging features with wikis. Thus, a new means of interaction for editing wikis, via an instant messenger, was designed and implemented. This proposed means of interaction augments conventional wiki editing by enabling authors to edit wiki content without having to leave the communication environment they already use, supported by the fact that instant messaging systems are widely adopted. Moreover, this research identifies advantages and disadvantages of using instant-messaging bots, drawn from the literature as well as from the experiments and case studies conducted.
APA, Harvard, Vancouver, ISO, and other styles
3

Dolya. "APPLICATION AND CHARACTERISTICS OF CHAT-BOTS." Thesis, Київ 2018, 2018. http://er.nau.edu.ua/handle/NAU/33685.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bergande, Eirik Falk Georg, and Jon Fjeldberg Smedsrud. "Using Honeypots to Analyze Bots and Botnets." Thesis, Norwegian University of Science and Technology, Department of Telematics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9566.

Full text
Abstract:
In this Master's thesis we perform honeypot experiments in which we allow malicious users access to systems and analyze their behaviour. Our focus is on botnets and on how attackers proceed to infect systems and add them to their botnets. Our experiments include both high-interaction honeypots, where we let attackers manually access our system, and low-interaction honeypots, where we receive automated malware. The high-interaction honeypots are normal Linux distributions accessing the internet through a Honeywall that captures and controls the data flow, while the low-interaction honeypots run the Nepenthes honeypot. Nepenthes passively emulates known vulnerabilities and downloads the exploiting malware. The honeypots have been connected to both the ITEA and UNINETT networks at NTNU. Network traffic filtering on the IP addresses we received was removed in order to capture more information. Installing the honeypots is a rather complicated matter, and the setup and configuration of both the high- and low-interaction honeypots are described. Captured data has been thoroughly analyzed with regard to both intent and origin. The results from the high-interaction honeypots focus on the methods and techniques that attackers use. The low-interaction honeypot data comes from automated sources and is primarily used for code and execution analysis. By doing this, we gain a better understanding of the botnet phenomenon and why botnets are so popular among blackhats. During the experiments we captured six attacks on the high-interaction honeypots, all of which have been analyzed. The low-interaction honeypot, Nepenthes, captured 56 unique malware samples, 14 of which have been analyzed. In addition, there has been a thorough analysis of the Rbot.
APA, Harvard, Vancouver, ISO, and other styles
5

Ljung, Fredrik, and Oscar Arnflo. "How malicious bots interact with an online contest with gamification: A study in methods for identifying and protecting against bots." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186445.

Full text
Abstract:
Setting up online contests with gamification is an effective marketing method, but one that brings security complications. When rewards have high value, cheaters are attracted to participate using malicious bots. To distinguish bots from humans, different methods are used, divided into Human Interactive Proofs (HIPs) and Human Observational Proofs (HOPs). This report looks at the effectiveness of the most popular HIPs and HOPs and at how an attacker is able to bypass them. From the results, parameters that are of interest when implementing a framework to detect and prevent malicious bots are presented. Data was collected from five honeypot systems. It is concluded that CAPTCHAs should be used as much as possible, together with HMAC and an Intrusion Detection System (IDS) based on click diversity and submissions per IP address.
APA, Harvard, Vancouver, ISO, and other styles
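As a purely illustrative aside (not the authors' implementation), a minimal Python sketch of a Human Observational Proof heuristic of the kind evaluated in the entry above, flagging clients by submissions per IP address and click diversity, might look like this; the event fields and thresholds are assumptions:

```python
from collections import defaultdict

# Illustrative thresholds; a real system would tune these empirically.
MAX_SUBMISSIONS_PER_IP = 20   # submissions per hour before an IP is suspect
MIN_CLICK_DIVERSITY = 3       # distinct click positions expected from a human

def flag_suspicious(events):
    """events: iterable of dicts like {"ip": "10.0.0.1", "click_pos": (x, y)}.
    Returns the set of IPs whose behaviour looks bot-like."""
    submissions = defaultdict(int)
    click_positions = defaultdict(set)
    for e in events:
        submissions[e["ip"]] += 1
        click_positions[e["ip"]].add(e["click_pos"])

    flagged = set()
    for ip, count in submissions.items():
        too_many = count > MAX_SUBMISSIONS_PER_IP
        too_uniform = len(click_positions[ip]) < MIN_CLICK_DIVERSITY
        if too_many or too_uniform:
            flagged.add(ip)
    return flagged

# Example usage with synthetic events: one IP submitting 50 identical clicks.
events = [{"ip": "10.0.0.1", "click_pos": (100, 200)} for _ in range(50)]
print(flag_suspicious(events))  # {'10.0.0.1'}
```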
6

Västerbo, Simon. "CLASSIFYING TWITTER BOTS: A comparison of methods for classifying whether tweets are written by humans or bots." Thesis, Umeå universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-175897.

Full text
Abstract:
The use of bots to influence public debate and to spread disinformation and spam creates a need for efficient methods of detecting bot usage. This study compares different machine learning methods in the task of classifying whether the author of a tweet is a bot or a human, using tweet-level features, and examines how well the methods generalize to unseen data. The methods included in the comparison are Random forest, AdaBoost and the Contextual LSTM model; to compare them, the area under the receiver operating characteristic curve and average precision are used. The study uses five datasets with tweets from bots and one with tweets from humans. Two tests were used to evaluate performance. In the first test, all but one bot set is used during training and the models are evaluated on the excluded set. In the second test, the models were trained and evaluated on the separate datasets. In the results from the first test, the difference in performance between the models was very low; the same was true for Random forest and AdaBoost in the second test. The Contextual LSTM model achieved low performance on some combinations of datasets in the second test. The small difference in performance between the models in the first test, and between Random forest and AdaBoost in the second, makes it hard to determine which model is best at the task. When the time required to train and test the models is taken into consideration, Random forest seems to be the most suitable for the task.
APA, Harvard, Vancouver, ISO, and other styles
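For orientation only, the kind of Random forest versus AdaBoost comparison with AUC and average precision described in the entry above can be sketched with scikit-learn; the features and labels below are random placeholders, not the thesis's tweet-level features or data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

# Placeholder data: rows are tweets, columns stand in for tweet-level features
# (e.g. length, digit ratio, URL count); labels are 1 = bot, 0 = human.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("Random forest", RandomForestClassifier(n_estimators=200)),
                    ("AdaBoost", AdaBoostClassifier(n_estimators=200))]:
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]
    print(name,
          "AUC:", roc_auc_score(y_test, scores),
          "AP:", average_precision_score(y_test, scores))
```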
7

Patel, Purvag. "Improving Computer Game Bots' behavior using Q-Learning." Available to subscribers only, 2009. http://proquest.umi.com/pqdweb?did=1966544161&sid=3&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Karlsson, Robin. "Cooperative Behaviors Between Two Teaming RTS Bots in StarCraft." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10955.

Full text
Abstract:
Context. Video games are a big entertainment industry. Many video games let players play against or together with each other, and some also make it possible to play against or together with computer-controlled players, called bots. Artificial Intelligence (AI) is used to create bots. Objectives. This thesis aims to implement cooperative behaviors between two bots and to determine whether these behaviors lead to an increase in win ratio. The bots should be able to cooperate in certain situations, such as when they are attacked or when they are attacking. Methods. The bots' win ratio is tested in a series of quantitative experiments in which two teaming bots with cooperative behavior play against two teaming bots without any cooperative behavior. The data is analyzed with a t-test to determine whether the results are statistically significant. Results and Conclusions. The results show that cooperative behavior can increase the performance of two teaming Real Time Strategy bots against a non-cooperative team of two bots. However, performance could either increase or decrease depending on the situation. In three cases there was an increase in performance, in one case performance decreased, and in three cases there was no difference. This suggests that more research is needed for these cases.
APA, Harvard, Vancouver, ISO, and other styles
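The statistical step mentioned in the entry above, a t-test comparing the win outcomes of a cooperative and a non-cooperative team, can be illustrated with SciPy; the win/loss lists are invented placeholders, not the thesis's results:

```python
from scipy import stats

# Hypothetical per-game win indicators (1 = win, 0 = loss) for each team setup.
cooperative     = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
non_cooperative = [0, 1, 0, 0, 1, 0, 1, 0, 0, 1]

# Two-sample t-test on the win indicators; a small p-value suggests the
# difference in win ratio is statistically significant.
t_stat, p_value = stats.ttest_ind(cooperative, non_cooperative)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```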
9

Pianov, Dmitrii. "FIGHTING AGAINST SOCIAL BOTS: THE ISSUE OF IDENTIFICATION." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/theses/2368.

Full text
Abstract:
The widespread use of social bots has become an important issue in social media policy making. Automated accounts are used to promote political ideas, advertise, and derail public discourse. Identifying bots has become an increasingly difficult task due to the sophistication of the tools used to run them. In this paper I explore the domain of social bot detection. The difficulty of bot classification is well studied (Kudugunta and Ferrara (2018a); Cresci, Pietro, Petrocchi, Spognardi, and Tesconi (2017)) and arises from the high dimensionality of the data and the imbalance of the classes. In this paper, we attempt to improve bot detection by exploiting a character-based GRU architecture. We train our model on labeled data consisting of 8 million human- and bot-generated tweets. For reference, we use several other classifiers as benchmarks to estimate the model's performance.
APA, Harvard, Vancouver, ISO, and other styles
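A character-based GRU classifier of the general kind the entry above describes can be sketched in Keras; the vocabulary size, sequence length and layer widths here are illustrative assumptions, not the author's model:

```python
import numpy as np
from tensorflow.keras import layers, models

MAX_CHARS = 280    # tweets are at most 280 characters
VOCAB_SIZE = 128   # assume ASCII character ids

# Character ids in, probability that the tweet is bot-generated out.
model = models.Sequential([
    layers.Embedding(input_dim=VOCAB_SIZE, output_dim=32),
    layers.GRU(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy batch of 4 tweets encoded as character ids, just to show the shapes.
dummy_batch = np.random.randint(0, VOCAB_SIZE, size=(4, MAX_CHARS))
print(model(dummy_batch).shape)  # (4, 1)
```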
10

Hu, Tianrui. "Detecting Bots using Stream-based System with Data Synthesis." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/98595.

Full text
Abstract:
Machine learning has shown great success in building security applications, including bot detection. However, many machine learning models are difficult to deploy because model training requires a continuous supply of representative labeled data, which is expensive and time-consuming to obtain in practice. In this thesis, we build a bot detection system with a data synthesis method to explore detecting bots with limited data. We collected network traffic from 3 online services in three different months within a year (23 million network requests). We develop a novel stream-based feature encoding scheme that lets our model perform real-time bot detection on anonymized network data. We propose a data synthesis method that synthesizes unseen (or future) bot behavior distributions, enabling our system to detect bots with extremely limited labeled data. The synthesis method is distribution-aware, using two different generators in a Generative Adversarial Network to synthesize data for the clustered regions and the outlier regions of the feature space. We evaluate this idea and show that our method can train a model that outperforms existing methods with only 1% of the labeled data. We show that data synthesis also improves the model's sustainability over time and speeds up retraining. Finally, we compare data synthesis and adversarial retraining and show that they work in a complementary way to improve the model's generalizability.

An internet bot is a computer-controlled program performing simple and automated tasks over the internet. Although some bots are legitimate, many are operated to perform malicious behaviors, causing severe security and privacy issues. To address this problem, machine learning (ML) models, which have shown great success in building security applications, are widely used to detect bots because they can identify hidden patterns by learning from data. However, many ML-based approaches are difficult to deploy since model training requires labeled data, which is expensive and time-consuming to obtain in practice, especially for security tasks. Meanwhile, the dynamically changing nature of malicious bots means that bot detection models need a continuous supply of representative labeled data to stay up to date, which makes bot detection more challenging. In this thesis, we build an ML-based bot detection system that detects advanced malicious bots in real time by processing network traffic data. To address the problem of limited and unrepresentative labeled data, we explore a data synthesis method that synthesizes unseen (or future) bot behavior distributions, enabling our system to detect bots with extremely limited labeled data. We evaluate our approach using real-world datasets we collected and show that our model outperforms existing methods using only 1% of the labeled data. We show that data synthesis also improves the model's sustainability over time and helps keep it up to date more easily. Finally, we show that our method can work in a complementary way with adversarial retraining to improve the model's generalizability.
APA, Harvard, Vancouver, ISO, and other styles
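The thesis's stream-based feature encoding is specific to its anonymized traffic data; the sketch below only illustrates the general idea of turning a request stream into fixed-size, per-client feature vectors over a sliding window (the field names and window length are assumptions, not the author's scheme):

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 60  # assumed sliding-window length

class StreamFeatureEncoder:
    """Keeps a per-client sliding window of requests and emits simple
    count-based features (request volume, distinct paths) in real time."""

    def __init__(self):
        self.windows = defaultdict(deque)  # client_id -> deque of (timestamp, path)

    def update(self, client_id, timestamp, path):
        window = self.windows[client_id]
        window.append((timestamp, path))
        # Drop requests that have fallen out of the window.
        while window and timestamp - window[0][0] > WINDOW_SECONDS:
            window.popleft()
        requests = len(window)
        distinct_paths = len({p for _, p in window})
        return [requests, distinct_paths]  # feature vector for a downstream classifier

encoder = StreamFeatureEncoder()
print(encoder.update("client-1", timestamp=0, path="/login"))   # [1, 1]
print(encoder.update("client-1", timestamp=5, path="/login"))   # [2, 1]
```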