Journal articles on the topic 'Crowdsourcing, classification, task design, crowdsourcing experiments'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 34 journal articles for your research on the topic 'Crowdsourcing, classification, task design, crowdsourcing experiments.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Yang, Keyu, Yunjun Gao, Lei Liang, Song Bian, Lu Chen, and Baihua Zheng. "CrowdTC: Crowd-powered Learning for Text Classification." ACM Transactions on Knowledge Discovery from Data 16, no. 1 (2021): 1–23. http://dx.doi.org/10.1145/3457216.

Abstract:
Text classification is a fundamental task in content analysis. Nowadays, deep learning has demonstrated promising performance in text classification compared with shallow models. However, almost all the existing models do not take advantage of the wisdom of human beings to help text classification. Human beings are more intelligent and capable than machine learning models in terms of understanding and capturing the implicit semantic information from text. In this article, we try to take guidance from human beings to classify text. We propose Crowd-powered learning for Text Classification (Crow
2

Ramírez, Jorge, Marcos Baez, Fabio Casati, and Boualem Benatallah. "Understanding the Impact of Text Highlighting in Crowdsourcing Tasks." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7 (October 28, 2019): 144–52. http://dx.doi.org/10.1609/hcomp.v7i1.5268.

Abstract:
Text classification is one of the most common goals of machine learning (ML) projects, and also one of the most frequent human intelligence tasks in crowdsourcing platforms. ML has mixed success in such tasks depending on the nature of the problem, while crowd-based classification has proven to be surprisingly effective, but can be expensive. Recently, hybrid text classification algorithms, combining human computation and machine learning, have been proposed to improve accuracy and reduce costs. One way to do so is to have ML highlight or emphasize portions of text that it believes to be more
3

Guo, Shikai, Rong Chen, Hui Li, Tianlun Zhang, and Yaqing Liu. "Identify Severity Bug Report with Distribution Imbalance by CR-SMOTE and ELM." International Journal of Software Engineering and Knowledge Engineering 29, no. 02 (2019): 139–75. http://dx.doi.org/10.1142/s0218194019500074.

Abstract:
Manually inspecting bugs to determine their severity is often an enormous but essential software development task, especially when many participants generate a large number of bug reports in a crowdsourced software testing context. Therefore, boosting the capabilities of methods of predicting bug report severity is critically important for determining the priority of fixing bugs. However, typical classification techniques may be adversely affected when the severity distribution of the bug reports is imbalanced, leading to performance degradation in a crowdsourcing environment. In this study, w
4

Baba, Yukino, Hisashi Kashima, Kei Kinoshita, Goushi Yamaguchi, and Yosuke Akiyoshi. "Leveraging Crowdsourcing to Detect Improper Tasks in Crowdsourcing Marketplaces." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 2 (2021): 1487–92. http://dx.doi.org/10.1609/aaai.v27i2.18987.

Abstract:
Controlling the quality of tasks is a major challenge in crowdsourcing marketplaces. Most of the existing crowdsourcing services prohibit requesters from posting illegal or objectionable tasks. Operators in the marketplaces have to monitor the tasks continuously to find such improper tasks; however, it is too expensive to manually investigate each task. In this paper, we present the reports of our trial study on automatic detection of improper tasks to support the monitoring of activities by marketplace operators. We perform experiments using real task data from a commercial crowdsourcing mark
5

Ceschia, Sara, Kevin Roitero, Gianluca Demartini, Stefano Mizzaro, Luca Di Gaspero, and Andrea Schaerf. "Task design in complex crowdsourcing experiments: Item assignment optimization." Computers & Operations Research 148 (December 2022): 105995. http://dx.doi.org/10.1016/j.cor.2022.105995.

6

Sun, Yuyin, Adish Singla, Tori Yan, Andreas Krause, and Dieter Fox. "Evaluating Task-Dependent Taxonomies for Navigation." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 4 (September 21, 2016): 229–38. http://dx.doi.org/10.1609/hcomp.v4i1.13286.

Abstract:
Taxonomies of concepts are important across many application domains, for instance, online shopping portals use catalogs to help users navigate and search for products. Task-dependent taxonomies, e.g., adapting the taxonomy to a specific cohort of users, can greatly improve the effectiveness of navigation and search. However, taxonomies are usually created by domain experts and hence designing task-dependent taxonomies can be an expensive process: this often limits the applications to deploy generic taxonomies. Crowdsourcing-based techniques have the potential to provide a cost-efficient solut
7

Lin, Christopher, Mausam Mausam, and Daniel Weld. "Dynamically Switching between Synergistic Workflows for Crowdsourcing." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2021): 87–93. http://dx.doi.org/10.1609/aaai.v26i1.8121.

Abstract:
To ensure quality results from unreliable crowdsourced workers, task designers often construct complex workflows and aggregate worker responses from redundant runs. Frequently, they experiment with several alternative workflows to accomplish the task, and eventually deploy the one that achieves the best performance during early trials. Surprisingly, this seemingly natural design paradigm does not achieve the full potential of crowdsourcing. In particular, using a single workflow (even the best) to accomplish a task is suboptimal. We show that alternative workflows can compose synergistically t
8

Rothwell, Spencer, Steele Carter, Ahmad Elshenawy, and Daniela Braga. "Job Complexity and User Attention in Crowdsourcing Microtasks." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 3 (March 28, 2016): 20–25. http://dx.doi.org/10.1609/hcomp.v3i1.13265.

Abstract:
This paper examines the importance of presenting simple, intuitive tasks when conducting microtasking on crowdsourcing platforms. Most crowdsourcing platforms allow the maker of a task to present any length of instructions to crowd workers who participate in their tasks. Our experiments show, however, most workers who participate in crowdsourcing microtasks do not read the instructions, even when they are very brief. To facilitate success in microtask design, we highlight the importance of making simple, easy to grasp tasks that do not rely on instructions for explanation.
9

Qarout, Rehab, Alessandro Checco, Gianluca Demartini, and Kalina Bontcheva. "Platform-Related Factors in Repeatability and Reproducibility of Crowdsourcing Tasks." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7 (October 28, 2019): 135–43. http://dx.doi.org/10.1609/hcomp.v7i1.5264.

Abstract:
Crowdsourcing platforms provide a convenient and scalable way to collect human-generated labels on-demand. This data can be used to train Artificial Intelligence (AI) systems or to evaluate the effectiveness of algorithms. The datasets generated by means of crowdsourcing are, however, dependent on many factors that affect their quality. These include, among others, the population sample bias introduced by aspects like task reward, requester reputation, and other filters introduced by the task design. In this paper, we analyse platform-related factors and study how they affect dataset characteri
10

Fu, Donglai, and Yanhua Liu. "Fairness of Task Allocation in Crowdsourcing Workflows." Mathematical Problems in Engineering 2021 (April 23, 2021): 1–11. http://dx.doi.org/10.1155/2021/5570192.

Abstract:
Fairness plays a vital role in crowd computing by attracting its workers. The power of crowd computing stems from a large number of workers potentially available to provide high quality of service and reduce costs. An important challenge in the crowdsourcing market today is the task allocation of crowdsourcing workflows. Requester-centric task allocation algorithms aim to maximize the completion quality of the entire workflow and minimize its total cost, which are discriminatory for workers. The crowdsourcing workflow needs to balance two objectives, namely, fairness and cost. In this study, w
11

Cui, Lizhen, Xudong Zhao, Lei Liu, Han Yu, and Yuan Miao. "Complex crowdsourcing task allocation strategies employing supervised and reinforcement learning." International Journal of Crowd Science 1, no. 2 (2017): 146–60. http://dx.doi.org/10.1108/ijcs-08-2017-0011.

Abstract:
Purpose Allocation of complex crowdsourcing tasks, which typically include heterogeneous attributes such as value, difficulty, skill required, effort required and deadline, is still a challenging open problem. In recent years, agent-based crowdsourcing approaches focusing on recommendations or incentives have emerged to dynamically match workers with diverse characteristics to tasks to achieve high collective productivity. However, existing approaches are mostly designed based on expert knowledge grounded in well-established theoretical frameworks. They often fail to leverage on user-generated
12

Kim, Yongsung, Emily Harburg, Shana Azria, et al. "Studying the Effects of Task Notification Policies on Participation and Outcomes in On-the-go Crowdsourcing." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 4 (September 21, 2016): 99–108. http://dx.doi.org/10.1609/hcomp.v4i1.13275.

Abstract:
Recent years have seen the growth of physical crowdsourcing systems (e.g., Uber; TaskRabbit) that motivate large numbers of people to provide new and improved physical tasking and delivery services on-demand. In these systems, opportunistically relying on people to make convenient contributions may lead to incomplete solutions, while directing people to do inconvenient tasks requires high incentives. To increase people's willingness to participate and reduce the need to incentivize participation, we study on-the-go crowdsourcing as an alternative approach that suggests tasks along people’s exi
13

Zeng, Zhiyuan, Jian Tang, and Tianmei Wang. "Motivation mechanism of gamification in crowdsourcing projects." International Journal of Crowd Science 1, no. 1 (2017): 71–82. http://dx.doi.org/10.1108/ijcs-12-2016-0001.

Abstract:
Purpose The purpose of this paper is to study the participation behaviors in the context of crowdsourcing projects from the perspective of gamification. Design/methodology/approach This paper first proposed a model to depict the effect of four categories of game elements on three types of motivation based upon several motivation theories, which may, in turn, influence user participation. Then, 5 × 2 between-subject Web experiments were designed for collecting data and validating this model. Findings Game elements which provide participants with rewards and recognitions or remind participants o
14

Bu, Qiong, Elena Simperl, Adriane Chapman, and Eddy Maddalena. "Quality assessment in crowdsourced classification tasks." International Journal of Crowd Science 3, no. 3 (2019): 222–48. http://dx.doi.org/10.1108/ijcs-06-2019-0017.

Abstract:
Purpose Ensuring quality is one of the most significant challenges in microtask crowdsourcing tasks. Aggregation of the collected data from the crowd is one of the important steps to infer the correct answer, but the existing study seems to be limited to the single-step task. This study aims to look at multiple-step classification tasks and understand aggregation in such cases; hence, it is useful for assessing the classification quality. Design/methodology/approach The authors present a model to capture the information of the workflow, questions and answers for both single- and multiple-quest
15

Shin, Suho, Hoyong Choi, Yung Yi, and Jungseul Ok. "Power of Bonus in Pricing for Crowdsourcing." ACM SIGMETRICS Performance Evaluation Review 50, no. 1 (2022): 43–44. http://dx.doi.org/10.1145/3547353.3522633.

Abstract:
We consider a simple form of pricing for a crowdsourcing system, where pricing policy is published a priori, and workers then decide their task acceptance. Such a pricing form is widely adopted in practice for its simplicity, e.g., Amazon Mechanical Turk, although additional sophistication to pricing rule can enhance budget efficiency. With the goal of designing efficient and simple pricing rules, we study the impact of the following two design features in pricing policies: (i) personalization tailoring policy worker-by-worker and (ii) bonus payment to qualified task completion. In the Bayesia
16

Shin, Suho, Hoyong Choi, Yung Yi, and Jungseul Ok. "Power of Bonus in Pricing for Crowdsourcing." Proceedings of the ACM on Measurement and Analysis of Computing Systems 5, no. 3 (2021): 1–25. http://dx.doi.org/10.1145/3491048.

Abstract:
We consider a simple form of pricing for a crowdsourcing system, where pricing policy is published a priori, and workers then decide their task acceptance. Such a pricing form is widely adopted in practice for its simplicity, e.g., Amazon Mechanical Turk, although additional sophistication to pricing rule can enhance budget efficiency. With the goal of designing efficient and simple pricing rules, we study the impact of the following two design features in pricing policies: (i) personalization tailoring policy worker-by-worker and (ii) bonus payment to qualified task completion. In the Bayesia
17

Yang, Yi, Yurong Cheng, Ye Yuan, Guoren Wang, Lei Chen, and Yongjiao Sun. "Privacy-preserving cooperative online matching over spatial crowdsourcing platforms." Proceedings of the VLDB Endowment 16, no. 1 (2022): 51–63. http://dx.doi.org/10.14778/3561261.3561266.

Abstract:
With the continuous development of spatial crowdsourcing platform, online task assignment problem has been widely studied as a typical problem in spatial crowdsourcing. Most of the existing studies are based on a single-platform task assignment to maximize the platform's revenue. Recently, cross online task assignment has been proposed, aiming at increasing the mutual benefit through cooperations. However, existing methods fail to consider the data privacy protection in the process of cooperation and cause the leakage of sensitive data such as the location of a request and the historical data
18

Jacques, Jason, and Per Ola Kristensson. "Crowdsourcing a HIT: Measuring Workers' Pre-Task Interactions on Microtask Markets." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 1 (November 3, 2013): 86–93. http://dx.doi.org/10.1609/hcomp.v1i1.13085.

Abstract:
The ability to entice and engage crowd workers to participate in human intelligence tasks (HITs) is critical for many human computation systems and large-scale experiments. While various metrics have been devised to measure and improve the quality of worker output via task designs, effective recruitment of crowd workers is often overlooked. To help us gain a better understanding of crowd recruitment strategies we propose three new metrics for measuring crowd workers' willingness to participate in advertised HITs: conversion rate, conversion rate over time, and nominal conversion rate. We discu
19

Suissa, Omri, Avshalom Elmalech, and Maayan Zhitomirsky-Geffet. "Toward the optimized crowdsourcing strategy for OCR post-correction." Aslib Journal of Information Management 72, no. 2 (2019): 179–97. http://dx.doi.org/10.1108/ajim-07-2019-0189.

Abstract:
Purpose Digitization of historical documents is a challenging task in many digital humanities projects. A popular approach for digitization is to scan the documents into images, and then convert images into text using optical character recognition (OCR) algorithms. However, the outcome of OCR processing of historical documents is usually inaccurate and requires post-processing error correction. The purpose of this paper is to investigate how crowdsourcing can be utilized to correct OCR errors in historical text collections, and which crowdsourcing methodology is the most effective in different
20

Gao, Li-Ping, Tao Jin, and Chao Lu. "A Long-Term Quality Perception Incentive Strategy for Crowdsourcing Environments with Budget Constraints." International Journal of Cooperative Information Systems 29, no. 01n02 (2020): 2040005. http://dx.doi.org/10.1142/s0218843020400055.

Abstract:
Quality control is a critical design goal for crowdsourcing. However, when measuring the long-term quality of workers, the existing strategies do not make effective use of workers’ historical information, whereas others regard workers’ conditions as fixed values, even if they do not consider the impact of workers’ quality. This paper proposes a long-term quality perception incentive model (called QAI model) in a crowdsourcing environment with budget constraints. In this work, QAI divides the entire long-term activity cycle into multiple stages based on proportional allocation rules. Each stage
21

Musi, Elena, Debanjan Ghosh, and Smaranda Muresan. "ChangeMyView Through Concessions: Do Concessions Increase Persuasion?" Dialogue & Discourse 9, no. 1 (2018): 107–27. http://dx.doi.org/10.5087/dad.2018.104.

Abstract:
In Discourse Studies concessions are considered among those argumentative strategies that increase persuasion. We aim to empirically test this hypothesis by calculating the distribution of argumentative concessions in persuasive vs. non-persuasive comments from the the ChangeMyView subreddit. This constitutes a challenging task since concessions do not always bear an argumentative role and are expressed through polysemous lexical markers. Drawing from a theoretically-informed typology of concessions, we first conduct a crowdsourcing task to label a set of polysemous lexical markers as introduc
22

Sayin, Burcu, Evgeny Krivosheev, Jie Yang, Andrea Passerini, and Fabio Casati. "A review and experimental analysis of active learning over crowdsourced data." Artificial Intelligence Review 54, no. 7 (2021): 5283–305. http://dx.doi.org/10.1007/s10462-021-10021-3.

Abstract:
Training data creation is increasingly a key bottleneck for developing machine learning, especially for deep learning systems. Active learning provides a cost-effective means for creating training data by selecting the most informative instances for labeling. Labels in real applications are often collected from crowdsourcing, which engages online crowds for data labeling at scale. Despite the importance of using crowdsourced data in the active learning process, an analysis of how the existing active learning approaches behave over crowdsourced data is currently missing. This paper aims
23

Shiraishi, Yuhki, Jianwei Zhang, Daisuke Wakatsuki, Katsumi Kumai, and Atsuyuki Morishima. "Crowdsourced real-time captioning of sign language by deaf and hard-of-hearing people." International Journal of Pervasive Computing and Communications 13, no. 1 (2017): 2–25. http://dx.doi.org/10.1108/ijpcc-02-2017-0014.

Abstract:
Purpose The purpose of this paper is to explore the issues on how to achieve crowdsourced real-time captioning of sign language by deaf and hard-of-hearing (DHH) people, such that how a system structure should be designed, how a continuous task of sign language captioning should be divided into microtasks and how many DHH people are required to maintain a high-quality real-time captioning. Design/methodology/approach The authors first propose a system structure, including the new design of worker roles, task division and task assignment. Then, based on an implemented prototype, the authors ana
24

Trippas, Johanne R. "Spoken conversational search." ACM SIGIR Forum 53, no. 2 (2019): 106–7. http://dx.doi.org/10.1145/3458553.3458570.

Abstract:
Speech-based web search where no keyboard or screens are available to present search engine results is becoming ubiquitous, mainly through the use of mobile devices and intelligent assistants such as Apple's HomePod, Google Home, or Amazon Alexa. Currently, these intelligent assistants do not maintain a lengthy information exchange. They do not track context or present information suitable for an audio-only channel, and do not interact with the user in a multi-turn conversation. Understanding how users would interact with such an audio-only interaction system in multi-turn information seeking
25

Hasegawa-Johnson, Mark, Jennifer Cole, Preethi Jyothi, and Lav R. Varshney. "Models of dataset size, question design, and cross-language speech perception for speech crowdsourcing applications." Laboratory Phonology 6, no. 3-4 (2015). http://dx.doi.org/10.1515/lp-2015-0012.

Abstract:
Transcribers make mistakes. Workers recruited in a crowdsourcing marketplace, because of their varying levels of commitment and education, make more mistakes than workers in a controlled laboratory setting. Methods for compensating transcriber mistakes are desirable because, with such methods available, crowdsourcing has the potential to significantly increase the scale of experiments in laboratory phonology. This paper provides a brief tutorial on statistical learning theory, introducing the relationship between dataset size and estimation error, then presents a theoretical descriptio
26

Ramírez, Jorge, Marcos Baez, Fabio Casati, and Boualem Benatallah. "Crowdsourced dataset to study the generation and impact of text highlighting in classification tasks." BMC Research Notes 12, no. 1 (2019). http://dx.doi.org/10.1186/s13104-019-4858-z.

Abstract:
Objectives Text classification is a recurrent goal in machine learning projects and a typical task in crowdsourcing platforms. Hybrid approaches, leveraging crowdsourcing and machine learning, work better than either in isolation and help to reduce crowdsourcing costs. One way to mix crowd and machine efforts is to have algorithms highlight passages from texts and feed these to the crowd for classification. In this paper, we present a dataset to study text highlighting generation and its impact on document classification. Data description The dataset was created through two series of
27

Li, Yu, Haonan Feng, Zhankui Peng, Li Zhou, and Jian Wan. "Diversity-aware unmanned vehicle team arrangement in mobile crowdsourcing." EURASIP Journal on Wireless Communications and Networking 2022, no. 1 (2022). http://dx.doi.org/10.1186/s13638-022-02139-x.

Abstract:
With the continuous development of mobile edge computing and the improvement of unmanned vehicle technology, unmanned vehicle could handle ever-increasing demands. As a significant application of unmanned vehicle, spatial crowdsourcing will provide an important application scenario, which is about to organize a lot of unmanned vehicle to conduct the spatial tasks by physically moving to its locations, called task assignment. Previous works usually focus on assigning a spatial task to one single vehicle or a group of vehicles. Few of them consider that vehicle team diversity is essentia
28

Butyaev, Alexander, Chrisostomos Drogaris, Olivier Tremblay-Savard, and Jérôme Waldispühl. "Human-supervised clustering of multidimensional data using crowdsourcing." Royal Society Open Science 9, no. 5 (2022). http://dx.doi.org/10.1098/rsos.211189.

Abstract:
Clustering is a central task in many data analysis applications. However, there is no universally accepted metric to decide the occurrence of clusters. Ultimately, we have to resort to a consensus between experts. The problem is amplified with high-dimensional datasets where classical distances become uninformative and the ability of humans to fully apprehend the distribution of the data is challenged. In this paper, we design a mobile human-computing game as a tool to query human perception for the multidimensional data clustering problem. We propose two clustering algorithms that partially o
29

Moradi, Mohammad, and Mohammad Reza Keyvanpour. "CAPTCHA for crowdsourced image annotation: directions and efficiency analysis." Aslib Journal of Information Management, January 4, 2022. http://dx.doi.org/10.1108/ajim-08-2021-0215.

Abstract:
Purpose Image annotation plays an important role in image retrieval process, especially when it comes to content-based image retrieval. In order to compensate the intrinsic weakness of machines in performing cognitive task of (human-like) image annotation, leveraging humans’ knowledge and abilities in the form of crowdsourcing-based annotation have gained momentum. Among various approaches for this purpose, an innovative one is integrating the annotation process into the CAPTCHA workflow. In this paper, the current state of the research works in the field and experimental efficiency analysis o
30

Yasmin, Romena, Md Mahmudulla Hassan, Joshua T. Grassel, Harika Bhogaraju, Adolfo R. Escobedo, and Olac Fuentes. "Improving Crowdsourcing-Based Image Classification Through Expanded Input Elicitation and Machine Learning." Frontiers in Artificial Intelligence 5 (June 29, 2022). http://dx.doi.org/10.3389/frai.2022.848056.

Abstract:
This work investigates how different forms of input elicitation obtained from crowdsourcing can be utilized to improve the quality of inferred labels for image classification tasks, where an image must be labeled as either positive or negative depending on the presence/absence of a specified object. Five types of input elicitation methods are tested: binary classification (positive or negative); the (x, y)-coordinate of the position participants believe a target object is located; level of confidence in binary response (on a scale from 0 to 100%); what participants believe the majority of the
31

Ahmed, Faez, John Dickerson, and Mark Fuge. "Forming Diverse Teams From Sequentially Arriving People." Journal of Mechanical Design 142, no. 11 (2020). http://dx.doi.org/10.1115/1.4046998.

Abstract:
Collaborative work often benefits from having teams or organizations with heterogeneous members. In this paper, we present a method to form such diverse teams from people arriving sequentially over time. We define a monotone submodular objective function that combines the diversity and quality of a team and proposes an algorithm to maximize the objective while satisfying multiple constraints. This allows us to balance both how diverse the team is and how well it can perform the task at hand. Using crowd experiments, we show that, in practice, the algorithm leads to large gains in team
32

Yan, Chengxi, Xuemei Tang, Hao Yang, and Jun Wang. "A deep active learning-based and crowdsourcing-assisted solution for named entity recognition in Chinese historical corpora." Aslib Journal of Information Management, December 13, 2022. http://dx.doi.org/10.1108/ajim-03-2022-0107.

Abstract:
Purpose The majority of existing studies about named entity recognition (NER) concentrate on the prediction enhancement of deep neural network (DNN)-based models themselves, but the issues about the scarcity of training corpus and the difficulty of annotation quality control are not fully solved, especially for Chinese ancient corpora. Therefore, designing a new integrated solution for Chinese historical NER, including automatic entity extraction and man-machine cooperative annotation, is quite valuable for improving the effectiveness of Chinese historical NER and fostering the development of l
33

Mohan, Anuraj, Karthika P.V., Parvathi Sankar, Maya Manohar K., and Amala Peter. "Improving anti-money laundering in bitcoin using evolving graph convolutions and deep neural decision forest." Data Technologies and Applications, November 9, 2022, 1–17. http://dx.doi.org/10.1108/dta-06-2021-0167.

Abstract:
Purpose Money laundering is the process of concealing unlawfully obtained funds by presenting them as coming from a legitimate source. Criminals use crypto money laundering to hide the illicit origin of funds using a variety of methods. The most simplified form of bitcoin money laundering leans hard on the fact that transactions made in cryptocurrencies are pseudonymous, but open data gives more power to investigators and enables the crowdsourcing of forensic analysis. With the motive to curb these illegal activities, there exist various rules, policies and technologies collectively known as an
34

McQuillan, Dan. "The Countercultural Potential of Citizen Science." M/C Journal 17, no. 6 (2014). http://dx.doi.org/10.5204/mcj.919.

Abstract:
What is the countercultural potential of citizen science? As a participant in the wider citizen science movement, I can attest that contemporary citizen science initiatives rarely characterise themselves as countercultural. Rather, the goal of most citizen science projects is to be seen as producing orthodox scientific knowledge: the ethos is respectability rather than rebellion (NERC). I will suggest instead that there are resonances with the counterculture that emerged in the 1960s, most visibly through an emphasis on participatory experimentation and the principles of environmental sustaina