Academic literature on the topic 'Crowdsourcing, classification, task design, crowdsourcing experiments'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Crowdsourcing, classification, task design, crowdsourcing experiments.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Crowdsourcing, classification, task design, crowdsourcing experiments"

1

Yang, Keyu, Yunjun Gao, Lei Liang, Song Bian, Lu Chen, and Baihua Zheng. "CrowdTC: Crowd-powered Learning for Text Classification." ACM Transactions on Knowledge Discovery from Data 16, no. 1 (2021): 1–23. http://dx.doi.org/10.1145/3457216.

Abstract:
Text classification is a fundamental task in content analysis. Nowadays, deep learning has demonstrated promising performance in text classification compared with shallow models. However, almost all the existing models do not take advantage of the wisdom of human beings to help text classification. Human beings are more intelligent and capable than machine learning models in terms of understanding and capturing the implicit semantic information from text. In this article, we try to take guidance from human beings to classify text. We propose Crowd-powered learning for Text Classification (CrowdTC)…
2

Ramírez, Jorge, Marcos Baez, Fabio Casati, and Boualem Benatallah. "Understanding the Impact of Text Highlighting in Crowdsourcing Tasks." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7 (October 28, 2019): 144–52. http://dx.doi.org/10.1609/hcomp.v7i1.5268.

Abstract:
Text classification is one of the most common goals of machine learning (ML) projects, and also one of the most frequent human intelligence tasks in crowdsourcing platforms. ML has mixed success in such tasks depending on the nature of the problem, while crowd-based classification has proven to be surprisingly effective, but can be expensive. Recently, hybrid text classification algorithms, combining human computation and machine learning, have been proposed to improve accuracy and reduce costs. One way to do so is to have ML highlight or emphasize portions of text that it believes to be more…
3

Guo, Shikai, Rong Chen, Hui Li, Tianlun Zhang, and Yaqing Liu. "Identify Severity Bug Report with Distribution Imbalance by CR-SMOTE and ELM." International Journal of Software Engineering and Knowledge Engineering 29, no. 02 (2019): 139–75. http://dx.doi.org/10.1142/s0218194019500074.

Abstract:
Manually inspecting bugs to determine their severity is often an enormous but essential software development task, especially when many participants generate a large number of bug reports in a crowdsourced software testing context. Therefore, boosting the capabilities of methods of predicting bug report severity is critically important for determining the priority of fixing bugs. However, typical classification techniques may be adversely affected when the severity distribution of the bug reports is imbalanced, leading to performance degradation in a crowdsourcing environment. In this study, we…
4

Baba, Yukino, Hisashi Kashima, Kei Kinoshita, Goushi Yamaguchi, and Yosuke Akiyoshi. "Leveraging Crowdsourcing to Detect Improper Tasks in Crowdsourcing Marketplaces." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 2 (2021): 1487–92. http://dx.doi.org/10.1609/aaai.v27i2.18987.

Abstract:
Controlling the quality of tasks is a major challenge in crowdsourcing marketplaces. Most of the existing crowdsourcing services prohibit requesters from posting illegal or objectionable tasks. Operators in the marketplaces have to monitor the tasks continuously to find such improper tasks; however, it is too expensive to manually investigate each task. In this paper, we present the reports of our trial study on automatic detection of improper tasks to support the monitoring of activities by marketplace operators. We perform experiments using real task data from a commercial crowdsourcing marketplace…
5

Ceschia, Sara, Kevin Roitero, Gianluca Demartini, Stefano Mizzaro, Luca Di Gaspero, and Andrea Schaerf. "Task design in complex crowdsourcing experiments: Item assignment optimization." Computers & Operations Research 148 (December 2022): 105995. http://dx.doi.org/10.1016/j.cor.2022.105995.

6

Sun, Yuyin, Adish Singla, Tori Yan, Andreas Krause, and Dieter Fox. "Evaluating Task-Dependent Taxonomies for Navigation." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 4 (September 21, 2016): 229–38. http://dx.doi.org/10.1609/hcomp.v4i1.13286.

Abstract:
Taxonomies of concepts are important across many application domains; for instance, online shopping portals use catalogs to help users navigate and search for products. Task-dependent taxonomies, e.g., adapting the taxonomy to a specific cohort of users, can greatly improve the effectiveness of navigation and search. However, taxonomies are usually created by domain experts, and hence designing task-dependent taxonomies can be an expensive process: this often limits applications to deploying generic taxonomies. Crowdsourcing-based techniques have the potential to provide a cost-efficient solution…
7

Lin, Christopher, Mausam Mausam, and Daniel Weld. "Dynamically Switching between Synergistic Workflows for Crowdsourcing." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (2021): 87–93. http://dx.doi.org/10.1609/aaai.v26i1.8121.

Abstract:
To ensure quality results from unreliable crowdsourced workers, task designers often construct complex workflows and aggregate worker responses from redundant runs. Frequently, they experiment with several alternative workflows to accomplish the task, and eventually deploy the one that achieves the best performance during early trials. Surprisingly, this seemingly natural design paradigm does not achieve the full potential of crowdsourcing. In particular, using a single workflow (even the best) to accomplish a task is suboptimal. We show that alternative workflows can compose synergistically to…
8

Rothwell, Spencer, Steele Carter, Ahmad Elshenawy, and Daniela Braga. "Job Complexity and User Attention in Crowdsourcing Microtasks." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 3 (March 28, 2016): 20–25. http://dx.doi.org/10.1609/hcomp.v3i1.13265.

Abstract:
This paper examines the importance of presenting simple, intuitive tasks when conducting microtasking on crowdsourcing platforms. Most crowdsourcing platforms allow the maker of a task to present any length of instructions to crowd workers who participate in their tasks. Our experiments show, however, that most workers who participate in crowdsourcing microtasks do not read the instructions, even when they are very brief. To facilitate success in microtask design, we highlight the importance of making simple, easy-to-grasp tasks that do not rely on instructions for explanation.
9

Qarout, Rehab, Alessandro Checco, Gianluca Demartini, and Kalina Bontcheva. "Platform-Related Factors in Repeatability and Reproducibility of Crowdsourcing Tasks." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7 (October 28, 2019): 135–43. http://dx.doi.org/10.1609/hcomp.v7i1.5264.

Abstract:
Crowdsourcing platforms provide a convenient and scalable way to collect human-generated labels on-demand. This data can be used to train Artificial Intelligence (AI) systems or to evaluate the effectiveness of algorithms. The datasets generated by means of crowdsourcing are, however, dependent on many factors that affect their quality. These include, among others, the population sample bias introduced by aspects like task reward, requester reputation, and other filters introduced by the task design. In this paper, we analyse platform-related factors and study how they affect dataset characteristics…
10

Fu, Donglai, and Yanhua Liu. "Fairness of Task Allocation in Crowdsourcing Workflows." Mathematical Problems in Engineering 2021 (April 23, 2021): 1–11. http://dx.doi.org/10.1155/2021/5570192.

Abstract:
Fairness plays a vital role in crowd computing by attracting its workers. The power of crowd computing stems from a large number of workers potentially available to provide high quality of service and reduce costs. An important challenge in the crowdsourcing market today is the task allocation of crowdsourcing workflows. Requester-centric task allocation algorithms aim to maximize the completion quality of the entire workflow and minimize its total cost, which is discriminatory for workers. The crowdsourcing workflow needs to balance two objectives, namely, fairness and cost. In this study, we…