Academic literature on the topic 'Appropriate benchmarks for crowd sourcing'


Journal articles on the topic "Appropriate benchmarks for crowd sourcing"

1

Swidan, Marwa B., Ali A. Alwan, Sherzod Turaev, and Yonis Gulzar. "A Model for Processing Skyline Queries in Crowd-sourced Databases." Indonesian Journal of Electrical Engineering and Computer Science 10, no. 2 (2018): 798–806. http://dx.doi.org/10.11591/ijeecs.v10.i2.pp798-806.

Abstract:
Nowadays, many critical queries and tasks in modern database applications cannot be fully handled by machines. Crowd-sourced databases have become a new paradigm for harnessing human cognitive abilities to process these computer-hard tasks. In particular, problems that are difficult for machines but easy for humans, such as entity resolution, fuzzy matching for predicates and joins, and image recognition, can now be solved better than ever. Additionally, a crowd-sourced database allows database operators to run on incomplete data, since human workers can be recruited to provide estimated values at run-time. Skyline queries have received considerable attention from the database community over the last decade and have been exploited in a variety of applications, such as multi-criteria decision making and decision support systems. Various works have addressed the issues of skyline queries over crowd-sourced databases, including databases with complete and partially complete data. However, we argue that processing skyline queries over partially incomplete data in a crowd-sourced database has not received appropriate attention, and an efficient approach is therefore needed. This paper presents a model that tackles the issue of processing skyline queries over an incomplete crowd-sourced database. The main idea of the proposed model is to exploit the data available in the database to estimate the missing values; the model then explores the crowd-sourced database to provide more accurate results when the local database fails to yield precise values. To ensure high-quality results, factors such as worker quality and monetary cost should be considered when selecting workers to carry out a task, along with other critical factors such as the time latency to generate the results.
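The estimate-then-filter idea in this abstract can be sketched in a few lines. This is an illustrative sketch only: it assumes a larger-is-better skyline convention and uses the dimension mean as the local estimate; the paper's actual estimation method and its crowd-worker fallback are not specified in the abstract.

```python
def dominates(a, b):
    """a dominates b when a is at least as good in every dimension
    and strictly better in at least one (here: larger is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def estimate_missing(tuples, dim):
    """Local estimate for a missing value (dimension mean), standing in for
    the model's first step before any crowd workers would be consulted."""
    known = [t[dim] for t in tuples if t[dim] is not None]
    return sum(known) / len(known) if known else 0.0

def skyline(tuples):
    """Skyline over partially incomplete tuples: fill gaps with local
    estimates, then keep every tuple that no other tuple dominates."""
    filled = [tuple(v if v is not None else estimate_missing(tuples, d)
                    for d, v in enumerate(t)) for t in tuples]
    return [p for i, p in enumerate(filled)
            if not any(dominates(q, p) for j, q in enumerate(filled) if j != i)]

print(skyline([(3, None), (5, 4), (2, 9), (None, 1)]))
```

In a full crowd-sourced pipeline, tuples whose estimated values leave the skyline membership ambiguous would be routed to workers instead of being imputed locally.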
2

Anas, Abdullahi, Mansur Aliyu, Adamu Bashir Ismail, et al. "Safe campus: An intelligent campus video surveillance system for crowd analysis." Dutse Journal of Pure and Applied Sciences 9, no. 1b (2023): 103–9. http://dx.doi.org/10.4314/dujopas.v9i1b.10.

Abstract:
Nigerian campuses comprise diverse cultures, ethnicities, and religions, and controlling them is a major challenge. Cameras are installed to help security personnel carry out these enormous tasks; however, the installed closed-circuit television (CCTV) cameras are used only for sourcing evidence rather than for preventing campus vices. With appropriate techniques, campus vices can be prevented and the campus secured. In this research, an artificial intelligence (AI) video surveillance system was developed. The system captures video, analyses it for abnormal behaviour, and alerts relevant personnel to take the required action. The scheme uses crowd surge, a crowd analyzer, for videos containing vulnerabilities and threats. The results show that when the system is deployed at strategically flagged locations, early campus vices are detected and reported to relevant personnel, who take appropriate actions and measures to curb their escalation.
4

Malyi, Roman, and Pavlo Serdyuk. "Developing a Performance Evaluation Benchmark for Event Sourcing Databases." Vìsnik Nacìonalʹnogo unìversitetu "Lʹvìvsʹka polìtehnìka". Serìâ Ìnformacìjnì sistemi ta merežì 15 (July 15, 2024): 159–68. http://dx.doi.org/10.23939/sisn2024.15.159.

Abstract:
In the domain of software architecture, Event Sourcing (ES) has emerged as a significant paradigm, especially for systems requiring high levels of auditability, traceability, and intricate state management. Systems such as financial transaction platforms, inventory management systems, customer relationship management (CRM) software, and any application requiring a detailed audit trail can benefit significantly from this approach. Numerous aspects of ES remain unexplored, as they have yet to be thoroughly investigated by scientific research. The unique demands of such systems, particularly in terms of database performance and functionality, are not adequately addressed by existing database benchmarks. By establishing benchmarks, organizations can compare different databases to determine which best meets the needs of their applications. This aids in selecting the most appropriate technology based on empirical data rather than assumptions or marketing claims. This paper introduces a novel benchmarking framework specifically designed for evaluating databases in the context of event sourcing. The framework addresses critical aspects unique to ES, including event-append performance, efficient handling of projections (separate databases for read operations), strong consistency, ordered data insertion, and robust versioning controls. Through rigorous testing and analysis, this framework aims to fill the gap in existing database benchmarking tools, providing a more accurate and relevant assessment for ES systems. We also conducted experiments that not only demonstrated the effectiveness of our approach but also yielded meaningful results, substantiating its practicality and applicability.
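The append-performance and versioning aspects the framework measures can be illustrated with a toy harness. `InMemoryEventStore` and `benchmark_appends` are hypothetical names invented for this sketch, not the paper's framework; a real benchmark would swap in an adapter for the database under test.

```python
import time
import uuid

class InMemoryEventStore:
    """Toy stand-in for a real event store, used only to exercise the harness."""
    def __init__(self):
        self.streams = {}

    def append(self, stream, event, expected_version):
        events = self.streams.setdefault(stream, [])
        # Optimistic concurrency check: reject writes against a stale version.
        # This is the kind of versioning control an ES benchmark must exercise.
        if len(events) != expected_version:
            raise ValueError("version conflict")
        events.append(event)
        return len(events)

def benchmark_appends(store, n_events):
    """Measure raw event-append throughput (events per second) on one stream."""
    stream = str(uuid.uuid4())
    start = time.perf_counter()
    for version in range(n_events):
        store.append(stream, {"seq": version}, expected_version=version)
    elapsed = time.perf_counter() - start
    return n_events / elapsed

rate = benchmark_appends(InMemoryEventStore(), 10_000)
print(f"{rate:,.0f} appends/s")
```

Ordered insertion falls out of the per-stream version counter; projection-rebuild latency would be a separate timed phase in the same style.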
5

Senst, Tobias, Volker Eiselein, Alexander Kuhn, and Thomas Sikora. "Crowd Violence Detection Using Global Motion-Compensated Lagrangian Features and Scale-Sensitive Video-Level Representation." IEEE Transactions on Information Forensics and Security PP, no. 99 (2017): 1–12. https://doi.org/10.1109/TIFS.2017.2725820.

Abstract:
Lagrangian theory provides a rich set of tools for analyzing non-local, long-term motion information in computer vision applications. Based on this theory, we present a specialized Lagrangian technique for the automated detection of violent scenes in video footage. We present a novel feature using Lagrangian direction fields that is based on a spatio-temporal model and uses appearance, background motion compensation, and long-term motion information. To ensure appropriate spatial and temporal feature scales, we apply an extended bag-of-words procedure in a late-fusion manner as a classification scheme on a per-video basis. We demonstrate that the temporal scale, captured by the Lagrangian integration time parameter, is crucial for violence detection and show how it correlates with the spatial scale of characteristic events in the scene. The proposed system is validated on multiple public benchmarks and on non-public, real-world data from the London Metropolitan Police. Our experiments confirm that the inclusion of Lagrangian measures is a valuable cue for automated violence detection and increases classification performance considerably compared to state-of-the-art methods.
6

Ellul, C., J. P. de Almeida, and R. Romano. "DOES COIMBRA NEED A 3D CADASTRE? PROTOTYPING A CROWDSOURCING APP AS A FIRST STEP TO FINDING OUT." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W1 (October 5, 2016): 55–62. http://dx.doi.org/10.5194/isprs-annals-iv-2-w1-55-2016.

Abstract:
The Municipality of Coimbra in Portugal, and indeed the country as a whole, is currently undergoing a long-term land registration (cadastre creation) exercise, with approximately 50% of the country having been surveyed by the end of 2013, amounting to one third of the total properties. The survey process currently generates two-dimensional (2D) maps. However, as in many other countries, these maps have limitations when representing the real three-dimensional (3D) complexities of land and property ownership. Capturing a 2D cadastre is an expensive process and does not provide the required insight into the number of properties whose ownership situation is inadequately represented, as the survey does not include the internal building structure. Having information about the extent of the 2D/3D issue is, however, fundamental to deciding whether to invest resources in even more expensive 3D survey.

Given that the 3D complexity inside buildings is known only to residents and occupants, making crowd sourcing perhaps the only economically feasible approach for its capture, this paper describes the development of a web-based app envisaged for use by the general public to flag different land and property ownership situations. The paper focuses on two aspects of the problem: first, identifying an appropriate, clear set of diagrams depicting the various ownership situations from which the user can pick one; and second, prototyping and user-testing an app for multi-platform VGI data capture in the absence of direct feedback from the final end users, i.e., the general public.
7

Boyer, Doug M., Gregg F. Gunnell, Seth Kaufman, and Timothy M. McGeary. "MORPHOSOURCE: ARCHIVING AND SHARING 3-D DIGITAL SPECIMEN DATA." Paleontological Society Papers 22 (September 2016): 157–81. http://dx.doi.org/10.1017/scs.2017.13.

Abstract:
Advancement of understanding in paleontology and biology has always been hindered by difficulty in accessing comparative data. With current and burgeoning technology, the severity of this hindrance can be substantially reduced. Researchers and museum personnel generating three-dimensional (3-D) digital models of museum specimens can archive them in internet repositories that other researchers and private individuals can then explore and utilize without a museum trip. We focus on MorphoSource, currently the largest web archive for 3-D museum data. We describe the site, how to use it most effectively in its current form, and best practices for file formats and metadata inclusion to aid the growing community wishing to utilize it for distributing 3-D digital data. The potential rewards of successfully crowd sourcing the digitization of museum collections from the research community are great, as it should ensure rapid availability of the most important datasets. Challenges include long-term governance (i.e., maintaining site functionality, supporting large amounts of digital storage, and monitoring and updating files to prevent bit rot, the slow and random corruption of electronic data over time, and data-format obsolescence, the problem of data becoming unreadable or ineffective because the software necessary for access is no longer functional) and utilization by the community (i.e., detecting and minimizing user error in creating data records, incentivizing data sharing by researchers and institutions alike, and protecting stakeholder rights to data while maximizing accessibility and discoverability). MorphoSource serves as a proof of concept of how these kinds of challenges can be met. Accordingly, it is generally recognized as the most appropriate repository for large, raw datasets of fossil organisms and/or comparative samples. Its existence has begun to transform data transparency standards, because journal reviewers, editors, and grant officers now often suggest or require that 3-D data be made available through the site.
8

Pun, Raymond. "Conceptualizing the integration of digital humanities in instructional services." Library Hi Tech 33, no. 1 (2015): 134–42. http://dx.doi.org/10.1108/lht-06-2014-0055.

Abstract:
Purpose – The purpose of this paper is to conceptualize how digital humanities (DH) projects can be integrated into instructional services programs in libraries. The paper draws on three digital projects from the New York Public Library (NYPL) and explores how librarians can creatively utilize these resources to teach new digital literacy skills such as data analysis and data management. Patrons, in turn, can learn about the content of these crowd-sourcing projects. By integrating DH projects into library instruction, the opportunities to expand into and explore new research and teaching areas are timely and relevant.

Design/methodology/approach – The approach of this paper is to explore NYPL's three digital projects and underscore how they can be integrated into instructional services. "What's On the Menu," "Direct Me NYC," and "Map Warper" all have strengths and limitations, but they serve as paradigms for exploring how digital resources can serve multiple purposes: they are databases, digital repositories, and digital libraries, but they can also serve as instructional service tools.

Findings – The paper conceptualizes how three DH projects can serve as teaching opportunities for instructional services, particularly for teaching digital literacy skills. By exploring the content of each digital project, the paper suggests that users can develop traditional information literacy skills as well as digital and visual literacy skills. In addition, because these are crowdsourcing projects, the Library also benefits from the engagement, since users add transcriptions or rectified maps to the Library's site. The paper addresses how librarians can meet the needs of the scholarly community through these new digital resources. While the paper only addresses the possibilities of these integrations, the ideas can be considered and implemented in any library.

Practical implications – The paper addresses positive outcomes of using these digital resources in library instructional services. Based on these projects, the paper recommends that DH projects be integrated into such instruction to introduce new content and digital skills where appropriate. Although there are limitations to these digital resources, it is possible to maximize their usage by using them in different and creative ways. DH projects can be more than just digital projects: they can act as tools for digital literacy instruction, and librarians must play a creative role in addressing this gap. However, another limitation is that librarians themselves are "new" to these resources and may find it challenging to understand the importance of DH projects in scholarly research.

Originality/value – This paper introduces DH projects produced in a public research library and explores how librarians can use them to teach patrons to analyze data, maps, and other content in order to develop digital literacy skills. It conceptualizes the significant roles that these DH projects and librarians can play as critical mediators in introducing and fostering digital literacy in the twenty-first century. The paper may interest academic and public libraries with large research collections and digital projects. By offering innovative ideas for integrating DH into instructional services, it addresses how DH projects as teaching tools can support specific digital skills such as visual literacy and data analysis.
9

Borromeo, Ria Mae, Lei Chen, Abhishek Dubey, Sudeepa Roy, and Saravanan Thirumuruganathan. "On Benchmarking for Crowdsourcing and Future of Work Platforms." October 16, 2019. https://doi.org/10.5281/zenodo.6793148.

Abstract:
Online crowdsourcing platforms have proliferated over the last few years and cover a number of important domains, ranging from worker-task platforms such as Amazon Mechanical Turk and worker-for-hire platforms such as TaskRabbit to specialized platforms for specific tasks, such as the ridesharing services Uber, Lyft, and Ola. An increasing proportion of the human workforce will be employed by these platforms in the near future. The crowdsourcing community has done yeoman's work in designing effective algorithms for various key components, such as incentive design, task assignment, and quality control. Given the increasing importance of these crowdsourcing platforms, it is now time to design mechanisms that make it easier to evaluate their effectiveness. Specifically, we advocate developing benchmarks for crowdsourcing research. Benchmarks often identify important issues for the community to focus on and improve upon, and they have played a key role in the development of research domains as diverse as databases and deep learning. We believe that developing appropriate benchmarks for crowdsourcing will ignite further innovation. However, crowdsourcing, and the future of work in general, is a very diverse field, which makes developing benchmarks much more challenging. Substantial effort is needed, spanning benchmarks for datasets, metrics, algorithms, platforms, and so on. In this article, we initiate a discussion of this important problem and issue a call to arms for the community to work on this initiative.
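One concrete piece such a benchmark would standardize is a dataset-plus-metric pairing for answer aggregation. The sketch below uses hypothetical `majority_vote` and `benchmark_accuracy` helpers to show the idea, assuming simple plurality voting scored against gold labels; real benchmarks would fix the datasets and compare many aggregation algorithms on them.

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate one task's worker answers by plurality."""
    return Counter(answers).most_common(1)[0][0]

def benchmark_accuracy(task_answers, gold):
    """Score an aggregation method against gold labels: the kind of
    dataset-plus-metric pairing a crowdsourcing benchmark would fix."""
    correct = sum(majority_vote(task_answers[t]) == gold[t] for t in gold)
    return correct / len(gold)

answers = {
    "t1": ["cat", "cat", "dog"],
    "t2": ["dog", "dog", "dog"],
    "t3": ["cat", "dog", "dog"],
}
gold = {"t1": "cat", "t2": "dog", "t3": "cat"}
print(benchmark_accuracy(answers, gold))  # 2 of 3 tasks match the gold labels
```

Swapping `majority_vote` for a weighted or EM-style aggregator while holding the dataset and metric fixed is exactly the kind of controlled comparison the authors argue benchmarks should enable.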
10

Landeck, Lilla, Monika Lessl, Andreas Busch, Matthias Gottwald, and Khusru Asadullah. "The role of open innovation in biomarker discovery." Advances in Precision Medicine 1, no. 1 (2016). http://dx.doi.org/10.18063/apm.2016.02.007.

Abstract:
Precision medicine aims to treat diseases with special consideration for individual biological variability. Novel biomarkers (BM) are needed to predict therapeutic responses and to allow the selection of suitable patients for treatment with certain drugs. However, the identification and validation of appropriate BMs is challenging, and close collaboration between different partners appears to be a key success factor. While the importance of partnerships and larger, well-established consortia in BM discovery, such as those between the pharmaceutical industry and academic institutions, is well understood and has been investigated in the past, the use of open-innovation models, also known as 'crowd sourcing for biomarkers', is still in its infancy. Crowd sourcing comprises a request for problem solutions, usually issued via the internet, to an open group of users in a kind of 'open call'; the community (crowd) is asked to provide solutions. Since the crowd sourcing method offers the possibility of collecting as many novel ideas as possible from a broad community with different expertise, this approach is particularly promising for BM development. In this article we describe the first examples of open-innovation models, such as the 'grants for targets' (G4T) initiative and the 'InnoCentive' (innovation/incentive) platform. They may be a fruitful basis for collaborative BM development in the future.