Academic literature on the topic 'Vector Space Model and wisdom of crowds'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Vector Space Model and wisdom of crowds.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Vector Space Model and wisdom of crowds"

1

ZESCH, TORSTEN, and IRYNA GUREVYCH. "Wisdom of crowds versus wisdom of linguists – measuring the semantic relatedness of words." Natural Language Engineering 16, no. 1 (2009): 25–59. http://dx.doi.org/10.1017/s1351324909990167.

Full text
Abstract:
In this article, we present a comprehensive study aimed at computing the semantic relatedness of word pairs. We analyze the performance of a large number of semantic relatedness measures proposed in the literature with respect to different experimental conditions, such as (i) the datasets employed, (ii) the language (English or German), (iii) the underlying knowledge source, and (iv) the evaluation task (computing scores of semantic relatedness, ranking word pairs, solving word choice problems). To our knowledge, this study is the first to systematically analyze semantic relatedness on a large number of datasets with different properties, while emphasizing the role of the knowledge source compiled either by the ‘wisdom of linguists’ (i.e., classical wordnets) or by the ‘wisdom of crowds’ (i.e., collaboratively constructed knowledge sources like Wikipedia). The article discusses the benefits and drawbacks of different approaches to evaluating semantic relatedness. We show that results should be interpreted carefully to evaluate particular aspects of semantic relatedness. For the first time, we apply a vector-based measure of semantic relatedness, relying on a concept space built from documents, to the first paragraphs of Wikipedia articles, to English WordNet glosses, and to GermaNet-based pseudo glosses. Contrary to previous research (Strube and Ponzetto 2006; Gabrilovich and Markovitch 2007; Zesch et al. 2007), we find that ‘wisdom of crowds’ based resources are not superior to ‘wisdom of linguists’ based resources. We also find that using the first paragraph of a Wikipedia article, as opposed to the whole article, leads to better precision but decreases recall.
Finally, we present two systems that were developed to aid the experiments presented herein and are freely available for research purposes: (i) DEXTRACT, a software tool to semi-automatically construct corpus-driven semantic relatedness datasets, and (ii) JWPL, a Java-based high-performance Wikipedia Application Programming Interface (API) for building natural language processing (NLP) applications.
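The vector-based measure described in the abstract represents a word as a vector of association scores over a concept space built from documents, and compares words by cosine similarity. The following is a minimal, hypothetical sketch of that idea; the toy term-overlap scoring in `concept_vector` and the two example concept documents are illustrative stand-ins, not the paper's actual method.

```python
import math
from collections import Counter

def concept_vector(text, concept_docs):
    # Score `text` against each concept document by term-frequency overlap.
    # Toy stand-in for a concept space built from documents.
    tokens = Counter(text.lower().split())
    return [sum(cnt * Counter(doc.lower().split())[t] for t, cnt in tokens.items())
            for doc in concept_docs]

def cosine(u, v):
    # Cosine similarity between two concept vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical concept space: each document defines one dimension.
concepts = ["cat feline pet animal", "car engine vehicle road"]
v1 = concept_vector("cat pet", concepts)
v2 = concept_vector("feline animal", concepts)
print(cosine(v1, v2))  # 1.0: both phrases map onto the same concept
```

Words are related when their association profiles over the concepts point in the same direction, regardless of whether they share any surface terms.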
APA, Harvard, Vancouver, ISO, and other styles
2

Han, Tao, Huaixuan Shi, Xinyi Ding, Xi-Ao Ma, Huamao Gu, and Yili Fang. "Mixture of Experts Based Multi-Task Supervise Learning from Crowds." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 13 (2025): 14256–64. https://doi.org/10.1609/aaai.v39i13.33561.

Full text
Abstract:
Existing learning-from-crowds methods aim to design proper aggregation strategies to infer the unknown true labels from the noisy labels provided by crowdsourcing. They treat the ground truth as hidden variables and use statistical or deep-learning-based worker behavior models to infer it. However, worker behavior models that rely on ground-truth hidden variables overlook workers' behavior at the item feature level, leading to imprecise characterizations and negatively impacting the quality of learning from crowds. This paper proposes a new paradigm of multi-task supervised learning from crowds, which eliminates the need to model items' ground truth in worker behavior models. Within this paradigm, we propose a worker behavior model at the item feature level called Mixture of Experts based Multi-task Supervised Learning from Crowds (MMLC), and then propose two aggregation strategies within MMLC. The first strategy, named MMLC-owf, utilizes clustering methods in the worker spectral space to identify the projection vector of the oracle worker; the labels generated based on this vector are regarded as the items' ground truth. The second strategy, called MMLC-df, employs the MMLC model to fill in the crowdsourced data, which can enhance the effectiveness of existing aggregation strategies. Experimental results demonstrate that MMLC-owf outperforms state-of-the-art methods and that MMLC-df enhances the quality of existing learning-from-crowds methods.
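MMLC itself cannot be reconstructed from the abstract, but the family of aggregation strategies it improves on can be illustrated. The sketch below shows a simple reliability-weighted vote, a hypothetical stand-in for worker-aware label aggregation; the worker names, labels, and reliability scores are all invented, and the real MMLC operates at the item-feature level, which is not reproduced here.

```python
def weighted_vote(labels, reliability):
    # Aggregate one item's noisy crowd labels, weighting each worker's
    # vote by an estimated reliability score.
    scores = {}
    for worker, label in labels.items():
        scores[label] = scores.get(label, 0.0) + reliability[worker]
    return max(scores, key=scores.get)

# Invented example: one reliable worker outvotes two unreliable ones.
labels = {"w1": "cat", "w2": "dog", "w3": "dog"}
reliability = {"w1": 0.9, "w2": 0.3, "w3": 0.4}
print(weighted_vote(labels, reliability))  # cat
```

A plain majority vote would pick "dog" here; modeling worker behavior is what lets the reliable minority win.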
3

Truong, Hai, and Van Tran. "A framework for fake news detection based on the wisdom of crowds and the ensemble learning model." Computer Science and Information Systems, no. 00 (2023): 48. http://dx.doi.org/10.2298/csis230315048t.

Full text
Abstract:
Nowadays, the rapid development of social networks has led to the proliferation of social news. However, the spread of fake news is a critical issue. Fake news is news written to intentionally misinform or deceive readers. News on social networks is short and lacks context, which makes it difficult to detect fake news based on shared content alone. In this paper, we propose an ensemble classification model to detect fake news by exploiting the wisdom of crowds. The social interactions and the user's credibility are mined to automatically detect fake news on Twitter without considering news content. The proposed method extracts features from a Twitter dataset, and then a voting ensemble classifier comprising three classifiers, namely Support Vector Machine (SVM), Naive Bayes, and Softmax, is used to classify news into two categories: fake and real. The experiments on real datasets achieved a highest F1 score of 78.8%, which was 6.8% better than the baseline. The proposed method significantly improved the accuracy of fake news detection in comparison to other methods.
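The voting ensemble described in the abstract combines the outputs of three classifiers by majority vote per sample. A minimal sketch of hard voting, with made-up predictions standing in for trained SVM, Naive Bayes, and Softmax models:

```python
from collections import Counter

def hard_vote(per_classifier_preds):
    # Majority vote across classifiers for each sample.
    return [Counter(sample).most_common(1)[0][0]
            for sample in zip(*per_classifier_preds)]

# Invented predictions; in the paper these come from models trained on
# social-interaction and credibility features, not news content.
svm_pred     = ["fake", "real", "fake"]
nb_pred      = ["fake", "fake", "fake"]
softmax_pred = ["real", "real", "fake"]
print(hard_vote([svm_pred, nb_pred, softmax_pred]))  # ['fake', 'real', 'fake']
```

With an odd number of binary classifiers there are no ties, and the ensemble corrects samples where a single model disagrees with the other two.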
4

K., Nirmala Devi, and Murali Bhaskaran V. "Semantic Enhanced Social Media Sentiments for Stock Market Prediction." April 2, 2015. https://doi.org/10.5281/zenodo.1100501.

Full text
Abstract:
Traditional document representation for classification follows the Bag of Words (BoW) approach to represent term weights. The conventional method uses the Vector Space Model (VSM) to exploit the statistical information of terms in the documents, but it fails to capture the semantic information as well as the order of the terms present in the documents. The phrase-based approach preserves the order of the terms but still ignores the semantics behind the words. Therefore, a semantic concept-based approach is used in this paper to enhance the semantics by incorporating ontology information. This paper proposes a novel method to forecast the intraday stock market price directional movement based on the sentiments from Twitter and Money Control news articles. Stock market forecasting is a very difficult and highly complicated task because it is affected by many factors such as economic conditions, political events, and investor sentiment. Stock market series are generally dynamic, nonparametric, noisy, and chaotic by nature. Sentiment analysis combined with the wisdom of crowds can automatically compute the collective intelligence of future performance in many areas such as the stock market, box office sales, and election outcomes. The proposed method utilizes collective sentiments to predict stock price directional movements. Using the Granger causality test, the collective sentiments from these social media sources are shown to have strong predictive power for stock price directional movements (up/down).
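The VSM/BoW representation the abstract criticizes weights each term by a statistic such as tf-idf, one vocabulary term per dimension, discarding word order and semantics. A compact sketch on a toy corpus (not the paper's Twitter or news data):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # Classic VSM: one dimension per vocabulary term, tf-idf weights.
    # Word order and semantics are discarded, the limitation the paper
    # addresses by adding ontology-based concepts.
    vocab = sorted({t for d in docs for t in d.split()})
    n = len(docs)
    df = {t: sum(t in d.split() for d in docs) for t in vocab}
    vectors = []
    for d in docs:
        tf = Counter(d.split())
        vectors.append([tf[t] * math.log(n / df[t]) for t in vocab])
    return vocab, vectors

# Toy corpus.
docs = ["stock market up", "stock market down", "market sentiment up"]
vocab, vecs = tfidf_vectors(docs)
print(vocab)                           # ['down', 'market', 'sentiment', 'stock', 'up']
print(vecs[0][vocab.index("market")])  # 0.0: a term in every doc carries no weight
```

Note that "market up" and "up market" produce identical vectors, which is exactly the order-blindness the phrase-based and ontology-based approaches try to repair.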
5

Dotov, Dobromir, Lana Delasanta, Daniel J. Cameron, Edward Large, and Laurel Trainor. "Collective dynamics support group drumming, reduce variability, and stabilize tempo drift." eLife 11 (November 1, 2022). http://dx.doi.org/10.7554/elife.74816.

Full text
Abstract:
Humans are social animals who engage in a variety of collective activities requiring coordinated action. Among these, music is a defining and ancient aspect of human sociality. Human social interaction has largely been addressed in dyadic paradigms, and it is yet to be determined whether the ensuing conclusions generalize to larger groups. As studied more extensively in nonhuman animal behaviour, the presence of multiple agents engaged in the same task space creates different constraints and possibilities than simpler dyadic interactions do. We addressed whether collective dynamics play a role in human circle drumming. The task was to synchronize in a group with an initial reference pattern and then maintain synchronization after it was muted. We varied the number of drummers, from solo to dyad, quartet, and octet. The observed lower variability, lack of speeding up, smoother individual dynamics, and leaderless interpersonal coordination indicated that stability increased as group size increased, a sort of temporal wisdom of crowds. We propose a hybrid continuous-discrete Kuramoto model for emergent group synchronization with pulse-based coupling that exhibits a mean-field positive feedback loop. This research suggests that collective phenomena are among the factors that play a role in social cognition.
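The classic Kuramoto model underlying the paper's hybrid variant couples each oscillator to the group mean field, so that synchrony emerges once coupling outweighs the spread of natural frequencies. A minimal Euler-integration sketch; the group size (8, matching the octet condition), coupling strength, and tempo spread are illustrative choices, and the paper's hybrid continuous-discrete, pulse-coupled variant is not reproduced here.

```python
import math
import random

def kuramoto_step(phases, freqs, k, dt=0.01):
    # One Euler step of the classic Kuramoto model: each oscillator is
    # pulled toward the others via mean-field sine coupling of strength k.
    n = len(phases)
    return [p + dt * (w + (k / n) * sum(math.sin(q - p) for q in phases))
            for p, w in zip(phases, freqs)]

def sync_order(phases):
    # Kuramoto order parameter r in [0, 1]; r near 1 means tight synchrony.
    n = len(phases)
    return math.hypot(sum(math.cos(p) for p in phases) / n,
                      sum(math.sin(p) for p in phases) / n)

random.seed(0)
# Eight "drummers" with slightly different natural tempos (~1 Hz).
phases = [random.uniform(0, 2 * math.pi) for _ in range(8)]
freqs = [random.gauss(2 * math.pi, 0.1) for _ in range(8)]
for _ in range(5000):
    phases = kuramoto_step(phases, freqs, k=2.0)
print(sync_order(phases))  # close to 1.0: the group locks together
```

With coupling strength well above the frequency spread, the ensemble phase-locks; averaging over more oscillators also smooths individual noise, which is the "temporal wisdom of crowds" intuition.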
6

Bakar, Mohd Anif A. A., Pin Jern Ker, Shirley G. H. Tang, Mohd Zafri Baharuddin, Hui Jing Lee, and Abdul Rahman Omar. "Translating conventional wisdom on chicken comb color into automated monitoring of disease-infected chicken using chromaticity-based machine learning models." Frontiers in Veterinary Science 10 (June 21, 2023). http://dx.doi.org/10.3389/fvets.2023.1174700.

Full text
Abstract:
Bacteria- or virus-infected chickens are conventionally detected by manual observation and confirmed by a laboratory test, which may lead to late detection, significant economic loss, and threats to human health. This paper reports on the development of an innovative technique to detect bacteria- or virus-infected chickens based on the optical chromaticity of the chicken comb. The chromaticity of infected and healthy chicken combs was extracted and analyzed with the International Commission on Illumination (CIE) XYZ color space. Logistic Regression, Support Vector Machines (SVMs), K-Nearest Neighbors (KNN), and Decision Tree models were developed to detect infected chickens using the chromaticity data. Based on the X and Z chromaticity data, the color of the infected chicken's comb shifted from red toward green and from yellow toward blue. Logistic Regression and SVMs with Linear and Polynomial kernels performed best with 95% accuracy, followed by the SVM-RBF kernel and KNN with 93% accuracy, Decision Tree with 90% accuracy, and lastly the SVM-Sigmoidal kernel with 83% accuracy. Iterating over the probability threshold parameter of the Logistic Regression model showed that it can detect all infected chickens with 100% sensitivity and 95% accuracy at a probability threshold of 0.54. This work has shown that, despite using only the optical chromaticity of the chicken comb as input data, the developed models (95% accuracy) performed exceptionally well compared to other reported results (99.469% accuracy) that utilize more sophisticated input data such as morphological and mobility features. This work demonstrates a new feature for detecting bacteria- or virus-infected chickens and contributes to the development of modern technology in agriculture applications.
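The threshold iteration described for the Logistic Regression model amounts to sweeping the decision cutoff applied to predicted probabilities and measuring sensitivity at each setting. A toy sketch; the probabilities and labels below are invented, and only the 0.54 cutoff value comes from the abstract.

```python
def classify(probs, threshold):
    # Label a comb as infected (1) when predicted probability >= threshold.
    return [int(p >= threshold) for p in probs]

def sensitivity(pred, truth):
    # Fraction of truly infected cases that were detected.
    positives = [(p, t) for p, t in zip(pred, truth) if t == 1]
    return sum(p == t for p, t in positives) / len(positives)

# Invented model outputs for five chickens (first three truly infected).
probs = [0.90, 0.60, 0.55, 0.40, 0.20]
truth = [1, 1, 1, 0, 0]
print(sensitivity(classify(probs, 0.54), truth))  # 1.0 at the lower cutoff
print(sensitivity(classify(probs, 0.60), truth))  # drops as the cutoff rises
```

Lowering the threshold trades false alarms for fewer missed infections, which is why a cutoff below the default 0.5 can reach 100% sensitivity while keeping accuracy high.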
7

Acland, Charles. "Matinees, Summers and Opening Weekends." M/C Journal 3, no. 1 (2000). http://dx.doi.org/10.5204/mcj.1824.

Full text
Abstract:
Newspapers and the 7:15 Showing

Cinemagoing involves planning. Even in the most impromptu instances, one has to consider meeting places, line-ups and competing responsibilities. One arranges child care, postpones household chores, or rushes to finish meals. One must organise transportation and think about routes, traffic, parking or public transit. And during the course of making plans for a trip to the cinema, whether alone or in the company of others, typically one turns to locate a recent newspaper. Consulting its printed page lets us ascertain locations, a selection of film titles and their corresponding show times. In preparing to feed a cinema craving, we burrow through a newspaper to an entertainment section, finding a tableau of information and promotional appeals. Such sections compile the mini-posters of movie advertisements, with their truncated credits, as well as various reviews and entertainment news. We see names of shopping malls doubling as names of theatres. We read celebrity gossip that may or may not pertain to the film selected for that occasion. We informally rank viewing priorities ranging from essential theatrical experiences to those that can wait for the videotape release. We attempt to assess our own mood and the taste of our filmgoing companions, matching up what we suppose are appropriate selections. Certainly, other media vie to supplant the newspaper's role in cinemagoing; many now access on-line sources and telephone services that offer the crucial details about start times. Nonetheless, as a campaign by the Newspaper Association of America in Variety aimed to remind film marketers, 80% of cinemagoers refer to newspaper listings for times and locations before heading out. The accuracy of that association's statistics notwithstanding, for the moment, the local daily or weekly newspaper has a secure place in the routines of cinematic life. A basic impetus for the newspaper's role is its presentation of a schedule of show times.
Whatever the venue -- published, phone or on-line -- it strikes me as especially telling that schedules are part of the ordinariness of cinemagoing. To be sure, there are those who decide what film to see on site. Anecdotally, I have had several people comment recently that they no longer decide what movie to see, but where to see a (any) movie. Regardless, the schedule, coupled with the theatre's location, figures as a point of coordination for travel through community space to a site of film consumption. The choice of show time is governed by countless demands of everyday life. How often has the timing of a film -- not the film itself, the theatre at which it's playing, nor one's financial situation -- determined one's attendance? How familiar is the assessment that show times are such that one cannot make it, that the film begins a bit too early, that it will run too late for whatever reason, and that other tasks intervene to take precedence? I want to make several observations related to the scheduling of film exhibition. Most generally, it makes manifest that cinemagoing involves an exercise in the application of cinema knowledge -- that is, minute, everyday facilities and familiarities that help orchestrate the ordinariness of cultural life. Such knowledge informs what Michel de Certeau characterises as "the procedures of everyday creativity" (xiv). Far from random, the unexceptional decisions and actions involved with cinemagoing bear an ordering and a predictability. Novelty in audience activity appears, but it is alongside fairly exact expectations about the event. The schedule of start times is essential to the routinisation of filmgoing. Displaying a Fordist logic of streamlining commodity distribution and the time management of consumption, audiences circulate through a machine that shapes their constituency, providing a set time for seating, departure, snack purchases and socialising.
Even with the staggered times offered by multiplex cinemas, schedules still lay down a fixed template around which other activities have to be arrayed by the patron. As audiences move to and through the theatre, the schedule endeavours to regulate practice, making us the subjects of a temporal grid, a city context, a cinema space, as well as of the film itself. To be sure, one can arrive late and leave early, confounding the schedule's disciplining force. Most importantly, with or without such forms of evasion, it channels the actions of audiences in ways that consideration of the gaze cannot address. Taking account of the scheduling of cinema culture, and its implication of adjunct procedures of everyday life, points to dimensions of subjectivity neglected by dominant theories of spectatorship. To be the subject of a cinema schedule is to understand one assemblage of the parameters of everyday creativity. It would be foolish to see cinema audiences as cattle, herded and processed alone, in some crude Gustave LeBon fashion. It would be equally foolish not to recognise the manner in which film distribution and exhibition operates precisely by constructing images of the activity of people as demographic clusters and generalised cultural consumers. The ordinary tactics of filmgoing are supplemental to, and run alongside, a set of industrial structures and practices. While there is a correlation between a culture industry's imagined audience and the life that ensues around its offerings, we cannot neglect that, as attention to film scheduling alerts us, audiences are subjects of an institutional apparatus, brought into being for the reproduction of an industrial edifice.

Streamline Audiences

In this, film is no different from any culture industry. Film exhibition and distribution relies on an understanding of both the market and the product or service being sold at any given point in time.
Operations respond to economic conditions, competing companies, and alternative activities. Economic rationality in this strategic process, however, only explains so much. This is especially true for an industry that must continually predict, and arguably give shape to, the "mood" and predilections of disparate and distant audiences. Producers, distributors and exhibitors assess which films will "work", to whom they will be marketed, as well as establish the very terms of success. Without a doubt, much of the film industry's attentions act to reduce this uncertainty; here, one need only think of the various forms of textual continuity (genre films, star performances, etc.) and the economies of mass advertising as ways to ensure box office receipts. Yet, at the core of the operations of film exhibition remains a number of flexible assumptions about audience activity, taste and desire. These assumptions emerge from a variety of sources to form a brand of temporary industry "commonsense", and as such are harbingers of an industrial logic. Ien Ang has usefully pursued this view in her comparative analysis of three national television structures and their operating assumptions about audiences. Broadcasters streamline and discipline audiences as part of their organisational procedures, with the consequence of shaping ideas about consumers as well as assuring the reproduction of the industrial structure itself. She writes, "institutional knowledge is driven toward making the audience visible in such a way that it helps the institutions to increase their power to get their relationship with the audience under control, and this can only be done by symbolically constructing 'television audience' as an objectified category of others that can be controlled, that is, contained in the interest of a predetermined institutional goal" (7). 
Ang demonstrates, in particular, how various industrially sanctioned programming strategies (programme strips, "hammocking" new shows between successful ones, and counter-programming to a competitor's strengths) and modes of audience measurement grow out of, and invariably support, those institutional goals. And, most crucially, her approach is not an effort to ascertain the empirical certainty of "actual" audiences; instead, it charts the discursive terrain in which the abstract concept of audience becomes material for the continuation of industry practices. Ang's work tenders special insight to film culture. In fact, television scholarship has taken full advantage of exploring the routine nature of that medium, the best of which deploys its findings to lay bare configurations of power in domestic contexts. One aspect has been television time and schedules. For example, David Morley points to the role of television in structuring everyday life, discussing a range of research that emphasises the temporal dimension. Alerting us to the non-necessary determination of television's temporal structure, he comments that we "need to maintain a sensitivity to these micro-levels of division and differentiation while we attend to the macro-questions of the media's own role in the social structuring of time" (265). As such, the negotiation of temporal structures implies that schedules are not monolithic impositions of order. Indeed, as Morley puts it, they "must be seen as both entering into already constructed, historically specific divisions of space and time, and also as transforming those pre-existing divisions" (266). Television's temporal grid has been addressed by others as well. Paddy Scannell characterises scheduling and continuity techniques, which link programmes, as a standardisation of use, making radio and television predictable, 'user friendly' media (9).
John Caughie refers to the organization of flow as a way to talk about the national particularities of British and American television (49-50). All, while making their own contributions, appeal to a detailing of viewing context as part of any study of audience, consumption or experience; uncovering the practices of television programmers as they attempt to apprehend and create viewing conditions for their audiences is a first step in this detailing. Why has a similar conceptual framework not been applied with the same rigour to film? Certainly the history of film and television's association with different, at times divergent, disciplinary formations helps us appreciate such theoretical disparities. I would like to mention one less conspicuous explanation. It occurs to me that one frequently sees a collapse in the distinction between the everyday and the domestic; in much scholarship, the latter term appears as a powerful trope of the former. The consequence has been the absenting of a myriad of other -- if you will, non-domestic -- manifestations of everyday-ness, unfortunately encouraging a rather literal understanding of the everyday. The impression is that the abstractions of the everyday are reduced to daily occurrences. Simply put, my minor appeal is for the extension of this vein of television scholarship to out-of-home technologies and cultural forms, that is, other sites and locations of the everyday. In so doing, we pay attention to extra-textual structures of cinematic life; other regimes of knowledge, power, subjectivity and practice appear. Film audiences require a discussion about the ordinary, the calculated and the casual practices of cinematic engagement. Such a discussion would chart institutional knowledge, identifying operating strategies and recognising the creativity and multidimensionality of cinemagoing. What are the discursive parameters in which the film industry imagines cinema audiences? 
What are the related implications for the structures in which the practice of cinemagoing occurs?

Vectors of Exhibition Time

One set of those structures of audience and industry practice involves the temporal dimension of film exhibition. In what follows, I want to speculate on three vectors of the temporality of cinema spaces (meaning that I will not address issues of diegetic time). Note further that my observations emerge from a close study of industrial discourse in the U.S. and Canada. I would be interested to hear how they are manifest in other continental contexts. First, the running times of films encourage turnovers of the audience during the course of a single day at each screen. The special event of lengthy anomalies has helped mark the epic, and the historic, from standard fare. As discussed above, show times coordinate cinemagoing and regulate leisure time. Knowing the codes of screenings means participating in an extension of the industrial model of labour and service management. Running times incorporate more texts than the feature presentation alone. Besides the history of double features, there are now advertisements, trailers for coming attractions, trailers for films now playing in neighbouring auditoriums, promotional shorts demonstrating new sound systems, public service announcements, reminders to turn off cell phones and pagers, and the exhibitor's own signature clips. A growing focal point for filmgoing, these introductory texts received a boost in 1990, when the Motion Picture Association of America changed its standards for the length of trailers, boosting it from 90 seconds to a full two minutes (Brookman). This intertextuality needs to be supplemented by a consideration of inter-media appeals. For example, advertisements for television began appearing in theatres in the 1990s. And many lobbies of multiplex cinemas now offer a range of media forms, including video previews, magazines, arcades and virtual reality games.
Implied here is that motion pictures are not the only media audiences experience in cinemas and that there is an explicit attempt to integrate a cinema's texts with those at other sites and locations. Thus, an exhibitor's schedule accommodates an intertextual strip, offering a limited parallel to Raymond Williams's concept of "flow", which he characterised by stating -- quite erroneously -- "in all communication systems before broadcasting the essential items were discrete" (86-7). Certainly, the flow between trailers, advertisements and feature presentations is not identical to that of the endless, ongoing text of television. There are not the same possibilities for "interruption" that Williams emphasises with respect to broadcasting flow. Further, in theatrical exhibition, there is an end-time, a time at which there is a public acknowledgement of the completion of the projected performance, one that necessitates vacating the cinema. This end-time is a moment at which the "rental" of the space has come due; and it harkens a return to the street, to the negotiation of city space, to modes of public transit and the mobile privatisation of cars. Nonetheless, a schedule constructs a temporal boundary in which audiences encounter a range of texts and media in what might be seen as limited flow. Second, the ephemerality of audiences -- moving to the cinema, consuming its texts, then passing the seat on to someone else -- is matched by the ephemerality of the features themselves. Distributors' demand for increasing numbers of screens necessary for massive, saturation openings has meant that films now replace one another more rapidly than in the past. Films that may have run for months now expect weeks, with fewer exceptions. Wider openings and shorter runs have created a cinemagoing culture characterised by flux. 
The acceleration of the turnover of films has been made possible by the expansion of various secondary markets for distribution, most importantly videotape, splintering where we might find audiences and multiplying viewing contexts. Speeding up the popular in this fashion means that the influence of individual texts can only be truly gauged via cross-media scrutiny. Short theatrical runs are not axiomatically designed for cinemagoers anymore; they can also be intended to attract the attention of video renters, purchasers and retailers. Independent video distributors, especially, "view theatrical release as a marketing expense, not a profit center" (Hindes & Roman 16). In this respect, we might think of such theatrical runs as "trailers" or "loss leaders" for the video release, with selected locations for a film's release potentially providing visibility, even prestige, in certain city markets or neighbourhoods. Distributors are able to count on some promotion through popular consumer-guide reviews, usually accompanying theatrical release as opposed to the passing critical attention given to video release. Consequently, this shapes the kinds of uses an assessment of the current cinema is put to; acknowledging that new releases function as a resource for cinema knowledge highlights the way audiences choose between and determine big screen and small screen films. Taken in this manner, popular audiences see the current cinema as largely a rough catalogue to future cultural consumption. Third, motion picture release is part of the structure of memories and activities over the course of a year. New films appear in an informal and ever-fluctuating structure of seasons. The concepts of summer movies and Christmas films, or the opening weekends that are marked by a holiday, set up a fit between cinemagoing and other activities -- family gatherings, celebrations, etc.
Further, this fit is presumably resonant for both the industry and popular audiences alike, though certainly for different reasons. The concentration of new films around visible holiday periods results in a temporally defined dearth of cinemas; an inordinate focus upon three periods in the year in the U.S. and Canada -- the last weekend in May, June/July/August and December -- creates seasonal shortages of screens (Rice-Barker 20). In fact, the boom in theatre construction through the latter half of the 1990s was, in part, to deal with those short-term shortages and not some year-round inadequate seating. Configurations of releasing colour a calendar with the tactical manoeuvres of distributors and exhibitors. Releasing provides a particular shape to the "current cinema", a term I employ to refer to a temporally designated slate of cinematic texts characterised most prominently by their newness. Television arranges programmes to capitalise on flow, to carry forward audiences and to counter-programme competitors' simultaneous offerings. Similarly, distributors jostle with each other, with their films and with certain key dates, for the limited weekends available, hoping to match a competitor's film intended for one audience with one intended for another. Industry reporter Leonard Klady sketched some of the contemporary truisms of releasing based upon the experience of 1997. He remarks upon the success of moving Liar, Liar (Tom Shadyac, 1997) to a March opening and the early May openings of Austin Powers: International Man of Mystery (Jay Roach, 1997) and Breakdown (Jonathan Mostow, 1997), generally seen as not desirable times of the year for premieres. He cautions against opening two films the same weekend, and thus competing with yourself, using the example of Fox's Soul Food (George Tillman, Jr., 1997) and The Edge (Lee Tamahori, 1997). 
While distributors seek out weekends clear of films that would threaten to overshadow their own, Klady points to the exception of two hits opening on the same date of December 19, 1997 -- Tomorrow Never Dies (Roger Spottiswoode, 1997) and Titanic (James Cameron, 1997). Though but a single opinion, Klady's observations are a peek into a conventional strain of strategising among distributors and exhibitors. Such planning for the timing and appearance of films is akin to the programming decisions of network executives. And I would hazard to say that digital cinema, reportedly -- though unlikely -- just on the horizon and in which texts will be beamed to cinemas via satellite rather than circulated in prints, will only augment this comparison; releasing will become that much more like programming, or at least will be conceptualised as such. To summarize, the first vector of exhibition temporality is the scheduling and running time; the second is the theatrical run; the third is the idea of seasons and the "programming" of openings. These are just some of the forces streamlining filmgoers; the temporal structuring of screenings, runs and film seasons provides a material contour to the abstraction of audience. Here, what I have delineated are components of an industrial logic about popular and public entertainment, one that offers a certain controlled knowledge about and for cinemagoing audiences.

Shifting Conceptual Frameworks

A note of caution is in order. I emphatically resist an interpretation that we are witnessing the becoming-film of television and the becoming-tv of film. Underneath the "inversion" argument is a weak brand of technological determinism, as though each asserts its own essential qualities. Such a pat declaration seems more in line with the mythos of convergence, and its quasi-Darwinian "natural" collapse of technologies.
Instead, my point here is quite the opposite, that there is nothing essential or unique about the scheduling or flow of television; indeed, one does not have to look far to find examples of less schedule-dependent television. What I want to highlight is that application of any term of distinction -- event/flow, gaze/glance, public/private, and so on -- has more to do with our thinking, with the core discursive arrangements that have made film and television, and their audiences, available to us as knowable and different. So, using empirical evidence to slide one term over to the other is a strategy intended to supplement and destabilise the manner in which we draw conclusions, and even pose questions, of each. What this proposes is, again following the contributions of Ien Ang, that we need to see cinemagoing in its institutional formation, rather than some stable technological, textual or experiential apparatus. The activity is not only a function of a constraining industrial practice or of wildly creative patrons, but of a complex inter-determination between the two. Cinemagoing is an organisational entity harbouring, reviving and constituting knowledge and commonsense about film commodities, audiences and everyday life. An event of cinema begins well before the dimming of an auditorium's lights. The moment a newspaper is consulted, with its local representation of an internationally circulating current cinema, its listings belie a scheduling, an orderliness, to the possible projections in a given location. As audiences are formed as subjects of the current cinema, we are also agents in the continuation of a set of institutions.
References
Ang, Ien. Desperately Seeking the Audience. New York: Routledge, 1991. Brookman, Faye. "Trailers: The Big Business of Drawing Crowds." Variety 13 June 1990: 48. Caughie, John. "Playing at Being American: Games and Tactics." Logics of Television: Essays in Cultural Criticism. Ed. Patricia Mellencamp. 
Bloomington: Indiana UP, 1990. De Certeau, Michel. The Practice of Everyday Life. Trans. Steve Rendall. Berkeley: U of California P, 1984. Hindes, Andrew, and Monica Roman. "Video Titles Do Pitstops on Screens." Variety 16-22 Sep. 1996: 11+. Klady, Leonard. "Hitting and Missing the Market: Studios Show Savvy -- or Just Luck -- with Pic Release Strategies." Variety 19-25 Jan. 1998: 18. Morley, David. Television, Audiences and Cultural Studies. New York: Routledge, 1992. Newspaper Association of America. "Before They See It Here..." Advertisement. Variety 22-28 Nov. 1999: 38. Rice-Barker, Leo. "Industry Banks on New Technology, Expanded Slates." Playback 6 May 1996: 19-20. Scannell, Paddy. Radio, Television and Modern Life. Oxford: Blackwell, 1996. Williams, Raymond. Television: Technology and Cultural Form. New York: Schocken, 1975. Citation reference for this article MLA style: Charles Acland. "Matinees, Summers and Opening Weekends: Cinemagoing Audiences as Institutional Subjects." M/C: A Journal of Media and Culture 3.1 (2000). [your date of access] <http://www.uq.edu.au/mc/0003/cinema.php>. Chicago style: Charles Acland, "Matinees, Summers and Opening Weekends: Cinemagoing Audiences as Institutional Subjects," M/C: A Journal of Media and Culture 3, no. 1 (2000), <http://www.uq.edu.au/mc/0003/cinema.php> ([your date of access]). APA style: Charles Acland. (2000) Matinees, Summers and Opening Weekends: Cinemagoing Audiences as Institutional Subjects. M/C: A Journal of Media and Culture 3(1). <http://www.uq.edu.au/mc/0003/cinema.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
8

Livingstone, Randall M. "Let’s Leave the Bias to the Mainstream Media: A Wikipedia Community Fighting for Information Neutrality." M/C Journal 13, no. 6 (2010). http://dx.doi.org/10.5204/mcj.315.

Full text
Abstract:
Although I'm a rich white guy, I'm also a feminist anti-racism activist who fights for the rights of the poor and oppressed. (Carl Kenner)
Systemic bias is a scourge to the pillar of neutrality. (Cerejota)
Count me in. Let's leave the bias to the mainstream media. (Orcar967)
Because this is so important. (CuttingEdge)
These are a handful of comments posted by online editors who have banded together in a virtual coalition to combat Western bias on the world’s largest digital encyclopedia, Wikipedia. This collective action by Wikipedians both acknowledges the inherent inequalities of a user-controlled information project like Wikipedia and highlights the potential for progressive change within that same project. These community members are taking the responsibility of social change into their own hands (or more aptly, their own keyboards). In recent years much research has emerged on Wikipedia from varying fields, ranging from computer science, to business and information systems, to the social sciences. While critical at times of Wikipedia’s growth, governance, and influence, most of this work observes with optimism that barriers to improvement are not firmly structural, but rather they are socially constructed, leaving open the possibility of important and lasting change for the better. WikiProject: Countering Systemic Bias (WP:CSB) represents one such collective effort. Close to 350 editors have signed on to the project, which began in 2004 and itself emerged from a similar project named CROSSBOW, or the “Committee Regarding Overcoming Serious Systemic Bias on Wikipedia.” As a WikiProject, the term used for a loose group of editors who collaborate around a particular topic, these editors work within the Wikipedia site and collectively create a social network that is unified around one central aim—representing the un- and underrepresented—and yet they are bound by no particular unified set of interests. 
The first stage of a multi-method study, this paper looks at a snapshot of WP:CSB’s activity from both content analysis and social network perspectives to discover “who” geographically this coalition of the unrepresented is inserting into the digital annals of Wikipedia.
Wikipedia and Wikipedians
Developed in 2001 by Internet entrepreneur Jimmy Wales and academic Larry Sanger, Wikipedia is an online collaborative encyclopedia hosting articles in nearly 250 languages (Cohen). The English-language Wikipedia contains over 3.2 million articles, each of which is created, edited, and updated solely by users (Wikipedia “Welcome”). At the time of this study, Alexa, a website tracking organisation, ranked Wikipedia as the 6th most accessed site on the Internet. Unlike the five sites ahead of it though—Google, Facebook, Yahoo, YouTube (owned by Google), and live.com (owned by Microsoft)—all of which are multibillion-dollar businesses that deal more with information aggregation than information production, Wikipedia is a non-profit that operates on less than $500,000 a year and staffs only a dozen paid employees (Lih). Wikipedia is financed and supported by the Wikimedia Foundation, a charitable umbrella organisation with an annual budget of $4.6 million, mainly funded by donations (Middleton). Wikipedia editors and contributors have the option of creating a user profile and participating via a username, or they may participate anonymously, with only an IP address representing their actions. Despite the option for total anonymity, many Wikipedians have chosen to visibly engage in this online community (Ayers, Matthews, and Yates; Bruns; Lih), and researchers across disciplines are studying the motivations of these new online collectives (Kane, Majchrzak, Johnson, and Chenisern; Oreg and Nov). 
The motivations of open source software contributors, such as UNIX programmers and programming groups, have been shown to be complex and tied to both extrinsic and intrinsic rewards, including online reputation, self-satisfaction and enjoyment, and obligation to a greater common good (Hertel, Niedner, and Herrmann; Osterloh and Rota). Investigation into why Wikipedians edit has indicated multiple motivations as well, with community engagement, task enjoyment, and information sharing among the most significant (Schroer and Hertel). Additionally, Wikipedians seem to be taking up the cause of generativity (a concern for the ongoing health and openness of the Internet’s infrastructures) that Jonathan Zittrain notably called for in The Future of the Internet and How to Stop It.
Governance and Control
Although the technical infrastructure of Wikipedia is built to support and perhaps encourage an equal distribution of power on the site, Wikipedia is not a land of “anything goes.” The popular press has covered recent efforts by the site to reduce vandalism through a layer of editorial review (Cohen), a tightening of control cited as a possible reason for the recent dip in the number of active editors (Edwards). A number of regulations are already in place that prevent the open editing of certain articles and pages, such as the site’s disclaimers and pages that have suffered large amounts of vandalism. Editing wars can also cause temporary restrictions to editing, and Ayers, Matthews, and Yates point out that these wars can happen anywhere, even to Burt Reynolds’s page. Academic studies have begun to explore the governance and control that has developed in the Wikipedia community, generally highlighting how order is maintained not through particular actors, but through established procedures and norms. 
Konieczny tested whether Wikipedia’s evolution can be defined by Michels’ Iron Law of Oligarchy, which predicts that the everyday operations of any organisation cannot be run by a mass of members, and ultimately control falls into the hands of the few. Through exploring a particular WikiProject on information validation, he concludes:
There are few indicators of an oligarchy having power on Wikipedia, and few trends of a change in this situation. The high level of empowerment of individual Wikipedia editors with regard to policy making, the ease of communication, and the high dedication to ideals of contributors succeed in making Wikipedia an atypical organization, quite resilient to the Iron Law. (189)
Butler, Joyce, and Pike support this assertion, though they emphasise that instead of oligarchy, control becomes encapsulated in a wide variety of structures, policies, and procedures that guide involvement with the site. A virtual “bureaucracy” emerges, but one that should not be viewed with the negative connotation often associated with the term. Other work considers control on Wikipedia through the framework of commons governance, where “peer production depends on individual action that is self-selected and decentralized rather than hierarchically assigned. Individuals make their own choices with regard to resources managed as a commons” (Viegas, Wattenberg and McKeon). The need for quality standards and quality control largely dictates this commons governance, though interviewing Wikipedians with various levels of responsibility revealed that policies and procedures are only as good as those who maintain them. Forte, Larco, and Bruckman argue “the Wikipedia community has remained healthy in large part due to the continued presence of ‘old-timers’ who carry a set of social norms and organizational ideals with them into every WikiProject, committee, and local process in which they take part” (71). 
Thus governance on Wikipedia is a strong representation of a democratic ideal, where actors and policies are closely tied in their evolution.
Transparency, Content, and Bias
The issue of transparency has proved to be a double-edged sword for Wikipedia and Wikipedians. The goal of a collective body of knowledge created by all—the “expert” and the “amateur”—can only be upheld if equal access to page creation and development is allotted to everyone, including those who prefer anonymity. And yet this very option for anonymity, or even worse, false identities, has been a sore subject for some in the Wikipedia community as well as a source of concern for some scholars (Santana and Wood). The case of a 24-year-old college dropout who represented himself as a multiple Ph.D.-holding theology scholar and edited over 16,000 articles brought these issues into the public spotlight in 2007 (Doran; Elsworth). Wikipedia itself has set up standards for content that include expectations of a neutral point of view, verifiability of information, and the publishing of no original research, but Santana and Wood argue that self-policing of these policies is not adequate:
The principle of managerial discretion requires that every actor act from a sense of duty to exercise moral autonomy and choice in responsible ways. When Wikipedia’s editors and administrators remain anonymous, this criterion is simply not met. It is assumed that everyone is behaving responsibly within the Wikipedia system, but there are no monitoring or control mechanisms to make sure that this is so, and there is ample evidence that it is not so. 
(141) At the theoretical level, some downplay these concerns of transparency and autonomy as logistical issues in lieu of the potential for information systems to support rational discourse and emancipatory forms of communication (Hansen, Berente, and Lyytinen), but others worry that the questionable “realities” created on Wikipedia will become truths once circulated to all areas of the Web (Langlois and Elmer). With the number of articles on the English-language version of Wikipedia reaching well into the millions, the task of mapping and assessing content has become a tremendous endeavour, one mostly taken on by information systems experts. Kittur, Chi, and Suh have used Wikipedia’s existing hierarchical categorisation structure to map change in the site’s content over the past few years. Their work revealed that in early 2008 “Culture and the arts” was the most dominant category of content on Wikipedia, representing nearly 30% of total content. People (15%) and geographical locations (14%) represent the next largest categories, while the natural and physical sciences showed the greatest increase in volume between 2006 and 2008 (+213%, with “Culture and the arts” close behind at +210%). This data may indicate that contributing to Wikipedia, and thus spreading knowledge, is growing amongst the academic community while maintaining its importance to the greater popular culture-minded community. Further work by Kittur and Kraut has explored the collaborative process of content creation, finding that too many editors on a particular page can reduce the quality of content, even when a project is well coordinated. Bias in Wikipedia content is a generally acknowledged and somewhat conflicted subject (Giles; Johnson; McHenry). The Wikipedia community has created numerous articles and pages within the site to define and discuss the problem. 
Citing a survey conducted by the University of Würzburg, Germany, the “Wikipedia:Systemic bias” page describes the average Wikipedian as:
Male
Technically inclined
Formally educated
An English speaker
White
Aged 15-49
From a majority Christian country
From a developed nation
From the Northern Hemisphere
Likely a white-collar worker or student
Bias in content is thought to be perpetuated by this demographic of contributor, and the “founder effect,” a concept from genetics linking the original contributors to this same demographic, has been used to explain the origins of certain biases. Wikipedia’s “About” page discusses the issue as well, in the context of the open platform’s strengths and weaknesses:
in practice editing will be performed by a certain demographic (younger rather than older, male rather than female, rich enough to afford a computer rather than poor, etc.) and may, therefore, show some bias. Some topics may not be covered well, while others may be covered in great depth. No educated arguments against this inherent bias have been advanced.
Royal and Kapila’s study of Wikipedia content tested some of these assertions, finding identifiable bias in both their purposive and random sampling. They conclude that bias favoring larger countries is positively correlated with the size of the country’s Internet population, and corporations with larger revenues work in much the same way, garnering more coverage on the site. The researchers remind us that Wikipedia is “more a socially produced document than a value-free information source” (Royal & Kapila).
WikiProject: Countering Systemic Bias
As a coalition of current Wikipedia editors, the WikiProject: Countering Systemic Bias (WP:CSB) attempts to counter trends in content production and points of view deemed harmful to the democratic ideals of a value-free, open online encyclopedia. 
WP:CSB’s mission is not one of policing the site, but rather deepening it:
Generally, this project concentrates upon remedying omissions (entire topics, or particular sub-topics in extant articles) rather than on either (1) protesting inappropriate inclusions, or (2) trying to remedy issues of how material is presented. Thus, the first question is "What haven't we covered yet?", rather than "how should we change the existing coverage?" (Wikipedia, “Countering”)
The project lays out a number of content areas lacking adequate representation, geographically highlighting the dearth in coverage of Africa, Latin America, Asia, and parts of Eastern Europe. WP:CSB also includes a “members” page that editors can sign to show their support, along with space to voice their opinions on the problem of bias on Wikipedia (the quotations at the beginning of this paper are taken from this “members” page). At the time of this study, 329 editors had self-selected and self-identified as members of WP:CSB, and this group constitutes the population sample for the current study. To explore the extent to which WP:CSB addressed these self-identified areas for improvement, each editor’s last 50 edits were coded for their primary geographical country of interest, as well as the conceptual category of the page itself (“P” for person/people, “L” for location, “I” for idea/concept, “T” for object/thing, or “NA” for indeterminate). For example, edits to the Wikipedia page for a single person like Tony Abbott (Australian federal opposition leader) were coded “Australia, P”, while an edit for a group of people like the Manchester United football team would be coded “England, P”. Coding was based on information obtained from the header paragraphs of each article’s Wikipedia page. After coding was completed, corresponding information on each country’s associated continent was added to the dataset, based on the United Nations Statistics Division listing. A total of 15,616 edits were coded for the study. 
Nearly 32% (n = 4962) of these edits were on articles for persons or people (see Table A for complete coding results). From within this sub-sample of edits, a majority of the people (68.67%) represented are associated with North America and Europe (Figure A). If we break these statistics down further, nearly half of WP:CSB’s edits concerning people were associated with the United States (36.11%) and England (10.16%), with India (3.65%) and Australia (3.35%) following at a distance. These figures make sense for the English-language Wikipedia; over 95% of the population in the three Westernised countries speak English, and while India is still often regarded as a developing nation, its colonial British roots and the emergence of a market economy with large, technology-driven cities are logical explanations for its representation here (and some estimates make India the largest English-speaking nation by population on the globe today).
Table A: Coding Results
Total Edits: 15616
(I) Ideas: 2881 (18.45%)
(L) Location: 2240 (14.34%)
NA: 333 (2.13%)
(T) Thing: 5200 (33.30%)
(P) People: 4962 (31.78%)
People by Continent
Africa: 315 (6.35%)
Asia: 827 (16.67%)
Australia: 175 (3.53%)
Europe: 1411 (28.44%)
NA: 110 (2.22%)
North America: 1996 (40.23%)
South America: 128 (2.58%)
The areas of the globe of main concern to WP:CSB proved to be much less represented by the coalition itself. Asia, far and away the most populous continent with more than 60% of the globe’s people (GeoHive), was represented in only 16.67% of edits. Africa (6.35%) and South America (2.58%) were equally underrepresented compared to both their real-world populations (15% and 9% of the globe’s population respectively) and the aforementioned dominance of the advanced Westernised areas. 
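The percentage columns in Table A follow directly from the raw counts reported above. As a quick consistency check, the short Python sketch below (illustrative only, not the authors' actual coding workflow) recomputes every percentage from the coded totals:

```python
# Raw coded edit counts as reported in Table A of the study.
total_edits = 15616
category_counts = {
    "(I) Ideas": 2881,
    "(L) Location": 2240,
    "NA": 333,
    "(T) Thing": 5200,
    "(P) People": 4962,
}
people_by_continent = {
    "Africa": 315, "Asia": 827, "Australia": 175, "Europe": 1411,
    "NA": 110, "North America": 1996, "South America": 128,
}

def pct(part, whole):
    """Percentage rounded to two decimals, matching the table's precision."""
    return round(100 * part / whole, 2)

# The category counts sum exactly to the reported total.
assert sum(category_counts.values()) == total_edits

for name, n in category_counts.items():
    print(f"{name}: {n} ({pct(n, total_edits)}%)")

people_total = category_counts["(P) People"]
for name, n in people_by_continent.items():
    print(f"{name}: {n} ({pct(n, people_total)}%)")
```

Running it reproduces the table's figures, including the 68.67% North America plus Europe share of people-edits cited in the text (40.23% + 28.44%).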
However, while these percentages may seem low, in aggregate they do meet the quota set on the WP:CSB Project Page calling for one out of every twenty edits to be “a subject that is systematically biased against the pages of your natural interests.” By this standard, the coalition is indeed making headway in adding content that strategically counterbalances the natural biases of Wikipedia’s average editor.
Figure A
Social network analysis allows us to visualise multifaceted data in order to identify relationships between actors and content (Vega-Redondo; Watts). Similar to Davis’s well-known sociological study of Southern American socialites in the 1930s (Scott), our Wikipedia coalition can be conceptualised as individual actors united by common interests, and a network of relations can be constructed with software such as UCINET. A mapping algorithm that considers both the relationship between all sets of actors and each actor to the overall collective structure produces an image of our network. This initial network is bimodal, as both our Wikipedia editors and their edits (again, coded for country of interest) are displayed as nodes (Figure B). Edge-lines between nodes represent a relationship, and here that relationship is the act of editing a Wikipedia article. We see from our network that the “U.S.” and “England” hold central positions in the network, with a mass of editors crowding around them. A perimeter of nations is then held in place by their ties to editors through the U.S. and England, with a second layer of editors and poorly represented nations (Gabon, Laos, Uzbekistan, etc.) around the boundaries of the network.
Figure B
We are reminded from this visualisation both of the centrality of the two Western powers even among WP:CSB editors, and of the peripheral nature of most other nations in the world. But we also learn which editors in the project are contributing most to underrepresented areas, and which are less “tied” to the Western core. 
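The bimodal editor–country structure described above can be sketched in a few lines of Python. The study built its network in UCINET; the plain adjacency-map version below is only an illustrative analogue, and the (editor, country) edge list is a hypothetical toy sample — the usernames echo those the study names, but the specific edges are invented:

```python
from collections import defaultdict

# Hypothetical (editor, country-of-interest) pairs. The usernames appear in
# the study, but these edges are invented for illustration only.
edits = [
    ("Wizzy", "U.S."), ("Wizzy", "Gabon"),
    ("Warofdreams", "England"), ("Warofdreams", "Laos"),
    ("Gallador", "Gabon"), ("Gerrit", "Uzbekistan"),
    ("EditorA", "U.S."), ("EditorB", "U.S."), ("EditorC", "England"),
]

# Build the bimodal network as a simple adjacency map: every edge links an
# editor node to a country node.
adj = defaultdict(set)
for editor, country in edits:
    adj[editor].add(country)
    adj[country].add(editor)

editors = {e for e, _ in edits}
countries = {c for _, c in edits}

# Degree (number of distinct editors) identifies the central country nodes;
# in the study these were the U.S. and England.
central = max(countries, key=lambda c: len(adj[c]))

# "Bridge" editors touch both a core country and at least one peripheral one.
core = {"U.S.", "England"}
bridges = sorted(e for e in editors if adj[e] & core and adj[e] - core)

print(central)  # most-edited country in this toy sample
print(bridges)  # editors linking the core to the periphery
```

In this toy sample the degree count singles out the U.S. as the hub, and the bridge test picks out exactly the editors whose interests span core and periphery — the same two structural roles the visualisations in Figures B and C are used to identify.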
Here we see “Wizzy” and “Warofdreams” among the second layer of editors who act as a bridge between the core and the periphery; these are editors with interests in both the Western and marginalised nations. Located along the outer edge, “Gallador” and “Gerrit” have no direct ties to the U.S. or England, concentrating all of their edits on less represented areas of the globe. Identifying editors at these key positions in the network will help with future research, informing interview questions that will investigate their interests further, but more significantly, probing motives for participation and action within the coalition. Additionally, we can break the network down further to discover editors who appear to have similar interests in underrepresented areas. Figure C strips down the network to only editors and edits dealing with Africa and South America, the least represented continents. From this we can easily find three types of editors again: those who have singular interests in particular nations (the outermost layer of editors), those who have interests in a particular region (the second layer moving inward), and those who have interests in both of these underrepresented regions (the center layer in the figure). This last group of editors may prove to be the most crucial to understand, as they are carrying the full load of WP:CSB’s mission.
Figure C
The End of Geography, or the Reclamation?
In The Internet Galaxy, Manuel Castells writes that “the Internet Age has been hailed as the end of geography,” a bold suggestion, but one that has gained traction over the last 15 years as the excitement for the possibilities offered by information communication technologies has often overshadowed structural barriers to participation like the Digital Divide (207). 
Castells goes on to amend the “end of geography” thesis by showing how global information flows and regional Internet access rates, while creating a new “map” of the world in many ways, are still closely tied to power structures in the analog world. The Internet Age “redefines distance but does not cancel geography” (207). The work of WikiProject: Countering Systemic Bias emphasises the importance of place and representation in the information environment that continues to be constructed in the online world. This study looked at only a small portion of this coalition’s efforts (~16,000 edits)—a snapshot of their labour frozen in time—which itself is only a minute portion of the information being dispatched through Wikipedia on a daily basis (~125,000 edits). Further analysis of WP:CSB’s work over time, as well as qualitative research into the identities, interests and motivations of this collective, is needed to understand more fully how information bias is understood and challenged in the Internet galaxy. The data here indicates this is a fight worth fighting for at least a growing few.
References
Alexa. “Top Sites.” Alexa.com, n.d. 10 Mar. 2010 ‹http://www.alexa.com/topsites›. Ayers, Phoebe, Charles Matthews, and Ben Yates. How Wikipedia Works: And How You Can Be a Part of It. San Francisco, CA: No Starch, 2008. Bruns, Axel. Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. New York: Peter Lang, 2008. Butler, Brian, Elisabeth Joyce, and Jacqueline Pike. Don’t Look Now, But We’ve Created a Bureaucracy: The Nature and Roles of Policies and Rules in Wikipedia. Paper presented at 2008 CHI Annual Conference, Florence. Castells, Manuel. The Internet Galaxy: Reflections on the Internet, Business, and Society. Oxford: Oxford UP, 2001. Cohen, Noam. “Wikipedia.” New York Times, n.d. 12 Mar. 2010 ‹http://www.nytimes.com/info/wikipedia/›. Doran, James. “Wikipedia Chief Promises Change after ‘Expert’ Exposed as Fraud.” The Times, 6 Mar. 
2007 ‹http://technology.timesonline.co.uk/tol/news/tech_and_web/article1480012.ece›. Edwards, Lin. “Report Claims Wikipedia Losing Editors in Droves.” Physorg.com, 30 Nov. 2009. 12 Feb. 2010 ‹http://www.physorg.com/news178787309.html›. Elsworth, Catherine. “Fake Wikipedia Prof Altered 20,000 Entries.” London Telegraph, 6 Mar. 2007 ‹http://www.telegraph.co.uk/news/1544737/Fake-Wikipedia-prof-altered-20000-entries.html›. Forte, Andrea, Vanessa Larco, and Amy Bruckman. “Decentralization in Wikipedia Governance.” Journal of Management Information Systems 26 (2009): 49-72. Giles, Jim. “Internet Encyclopedias Go Head to Head.” Nature 438 (2005): 900-901. Hansen, Sean, Nicholas Berente, and Kalle Lyytinen. “Wikipedia, Critical Social Theory, and the Possibility of Rational Discourse.” The Information Society 25 (2009): 38-59. Hertel, Guido, Sven Niedner, and Stefanie Herrmann. “Motivation of Software Developers in Open Source Projects: An Internet-Based Survey of Contributors to the Linux Kernel.” Research Policy 32 (2003): 1159-1177. Johnson, Bobbie. “Rightwing Website Challenges ‘Liberal Bias’ of Wikipedia.” The Guardian, 1 Mar. 2007. 8 Mar. 2010 ‹http://www.guardian.co.uk/technology/2007/mar/01/wikipedia.news›. Kane, Gerald C., Ann Majchrzak, Jeremiah Johnson, and Lily Chenisern. A Longitudinal Model of Perspective Making and Perspective Taking within Fluid Online Collectives. Paper presented at the 2009 International Conference on Information Systems, Phoenix, AZ, 2009. Kittur, Aniket, Ed H. Chi, and Bongwon Suh. What’s in Wikipedia? Mapping Topics and Conflict Using Socially Annotated Category Structure. Paper presented at the 2009 CHI Annual Conference, Boston, MA. ———, and Robert E. Kraut. Harnessing the Wisdom of Crowds in Wikipedia: Quality through Collaboration. Paper presented at the 2008 Association for Computing Machinery’s Computer Supported Cooperative Work Annual Conference, San Diego, CA. Konieczny, Piotr. 
“Governance, Organization, and Democracy on the Internet: The Iron Law and the Evolution of Wikipedia.” Sociological Forum 24 (2009): 162-191. ———. “Wikipedia: Community or Social Movement?” Interface: A Journal for and about Social Movements 1 (2009): 212-232. Langlois, Ganaele, and Greg Elmer. “Wikipedia Leeches? The Promotion of Traffic through a Collaborative Web Format.” New Media & Society 11 (2009): 773-794. Lih, Andrew. The Wikipedia Revolution. New York, NY: Hyperion, 2009. McHenry, Robert. “The Real Bias in Wikipedia: A Response to David Shariatmadari.” OpenDemocracy.com 2006. 8 Mar. 2010 ‹http://www.opendemocracy.net/media-edemocracy/wikipedia_bias_3621.jsp›. Middleton, Chris. “The World of Wikinomics.” Computer Weekly, 20 Jan. 2009: 22-26. Oreg, Shaul, and Oded Nov. “Exploring Motivations for Contributing to Open Source Initiatives: The Roles of Contribution, Context and Personal Values.” Computers in Human Behavior 24 (2008): 2055-2073. Osterloh, Margit, and Sandra Rota. “Trust and Community in Open Source Software Production.” Analyse & Kritik 26 (2004): 279-301. Royal, Cindy, and Deepina Kapila. “What’s on Wikipedia, and What’s Not…?: Assessing Completeness of Information.” Social Science Computer Review 27 (2008): 138-148. Santana, Adele, and Donna J. Wood. “Transparency and Social Responsibility Issues for Wikipedia.” Ethics of Information Technology 11 (2009): 133-144. Schroer, Joachim, and Guido Hertel. “Voluntary Engagement in an Open Web-Based Encyclopedia: Wikipedians and Why They Do It.” Media Psychology 12 (2009): 96-120. Scott, John. Social Network Analysis. London: Sage, 1991. Vega-Redondo, Fernando. Complex Social Networks. Cambridge: Cambridge UP, 2007. Viegas, Fernanda B., Martin Wattenberg, and Matthew M. McKeon. “The Hidden Order of Wikipedia.” Online Communities and Social Computing (2007): 445-454. Watts, Duncan. Six Degrees: The Science of a Connected Age. New York, NY: W. W. Norton & Company, 2003. Wikipedia. “About.” n.d. 8 Mar. 
2010 ‹http://en.wikipedia.org/wiki/Wikipedia:About›. ———. “Welcome to Wikipedia.” n.d. 8 Mar. 2010 ‹http://en.wikipedia.org/wiki/Main_Page›. ———. “Wikiproject:Countering Systemic Bias.” n.d. 12 Feb. 2010 ‹http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Countering_systemic_bias#Members›. Zittrain, Jonathan. The Future of the Internet and How to Stop It. New Haven, CT: Yale UP, 2008.
APA, Harvard, Vancouver, ISO, and other styles