To view other types of publications on this topic, follow the link: Computer science. Systems software. Programming languages (Electronic computers).

Journal articles on the topic "Computer science. Systems software. Programming languages (Electronic computers)"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 19 journal articles for your research on the topic "Computer science. Systems software. Programming languages (Electronic computers)."

Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever such details are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Mauw, S., and G. J. Veltink. "A Process Specification Formalism." Fundamenta Informaticae 13, no. 2 (April 1, 1990): 85–139. http://dx.doi.org/10.3233/fi-1990-13202.

Abstract:
Traditional methods for programming sequential machines are inadequate for specifying parallel systems. Because debugging of parallel programs is hard, due to e.g. non-deterministic execution, verification of program correctness becomes an even more important issue. The Algebra of Communicating Processes (ACP) is a formal theory which emphasizes verification and can be applied to a large domain of problems ranging from electronic circuits to CAM architectures. The manual verification of specifications of small size has already been achieved, but this cannot easily be extended to the verification of larger industrially relevant systems. To deal with this problem we need computer tools to help with the specification, simulation, verification and implementation. The first requirement for building such a set of tools is a specification language. In this paper we introduce PSFd (Process Specification Formalism – draft) which can be used to formally express processes in ACP. In order to meet the modern requirements of software engineering, like reusability of software, PSFd supports the modular construction of specifications and parameterization of modules. To be able to deal with the notion of data, ASF (Algebraic Specification Formalism) is embedded in our formalism. As semantics for PSFd a combination of initial algebra semantics and operational semantics for concurrent processes is used. A comparison with programming languages and other formal description techniques for the specification of concurrent systems is included.
2

INCLEZAN, DANIELA, and MICHAEL GELFOND. "Modular action language." Theory and Practice of Logic Programming 16, no. 2 (July 6, 2015): 189–235. http://dx.doi.org/10.1017/s1471068415000095.

Abstract:
The paper introduces a new modular action language, ALM, and illustrates the methodology of its use. It is based on the approach of Gelfond and Lifschitz (1993, Journal of Logic Programming 17, 2–4, 301–321; 1998, Electronic Transactions on AI 3, 16, 193–210) in which a high-level action language is used as a front end for a logic programming system description. The resulting logic programming representation is used to perform various computational tasks. The methodology based on existing action languages works well for small and even medium size systems, but is not meant to deal with larger systems that require structuring of knowledge. ALM is meant to remedy this problem. Structuring of knowledge in ALM is supported by the concepts of module (a formal description of a specific piece of knowledge packaged as a unit), module hierarchy, and library, and by the division of a system description of ALM into two parts: theory and structure. A theory consists of one or more modules with a common theme, possibly organized into a module hierarchy based on a dependency relation. It contains declarations of sorts, attributes, and properties of the domain together with axioms describing them. Structures are used to describe the domain's objects. These features, together with the means for defining classes of a domain as special cases of previously defined ones, facilitate the stepwise development, testing, and readability of a knowledge base, as well as the creation of knowledge representation libraries.
3

Kargar, Masoud, Ayaz Isazadeh, and Habib Izadkhah. "Multi-programming language software systems modularization." Computers & Electrical Engineering 80 (December 2019): 106500. http://dx.doi.org/10.1016/j.compeleceng.2019.106500.

4

MORIARTY, K. J. M., and T. TRAPPENBERG. "PROGRAMMING TOOLS FOR PARALLEL COMPUTERS." International Journal of Modern Physics C 04, no. 06 (December 1993): 1285–94. http://dx.doi.org/10.1142/s0129183193001002.

Abstract:
Although software tools already have a place on serial and vector computers they are becoming increasingly important for parallel computing. Message passing libraries, parallel operating systems and high level parallel languages are the basic software tools necessary to implement a parallel processing program. These tools up to now have been specific to each parallel computer system and a short survey will be given. The aim of another class of software tools for parallel computers is to help in writing or rewriting application programs. Because automatic parallelization tools are not very successful, an interactive component has to be incorporated. We will concentrate here on the discussion of SPEFY, a parallel program development facility.
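As a minimal illustration of the message-passing style these tools support, the sketch below passes work between two processes over a pipe using Python's standard multiprocessing module; it is an illustrative assumption added here, not code from the paper.

```python
# Hedged sketch: minimal message passing between two processes.
from multiprocessing import Process, Pipe

def worker(conn):
    """Receive a chunk of work, compute a partial result, send it back."""
    data = conn.recv()                    # blocking receive
    conn.send(sum(x * x for x in data))   # reply with the partial result
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send([1, 2, 3, 4])        # distribute work to the worker
    print("partial sum of squares:", parent_conn.recv())
    p.join()
```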
5

Shiau, Liejune. "Exploring Quasi-Concurrency in Introductory Computer Science." Journal of Educational Computing Research 15, no. 1 (July 1996): 53–66. http://dx.doi.org/10.2190/7ldf-va2r-vk66-qq8d.

Abstract:
Most programming courses taught today focus on batch-oriented problems, primarily because parallel computers are not commonly available, so problems of a concurrent nature cannot easily be explored. As a consequence, students are underprepared for the challenges of modern multi-process computing technologies. This article demonstrates an easy way to implement concurrent programming projects in computer labs. The solution requires neither special hardware support nor special programming languages. The goal is to provide a means of introducing the concept and usefulness of multi-process software systems early in the computer science curriculum. We also include detailed descriptions of a few creative and interesting concurrent examples to illustrate this idea.
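A concurrency exercise of the kind proposed above can indeed be run on ordinary lab machines with standard language features; the sketch below, an illustrative assumption rather than one of the article's examples, uses Python threads and a queue for a simple producer/consumer pair.

```python
# Hedged sketch: a producer/consumer exercise needing no special hardware
# or language, only the standard threading and queue modules.
import threading
import queue

def producer(q, n):
    for i in range(n):
        q.put(i)        # hand items to the consumer
    q.put(None)         # sentinel: no more work

def consumer(q, results):
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item * item)

if __name__ == "__main__":
    q, results = queue.Queue(), []
    threads = [threading.Thread(target=producer, args=(q, 10)),
               threading.Thread(target=consumer, args=(q, results))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results)      # squares computed concurrently with production
```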
6

Mlakić, Dragan, Hamid Baghaee, Srete Nikolovski, Marko Vukobratović, and Zoran Balkić. "Conceptual Design of IoT-Based AMR Systems Based on IEC 61850 Microgrid Communication Configuration Using Open-Source Hardware/Software IED." Energies 12, no. 22 (November 10, 2019): 4281. http://dx.doi.org/10.3390/en12224281.

Abstract:
This paper presents an intelligent electronic device (IED) for an automatic meter reading (AMR) scheme using open-source software. The IED is used to measure a low-voltage system with a boundless number of sensors, and it is accessible on the Internet of Things (IoT). The hardware for this task is an Arduino UNO R3 motherboard with peripheral sensors, which are used to measure the referenced data. The Arduino motherboard is used not only as the measurement equipment but also as a wireless fidelity (Wi-Fi) switch for the sensors. A personal computer is used to gather data and perform client-side calculations. The server runs an open-source program written in the Java programming language. The underlying objective of the proposed scheme is to build the meter following a "Do It Yourself" methodology, which requires considerably less funding, while preserving an easy-to-understand interface, data validity, precision of the measured data, and convenience for the end user. The data are measured in about 1 ms, which is excellent for a custom-designed IED. Furthermore, the measured quantities are converted to their RMS values for analysis and further presentation of the data.
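The RMS post-processing mentioned at the end of the abstract reduces to the formula RMS = sqrt(mean(x_i^2)); the short Python sketch below is an illustrative assumption, not the project's Java or Arduino code.

```python
# Hedged sketch: root-mean-square of a block of sampled sensor values.
import math

def rms(samples):
    """RMS = sqrt(mean(x^2)) for a non-empty sequence of samples."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

if __name__ == "__main__":
    # One cycle of a ~325 V amplitude sinusoid sampled 20 times:
    wave = [325 * math.sin(2 * math.pi * k / 20) for k in range(20)]
    print(round(rms(wave), 1))   # about 229.8, i.e. the nominal 230 V RMS
```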
7

Базурін, Віталій Миколайович. "Середовища програмування як засіб навчання учнів основ програмування". Інформаційні технології і засоби навчання 59, № 3 (30 червня 2017): 13–27. http://dx.doi.org/10.33407/itlt.v59i3.1601.

Abstract:
The article reveals the conditions for choosing a programming environment as a means of teaching general-education school students programming in modern languages. The main conditions that influence the choice of a programming environment are identified: the technical characteristics of the computers and the system requirements of the programming environment; the availability of operating systems and additional software required for the programming environment to function; the functionality of the programming environment; its interface; the availability of documentation for the software environment; the availability of educational and methodological support; and the level of competence of the computer science teacher. The characteristics of the most common programming environments for C/C++, C#, and Java are analyzed. The choice of a programming environment for studying each of the specified programming languages is substantiated both for training beginning programmers and for students who already have programming skills.
8

Zhang, Weini. "Research on Recognition Method of Basketball Goals Based on Image Analysis of Computer Vision." Journal of Sensors 2021 (September 20, 2021): 1–11. http://dx.doi.org/10.1155/2021/5269431.

Abstract:
Moving target detection is involved in many engineering applications, but basketball poses particular difficulties because of its time-varying speed and uncertain path. The purpose of this paper is to use computer vision image analysis to identify the path and speed of a basketball goal, so as to meet the needs of recognition and achieve trajectory prediction. This research mainly discusses a basketball goal recognition method based on computer vision. In the research process, a Kalman filter is used to improve the KCF tracking algorithm and track the basketball path. The algorithm in this research is based on MATLAB, so it avoids mixed programming between MATLAB and other languages and reduces the difficulty of designing the interface software. For data acquisition, extended EPROM is used to store user programs, and parallel interface chips (such as the 8255A) can be configured in the system to output switch control signals and handle display and print operations. An automatic basketball bowling counter based on an 8031 microprocessor is used as the host computer. After level conversion by a MAX232, it is connected to the RS232C serial port of a PC, and the collected data are sent to the workstation recording the results. For convenient user operation, a MATLAB GUI is used to facilitate the exchange of information between users and computers so that users can see the competition results intuitively. The processing frame rate of the tested video can reach 60 frames/second, more than the 25 frames/second required, which meets the real-time requirements of the system. The results show that the basketball goal recognition method used in this study has strong anti-interference ability and stable performance.
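The Kalman-filter correction used alongside the KCF tracker is, at its core, a predict/update cycle over a constant-velocity motion model; the NumPy sketch below is an illustrative assumption (the state layout, noise levels and measurements are invented), not the paper's MATLAB code.

```python
# Hedged sketch: constant-velocity Kalman filter for a 2-D ball position.
import numpy as np

dt = 1.0                               # one video frame
F = np.array([[1, 0, dt, 0],           # state transition over [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],            # only the position is observed
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2                   # process noise (assumed)
R = np.eye(2) * 1.0                    # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle; z is the measured [px, py]."""
    x = F @ x                          # predict state
    P = F @ P @ F.T + Q                # predict covariance
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

if __name__ == "__main__":
    x, P = np.zeros(4), np.eye(4)
    for frame, z in enumerate([(10, 5), (12, 7), (14, 9)]):   # detector output
        x, P = kalman_step(x, P, np.array(z, dtype=float))
        print(f"frame {frame}: smoothed position {x[:2].round(2)}")
```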
9

A.N., Khimich, Chistyakova T.V., Sydoruk V.A., and Yershov P.S. "Intellectual computer mathematics system inparsolver." Artificial Intelligence 25, no. 4 (December 25, 2020): 60–71. http://dx.doi.org/10.15407/jai2020.04.060.

Abstract:
The paper considers the intelligent computer mathematics system InparSolver, which is designed to automatically explore and solve the basic classes of computational mathematics problems on multi-core computers with graphics accelerators. The problems of the reliability of results when solving problems with approximate input data are outlined. The features of existing computer mathematics systems are analyzed and their weaknesses identified. The functionality of InparSolver and some innovative approaches to implementing effective solutions on a hybrid architecture are described. Examples of the applied use of InparSolver for the mathematical modeling of processes in various subject areas are given. Nowadays, new and more complex objects and phenomena are constantly emerging in many subject areas (nuclear energy, mechanics, chemistry, molecular biology, medicine, etc.) that are subject to mathematical research on a computer. This encourages the development of new numerical methods and technologies of mathematical modeling, as well as the creation of more powerful computers for their implementation. With the advent and constant development of supercomputers of various architectures, the problems of their effective use and of expanding the range of tasks must be solved, while ensuring the reliability of computed results and increasing the level of intelligent information support for users, i.e. specialists in various fields. Today, special attention is given to these problems by many specialists in the fields of information technology and parallel programming. The world's leading scientists in the field of computer technology see the solution to the problem of using modern supercomputers efficiently in creating algorithmic software that easily adapts to different computer architectures with different types of memory and coprocessors, supports efficient parallelism on millions of cores, etc. In addition, the efficiency of high-performance computing on modern supercomputers is improved by their intellectualization, transferring a significant part of the functions to the computer (symbolic languages for stating problems on the computer, research into the properties of mathematical models, visualization and analysis of task results, etc.). The development and use of intelligent computer technologies is one of the main directions of science and technology development in modern society.
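One concrete face of the "reliability of results with approximate input data" problem mentioned above is how strongly input errors are amplified; a standard, purely illustrative way to gauge this for a linear system is the condition number, sketched below in NumPy (not InparSolver code).

```python
# Hedged sketch: judging the reliability of the solution of Ax = b
# via the condition number of A.
import numpy as np

def solve_with_reliability(A, b, input_rel_error=1e-6):
    """Solve Ax = b and return a rough first-order bound on the
    relative error of x caused by errors in the input data."""
    x = np.linalg.solve(A, b)
    kappa = np.linalg.cond(A)                 # error amplification factor
    return x, kappa, kappa * input_rel_error

if __name__ == "__main__":
    A = np.array([[1.0, 1.0],
                  [1.0, 1.0001]])             # nearly singular, ill-conditioned
    b = np.array([2.0, 2.0001])
    x, kappa, err = solve_with_reliability(A, b)
    print("solution:", x)
    print("condition number: %.1e, estimated relative error: %.1e" % (kappa, err))
```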
10

Ulker, Birol, and Bülent Sezen. "A fuzzy based self-check capable computerized MCDM aid tool." Kybernetes 43, no. 5 (April 29, 2014): 797–816. http://dx.doi.org/10.1108/k-03-2013-0046.

Abstract:
Purpose – The purpose of this paper is to develop and implement a fuzzy multi-criteria decision-making (MCDM) algorithm with self-check capability that can solve any manufacturing company's printed circuit board (PCB) design computer-aided design (CAD) tool selection problem. Design/methodology/approach – An algorithm is developed that consists of two sub-algorithms using the same inputs and alternative pool, which is how the self-check capability is introduced. The first sub-algorithm is designed as an integration of fuzzy AHP and TOPSIS, while the second combines the fuzzy analytic network process with TOPSIS. Fuzzy set theory and linguistic variables are used to handle uncertainty and verbal expressions, respectively. The MATLAB programming language was used for the implementation. Explanations of the MCDM methods and of fuzzy set theory are given along with the literature review prior to a real-life application of the developed algorithm. Findings – An MCDM algorithm with self-check capability is introduced. Moreover, a practical decision aid tool is generated for the use of manufacturing companies involved in PCB design. Practical implications – A practical computerized MCDM aid tool is generated. Using the tool lets manufacturers, e.g. high-tech device manufacturers, evaluate available PCB CAD design tools with respect to tangible and intangible criteria and obtain a reliable result. Originality/value – Self-check capability is incorporated into the decision process. With this capability, although the decision-making process takes place in a fuzzy environment, the result of the algorithm becomes more reliable than that of algorithms without this characteristic. Furthermore, a practical computerized MCDM aid tool is generated.
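TOPSIS, the common second stage of both sub-algorithms, ranks alternatives by their closeness to an ideal solution; the NumPy sketch below shows the crisp version of that ranking with invented weights and scores, as an illustrative assumption rather than the authors' MATLAB implementation.

```python
# Hedged sketch: crisp TOPSIS ranking of alternatives (rows) over
# benefit criteria (columns), with illustrative weights and scores.
import numpy as np

def topsis(decision_matrix, weights):
    """Return closeness-to-ideal scores in [0, 1]; higher is better."""
    M = np.asarray(decision_matrix, dtype=float)
    V = (M / np.linalg.norm(M, axis=0)) * weights   # weighted, normalised
    ideal, anti = V.max(axis=0), V.min(axis=0)      # all criteria as benefits
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)

if __name__ == "__main__":
    tools = ["PCB CAD tool A", "PCB CAD tool B", "PCB CAD tool C"]
    scores = topsis([[7, 9, 6],          # rows: alternatives, cols: criteria
                     [8, 7, 8],
                     [6, 8, 9]],
                    weights=[0.5, 0.3, 0.2])
    for name, s in sorted(zip(tools, scores), key=lambda t: -t[1]):
        print(f"{name}: {s:.3f}")
```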
11

Paul, Sebastian, and Melanie Niethammer. "On the importance of cryptographic agility for industrial automation." at - Automatisierungstechnik 67, no. 5 (May 27, 2019): 402–16. http://dx.doi.org/10.1515/auto-2019-0019.

Abstract:
Cryptographic primitives do not remain secure, they deteriorate over time. On the one hand increasing computing power leads to more powerful attacks on their underlying mathematical problems. On the other hand quantum computing threatens to break many widely used cryptographic primitives. The main goal of cryptographic agility is to enable an easy transition to alternative cryptographic schemes. Considering the long lifetime of products within industrial automation, we argue that vendors should strive for cryptographic agility in their products. In this work we motivate cryptographic agility by discussing the threat of quantum computers to modern cryptography. Additionally, we introduce the reader to the concept of post-quantum cryptography. Ultimately, we demonstrate that cryptographic agility requires three elements: 1) cryptographic application programming interfaces, 2) secure update mechanisms and 3) documentation of cryptographic primitives. By providing practical concepts we show how to meet these requirements in software-based systems.
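Requirement (1), a cryptographic API that decouples calling code from the concrete primitive, can be sketched in a few lines; the registry, the names and the choice of hashing as the example are assumptions for illustration, not an API from the paper.

```python
# Hedged sketch: an algorithm-agile hashing API. Application code names an
# abstract purpose; the concrete primitive is chosen (and documented) in one
# place, so it can be replaced when it is no longer considered secure.
import hashlib

ALGORITHM_REGISTRY = {
    "integrity": "sha256",   # a future update could switch this to "sha3_256"
}                            # without touching any calling code

def digest(purpose: str, data: bytes) -> bytes:
    algorithm = ALGORITHM_REGISTRY[purpose]
    return hashlib.new(algorithm, data).digest()

if __name__ == "__main__":
    print(digest("integrity", b"firmware image").hex())
```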
12

Okhdar, Mahboub, and Ali Ghaffari. "English vocabulary learning through recommender system based on sentence complexity and vocabulary difficulty." Kybernetes 47, no. 1 (January 8, 2018): 44–57. http://dx.doi.org/10.1108/k-06-2017-0198.

Abstract:
Purpose – Based on consideration of learner needs for expanding vocabulary and the complexity of educational content, this paper introduces a model aimed at facilitating English vocabulary learning. Design/methodology/approach – By measuring a set of effective variables regarding the simplicity of English sentences, a ranking algorithm is presented in the proposed model. According to this ranking, the simplest sentence in the recommender system (RS) is selected and recommended to the user. Furthermore, the Pearson correlation coefficient was used for checking the degree of correlation among the respective parameters of sentence simplicity. For evaluating the efficiency of the recommended algorithm, a prototype was designed by programming in Embarcadero Delphi XE2. Findings – The results of the study indicated that the correlations among the parameters of word frequency, sentence length and average dependency distance were 0.723, 0.683 and 0.589, respectively. The computed final score is considered to be more accurate. Practical implications – The application of RS in language learning and education sheds light on the theoretical validity of system thinking by highlighting its key features: its multidisciplinary nature, complexity, dynamicity and the interdependence and relation of micro and macro levels in a system. Social implications – The proposed method has significant pedagogical implications; it can be used by second language teachers and learners for checking the degree of complexity/learnability of discourse and text. Originality/value – This paper proposes an alternative model with a significantly higher speed for computing the final sentence score.
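The ranking described above combines features such as word frequency, sentence length and average dependency distance; the Python sketch below is an illustrative assumption (the weighting, the feature values and the Pearson helper are invented, and the original prototype was written in Delphi, not Python).

```python
# Hedged sketch: score candidate sentences by simplicity features and
# recommend the simplest; a Pearson helper checks feature correlation.
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def simplicity_score(length, avg_word_freq, avg_dep_distance):
    # Illustrative weighting: frequent words, short sentences and short
    # dependency distances all count towards simplicity.
    return avg_word_freq - 0.1 * length - 0.2 * avg_dep_distance

if __name__ == "__main__":
    # sentence -> (length, average word frequency, average dependency distance)
    sentences = {
        "The cat sat on the mat.": (6, 5.1, 1.3),
        "Notwithstanding prior obligations, he acquiesced.": (5, 1.8, 2.4),
    }
    best = max(sentences, key=lambda s: simplicity_score(*sentences[s]))
    print("recommend:", best)
    lengths = [f[0] for f in sentences.values()]
    freqs = [f[1] for f in sentences.values()]
    print("corr(length, word frequency):", round(pearson(lengths, freqs), 3))
```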
13

Kozlenko, Mykola, Olena Zamikhovska, and Leonid Zamikhovskyi. "Software implemented fault diagnosis of natural gas pumping unit based on feedforward neural network." Eastern-European Journal of Enterprise Technologies 2, no. 2 (110) (April 30, 2021): 99–109. http://dx.doi.org/10.15587/1729-4061.2021.229859.

Abstract:
In recent years, more and more attention has been paid to the use of artificial neural networks (ANN) for the diagnostics of gas pumping units (GPU). Usually, ANN training is carried out on GPU workflow models, and generated sets of diagnostic data are used to simulate defect conditions. At the same time, the results obtained do not allow assessing the real state of the GPU. It is proposed to use the characteristics of the acoustic and vibration processes of the GPU as the input data of the ANN. A descriptive statistical analysis of real vibration and acoustic processes generated by the operation of a GPU of type GTK-25-i (Nuovo Pignone, Italy) was carried out. Batches of diagnostic features arriving at the input of the ANN were formed. The diagnostic features are the five maximum amplitude components of the acoustic and vibration signals, as well as the value of the standard deviation for each sample. Diagnostic features are calculated directly in the ANN input data pipeline in real time for three technical states of the GPU. Using the frameworks TensorFlow, Keras, NumPy and pandas, in the Python 3 programming language, an architecture was developed for a deep fully connected feedforward ANN trained with the backpropagation algorithm. The results of training and testing the developed ANN are presented. During testing, it was found that the signal classification precision for the "nominal" state of all 1,475 signal samples is 1.0000; for the "current" state, precision equals 0.9853; and for the "defective" state, precision is 0.9091. The use of the developed ANN makes it possible to classify the technical states of the GPU with an accuracy sufficient for practical use, which will help prevent GPU failures. The ANN can be used to diagnose GPUs of any type and power.
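A deep fully connected feedforward classifier of the kind described can be expressed in a few lines of Keras; the sketch below is an illustrative assumption (the number of input features, the layer sizes and the random stand-in data are guesses, not the authors' exact architecture or signal pipeline).

```python
# Hedged sketch: fully connected feedforward classifier for three
# technical states ("nominal", "current", "defective") in Keras.
import numpy as np
from tensorflow import keras

NUM_FEATURES = 12   # e.g. peak amplitudes plus standard deviations (assumed)
NUM_CLASSES = 3

model = keras.Sequential([
    keras.layers.Input(shape=(NUM_FEATURES,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

if __name__ == "__main__":
    # Random stand-in data; real features come from the acoustic/vibration pipeline.
    X = np.random.rand(256, NUM_FEATURES).astype("float32")
    y = np.random.randint(0, NUM_CLASSES, size=256)
    model.fit(X, y, epochs=3, batch_size=32, verbose=0)   # backpropagation training
    print(model.predict(X[:1], verbose=0).round(3))       # class probabilities
```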
14

Mula, G., C. Angius, F. Casula, G. Maxia, M. Porcu, and Jinlong Yang. "The SEISM* Project: A Software Engineering Initiative for the Study of Materials." MRS Proceedings 408 (1995). http://dx.doi.org/10.1557/proc-408-3.

Abstract:
Structured programming is no longer enough for dealing with the large software projects allowed by today's computer hardware. An object-oriented computational model has been developed in order to achieve reuse, rapid prototyping and easy maintenance in large scale materials science calculations. The exclusive use of an object-oriented language is not mandatory for implementing the model. On the contrary, embedding Fortran code in an object-oriented language can be a very efficient way of fulfilling these goals without sacrificing the huge installed base of Fortran programs. Reuse can begin from one's old Fortran programs. These claims are substantiated with practical examples from a professional code for the study of the electronic properties of atomic clusters. Out of the about 20,000 lines of the original Fortran program, more than 70% of them could be reused in the C++ objects of the new version. Facilities for dealing with periodic systems and for scaling linearly with the number of atoms have been added without any change in the computational model.
15

Kangas, Sonja. "From Haptic Interfaces to Man-Machine Symbiosis." M/C Journal 2, no. 6 (September 1, 1999). http://dx.doi.org/10.5204/mcj.1787.

Abstract:
Until the 1980s research into computer technology was developing outside of a context of media culture. Until the 1970s the computer was seen as a highly effective calculator and a tool for the use in government, military and economic life. Its popular image from the 1940s to 1950s was that of a calculator. At that time the computer was a large machine which only white lab-coated engineers could understand. The computer was studied as a technical instrument, not from the viewpoint of the user. The peculiar communication between the user -- engineers at this point -- and the machine was described in caricatures like those in Electric Media (Brown & Marks 100). Many comics handled the issue of understanding. In one cartoon one engineer asks another: "Do you ever feel that it is trying to tell us something?" And in Robert Sherman Townes's novel "Problem of Emmy", the computer (Emmy) acts out of control and prints the words: "WHO AM I WHO AM I WHO AM I?". In these examples the man-machine relationship was taken under consideration, but the attitude towards the relationship was that of a master-tool way. The user was pronouncedly in control and the machine just a passive tool. After the 1980s the image of the computer was turning into that of a playful toy and a game machine, thanks to the game houses' and marketing departments' efforts. Suddenly the player was playing with the computer, and even fairly often got beaten by it. That definitely raises feelings towards the machine! The playing situation was so intensive that the player did not often pay any attention to the interface, and the roles were not so clear anymore. This was a step towards the idea of natural communication between human and machine. Later science fiction influenced depictions of virtual reality, and haptic interfaces mediated the ideas into reality. In this paper I will discuss the man-machine relationship from the viewpoint of interface design. My expertise is in electronic games, and thus I will use examples from the game industry. This paper is a sidetrack of RAID -- Research of Adaptive User Interface Design, which was going on at the University of Lapland, Finland in 1995-1999. The RAID project was about research into adaptive interface design from the viewpoint of media archaeology, electronic games, toys and media art. Early Visions Already in the 1960s, MIT professor J.C.R. Licklider wrote about man-machine symbiosis. He saw that "man machine symbiosis is an expected development in cooperative interaction between men and electronic computers". He believed that it would lead to a new kind of cooperative partnership between man and machine (9). Licklider's visions are important, because the relationship between man and machine was seen generally differently at those days. At the time of the first mainframe computers in the 1940s, man and machine were seen as separate entities from the viewpoint of data processing. The operator put in data to the machine, which processed it by its own language which only the machine and very few engineers could understand. Fear -- a fearful affection -- has affected the development of machines and the idea of man-machine relationships throughout the decades. One reason for this is that the ordinary person had no contact to the computer. That has led to fears that when cooperating with the machine, the user will become enslaved by it, or sucked into it, as in Charlie Chaplin's film Modern Times (1936). 
The machine captivates its user's body, punishes it and makes its movement impossible at the end. Or the machine will keep the body's freedom, but adapt its functions to work by the automatic rhythm: the human body will be subordinated to the machine or made a part of it. What Is the Interface? In reality there still is a mediator between the user and the machine: the interface. It is a connector -- a boundary surface -- that enables the user to control the machine. There has been no doubt who is in charge of whom, but the public image of the machine is changing from "computer as a tool" to "computer as an entertainment medium". That is also changing the somewhat fearful relationship to the computer, because such applications place the player much more intensively immersed in the game world. The machine as a tool does not lose its meaning but its functionality and usability are being developed towards more entertainment-like attributes. The interface is an environment and a structural system that consists of the physical machine, a virtual programming environment, and the user. The system becomes perfect when all its parts will unite as a functional, interactive whole. Significant thresholds will arise through the hapticity of the interface, on one hand questioning the bodily relationship between user and machine and on the other hand creating new ways of being with the machine. New haptic (wearable computing) and spatial (sensors in a reactive space) interfaces raise the question of man-machine symbiosis from a new perspective. Interfaces in a Game World In games the man-machine relationship is seen with much less emotion than when using medical applications, for example. The strength of electronic games is in the goal-oriented interaction. The passivity of older machines has been replaced by the information platform where the player's actions have an immediate effect in the virtual world. The player is already surrounded by the computer: at home sitting by the computer holding a joystick and in the arcades sometimes sitting inside the computer or even being tied up with the computer (as in gyroscope VR applications). The symbiosis in game environments is essential and simple. During the 1980s and 1990s a lot of different virtual reality gear variants were developed in the "VR boom". Some systems were more or less masked arcade game machines that did not offer any real virtuality. Virtuality was seen as a new way of working with a machine, but most of the applications did not support the idea far enough. Neither did the developers pay attention to interface design nor to new ways of experiencing and feeling pleasure through the machine. At that time the most important thing was to build a plausible "virtual reality system". Under the futuristic cover of the machine there was usually a PC and a joystick or mouse. Usually a system could easily be labelled as a virtual theater, a dome or a cabin, which all refer to entertainment simulators. At the beginning of the 1990s, data glasses and gloves were the most widely used interfaces within the new interaction systems. Later the development turned from haptic interfaces towards more spatial ideas -- from wearable systems to interaction environments. Still there are only few innovative applications available. One good example is Vivid Group's old Mandala VR system which was later in the 1990s developed further to the Holopod system. It has been promoted as the interface of the future and new way of being with the computer. 
As in the film Modern Times so also with Holopod the player is in a way sucked inside the game world. But this time with the user's consent. Behind the Holopod is Vivid Group's Mandala VGC (Video Gesture Control) technology which they have been developing since 1986. The Mandala VGC system combines real time video images of the player with the game scene. The player in the real world is the protagonist in the game world. So the real world and the game world are united. That makes it possible to sense the real time movement as well as interaction between the platform and the player. Also other manufacturers like American Holoplex has developed similar systems. Their system is called ThunderCam. Like Konami's Dance Dance Revolution, it asks heavy physical involvement in the Street Fighter combat game. Man-Man and Man-Machine Cooperation One of the most important elements in electronic games has been reaction ability. Now the playing is turning closer to a new sport. Different force feedback systems combined with haptic interfaces will create much more diverse examples of action. For example, the Japanese Konami corporation has developed a haptic version of a popular Playstation dance game where karaoke and an electronic version of the Twister game are combined. Besides new man-machine cooperative applications, there are also under development some multi-user environments where the user interacts with the computer-generated world as well as with other players. The Land of Snow and Ice has been under development for about a year now in the University of Lapland, Finland. It is a tourism project that is supposed to be able to create a sensation of the arctic environment throughout the year. Temperature and atmosphere are created with the help of refrigerating equipment. In the space there are virtual theatre and enhanced ski-doo as interfaces. The 3-D software makes the sensation very intense, and a hydraulic platform extends the experience. The Land of Snow and Ice is interesting from the point of view of the man-machine relationship in the way that it brings a new idea to the interface design: the use of everyday objects as interfaces. The machine is "hidden" inside an everyday object and one is interacting and using the machine in a more natural way. For example, the Norwegian media artist Stahl Stenslie has developed "an 'intelligent' couch through which you communicate using your body through tactile and visual stimuli". Besides art works he has also talked about new everyday communication environments, where the table in a café could be a communication tool. One step towards Stenslie's idea has already become reality in Lasipalatsi café in Helsinki, Finland. The tables are good for their primary purpose, but you can also surf the Internet and read your e-mail with them, while drinking your tea. These kind of ideas have also been presented within 'intelligent home' speculations. Intelligent homes have gained acceptance and there are already several intelligent homes in the world. Naturally there will always be opposition, because the surface between man and machine is still a very delicate issue. In spite of this, I see such homogeneous countries as Finland, for example, to be a good testing ground for a further development of new man-machine interaction systems. 
Pleasure seems to be one of the key words of the future, and with the new technology, one can make everyday routines easier, pleasure more intense and the Internet a part of social communication: within the virtual as well as in real world communities. In brief, I have introduced two ideas: using games as a testing ground, and embedding haptic and spatial interfaces inside everyday objects. It is always difficult to predict the future and there are always at least technology, marketing forces, popular culture and users that will affect what the man-machine relationship of the future will be like. I see games and game interfaces as the new developing ground for a new kind of man-machine relationship. References Barfield, W., and T.A. Furness. Virtual Environments and Advanced Interface Design. New York: Oxford UP, 1995. Brown, Les, and Sema Marks. Electric Media. New York: Hargrove Brace Jovanovich, 1974. Burdea, G., and P. Coiffet. Virtual Reality Technology. New York: John Wiley and Sons, 1994. Greelish, David. "Hictorically Brewed Magazine. A Retrospective." Classic Computing. 1 Sep. 1999 <http://www.classiccomputing.com/mag.php>. Huhtamo, Erkki. "Odottavasta Operaattorista Kärsimättömäksi Käyttäjäksi. Interaktiivisuuden Arkeologiaa." Mediaevoluutiota. Eds. Kari Hintikka and Seppo Kuivakari. Rovaniemi: U of Lapland P, 1997. Jones, Steve, ed. Virtual Culture: Identity and Communication in Cybersociety. Thousand Oaks, Calif.: Sage, 1997. Kuivakari, Seppo, ed. Keholliset Käyttöliittymät. Helsinki: TEKES, 1999. 1 Sep. 1999 <http://media.urova.fi/~raid>. Licklider, J.C.R. "Man-Computer Symbiosis." 1960. 1 Sep. 1999 <http://memex.org/licklider.pdf>. Picard, Rosalind W. Affective Computing. Cambridge, Mass.: MIT P, 1997. "Return of the Luddites". Interview with Kirkpatrick Sale. Wired Magazine June 1995. Stenslie, Stahl. Artworks. 1 Sep. 1999 <http://sirene.nta.no/stahl/>. Citation reference for this article MLA style: Sonja Kangas. "From Haptic Interfaces to Man-Machine Symbiosis." M/C: A Journal of Media and Culture 2.6 (1999). [your date of access] <http://www.uq.edu.au/mc/9909/haptic.php>. Chicago style: Sonja Kangas, "From Haptic Interfaces to Man-Machine Symbiosis," M/C: A Journal of Media and Culture 2, no. 6 (1999), <http://www.uq.edu.au/mc/9909/haptic.php> ([your date of access]). APA style: Sonja Kangas. (1999) From haptic interfaces to man-machine symbiosis. M/C: A Journal of Media and Culture 2(6). <http://www.uq.edu.au/mc/9909/haptic.php> ([your date of access]).
16

Németh, Dávid J., Dániel Horpácsi, and Máté Tejfel. "Adaptation of a Refactoring DSL for the Object-Oriented Paradigm." Acta Cybernetica, March 18, 2021. http://dx.doi.org/10.14232/actacyb.284280.

Abstract:
Many development environments offer refactorings aimed at improving non-functional properties of software, but we have no guarantees that these transformations indeed preserve the observable behavior of the source code they are applied on. An existing domain-specific language makes it possible to formalize automatically verifiable refactorings via instantiating predefined transformation schemes with conditional term rewrite rules. We present a proposal for adapting this language from the functional to the object-oriented programming paradigm, using Java instead of Erlang as a representative. The behavior-preserving property of discussed refactorings is characterized with a multilayered definition of equivalence for Java programs, including the conformity relation of class hierarchies. Based on the decomposition of a complex refactoring rule, we show how new transformation schemes can be identified, along with modifications and extensions of the description language required to accommodate them. Finally, we formally define the chosen base refactoring as a composition of scheme instances.
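The core mechanism, instantiating a transformation scheme with a conditional rewrite rule, can be miniaturised as follows; the term encoding and the example rule are assumptions for illustration only, not the DSL described in the paper (which targets Erlang and Java, not Python).

```python
# Hedged sketch: apply a conditional rewrite rule bottom-up over a tiny
# expression tree encoded as nested tuples.

def rewrite(term, rule):
    """Rewrite sub-terms first, then try the rule on the result."""
    if isinstance(term, tuple):
        term = tuple(rewrite(t, rule) for t in term)
    new = rule(term)
    return term if new is None else new

def drop_zero_add(term):
    """('+', e, 0) -> e, on the condition that the right operand is literally 0."""
    if isinstance(term, tuple) and len(term) == 3 and term[0] == "+" and term[2] == 0:
        return term[1]
    return None   # rule does not apply here

if __name__ == "__main__":
    expr = ("*", ("+", "x", 0), ("+", ("+", "y", 0), 2))
    print(rewrite(expr, drop_zero_add))   # ('*', 'x', ('+', 'y', 2))
```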
17

Deck, Andy. "Treadmill Culture." M/C Journal 6, no. 2 (April 1, 2003). http://dx.doi.org/10.5204/mcj.2157.

Abstract:
Since the first days of the World Wide Web, artists like myself have been exploring the new possibilities of network interactivity. Some good tools and languages have been developed and made available free for the public to use. This has empowered individuals to participate in the media in ways that are quite remarkable. Nonetheless, the future of independent media is clouded by legal, regulatory, and organisational challenges that need to be addressed. It is not clear to what extent independent content producers will be able to build upon the successes of the 90s – it is yet to be seen whether their efforts will be largely nullified by the anticyclones of a hostile media market. Not so long ago, American news magazines were covering the Browser War. Several real wars later, the terms of surrender are becoming clearer. Now both of the major Internet browsers are owned by huge media corporations, and most of the states (and Reagan-appointed judges) that were demanding the break-up of Microsoft have given up. A curious about-face occurred in U.S. Justice Department policy when John Ashcroft decided to drop the federal case. Maybe Microsoft's value as a partner in covert activity appealed to Ashcroft more than free competition. Regardless, Microsoft is now turning its wrath on new competitors, people who are doing something very, very bad: sharing the products of their own labour. This practice of sharing source code and building free software infrastructure is epitomised by the continuing development of Linux. Everything in the Linux kernel is free, publicly accessible information. As a rule, the people building this "open source" operating system software believe that maintaining transparency is important. But U.S. courts are not doing much to help. In a case brought by the Motion Picture Association of America against Eric Corley, a federal district court blocked the distribution of source code that enables these systems to play DVDs. In addition to censoring Corley's journal, the court ruled that any programmer who writes a program that plays a DVD must comply with a host of license restrictions. In short, an established and popular media format (the DVD) cannot be used under open source operating systems without sacrificing the principle that software source code should remain in the public domain. Should the contents of operating systems be tightly guarded secrets, or subject to public review? If there are capable programmers willing to create good, free operating systems, should the law stand in their way? The question concerning what type of software infrastructure will dominate personal computers in the future is being answered as much by disappointing legal decisions as it is by consumer choice. Rather than ensuring the necessary conditions for innovation and cooperation, the courts permit a monopoly to continue. Rather than endorsing transparency, secrecy prevails. Rather than aiming to preserve a balance between the commercial economy and the gift-economy, sharing is being undermined by the law. Part of the mystery of the Internet for a lot of newcomers must be that it seems to disprove the old adage that you can't get something for nothing. Free games, free music, free pornography, free art. Media corporations are doing their best to change this situation. The FBI and trade groups have blitzed the American news media with alarmist reports about how children don't understand that sharing digital information is a crime. 
Teacher Gail Chmura, the star of one such media campaign, says of her students, "It's always been interesting that they don't see a connection between the two. They just don't get it" (Hopper). Perhaps the confusion arises because the kids do understand that digital duplication lets two people have the same thing. Theft is at best a metaphor for the copying of data, because the original is not stolen in the same sense as a material object. In the effort to liken all copying to theft, legal provisions for the fair use of intellectual property are neglected. Teachers could just as easily emphasise the importance of sharing and the development of an electronic commons that is free for all to use. The values advanced by the trade groups are not beyond question and are not historical constants. According to Donald Krueckeberg, Rutgers University Professor of Urban Planning, native Americans tied the concept of property not to ownership but to use. "One used it, one moved on, and use was shared with others" (qtd. in Batt). Perhaps it is necessary for individuals to have dominion over some private data. But who owns the land, wind, sun, and sky of the Internet – the infrastructure? Given that publicly-funded research and free software have been as important to the development of the Internet as have business and commercial software, it is not surprising that some ambiguity remains about the property status of the dataverse. For many the Internet is as much a medium for expression and the interplay of languages as it is a framework for monetary transaction. In the case involving DVD software mentioned previously, there emerged a grass-roots campaign in opposition to censorship. Dozens of philosophical programmers and computer scientists asserted the expressive and linguistic bases of software by creating variations on the algorithm needed to play DVDs. The forbidden lines of symbols were printed on T-shirts, translated into different computer languages, translated into legal rhetoric, and even embedded into DNA and pictures of MPAA president Jack Valenti (see e.g. Touretzky). These efforts were inspired by a shared conviction that important liberties were at stake. Supporting the MPAA's position would do more than protect movies from piracy. The use of the algorithm was not clearly linked to an intent to pirate movies. Many felt that outlawing the DVD algorithm, which had been experimentally developed by a Norwegian teenager, represented a suppression of gumption and ingenuity. The court's decision rejected established principles of fair use, denied the established legality of reverse engineering software to achieve compatibility, and asserted that journalists and scientists had no right to publish a bit of code if it might be misused. In a similar case in April 2000, a U.S. court of appeals found that First Amendment protections did apply to software (Junger). Noting that source code has both an expressive feature and a functional feature, this court held that First Amendment protection is not reserved only for purely expressive communication. Yet in the DVD case, the court opposed this view and enforced the inflexible demands of the Digital Millennium Copyright Act. Notwithstanding Ted Nelson's characterisation of computers as literary machines, the decision meant that the linguistic and expressive aspects of software would be subordinated to other concerns. A simple series of symbols were thereby cast under a veil of legal secrecy. 
Although they were easy to discover, and capable of being committed to memory or translated to other languages, fair use and other intuitive freedoms were deemed expendable. These sorts of legal obstacles are serious challenges to the continued viability of free software like Linux. The central value proposition of Linux-based operating systems – free, open source code – is threatening to commercial competitors. Some corporations are intent on stifling further development of free alternatives. Patents offer another vulnerability. The writing of free software has become a minefield of potential patent lawsuits. Corporations have repeatedly chosen to pursue patent litigation years after the alleged infringements have been incorporated into widely used free software. For example, although it was designed to avoid patent problems by an array of international experts, the image file format known as JPEG (Joint Photographic Experts Group) has recently been dogged by patent infringement charges. Despite good intentions, low-budget initiatives and ad hoc organisations are ill equipped to fight profiteering patent lawsuits. One wonders whether software innovation is directed more by lawyers or computer scientists. The present copyright and patent regimes may serve the needs of the larger corporations, but it is doubtful that they are the best means of fostering software innovation and quality. Orwell wrote in his Homage to Catalonia, There was a new rule that censored portions of the newspaper must not be left blank but filled up with other matter; as a result it was often impossible to tell when something had been cut out. The development of the Internet has a similar character: new diversions spring up to replace what might have been so that the lost potential is hardly felt. The process of retrofitting Internet software to suit ideological and commercial agendas is already well underway. For example, Microsoft has announced recently that it will discontinue support for the Java language in 2004. The problem with Java, from Microsoft's perspective, is that it provides portable programming tools that work under all operating systems, not just Windows. With Java, programmers can develop software for the large number of Windows users, while simultaneously offering software to users of other operating systems. Java is an important piece of the software infrastructure for Internet content developers. Yet, in the interest of coercing people to use only their operating systems, Microsoft is willing to undermine thousands of existing Java-language projects. Their marketing hype calls this progress. The software industry relies on sales to survive, so if it means laying waste to good products and millions of hours of work in order to sell something new, well, that's business. The consequent infrastructure instability keeps software developers, and other creative people, on a treadmill. From Progressive Load by Andy Deck, artcontext.org/progload As an Internet content producer, one does not appeal directly to the hearts and minds of the public; one appeals through the medium of software and hardware. Since most people are understandably reluctant to modify the software running on their computers, the software installed initially is a critical determinant of what is possible. Unconventional, independent, and artistic uses of the Internet are diminished when the media infrastructure is effectively established by decree. 
Unaccountable corporate control over infrastructure software tilts the playing field against smaller content producers who have neither the advance warning of industrial machinations, nor the employees and resources necessary to keep up with a regime of strategic, cyclical obsolescence. It seems that independent content producers must conform to the distribution technologies and content formats favoured by the entertainment and marketing sectors, or else resign themselves to occupying the margins of media activity. It is no secret that highly diversified media corporations can leverage their assets to favour their own media offerings and confound their competitors. Yet when media giants AOL and Time-Warner announced their plans to merge in 2000, the claim of CEOs Steve Case and Gerald Levin that the merged companies would "operate in the public interest" was hardly challenged by American journalists. Time-Warner has since fought to end all ownership limits in the cable industry; and Case, who formerly championed third-party access to cable broadband markets, changed his tune abruptly after the merger. Now that Case has been ousted, it is unclear whether he still favours oligopoly. According to Levin, global media will be and is fast becoming the predominant business of the 21st century ... more important than government. It's more important than educational institutions and non-profits. We're going to need to have these corporations redefined as instruments of public service, and that may be a more efficient way to deal with society's problems than bureaucratic governments. Corporate dominance is going to be forced anyhow because when you have a system that is instantly available everywhere in the world immediately, then the old-fashioned regulatory system has to give way (Levin). It doesn't require a lot of insight to understand that this "redefinition," this slight of hand, does not protect the public from abuses of power: the dissolution of the "old-fashioned regulatory system" does not serve the public interest. From Lexicon by Andy Deck, artcontext.org/lexicon) As an artist who has adopted telecommunications networks and software as his medium, it disappoints me that a mercenary vision of electronic media's future seems to be the prevailing blueprint. The giantism of media corporations, and the ongoing deregulation of media consolidation (Ahrens), underscore the critical need for independent media sources. If it were just a matter of which cola to drink, it would not be of much concern, but media corporations control content. In this hyper-mediated age, content – whether produced by artists or journalists – crucially affects what people think about and how they understand the world. Content is not impervious to the software, protocols, and chicanery that surround its delivery. It is about time that people interested in independent voices stop believing that laissez faire capitalism is building a better media infrastructure. The German writer Hans Magnus Enzensberger reminds us that the media tyrannies that affect us are social products. The media industry relies on thousands of people to make the compromises necessary to maintain its course. The rapid development of the mind industry, its rise to a key position in modern society, has profoundly changed the role of the intellectual. He finds himself confronted with new threats and new opportunities. 
Whether he knows it or not, whether he likes it or not, he has become the accomplice of a huge industrial complex which depends for its survival on him, as he depends on it for his own. He must try, at any cost, to use it for his own purposes, which are incompatible with the purposes of the mind machine. What it upholds he must subvert. He may play it crooked or straight, he may win or lose the game; but he would do well to remember that there is more at stake than his own fortune (Enzensberger 18). Some cultural leaders have recognised the important role that free software already plays in the infrastructure of the Internet. Among intellectuals there is undoubtedly a genuine concern about the emerging contours of corporate, global media. But more effective solidarity is needed. Interest in open source has tended to remain superficial, leading to trendy, cosmetic, and symbolic uses of terms like "open source" rather than to a deeper commitment to an open, public information infrastructure. Too much attention is focussed on what's "cool" and not enough on the road ahead. Various media specialists – designers, programmers, artists, and technical directors – make important decisions that affect the continuing development of electronic media. Many developers have failed to recognise (or care) that their decisions regarding media formats can have long reaching consequences. Web sites that use media formats which are unworkable for open source operating systems should be actively discouraged. Comparable technologies are usually available to solve compatibility problems. Going with the market flow is not really giving people what they want: it often opposes the work of thousands of activists who are trying to develop open source alternatives (see e.g. Greene). Average Internet users can contribute to a more innovative, free, open, and independent media – and being conscientious is not always difficult or unpleasant. One project worthy of support is the Internet browser Mozilla. Currently, many content developers create their Websites so that they will look good only in Microsoft's Internet Explorer. While somewhat understandable given the market dominance of Internet Explorer, this disregard for interoperability undercuts attempts to popularise standards-compliant alternatives. Mozilla, written by a loose-knit group of activists and programmers (some of whom are paid by AOL/Time-Warner), can be used as an alternative to Microsoft's browser. If more people use Mozilla, it will be harder for content providers to ignore the way their Web pages appear in standards-compliant browsers. The Mozilla browser, which is an open source initiative, can be downloaded from http://www.mozilla.org/. While there are many people working to create real and lasting alternatives to the monopolistic and technocratic dynamics that are emerging, it takes a great deal of cooperation to resist the media titans, the FCC, and the courts. Oddly enough, corporate interests sometimes overlap with those of the public. Some industrial players, such as IBM, now support open source software. For them it is mostly a business decision. Frustrated by the coercive control of Microsoft, they support efforts to develop another operating system platform. For others, including this writer, the open source movement is interesting for the potential it holds to foster a more heterogeneous and less authoritarian communications infrastructure. 
Many people can find common cause in this resistance to globalised uniformity and consolidated media ownership. The biggest challenge may be to get people to believe that their choices really matter, that by endorsing certain products and operating systems and not others, they can actually make a difference. But it's unlikely that this idea will flourish if artists and intellectuals don't view their own actions as consequential. There is a troubling tendency for people to see themselves as powerless in the face of the market. This paralysing habit of mind must be abandoned before the media will be free. Works Cited Ahrens, Frank. "Policy Watch." Washington Post (23 June 2002): H03. 30 March 2003 <http://www.washingtonpost.com/ac2/wp-dyn/A27015-2002Jun22?la... ...nguage=printer>. Batt, William. "How Our Towns Got That Way." 7 Oct. 1996. 31 March 2003 <http://www.esb.utexas.edu/drnrm/WhatIs/LandValue.htm>. Chester, Jeff. "Gerald Levin's Negative Legacy." Alternet.org 6 Dec. 2001. 5 March 2003 <http://www.democraticmedia.org/resources/editorials/levin.php>. Enzensberger, Hans Magnus. "The Industrialisation of the Mind." Raids and Reconstructions. London: Pluto Press, 1975. 18. Greene, Thomas C. "MS to Eradicate GPL, Hence Linux." 25 June 2002. 5 March 2003 <http://www.theregus.com/content/4/25378.php>. Hopper, D. Ian. "FBI Pushes for Cyber Ethics Education." Associated Press 10 Oct. 2000. 29 March 2003 <http://www.billingsgazette.com/computing/20001010_cethics.php>. Junger v. Daley. U.S. Court of Appeals for 6th Circuit. 00a0117p.06. 2000. 31 March 2003 <http://pacer.ca6.uscourts.gov/cgi-bin/getopn.pl?OPINION=00a0... ...117p.06>. Levin, Gerald. "Millennium 2000 Special." CNN 2 Jan. 2000. Touretzky, D. S. "Gallery of CSS Descramblers." 2000. 29 March 2003 <http://www.cs.cmu.edu/~dst/DeCSS/Gallery>. Links http://artcontext.org/lexicon/ http://artcontext.org/progload http://pacer.ca6.uscourts.gov/cgi-bin/getopn.pl?OPINION=00a0117p.06 http://www.billingsgazette.com/computing/20001010_cethics.html http://www.cs.cmu.edu/~dst/DeCSS/Gallery http://www.democraticmedia.org/resources/editorials/levin.html http://www.esb.utexas.edu/drnrm/WhatIs/LandValue.htm http://www.mozilla.org/ http://www.theregus.com/content/4/25378.html http://www.washingtonpost.com/ac2/wp-dyn/A27015-2002Jun22?language=printer Citation reference for this article Substitute your date of access for Dn Month Year etc... MLA Style Deck, Andy. "Treadmill Culture " M/C: A Journal of Media and Culture< http://www.media-culture.org.au/0304/04-treadmillculture.php>. APA Style Deck, A. (2003, Apr 23). Treadmill Culture . M/C: A Journal of Media and Culture, 6,< http://www.media-culture.org.au/0304/04-treadmillculture.php>
18

Hill, Benjamin Mako. "Revealing Errors." M/C Journal 10, no. 5 (October 1, 2007). http://dx.doi.org/10.5204/mcj.2703.

Full text of the source
Abstract:
Introduction In The World Is Not a Desktop, Mark Weiser, the principal scientist and manager of the computer science laboratory at Xerox PARC, stated that "a good tool is an invisible tool." Weiser cited eyeglasses as an ideal technology because with spectacles, he argued, "you look at the world, not the eyeglasses." Although Weiser's work at PARC played an important role in the creation of the field of "ubiquitous computing", his ideal is widespread in many areas of technology design. Through repetition, and by design, technologies blend into our lives. While technologies, and communications technologies in particular, have a powerful mediating impact, many of the most pervasive effects are taken for granted by most users. When technology works smoothly, its nature and effects are invisible. But technologies do not always work smoothly. A tiny fracture or a smudge on a lens renders glasses quite visible to the wearer. The Microsoft Windows "Blue Screen of Death" on a subway in Seoul (Photo credit Wikimedia Commons). Anyone who has seen a famous "Blue Screen of Death"—the iconic signal of a Microsoft Windows crash—on a public screen or terminal knows how errors can thrust the technical details of previously invisible systems into view. Nobody knows that their ATM runs Windows until the system crashes. Of course, the operating system chosen for a sign or bank machine has important implications for its users. Windows, or an alternative operating system, creates affordances and imposes limitations. Faced with a crashed ATM, a consumer might ask herself whether, with its rampant viruses and security holes, she should really trust an ATM running Windows. Technologies make previously impossible actions possible and many actions easier. In the process, they frame and constrain possible actions. They mediate. Communication technologies allow users to communicate in new ways but constrain communication in the process. In a very fundamental way, communication technologies define what their users can say, to whom they say it, and how they can say it—and what, to whom, and how they cannot. Humanities scholars understand the power, importance, and limitations of technology and technological mediation. Weiser hypothesised that "to understand invisibility the humanities and social sciences are especially valuable, because they specialise in exposing the otherwise invisible." However, technology activists, like those at the Free Software Foundation (FSF) and the Electronic Frontier Foundation (EFF), understand this power of technology as well. Largely constituted by technical members, both organisations, like humanists studying technology, have struggled to communicate their messages to a less-technical public. Before one can argue for the importance of individual control over who owns technology, as both FSF and EFF do, an audience must first appreciate the power and effect that their technology and its designers have. To understand the power that technology has on its users, users must first see the technology in question. Most users do not. Errors are under-appreciated and under-utilised in their ability to reveal the technology around us. By painting a picture of how certain technologies facilitate certain mistakes, one can better show how technology mediates. By revealing errors, scholars and activists can reveal previously invisible technologies and their effects more generally.
Errors can reveal technology and its power, and can do so in ways that users of technologies confront daily and understand intimately. The Misprinted Word Catalysed by Elizabeth Eisenstein, the last 35 years of print history scholarship provides both a richly described example of technological change and an analysis of its effects. Unemphasised in discussions of the revolutionary social, economic, and political impact of printing technologies is the fact that, especially in the early days of a major technological change, the artifacts of print are often quite similar to those produced by a new printing technology's predecessors. From a reader's purely material perspective, books are books; the press that created the book is invisible or irrelevant. Yet, while the specifics of print technologies are often hidden, they are often exposed by errors. While the shift from a scribal to print culture revolutionised culture, politics, and economics in early modern Europe, it was near-invisible to early readers (Eisenstein). Early printed books were the same books printed in the same way; the early press was conceived as a "mechanical scriptorium." Shown below, Gutenberg's black-letter Gothic typeface closely reproduced a scribal hand. Of course, handwriting and type were easily distinguishable; errors and irregularities were inherent in relatively unsteady human hands. Side-by-side comparisons of the hand-copied Malmesbury Bible (left) and the black letter typeface in the Gutenberg Bible (right) (Photo credits Wikimedia Commons & Wikimedia Commons). Printing, of course, introduced its own errors. As pages were produced en masse from a single block of type, so were mistakes. While a scribe would re-read and correct errors as they transcribed a second copy, no printing press would. More revealingly, print opened the door to whole new categories of errors. For example, printers setting type might confuse an inverted n with a u—and many did. Of course, no scribe made this mistake. An inverted u is only confused with an n due to the technological possibility of letter flipping in movable type. As print moved from Monotype and Linotype machines, to computerised typesetting, and eventually to desktop publishing, an accidentally flipped u retreated into the realm of impossibility (Mergenthaler, Swank). Most readers do not know how their books are printed. The output of letterpresses, Monotypes, and laser printers is carefully designed to produce near-uniform results. To the degree that they succeed, the technologies themselves, and the specific nature of the mediation, become invisible to readers. But each technology is revealed in errors like the upside-down u, the output of a mispoured slug of Monotype, or streaks of toner from a laser printer. Changes in printing technologies after the press have also had profound effects. The creation of hot-metal Monotype and Linotype, for example, affected decisions to print and reprint and changed how and when it is done. New mass printing technologies allowed for the printing of works that, for economic reasons, would not have been published before. While personal computers, desktop publishing software, and laser printers make publishing accessible in new ways, they also place real limits on what can be printed. Print runs of a single copy—unheard of before the invention of the typewriter—are commonplace. But computers, like Linotypes, render certain formatting and presentation difficult or impossible.
Errors provide a space where the particulars of printing make technologies visible in their products. An inverted u exposes a human typesetter, a letterpress, and a hasty error in judgment. Encoding errors and botched smart quotation marks—a ? in place of a “—are only possible with a computer. Streaks of toner are only produced by malfunctioning laser printers. Dust can reveal the photocopied provenance of a document. Few readers reflect on the power or importance of the particulars of the technologies that produced their books. In part, this is because the technologies are so hidden behind their products. Through errors, these technologies and the power they have on the "what" and "how" of printing are exposed. For scholars and activists attempting to expose exactly this, errors are an under-exploited opportunity. Typing Mistyping While errors have a profound effect on media consumption, their effect is equally important, and perhaps more strongly felt, when they occur during media creation. Like all mediating technologies, input technologies make it easier or more difficult to create certain messages. It is, for example, much easier to write a letter with a keyboard than it is to type a picture. It is much more difficult to write in languages with frequent use of accents on an English-language keyboard than it is on a European keyboard. But while input systems like keyboards have a powerful effect on the nature of the messages they produce, they are invisible to recipients of messages. Except when the messages contain errors. Typists are much more likely to confuse letters in close proximity on a keyboard than people writing by hand or setting type. As keyboard layouts switch between countries and languages, new errors appear. The following is from a personal email: hez, if there’s not a subversion server handz, can i at least have the root password for one of our machines? I read through the instructions for setting one up and i think i could do it. [emphasis added] The email was quickly typed and, in two places, confuses the character y with z. Separated by five characters on QWERTY keyboards, these two letters are not easily mistaken or mistyped. However, their positions are swapped on German and English keyboards. In fact, the author was an American typing in a Viennese Internet cafe. The source of his repeated error was his false expectations—his familiarity with one keyboard layout in the context of another. The error revealed the context, both keyboard layouts, and his dependence on a particular keyboard. With the error, the keyboard, previously invisible, was exposed as an intermediary with its own particularities and effects (the layout swap is sketched in code below). This effect does not change in mobile devices, where new input methods have introduced powerful new ways of communicating. SMS messages on mobile phones are constrained in length to 160 characters. The result has been new styles of communication using SMS that some have gone so far as to call a new language or dialect called TXTSPK (Thurlow). Yet while these effects are obvious to social scientists, the profound impact of text message technologies on communication goes unfelt by most users, who simply see the messages themselves. More visible is the fact that input from a phone keypad has opened the door to errors which reveal input technology and its effects.
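The layout swap in the email above can be made concrete with a minimal sketch, offered purely as an illustration: only the y/z pair that differs between QWERTY and QWERTZ layouts is modelled, and the sample sentence is a hypothetical paraphrase rather than the original email.

```python
# A minimal sketch (not a full keyboard model): muscle memory for one layout
# applied to another. Only the y/z pair that differs between QWERTY and
# QWERTZ keyboards is mapped here.
QWERTY_TO_QWERTZ = str.maketrans("yzYZ", "zyZY")

def type_on_unfamiliar_keyboard(intended_text: str) -> str:
    """Return what appears on screen when QWERTY habits meet a QWERTZ keyboard."""
    return intended_text.translate(QWERTY_TO_QWERTZ)

print(type_on_unfamiliar_keyboard("hey, can i have the root password handy?"))
# prints: hez, can i have the root password handz?
```

The substitution is systematic rather than random, which is exactly what makes the hidden keyboard legible in the resulting error.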
In the standard method of SMS input, users press or hold buttons to cycle through the letters associated with numbers on a numeric keypad (e.g., 2 represents A, B, and C; to produce a single C, a user presses 2 three times). This system makes it easy to confuse characters based on a shared association with a single number. Tegic's popular T9 software allows users to type in words by pressing the number associated with each letter of each word in quick succession. T9 uses a database to pick the most likely word that maps to that sequence of numbers. While the system allows for quick input of words and phrases on a phone keypad, it also allows for the creation of new types of errors. A user trying to type me might accidentally write of because both words are mapped to the combination of 6 and 3 and because of is a more common word in English. T9 might confuse snow and pony while no human, and no other input method, would. Users composing SMS messages are constrained by the technology and its design. The fact that text messages must be short and the difficult nature of phone-based input methods have led to unique and highly constrained forms of communication like TXTSPK (Sutherland). Yet, while the influence of these input technologies is profound, users are rarely aware of it. Errors provide a situation where the particularities of a technology become visible, and an opportunity for users to connect with scholars exposing the effects of technology and with activists arguing for increased user control. Google News Denuded As technologies become more complex, they often become more mysterious to their users. Though such systems are not invisible, users know little about the way complex technologies work, both because they become accustomed to them and because the technological specifics are hidden inside companies, behind web interfaces, within compiled software, and in "black boxes" (Latour). Errors can help reveal these technologies and expose their nature and effects. One such system, Google News, aggregates news stories and is designed to make it easy to read multiple stories on the same topic. The system works with "topic clusters" that attempt to group articles covering the same news event. The more items in a news cluster (especially from popular sources) and the closer together they appear in time, the higher confidence Google's algorithms have in the "importance" of a story and the higher the likelihood that the cluster of stories will be listed on the Google News page. While the decision to include or remove individual sources is made by humans, the act of clustering is left to Google's software. Because computers cannot "understand" the text of the articles being aggregated, clustering happens less intelligently. We know that clustering is primarily based on comparison of shared text and keywords—especially proper nouns. This process is aided by the widespread use of wire services like the Associated Press and Reuters, which provide article text used, at least in part, by large numbers of news sources. Google has been reluctant to divulge the implementation details of its clustering engine, but users have been able to deduce the description above, and much more, by watching how Google News works and, more importantly, how it fails. For example, we know that Google News looks for shared text and keywords because text that deviates heavily from other articles is not "clustered" appropriately—even if it is extremely similar semantically.
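Google's actual clustering engine is undisclosed, so the sketch below is only a caricature of the behaviour deduced above: headlines are reduced to crude keyword sets (capitalised tokens standing in for proper nouns) and grouped when their overlap crosses a threshold. The keyword heuristic, the Jaccard overlap measure, and the threshold value are all illustrative assumptions, not a description of Google's implementation.

```python
# Toy sketch of keyword-overlap clustering; NOT Google's undisclosed engine.
def keywords(text: str) -> set:
    """Capitalised tokens as a rough proxy for proper nouns and keywords."""
    return {w.strip(".,;:\"'") for w in text.split() if w[:1].isupper()}

def overlap(a: set, b: set) -> float:
    """Jaccard similarity of two keyword sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def cluster(headlines: list, threshold: float = 0.3) -> list:
    """Greedy single-pass clustering on keyword overlap (threshold is assumed)."""
    clusters = []
    for text in headlines:
        kws = keywords(text)
        for c in clusters:
            if overlap(kws, c["keywords"]) >= threshold:
                c["stories"].append(text)
                c["keywords"] |= kws
                break
        else:
            clusters.append({"keywords": kws, "stories": [text]})
    return clusters

stories = [
    "Iran offers to share nuclear technology",
    "Iran threatens to hide nuclear program",
    "Council debates new bicycle lanes",
]
for c in cluster(stories):
    print(c["stories"])
# The two Iran stories share a proper noun and cluster together despite their
# opposed framing; the unrelated story forms its own cluster.
```

Because such a heuristic never consults meaning, both failure modes discussed next, under-clustering of stories that share too little wording and over-clustering of stories that share names while saying opposite things, follow naturally.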
In this vein, blogger Philipp Lenssen gives advice to news sites that want to stand out in Google News: Of course, stories don’t have to be exactly the same to be matched—but if they are too different, they’ll also not appear in the same group. If you want to stand out in Google News search results, make your article be original, or else you’ll be collapsed into a cluster where you may or may not appear on the first results page. While a human editor has no trouble understanding that an article using different terms (and different, but equally appropriate, proper nouns) is discussing the same issue, the software behind Google News is more fragile. As a result, Google News fails to connect linked stories that no human editor would miss. A section of a screenshot of Google News clustering aggregation showcasing what appears to be an error. But just as importantly, Google News can connect stories that most human editors will not. Google News's clustering of two stories by Al Jazeera on how "Iran offers to share nuclear technology," and by the Guardian on how "Iran threatens to hide nuclear program," seems at first glance to be a mistake. Hiding and sharing are diametrically opposed and mutually exclusive. But while it is true that most human editors would not cluster these stories, it is less clear that it is, in fact, an error. Investigation shows that the two articles are about the release of a single statement by the government of Iran on the same day. The spin is significant enough, and significantly different, that it could be argued that the aggregation of those stories was incorrect—or not. The error reveals details about the way that Google News works and about its limitations. It reminds readers of Google News of the technological mediation of their news and gives them a taste of the type of selection—and mis-selection—that goes on out of view. Users of Google News might be prompted to compare the system to other, more human methods. Ultimately it can remind them of the power that Google News (and humans in similar roles) have over our understanding of news and the world around us. These are all familiar arguments to social scientists of technology and echo the arguments of technology activists. By focusing on similar errors, both groups can connect to users less used to thinking in these terms. Conclusion Reflecting on the role of the humanities in a world of increasingly invisible technology for the blog "Humanities, Arts, Science and Technology Advanced Collaboratory," Duke English professor Cathy Davidson writes: When technology is accepted, when it becomes invisible, [humanists] really need to be paying attention. This is one reason why the humanities are more important than ever. Analysis—qualitative, deep, interpretive analysis—of social relations, social conditions, in a historical and philosophical perspective is what we do so well. The more technology is part of our lives, the less we think about it, the more we need rigorous humanistic thinking that reminds us that our behaviours are not natural but social, cultural, economic, and with consequences for us all. Davidson concisely points out the strength and importance of the humanities in evaluating technology. She is correct; users of technologies do not frequently analyse the social relations, conditions, and effects of the technology they use. Activists at the EFF and FSF argue that this lack of critical perspective leads to exploitation of users (Stallman).
But users, and the technology they use, are only susceptible to this type of analysis when they understand the applicability of these analyses to their technologies. Davidson leaves open the more fundamental question: How will humanists first reveal technology so that they can reveal its effects? Scholars and activists must do more than contextualise and describe technology. They must first render invisible technologies visible. As the revealing nature of errors in printing systems, input systems, and "black box" software systems like Google News shows, errors represent a point where invisible technology is already visible to users. As such, these errors, and countless others like them, can be treated as the tip of an iceberg. They represent an important opportunity for humanists and activists to further expose technologies, and the beginning of a process that aims to reveal much more. References Davidson, Cathy. "When Technology Is Invisible, Humanists Better Get Busy." HASTAC (2007). 1 September 2007 <http://www.hastac.org/node/779>. Eisenstein, Elizabeth L. The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early-Modern Europe. Cambridge, UK: Cambridge University Press, 1979. Latour, Bruno. Pandora’s Hope: Essays on the Reality of Science Studies. Harvard UP, 1999. Lenssen, Philipp. "How Google News Indexes." Google Blogoscoped. 2006. 1 September 2007 <http://blogoscoped.com/archive/2006-07-28-n49.html>. Mergenthaler, Ottmar. The Biography of Ottmar Mergenthaler, Inventor of the Linotype. New ed. New Castle, Delaware: Oak Knoll Books, 1989. Monotype: A Journal of Composing Room Efficiency. Philadelphia: Lanston Monotype Machine Co, 1913. Stallman, Richard M. Free Software, Free Society: Selected Essays of Richard M. Stallman. Boston, Massachusetts: Free Software Foundation, 2002. Sutherland, John. "Cn u txt?" Guardian Unlimited. London, UK. 2002. Swank, Alvin Garfield, and United Typothetae America. Linotype Mechanism. Chicago, Illinois: Dept. of Education, United Typothetae America, 1926. Thurlow, C. "Generation Txt? The Sociolinguistics of Young People’s Text-Messaging." Discourse Analysis Online 1.1 (2003). Weiser, Mark. "The World Is Not a Desktop." ACM Interactions 1.1 (1994): 7-8. Citation reference for this article: MLA Style: Hill, Benjamin Mako. "Revealing Errors." M/C Journal 10.5 (2007). <http://journal.media-culture.org.au/0710/01-hill.php>. APA Style: Hill, B. (2007, Oct.). Revealing errors. M/C Journal, 10(5). <http://journal.media-culture.org.au/0710/01-hill.php>.
19

Livingstone, Randall M. "Let’s Leave the Bias to the Mainstream Media: A Wikipedia Community Fighting for Information Neutrality." M/C Journal 13, no. 6 (November 23, 2010). http://dx.doi.org/10.5204/mcj.315.

Full text of the source
Abstract:
Although I'm a rich white guy, I'm also a feminist anti-racism activist who fights for the rights of the poor and oppressed. (Carl Kenner) Systemic bias is a scourge to the pillar of neutrality. (Cerejota) Count me in. Let's leave the bias to the mainstream media. (Orcar967) Because this is so important. (CuttingEdge) These are a handful of comments posted by online editors who have banded together in a virtual coalition to combat Western bias on the world's largest digital encyclopedia, Wikipedia. This collective action by Wikipedians both acknowledges the inherent inequalities of a user-controlled information project like Wikipedia and highlights the potential for progressive change within that same project. These community members are taking the responsibility of social change into their own hands (or more aptly, their own keyboards). In recent years much research has emerged on Wikipedia from varying fields, ranging from computer science, to business and information systems, to the social sciences. While critical at times of Wikipedia's growth, governance, and influence, most of this work observes with optimism that barriers to improvement are not firmly structural, but rather they are socially constructed, leaving open the possibility of important and lasting change for the better. One such collective effort is WikiProject: Countering Systemic Bias (WP:CSB). Close to 350 editors have signed on to the project, which began in 2004 and itself emerged from a similar project named CROSSBOW, or the "Committee Regarding Overcoming Serious Systemic Bias on Wikipedia." As a WikiProject, the term used for a loose group of editors who collaborate around a particular topic, these editors work within the Wikipedia site and collectively create a social network that is unified around one central aim—representing the un- and underrepresented—and yet they are bound by no particular unified set of interests. The first stage of a multi-method study, this paper looks at a snapshot of WP:CSB's activity from both content analysis and social network perspectives to discover "who" geographically this coalition of the unrepresented is inserting into the digital annals of Wikipedia. Wikipedia and Wikipedians Developed in 2001 by Internet entrepreneur Jimmy Wales and academic Larry Sanger, Wikipedia is an online collaborative encyclopedia hosting articles in nearly 250 languages (Cohen). The English-language Wikipedia contains over 3.2 million articles, each of which is created, edited, and updated solely by users (Wikipedia "Welcome"). At the time of this study, Alexa, a website tracking organisation, ranked Wikipedia as the 6th most accessed site on the Internet. Unlike the five sites ahead of it, though—Google, Facebook, Yahoo, YouTube (owned by Google), and live.com (owned by Microsoft)—all of which are multibillion-dollar businesses that deal more with information aggregation than information production, Wikipedia is a non-profit that operates on less than $500,000 a year and staffs only a dozen paid employees (Lih). Wikipedia is financed and supported by the Wikimedia Foundation, a charitable umbrella organisation with an annual budget of $4.6 million, mainly funded by donations (Middleton). Wikipedia editors and contributors have the option of creating a user profile and participating via a username, or they may participate anonymously, with only an IP address representing their actions.
Despite the option for total anonymity, many Wikipedians have chosen to visibly engage in this online community (Ayers, Matthews, and Yates; Bruns; Lih), and researchers across disciplines are studying the motivations of these new online collectives (Kane, Majchrzak, Johnson, and Chenisern; Oreg and Nov). The motivations of open source software contributors, such as UNIX programmers and programming groups, have been shown to be complex and tied to both extrinsic and intrinsic rewards, including online reputation, self-satisfaction and enjoyment, and obligation to a greater common good (Hertel, Niedner, and Herrmann; Osterloh and Rota). Investigation into why Wikipedians edit has indicated multiple motivations as well, with community engagement, task enjoyment, and information sharing among the most significant (Schroer and Hertel). Additionally, Wikipedians seem to be taking up the cause of generativity (a concern for the ongoing health and openness of the Internet's infrastructures) that Jonathan Zittrain notably called for in The Future of the Internet and How to Stop It. Governance and Control Although the technical infrastructure of Wikipedia is built to support and perhaps encourage an equal distribution of power on the site, Wikipedia is not a land of "anything goes." The popular press has covered recent efforts by the site to reduce vandalism through a layer of editorial review (Cohen), a tightening of control cited as a possible reason for the recent dip in the number of active editors (Edwards). A number of regulations are already in place that prevent the open editing of certain articles and pages, such as the site's disclaimers and pages that have suffered large amounts of vandalism. Editing wars can also cause temporary restrictions to editing, and Ayers, Matthews, and Yates point out that these wars can happen anywhere, even to Burt Reynolds's page. Academic studies have begun to explore the governance and control that has developed in the Wikipedia community, generally highlighting how order is maintained not through particular actors, but through established procedures and norms. Konieczny tested whether Wikipedia's evolution can be defined by Michels's Iron Law of Oligarchy, which predicts that the everyday operations of any organisation cannot be run by a mass of members, and ultimately control falls into the hands of the few. Through exploring a particular WikiProject on information validation, he concludes: There are few indicators of an oligarchy having power on Wikipedia, and few trends of a change in this situation. The high level of empowerment of individual Wikipedia editors with regard to policy making, the ease of communication, and the high dedication to ideals of contributors succeed in making Wikipedia an atypical organization, quite resilient to the Iron Law. (189) Butler, Joyce, and Pike support this assertion, though they emphasise that instead of oligarchy, control becomes encapsulated in a wide variety of structures, policies, and procedures that guide involvement with the site. A virtual "bureaucracy" emerges, but one that should not be viewed with the negative connotation often associated with the term. Other work considers control on Wikipedia through the framework of commons governance, where "peer production depends on individual action that is self-selected and decentralized rather than hierarchically assigned. Individuals make their own choices with regard to resources managed as a commons" (Viegas, Wattenberg and McKeon).
The need for quality standards and quality control largely dictates this commons governance, though interviewing Wikipedians with various levels of responsibility revealed that policies and procedures are only as good as those who maintain them. Forte, Larco, and Bruckman argue that "the Wikipedia community has remained healthy in large part due to the continued presence of ‘old-timers’ who carry a set of social norms and organizational ideals with them into every WikiProject, committee, and local process in which they take part" (71). Thus governance on Wikipedia is a strong representation of a democratic ideal, where actors and policies are closely tied in their evolution. Transparency, Content, and Bias The issue of transparency has proved to be a double-edged sword for Wikipedia and Wikipedians. The goal of a collective body of knowledge created by all—the "expert" and the "amateur"—can only be upheld if equal access to page creation and development is allotted to everyone, including those who prefer anonymity. And yet this very option for anonymity, or even worse, false identities, has been a sore subject for some in the Wikipedia community as well as a source of concern for some scholars (Santana and Wood). The case of a 24-year-old college dropout who represented himself as a multiple Ph.D.-holding theology scholar and edited over 16,000 articles brought these issues into the public spotlight in 2007 (Doran; Elsworth). Wikipedia itself has set up standards for content that include expectations of a neutral point of view, verifiability of information, and the publishing of no original research, but Santana and Wood argue that self-policing of these policies is not adequate: The principle of managerial discretion requires that every actor act from a sense of duty to exercise moral autonomy and choice in responsible ways. When Wikipedia's editors and administrators remain anonymous, this criterion is simply not met. It is assumed that everyone is behaving responsibly within the Wikipedia system, but there are no monitoring or control mechanisms to make sure that this is so, and there is ample evidence that it is not so. (141) At the theoretical level, some downplay these concerns of transparency and autonomy as logistical issues in lieu of the potential for information systems to support rational discourse and emancipatory forms of communication (Hansen, Berente, and Lyytinen), but others worry that the questionable "realities" created on Wikipedia will become truths once circulated to all areas of the Web (Langlois and Elmer). With the number of articles on the English-language version of Wikipedia reaching well into the millions, the task of mapping and assessing content has become a tremendous endeavour, one mostly taken on by information systems experts. Kittur, Chi, and Suh have used Wikipedia's existing hierarchical categorisation structure to map change in the site's content over the past few years. Their work revealed that in early 2008 "Culture and the arts" was the most dominant category of content on Wikipedia, representing nearly 30% of total content. People (15%) and geographical locations (14%) represent the next largest categories, while the natural and physical sciences showed the greatest increase in volume between 2006 and 2008 (a 213% increase, with "Culture and the arts" close behind at 210%).
This data may indicate that contributing to Wikipedia, and thus spreading knowledge, is growing amongst the academic community while maintaining its importance to the greater popular culture-minded community. Further work by Kittur and Kraut has explored the collaborative process of content creation, finding that too many editors on a particular page can reduce the quality of content, even when a project is well coordinated. Bias in Wikipedia content is a generally acknowledged and somewhat conflicted subject (Giles; Johnson; McHenry). The Wikipedia community has created numerous articles and pages within the site to define and discuss the problem. Citing a survey conducted by the University of Würzburg, Germany, the "Wikipedia:Systemic bias" page describes the average Wikipedian as: male; technically inclined; formally educated; an English speaker; white; aged 15-49; from a majority Christian country; from a developed nation; from the Northern Hemisphere; and likely a white-collar worker or student. Bias in content is thought to be perpetuated by this demographic of contributor, and the "founder effect," a concept from genetics that links the original contributors to this same demographic, has been used to explain the origins of certain biases. Wikipedia's "About" page discusses the issue as well, in the context of the open platform's strengths and weaknesses: in practice editing will be performed by a certain demographic (younger rather than older, male rather than female, rich enough to afford a computer rather than poor, etc.) and may, therefore, show some bias. Some topics may not be covered well, while others may be covered in great depth. No educated arguments against this inherent bias have been advanced. Royal and Kapila's study of Wikipedia content tested some of these assertions, finding identifiable bias in both their purposive and random sampling. They conclude that bias favoring larger countries is positively correlated with the size of the country's Internet population, and corporations with larger revenues work in much the same way, garnering more coverage on the site. The researchers remind us that Wikipedia is "more a socially produced document than a value-free information source" (Royal & Kapila). WikiProject: Countering Systemic Bias As a coalition of current Wikipedia editors, the WikiProject: Countering Systemic Bias (WP:CSB) attempts to counter trends in content production and points of view deemed harmful to the democratic ideals of a valueless, open online encyclopedia. WP:CSB's mission is not one of policing the site, but rather deepening it: Generally, this project concentrates upon remedying omissions (entire topics, or particular sub-topics in extant articles) rather than on either (1) protesting inappropriate inclusions, or (2) trying to remedy issues of how material is presented. Thus, the first question is "What haven't we covered yet?", rather than "how should we change the existing coverage?" (Wikipedia, "Countering") The project lays out a number of content areas lacking adequate representation, geographically highlighting the dearth in coverage of Africa, Latin America, Asia, and parts of Eastern Europe. WP:CSB also includes a "members" page that editors can sign to show their support, along with space to voice their opinions on the problem of bias on Wikipedia (the quotations at the beginning of this paper are taken from this "members" page).
At the time of this study, 329 editors had self-selected and self-identified as members of WP:CSB, and this group constitutes the population sample for the current study. To explore the extent to which WP:CSB addressed these self-identified areas for improvement, each editor's last 50 edits were coded for their primary geographical country of interest, as well as the conceptual category of the page itself ("P" for person/people, "L" for location, "I" for idea/concept, "T" for object/thing, or "NA" for indeterminate). For example, edits to the Wikipedia page for a single person like Tony Abbott (Australian federal opposition leader) were coded "Australia, P", while an edit for a group of people like the Manchester United football team would be coded "England, P". Coding was based on information obtained from the header paragraphs of each article's Wikipedia page. After coding was completed, corresponding information on each country's associated continent was added to the dataset, based on the United Nations Statistics Division listing. A total of 15,616 edits were coded for the study. Nearly 32% (n = 4962) of these edits were on articles for persons or people (see Table A for complete coding results). From within this sub-sample of edits, a majority of the people (68.67%) represented are associated with North America and Europe (Figure A). If we break these statistics down further, nearly half of WP:CSB's edits concerning people were associated with the United States (36.11%) and England (10.16%), with India (3.65%) and Australia (3.35%) following at a distance. These figures make sense for the English-language Wikipedia; over 95% of the population in the three Westernised countries speak English, and while India is still often regarded as a developing nation, its colonial British roots and the emergence of a market economy with large, technology-driven cities are logical explanations for its representation here (and some estimates make India the largest English-speaking nation by population on the globe today).
Table A. Coding Results
Total edits: 15,616
(I) Ideas: 2,881 (18.45%)
(L) Location: 2,240 (14.34%)
NA: 333 (2.13%)
(T) Thing: 5,200 (33.30%)
(P) People: 4,962 (31.78%)
People by Continent
Africa: 315 (6.35%)
Asia: 827 (16.67%)
Australia: 175 (3.53%)
Europe: 1,411 (28.44%)
NA: 110 (2.22%)
North America: 1,996 (40.23%)
South America: 128 (2.58%)
The areas of the globe of main concern to WP:CSB proved to be much less represented by the coalition itself. Asia, far and away the most populous continent with more than 60% of the globe's people (GeoHive), was represented in only 16.67% of edits. Africa (6.35%) and South America (2.58%) were equally underrepresented compared to both their real-world populations (15% and 9% of the globe's population respectively) and the aforementioned dominance of the advanced Westernised areas. However, while these percentages may seem low, in aggregate they do meet the quota set on the WP:CSB Project Page calling for one out of every twenty edits to be "a subject that is systematically biased against the pages of your natural interests." By this standard, the coalition is indeed making headway in adding content that strategically counterbalances the natural biases of Wikipedia's average editor.
Figure A
Social network analysis allows us to visualise multifaceted data in order to identify relationships between actors and content (Vega-Redondo; Watts).
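The two steps described here, tallying coded edits into per-continent shares (as in Table A) and assembling a bimodal editor-country network, can be sketched with a short script. The editor names, countries, and counts below are hypothetical placeholders, and the study itself used UCINET for its network analysis; networkx is used here purely to illustrate the data structure.

```python
# Minimal sketch of the two analyses described above, using invented data:
# (1) per-continent shares of person-coded edits, cf. Table A;
# (2) a bimodal editor-country network in which each coded edit is an edge.
from collections import Counter
import networkx as nx

coded_edits = [                      # (editor, country of interest, category)
    ("EditorA", "United States", "P"),
    ("EditorA", "England",       "P"),
    ("EditorB", "Gabon",         "P"),
    ("EditorB", "Laos",          "L"),
    ("EditorC", "Bolivia",       "P"),
]

continent = {"United States": "North America", "England": "Europe",
             "Gabon": "Africa", "Laos": "Asia", "Bolivia": "South America"}

# Share of person-coded ("P") edits per continent.
people_edits = [e for e in coded_edits if e[2] == "P"]
counts = Counter(continent[country] for _, country, _ in people_edits)
for cont, n in counts.items():
    print(f"{cont}: {n}/{len(people_edits)} = {n / len(people_edits):.2%}")

# Bimodal editor-country network: each coded edit becomes an edge.
G = nx.Graph()
G.add_edges_from((editor, country) for editor, country, _ in coded_edits)
print(sorted(G.degree, key=lambda pair: pair[1], reverse=True))
```

In a layout of the kind described next, nodes with many such ties (the U.S. and England in the study's data) tend to be pulled toward the centre of the image, while sparsely connected editors and countries drift to the periphery.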
Similar to Davis's well-known sociological study of Southern American socialites in the 1930s (Scott), our Wikipedia coalition can be conceptualised as individual actors united by common interests, and a network of relations can be constructed with software such as UCINET. A mapping algorithm that considers both the relationships between all sets of actors and the relationship of each actor to the overall collective structure produces an image of our network. This initial network is bimodal, as both our Wikipedia editors and their edits (again, coded for country of interest) are displayed as nodes (Figure B). Edge-lines between nodes represent a relationship, and here that relationship is the act of editing a Wikipedia article. We see from our network that the "U.S." and "England" hold central positions in the network, with a mass of editors crowding around them. A perimeter of nations is then held in place by their ties to editors through the U.S. and England, with a second layer of editors and poorly represented nations (Gabon, Laos, Uzbekistan, etc.) around the boundaries of the network.
Figure B
We are reminded from this visualisation both of the centrality of the two Western powers even among WP:CSB editors, and of the peripheral nature of most other nations in the world. But we also learn which editors in the project are contributing most to underrepresented areas, and which are less "tied" to the Western core. Here we see "Wizzy" and "Warofdreams" among the second layer of editors who act as a bridge between the core and the periphery; these are editors with interests in both the Western and marginalised nations. Located along the outer edge, "Gallador" and "Gerrit" have no direct ties to the U.S. or England, concentrating all of their edits on less represented areas of the globe. Identifying editors at these key positions in the network will help with future research, informing interview questions that will investigate their interests further, but more significantly, probing motives for participation and action within the coalition. Additionally, we can break the network down further to discover editors who appear to have similar interests in underrepresented areas. Figure C strips down the network to only editors and edits dealing with Africa and South America, the least represented continents. From this we can easily find three types of editors again: those who have singular interests in particular nations (the outermost layer of editors), those who have interests in a particular region (the second layer moving inward), and those who have interests in both of these underrepresented regions (the center layer in the figure). This last group of editors may prove to be the most crucial to understand, as they are carrying the full load of WP:CSB's mission.
Figure C
The End of Geography, or the Reclamation? In The Internet Galaxy, Manuel Castells writes that "the Internet Age has been hailed as the end of geography," a bold suggestion, but one that has gained traction over the last 15 years as the excitement for the possibilities offered by information communication technologies has often overshadowed structural barriers to participation like the Digital Divide (207). Castells goes on to amend the "end of geography" thesis by showing how global information flows and regional Internet access rates, while creating a new "map" of the world in many ways, are still closely tied to power structures in the analog world. The Internet Age "redefines distance but does not cancel geography" (207).
The work of WikiProject: Countering Systemic Bias emphasises the importance of place and representation in the information environment that continues to be constructed in the online world. This study looked at only a small portion of this coalition's efforts (~16,000 edits)—a snapshot of their labor frozen in time—which itself is only a minute portion of the information being dispatched through Wikipedia on a daily basis (~125,000 edits). Further analysis of WP:CSB's work over time, as well as qualitative research into the identities, interests and motivations of this collective, is needed to understand more fully how information bias is understood and challenged in the Internet galaxy. The data here indicates this is a fight worth fighting, at least for a growing few. References Alexa. "Top Sites." Alexa.com, n.d. 10 Mar. 2010 <http://www.alexa.com/topsites>. Ayers, Phoebe, Charles Matthews, and Ben Yates. How Wikipedia Works: And How You Can Be a Part of It. San Francisco, CA: No Starch, 2008. Bruns, Axel. Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. New York: Peter Lang, 2008. Butler, Brian, Elisabeth Joyce, and Jacqueline Pike. Don’t Look Now, But We’ve Created a Bureaucracy: The Nature and Roles of Policies and Rules in Wikipedia. Paper presented at 2008 CHI Annual Conference, Florence. Castells, Manuel. The Internet Galaxy: Reflections on the Internet, Business, and Society. Oxford: Oxford UP, 2001. Cohen, Noam. "Wikipedia." New York Times, n.d. 12 Mar. 2010 <http://www.nytimes.com/info/wikipedia/>. Doran, James. "Wikipedia Chief Promises Change after ‘Expert’ Exposed as Fraud." The Times, 6 Mar. 2007 <http://technology.timesonline.co.uk/tol/news/tech_and_web/article1480012.ece>. Edwards, Lin. "Report Claims Wikipedia Losing Editors in Droves." Physorg.com, 30 Nov 2009. 12 Feb. 2010 <http://www.physorg.com/news178787309.html>. Elsworth, Catherine. "Fake Wikipedia Prof Altered 20,000 Entries." London Telegraph, 6 Mar. 2007 <http://www.telegraph.co.uk/news/1544737/Fake-Wikipedia-prof-altered-20000-entries.html>. Forte, Andrea, Vanessa Larco, and Amy Bruckman. "Decentralization in Wikipedia Governance." Journal of Management Information Systems 26 (2009): 49-72. Giles, Jim. "Internet Encyclopedias Go Head to Head." Nature 438 (2005): 900-901. Hansen, Sean, Nicholas Berente, and Kalle Lyytinen. "Wikipedia, Critical Social Theory, and the Possibility of Rational Discourse." The Information Society 25 (2009): 38-59. Hertel, Guido, Sven Niedner, and Stefanie Herrmann. "Motivation of Software Developers in Open Source Projects: An Internet-Based Survey of Contributors to the Linux Kernel." Research Policy 32 (2003): 1159-1177. Johnson, Bobbie. "Rightwing Website Challenges ‘Liberal Bias’ of Wikipedia." The Guardian, 1 Mar. 2007. 8 Mar. 2010 <http://www.guardian.co.uk/technology/2007/mar/01/wikipedia.news>. Kane, Gerald C., Ann Majchrzak, Jeremiah Johnson, and Lily Chenisern. A Longitudinal Model of Perspective Making and Perspective Taking within Fluid Online Collectives. Paper presented at the 2009 International Conference on Information Systems, Phoenix, AZ, 2009. Kittur, Aniket, Ed H. Chi, and Bongwon Suh. What’s in Wikipedia? Mapping Topics and Conflict Using Socially Annotated Category Structure. Paper presented at the 2009 CHI Annual Conference, Boston, MA. ———, and Robert E. Kraut. Harnessing the Wisdom of Crowds in Wikipedia: Quality through Collaboration.
Paper presented at the 2008 Association for Computing Machinery’s Computer Supported Cooperative Work Annual Conference, San Diego, CA. Konieczny, Piotr. "Governance, Organization, and Democracy on the Internet: The Iron Law and the Evolution of Wikipedia." Sociological Forum 24 (2009): 162-191. ———. "Wikipedia: Community or Social Movement?" Interface: A Journal for and about Social Movements 1 (2009): 212-232. Langlois, Ganaele, and Greg Elmer. "Wikipedia Leeches? The Promotion of Traffic through a Collaborative Web Format." New Media & Society 11 (2009): 773-794. Lih, Andrew. The Wikipedia Revolution. New York, NY: Hyperion, 2009. McHenry, Robert. "The Real Bias in Wikipedia: A Response to David Shariatmadari." OpenDemocracy.com 2006. 8 Mar. 2010 <http://www.opendemocracy.net/media-edemocracy/wikipedia_bias_3621.jsp>. Middleton, Chris. "The World of Wikinomics." Computer Weekly, 20 Jan. 2009: 22-26. Oreg, Shaul, and Oded Nov. "Exploring Motivations for Contributing to Open Source Initiatives: The Roles of Contribution, Context and Personal Values." Computers in Human Behavior 24 (2008): 2055-2073. Osterloh, Margit, and Sandra Rota. "Trust and Community in Open Source Software Production." Analyse & Kritik 26 (2004): 279-301. Royal, Cindy, and Deepina Kapila. "What’s on Wikipedia, and What’s Not…?: Assessing Completeness of Information." Social Science Computer Review 27 (2008): 138-148. Santana, Adele, and Donna J. Wood. "Transparency and Social Responsibility Issues for Wikipedia." Ethics of Information Technology 11 (2009): 133-144. Schroer, Joachim, and Guido Hertel. "Voluntary Engagement in an Open Web-Based Encyclopedia: Wikipedians and Why They Do It." Media Psychology 12 (2009): 96-120. Scott, John. Social Network Analysis. London: Sage, 1991. Vega-Redondo, Fernando. Complex Social Networks. Cambridge: Cambridge UP, 2007. Viegas, Fernanda B., Martin Wattenberg, and Matthew M. McKeon. "The Hidden Order of Wikipedia." Online Communities and Social Computing (2007): 445-454. Watts, Duncan. Six Degrees: The Science of a Connected Age. New York, NY: W. W. Norton & Company, 2003. Wikipedia. "About." n.d. 8 Mar. 2010 <http://en.wikipedia.org/wiki/Wikipedia:About>. ———. "Welcome to Wikipedia." n.d. 8 Mar. 2010 <http://en.wikipedia.org/wiki/Main_Page>. ———. "Wikiproject:Countering Systemic Bias." n.d. 12 Feb. 2010 <http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Countering_systemic_bias#Members>. Zittrain, Jonathan. The Future of the Internet and How to Stop It. New Haven, CT: Yale UP, 2008.
