To see the other types of publications on this topic, follow the link: Computers-Word Processing - General.

Journal articles on the topic 'Computers-Word Processing - General'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 35 journal articles for your research on the topic 'Computers-Word Processing - General.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Stoloff, Michael L., and James V. Couch. "A Survey of Computer Use by Undergraduate Psychology Departments in Virginia." Teaching of Psychology 14, no. 2 (April 1987): 92–94. http://dx.doi.org/10.1207/s15328023top1402_6.

Full text
Abstract:
The various uses of computers in instruction, faculty research, and departmental administration were assessed by a survey of the 36 psychology departments at four-year colleges in Virginia. Complete responses were obtained from 29 schools. The results indicated that many faculty and clerical staff use microcomputers for a variety of purposes, including word processing, statistical analysis, data-base management, and test generation. Students frequently use microcomputers for statistical analysis and word processing. Simulation and tutorial programs are in use at over half of the responding departments. More than 50% of the schools indicated that computer use is required in undergraduate statistics or research courses, and computers are being used in many other courses as well. Apple II computers are the most popular, although IBM and 13 other brands are also being used. Our data may be useful for academic psychologists who need to know how computers are used in psychology programs, and especially for those who are planning to expand their use of computers.
APA, Harvard, Vancouver, ISO, and other styles
2

Nickell, Gary S., and Paul C. Seado. "The Impact of Attitudes and Experience on Small Business Computer Use." American Journal of Small Business 10, no. 4 (April 1986): 37–48. http://dx.doi.org/10.1177/104225878601000404.

Full text
Abstract:
This study investigates the attitudes of small business owners/managers toward computers and how computers are used in small businesses. A survey of 236 firms revealed that a majority of the respondents are currently using computers. In general, owners/managers have a positive attitude toward computers. Respondents who have taken a computer class, own a microcomputer, or whose businesses are using computers have a more positive attitude toward computers. The most frequent business computer applications were accounting, mailing lists, and storing information. The most frequently reported personal applications were word processing, accounting, and budgeting. Implications for further computerization of small businesses are discussed.
APA, Harvard, Vancouver, ISO, and other styles
3

Portela, Manuel. "‘This strange process of typing on a glowing glass screen’: An Interview with Matthew Kirschenbaum." Matlit Revista do Programa de Doutoramento em Materialidades da Literatura 4, no. 2 (July 11, 2016): 267–75. http://dx.doi.org/10.14195/2182-8830_4-2_13.

Full text
Abstract:
Track Changes, by Matthew Kirschenbaum, tells the early history of word processing, roughly situated between 1964—when the IBM Magnetic Tape/Selectric Typewriter (MT/ST) was advertised as a word processing system for offices—and 1984—when the Apple Macintosh generalized the graphical user interface in personal computers. The history of word processing both as technological process and mode of textual production is deeply entangled with the changes in the technologies of writing as they reflect and contribute to efficiency and control in increasingly bureaucratic processes of social administration and organization. The literary history of word processing can be situated within this general computerization of the modes of production of writing. Kirschenbaum’s methods combine archival work in special collections and writers’ archives, oral interviews with writers and engineers, and hands-on descriptions of historical word processing machines. Track Changes is the subject of this interview.
APA, Harvard, Vancouver, ISO, and other styles
4

Bergin, T. J. "The Origins of Word Processing Software for Personal Computers: 1976-1985." IEEE Annals of the History of Computing 28, no. 4 (October 2006): 32–47. http://dx.doi.org/10.1109/mahc.2006.76.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Atnip, Gilbert W. "Teaching the Use of Computers: A Case Study." Teaching of Psychology 12, no. 3 (October 1985): 171–72. http://dx.doi.org/10.1207/s15328023top1203_18.

Full text
Abstract:
A course on the use of computers in psychology and the other social sciences is described. The course included an introduction to computers and computing and units on word processing, data analysis, data acquisition, artificial intelligence, and computer-assisted instruction, simulation, and modeling. Each unit incorporated the application of an appropriate program. Students conducted independent research projects using the computer. They evaluated the course very positively, as did the instructor. The major problems encountered in teaching the course related to the diversity of students' backgrounds in computers and in statistics, and to the difficulty of separating technique from content in assigning grades.
APA, Harvard, Vancouver, ISO, and other styles
6

Sudriyanto, Sudriyanto, Sukma Agung Adi Luwih, Syamsul Arifin, Wahyu Pratama Mukti, and Wakiludinil Hasan. "PKM Pendampingan dan Pelatihan Microsoft Office untuk Meningkatkan Keterampilan Santri Pesantren Nurul Hidayah." GUYUB: Journal of Community Engagement 3, no. 2 (August 31, 2022): 92–99. http://dx.doi.org/10.33650/guyub.v3i2.3945.

Full text
Abstract:
Technological development has eased a great deal of human work. Among the tools we benefit from are Microsoft Word, Microsoft Excel, and Microsoft PowerPoint. Microsoft Word functions as word processing software for creating, editing, and formatting documents. The students had not been able to operate Microsoft Word, Excel, and PowerPoint because these skills are not taught at school and because computer courses are too expensive for them. One remedy is to train their ability to use a word processing application, namely Microsoft Word. The community service activity therefore took the form of training in how to apply Microsoft Office for students of the Nurul Hidayah Islamic Boarding School. It is hoped that through this training the students will learn the basic techniques of using Microsoft Office, covering Microsoft Word, Microsoft Excel, and Microsoft PowerPoint. The service activities gave the students experience and skills in using Microsoft Office. Thus, the community service programme at the Nurul Hidayah Islamic Boarding School Foundation provided significant benefits for improving the students' skills in using information technology and computers. The students were very enthusiastic about participating in further training activities, and they now understand and can run Microsoft Office applications.
APA, Harvard, Vancouver, ISO, and other styles
7

Cadiz-Gabejan, Alona Medalia, and Melinda Jr C. Takenaka. "Students’ Computer Literacy and Academic Performance." Journal of World Englishes and Educational Practices 3, no. 6 (June 30, 2021): 29–42. http://dx.doi.org/10.32996/jweep.2021.3.6.4.

Full text
Abstract:
This study determined the level of computer literacy and its influence on the academic performance of junior high school students. Specifically, it probed into the students’ attitude toward computers and sought answers to the following: the extent of students’ computer literacy in terms of Word Processing, Spreadsheet, Presentation, and General Computing; their academic performance based on the mean percentage scores during the first and second quarters of the school year 2019-2020; issues and problems encountered by them relative to the extent of their computer literacy; and the solutions that may be suggested by themselves to address the constraints they encountered relative to the extent of their computer literacy. Also, by employing descriptive-correlational analysis, the study examined the significant differences in the extent of students’ computer literacy in said areas when paired according to their attitude toward computers and the significant relationship between their academic performance and the extent of their computer literacy in terms of the identified areas. Generally, the findings of the study revealed that the students needed to enhance the extent of their computer literacy in the areas of word processing, spreadsheet, presentation, and general computing. The results also signified that the greater the extent of their computer literacy in said areas, the higher their academic performance. This implied that classroom intervention activities are imperative to enhance the extent of the students' computer literacy. Thus, teachers should support them by implementing an intervention program that improves students’ level of computer literacy in the specific areas mentioned.
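For readers unfamiliar with the descriptive-correlational analysis mentioned in this abstract, the sketch below shows how such a relationship between computer-literacy ratings and academic performance could be quantified; the numbers, variable names, and sample size are invented and do not come from the study.

```python
# Hypothetical illustration of a descriptive-correlational check between
# computer-literacy ratings and mean percentage scores; all data are invented.
from scipy.stats import pearsonr

literacy_scores = [2.1, 3.4, 4.0, 2.8, 3.9, 3.1]   # e.g., 1-5 self-rated literacy
mean_percentage = [75, 82, 90, 78, 88, 80]          # academic performance (MPS)

r, p_value = pearsonr(literacy_scores, mean_percentage)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")    # a positive r would support the finding
```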
APA, Harvard, Vancouver, ISO, and other styles
8

Chakraborty, Pratic. "Embedded Machine Learning and Embedded Systems in the Industry." International Journal for Research in Applied Science and Engineering Technology 9, no. 11 (November 30, 2021): 1872–75. http://dx.doi.org/10.22214/ijraset.2021.39067.

Full text
Abstract:
Machine learning is the buzzword right now. With machine learning algorithms one can make a computer differentiate between a human and a cow, detect objects, predict different parameters, and process our native languages. But all these algorithms require a fair amount of processing power in order to be trained and fitted as a model. Thankfully, with current improvements in technology, the processing power of computers has increased significantly. Yet server computers face limitations in power consumption and deployability. This is where “tinyML” helps the industry out. Machine learning has never been so easy to access before!
APA, Harvard, Vancouver, ISO, and other styles
9

Thorning, L. "Introduction of new computing facilities at the Geological Survey of Greenland." Rapport Grønlands Geologiske Undersøgelse 140 (December 31, 1988): 7–9. http://dx.doi.org/10.34194/rapggu.v140.8023.

Full text
Abstract:
From a cautious start in the use of computers in the early 1970s, the Geological Survey of Greenland has developed complex and varied uses of modern computer facilities for both scientific and administrative tasks. GGU's first computer installation, a noisy TTY connected to the Computing Centre of Copenhagen University by a 110 baud telephone modem, was a self-service facility which was not easy to use. Over the years, first with use of a PDP-10 with just one Tektronix 4014 graphic terminal and later a succession of increasingly powerful PDP-11s with many terminals, GGU's in-house facilities just kept ahead of the ever increasing demand for computer services. At the same time a number of programs for special tasks were developed on external facilities, because they required larger computers or special facilities. In the 1980s the demands on the computer facilities, requiring many different types of programs, including word processing, had grown so large that GGU's in-house system could no longer handle them satisfactorily. A major reorganisation was required, and consequently activities were divided between personal computers (PCs; mainly administrative) and a new central computer (mainly scientific). This development took place in late 1986 with the purchase of 17 new personal computers and a new central computer with accessory peripheral equipment. This has allowed an increasing integration of computer methods into GGU's activities. A brief summary is given below.
APA, Harvard, Vancouver, ISO, and other styles
10

Hao, Zhoushao. "Naive Bayesian Prediction of Japanese Annotated Corpus for Textual Semantic Word Formation Classification." Mathematical Problems in Engineering 2022 (March 16, 2022): 1–14. http://dx.doi.org/10.1155/2022/8048335.

Full text
Abstract:
With the rapid development of Japanese information processing technology, problems such as polysemy and ambiguity at the text and dialogue level, as well as unregistered words, have become increasingly prominent because computers cannot fully “understand” the semantics of words. Making the computer “understand” word semantics accurately requires it to “understand”, from a semantic perspective, the rules by which words are converted and combined into other words. Traditional Japanese text classification mostly adopts the vector space model as its text representation, which tends to produce confused classification results. Therefore, this paper proposes constructing a semantic word formation pattern prediction model based on a large-scale annotated corpus. This paper proposes a solution that combines Japanese semantic word formation rules with pattern recognition algorithms. For this scheme, a variety of pattern recognition algorithms were compared and analyzed, and the naive Bayesian model was chosen to predict semantic word formation patterns. This paper further improves the accuracy of computer prediction of Japanese semantic word formation patterns by adding part-of-speech information. Before modeling, the parts of speech of words are automatically tagged and manually checked based on the original annotated corpus. In the research on predicting Japanese semantic word formation patterns, this paper builds a semantic word formation pattern prediction model based on Naive Bayes and conducts simulation experiments. We divide the eight types of semantic word formation patterns in the annotated corpus into two groups and split the resulting sample sets into training and test sets, so that the Naive Bayes model first learns semantic word formation rules from each group's training set and then predicts semantic word formation patterns on each group's test set. The simulation results show that the semantic word formation pattern prediction model has a generally high degree of fit and prediction accuracy, and that a prediction model built on this basis enables the computer to judge semantic word formation patterns more accurately.
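As a rough illustration of the technique named in this abstract (not the authors' code, corpus, or pattern inventory), the sketch below trains a naive Bayes classifier on invented morpheme-plus-part-of-speech samples to predict a word-formation pattern label:

```python
# Minimal sketch of naive Bayes prediction of word-formation patterns.
# Samples list component morphemes with POS tags (mirroring the paper's use of
# part-of-speech information); morphemes and pattern names are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

samples = [
    "hana NOUN mi VERB",      # e.g. "flower" + "see"
    "te NOUN gami NOUN",      # e.g. "hand" + "paper"
    "naga ADJ ame NOUN",      # e.g. "long" + "rain"
    "yama NOUN michi NOUN",   # e.g. "mountain" + "road"
]
labels = ["verb-object", "modifier-head", "modifier-head", "modifier-head"]

vectorizer = CountVectorizer(lowercase=False)
X = vectorizer.fit_transform(samples)
model = MultinomialNB().fit(X, labels)

new_word = vectorizer.transform(["kawa NOUN kishi NOUN"])  # hypothetical new compound
print(model.predict(new_word))                             # -> ['modifier-head']
```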
APA, Harvard, Vancouver, ISO, and other styles
11

Korhonen, Anna. "Automatic lexical classification: bridging research and practice." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 368, no. 1924 (August 13, 2010): 3621–32. http://dx.doi.org/10.1098/rsta.2010.0039.

Full text
Abstract:
Natural language processing (NLP)—the automatic analysis, understanding and generation of human language by computers—is vitally dependent on accurate knowledge about words. Because words change their behaviour between text types, domains and sub-languages, a fully accurate static lexical resource (e.g. a dictionary, word classification) is unattainable. Researchers are now developing techniques that could be used to automatically acquire or update lexical resources from textual data. If successful, the automatic approach could considerably enhance the accuracy and portability of language technologies, such as machine translation, text mining and summarization. This paper reviews the recent and on-going research in automatic lexical acquisition. Focusing on lexical classification, it discusses the many challenges that still need to be met before the approach can benefit NLP on a large scale.
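To make the idea of automatic lexical classification concrete, here is a toy sketch, entirely separate from the paper: each verb is represented by invented co-occurrence contexts and the resulting vectors are clustered. Real systems use far richer features, such as subcategorization frames and selectional preferences.

```python
# Toy automatic lexical classification: cluster verbs by co-occurring words.
# The contexts below are invented; cluster ids are arbitrary labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

contexts = {
    "eat":   "food dinner kitchen meal hungry",
    "drink": "water dinner kitchen meal glass",
    "walk":  "street park morning slowly miles",
    "run":   "street park morning race miles",
}
verbs = list(contexts)
X = CountVectorizer().fit_transform(contexts.values())
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(dict(zip(verbs, clusters)))   # e.g. {'eat': 0, 'drink': 0, 'walk': 1, 'run': 1}
```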
APA, Harvard, Vancouver, ISO, and other styles
12

Zhang, Liting. "Design of New Word Retrieval Algorithm for Chinese-English Bilingual Parallel Corpus." Mathematical Problems in Engineering 2022 (March 26, 2022): 1–9. http://dx.doi.org/10.1155/2022/6399375.

Full text
Abstract:
Natural language processing is an important direction in the field of computer science and artificial intelligence. It covers the theories and methods that enable effective communication between humans and computers using natural language. Machine learning is a branch of natural language processing research, which here is based on a large-scale English-Chinese database. Because the aligned corpus of English-Chinese bilingual sentences containing unknown words is relatively poor, machine translation is unprofessional and unbalanced; this is the problem studied in this paper. The purpose of this paper is to design and implement a length-based system for sentence alignment between English and Chinese bilingual texts. The research content of this paper is mainly divided into the following parts. First, an evaluation function for bilingual sentence alignment is designed, and on this basis the length-based bilingual sentence alignment algorithm and the optimal sentence-pair sequence search algorithm are designed. In this paper, China National Knowledge Infrastructure (CNKI) is selected as an English-Chinese bilingual candidate website and English-Chinese bilingual web pages are downloaded. After analyzing the downloaded pages, non-text content such as page tags is removed, and the bilingual text information is stored so as to establish an English-Chinese bilingual corpus based on segment alignment while retaining the English-Chinese bilingual keywords in the web pages. Second, a dictionary is extracted from the software StarDict; its original format is analyzed and converted into a custom dictionary format that the sentence alignment system can use more conveniently, which helps expand the number of dictionaries and increase the professionalism of the vocabulary. Finally, we extract the stems of English words from the established corpus to simplify English word processing, reduce the noise caused by part-of-speech conversion, and improve operational efficiency. A length-based bilingual sentence alignment system is implemented, and its parameters are adjusted in comparative experiments to test system performance.
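The following is a simplified sketch of the length-based alignment idea in the spirit of Gale and Church, not the system described in this abstract. It considers only 1-1 and skip alignments, uses raw character lengths (a real implementation would calibrate the expected English-to-Chinese length ratio), and the skip penalty is arbitrary.

```python
import math

def cost(len_en, len_zh):
    """Penalty for pairing an English sentence of len_en characters with a
    Chinese sentence of len_zh characters; a zero length means a skip."""
    if len_en == 0 or len_zh == 0:
        return 4.0                       # flat penalty for an unaligned sentence
    return abs(math.log(len_en / len_zh))  # small when lengths are proportional

def align(en_sents, zh_sents):
    """Dynamic programming over 1-1, 1-0 and 0-1 moves; returns sentence pairs."""
    n, m = len(en_sents), len(zh_sents)
    dp = [[math.inf] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i > 0 and j > 0:
                c = dp[i - 1][j - 1] + cost(len(en_sents[i - 1]), len(zh_sents[j - 1]))
                if c < dp[i][j]:
                    dp[i][j], back[i][j] = c, (i - 1, j - 1)
            if i > 0:
                c = dp[i - 1][j] + cost(len(en_sents[i - 1]), 0)
                if c < dp[i][j]:
                    dp[i][j], back[i][j] = c, (i - 1, j)
            if j > 0:
                c = dp[i][j - 1] + cost(0, len(zh_sents[j - 1]))
                if c < dp[i][j]:
                    dp[i][j], back[i][j] = c, (i, j - 1)
    pairs, i, j = [], n, m
    while (i, j) != (0, 0):               # trace back the optimal pair sequence
        pi, pj = back[i][j]
        pairs.append((en_sents[pi] if pi < i else None,
                      zh_sents[pj] if pj < j else None))
        i, j = pi, pj
    return list(reversed(pairs))

print(align(["The cat sleeps.", "It is raining."], ["猫在睡觉。", "下雨了。"]))
```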
APA, Harvard, Vancouver, ISO, and other styles
13

Jacobson, Michael J., and Martha H. Weller. "A Profile of Computer Use among the University of Illinois Humanities Faculty." Journal of Educational Technology Systems 16, no. 2 (December 1987): 83–98. http://dx.doi.org/10.2190/x1v2-d2y9-megp-0uve.

Full text
Abstract:
The faculty of the School of Humanities of the University of Illinois at Urbana-Champaign (UIUC) were surveyed to assess their current use of and attitudes towards educational computing. The respondents were generally self-trained in computer use, indicated positive attitudes to, and made frequent use of computers. Frequency of computer use, level of general computing skills, computer interest, and anxiety were analyzed according to respondent rank, sex, and age. Faculty perceptions of obstacles to computer use in the humanities indicate a need to address issues of funding for hardware, quality of software, training, and technical support. The main faculty interests in applications software include word processing, desktop publishing, graphics, database management, communications, and computer-assisted instruction. While recognizing that humanities faculty do not have the same level of involvement in computing as faculty in more “technical” disciplines, UIUC humanists, as a group, are clearly not intimidated by computer technology.
APA, Harvard, Vancouver, ISO, and other styles
14

Li, Huasu. "Research on Feature Extraction and Chinese Translation Method of Internet-of-Things English Terminology." Computational Intelligence and Neuroscience 2022 (April 28, 2022): 1–11. http://dx.doi.org/10.1155/2022/6344571.

Full text
Abstract:
Feature extraction and Chinese translation of Internet-of-Things English terms are the basis of many natural language processing. Its main purpose is to extract rich semantic information from unstructured texts to allow computers to further calculate and process them to meet different types of NLP-based tasks. However, most of the current methods use simple neural network models to count the word frequency or probability of words in the text, and it is difficult to accurately understand and translate IoT English terms. In response to this problem, this study proposes a neural network for feature extraction and Chinese translation of IoT English terms based on LSTM, which can not only correctly extract and translate IoT English vocabulary but also realize the feature correspondence between English and Chinese. The neural network proposed in this study has been tested and trained on multiple datasets, and it basically fulfills the requirements of feature translation and Chinese translation of Internet-of-Things terms in English and has great potential in the follow-up work.
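As a hedged sketch of the general LSTM technique mentioned in this abstract (the authors' actual architecture, data, and hyperparameters are not given there), the snippet below builds a small bidirectional LSTM tagger that marks which tokens of a sentence belong to an IoT term; the vocabulary size, sequence length, and random training data are all invented placeholders.

```python
import numpy as np
import tensorflow as tf

vocab_size, maxlen = 5000, 20   # assumed vocabulary size and padded sentence length

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(maxlen,), dtype="int32"),
    tf.keras.layers.Embedding(vocab_size, 64),                        # token vectors
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(1, activation="sigmoid")),              # 1 = part of an IoT term
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random placeholder data standing in for tokenized sentences and term labels.
X = np.random.randint(1, vocab_size, size=(8, maxlen))
y = np.random.randint(0, 2, size=(8, maxlen, 1)).astype("float32")
model.fit(X, y, epochs=2, verbose=0)
```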
APA, Harvard, Vancouver, ISO, and other styles
15

KR, Puneetha. "Proactively Discouraging Cyberbullying Activities." International Journal for Research in Applied Science and Engineering Technology 9, no. 10 (October 31, 2021): 1601–7. http://dx.doi.org/10.22214/ijraset.2021.38496.

Full text
Abstract:
Research into cyberbullying detection has increased in recent years, due in part to the proliferation of cyberbullying across social media and its detrimental effect on young people. Cyberbullying is one of the most common problems faced by internet users, making the internet a vulnerable space, so some form of detection is needed on social media platforms. Detecting bullies online as early as possible makes these platforms safer for users, so that the internet can remain a place to share information and pursue leisure activities. Even though some research has gone into implementing the detection and prevention of cyberbullying, it is not completely feasible due to certain limitations. In this paper, a lexicon-based approach using the NLTK SentiWordNet resource is used to differentiate positive and negative words and produce results: words are assigned values greater than or less than zero for positive and negative words respectively. Lexicon-based systems utilize word lists and use the presence of words within the lists to detect cyberbullying. Lemmatization is used to find the root word. This paper essentially maps out the state of the art in cyberbullying detection research and serves as a resource for researchers to determine where to best direct their future research efforts in this field. Keywords: abuse and crime involving computers, natural language processing, sentiment analysis, social networking.
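As a minimal sketch of the lexicon-based idea described above (not the authors' implementation), the snippet below lemmatizes each token and sums SentiWordNet positive and negative scores using NLTK; the first-sense heuristic and the example messages are simplifications invented for illustration.

```python
# Crude lexicon-based scoring with NLTK's SentiWordNet; negative totals suggest
# hostile text. One-time downloads are commented out for a first run.
import nltk
from nltk.corpus import sentiwordnet as swn
from nltk.stem import WordNetLemmatizer

# nltk.download("punkt"); nltk.download("wordnet"); nltk.download("sentiwordnet")

lemmatizer = WordNetLemmatizer()

def message_score(text):
    score = 0.0
    for token in nltk.word_tokenize(text.lower()):
        lemma = lemmatizer.lemmatize(token)            # reduce to the root word
        synsets = list(swn.senti_synsets(lemma))
        if synsets:                                    # take the first sense only
            score += synsets[0].pos_score() - synsets[0].neg_score()
    return score

print(message_score("you are a horrible and stupid person"))  # clearly negative
print(message_score("thanks for being such a kind friend"))   # clearly positive
```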
APA, Harvard, Vancouver, ISO, and other styles
16

Pepe, Sveva, Edoardo Barba, Rexhina Blloshmi, and Roberto Navigli. "STEPS: Semantic Typing of Event Processes with a Sequence-to-Sequence Approach." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (June 28, 2022): 11156–64. http://dx.doi.org/10.1609/aaai.v36i10.21365.

Full text
Abstract:
Enabling computers to comprehend the intent of human actions by processing language is one of the fundamental goals of Natural Language Understanding. An emerging task in this context is that of free-form event process typing, which aims at understanding the overall goal of a protagonist in terms of an action and an object, given a sequence of events. This task was initially treated as a learning-to-rank problem by exploiting the similarity between processes and action/object textual definitions. However, this approach appears to be overly complex, binds the output types to a fixed inventory for possible word definitions and, moreover, leaves space for further enhancements as regards performance. In this paper, we advance the field by reformulating the free-form event process typing task as a sequence generation problem and put forward STEPS, an end-to-end approach for producing user intent in terms of actions and objects only, dispensing with the need for their definitions. In addition to this, we eliminate several dataset constraints set by previous works, while at the same time significantly outperforming them. We release the data and software at https://github.com/SapienzaNLP/steps.
APA, Harvard, Vancouver, ISO, and other styles
17

Ekpenyong, Moses Effiong, Eno-Abasi Essien Urua, Aniefon Daniel Akpan, Olufemi Sunday Adeoye, and Aminu Alhaji Suleiman. "A Template-Based Approach to Intelligent Multilingual Corpora Transcription." International Journal of Humanities and Arts Computing 16, no. 2 (October 2022): 182–213. http://dx.doi.org/10.3366/ijhac.2022.0290.

Full text
Abstract:
Emerging linguistic problems are data-driven and multidisciplinary, requiring richly transcribed corpora. Accurate corpus transcription therefore demands intelligent protocols that satisfy the following important criteria: 1) acceptability by end-users, computers/machines; 2) conformity to existing language standards, rules and structures; and 3) representation within the context of the intended language domain. To demonstrate the feasibility of these criteria, a template-based framework for multilingual transcription was proposed and implemented. The first version of the developed transcription tool, also called SCAnnAL (Speech Corpus Annotator for African Languages), applies signal processing to pre-segment waveforms of a recorded speech corpus, into word, syllable and phoneme units, resulting in a pre-segmented TextGrid file with empty labels. Using preformatted templates, the front-end or linguistic aspects/datasets (the text corpus, vowels inventory, consonants inventory, and a set of syllabification rules) are specified in a default language. A Natural Language Understanding (NLU) algorithm then uses these datasets with a data-driven syllabification algorithm to relabel subtrees of the TextGrid file. Tone pattern models were finally constructed from translations of experimental data, using the Ibadan 400 words (a list of basic items of a language), for four Nigerian tone languages. Integration of the tone pattern models into the transcription system is expected in a future paper. This research will benefit emerging digital humanists and computational linguists working on language data, as well as open new opportunities for improved African tone language speech processing systems.
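To illustrate the kind of rule-driven segmentation the template approach relies on, here is an invented toy, not the SCAnnAL algorithm and not a real African-language inventory: it syllabifies a word over a declared vowel set, attaching each consonant cluster to the following vowel and any trailing consonants to the last syllable.

```python
VOWELS = set("aeiou")   # toy inventory; real templates declare language-specific sets

def syllabify(word):
    """Greedy onset-nucleus syllabification with word-final consonants as coda."""
    syllables, i = [], 0
    while i < len(word):
        onset = ""
        while i < len(word) and word[i] not in VOWELS:  # gather the consonant cluster
            onset += word[i]
            i += 1
        if i < len(word):                               # vowel nucleus found
            syllables.append(onset + word[i])
            i += 1
        elif syllables:                                 # trailing consonants -> coda
            syllables[-1] += onset
        else:                                           # word with no vowel at all
            syllables.append(onset)
    return syllables

print(syllabify("akpan"))    # ['a', 'kpan']
print(syllabify("ibadan"))   # ['i', 'ba', 'dan']
```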
APA, Harvard, Vancouver, ISO, and other styles
18

Emmert-Streib, Frank. "From the Digital Data Revolution toward a Digital Society: Pervasiveness of Artificial Intelligence." Machine Learning and Knowledge Extraction 3, no. 1 (March 4, 2021): 284–98. http://dx.doi.org/10.3390/make3010014.

Full text
Abstract:
Technological progress has led to powerful computers and communication technologies that nowadays penetrate all areas of science, industry and our private lives. As a consequence, all these areas are generating digital traces of data amounting to big data resources. This opens unprecedented opportunities but also challenges toward the analysis, management, interpretation and responsible usage of such data. In this paper, we discuss these developments and the fields that have been particularly affected by the digital revolution. Our discussion is AI-centered, showing domain-specific prospects but also intricacies for the method development in artificial intelligence. For instance, we discuss recent breakthroughs in deep learning algorithms and artificial intelligence as well as advances in text mining and natural language processing, e.g., word-embedding methods that enable the processing of large amounts of text data from diverse sources such as governmental reports, blog entries in social media or clinical health records of patients. Furthermore, we discuss the necessity of further improving general artificial intelligence approaches and of utilizing advanced learning paradigms. This leads to arguments for the establishment of statistical artificial intelligence. Finally, we provide an outlook on important aspects of future challenges that are of crucial importance for the development of all fields, including ethical AI and the influence of bias on AI systems. As a potential end-point of this development, we define digital society as the asymptotic limiting state of digital economy that emerges from fully connected information and communication technologies enabling the pervasiveness of AI. Overall, our discussion provides a perspective on the elaborate relatedness of digital data and AI systems.
APA, Harvard, Vancouver, ISO, and other styles
19

Sasne, Ajinkya, Ashutosh Banait, Apurva Raut, and Vishal Raut. "Brain Machine Interface." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 3641–42. http://dx.doi.org/10.22214/ijraset.2022.43218.

Full text
Abstract:
A brain-machine interface is also known as a brain-computer interface. A brain-computer interface (BCI), sometimes called a direct neural interface or a brain-machine interface, is a direct communication pathway between a human or animal brain and an external device. In one-way BCIs, computers either accept commands from the brain or send signals to it (for example, to restore vision) but not both. Two-way BCIs would allow brains and external devices to exchange information in both directions but have yet to be successfully implanted in animals or humans. In this definition, the word brain means the brain or nervous system of an organic life form rather than the mind. Computer means any processing or computational device, from simple circuits to silicon chips. Research on BCIs began in the 1970s, but it wasn't until the mid-1990s that the first working experimental implants in humans appeared. Following years of animal experimentation, early working implants in humans now exist, designed to restore damaged hearing, sight and movement. With recent advances in technology and knowledge, pioneering researchers could now conceivably attempt to produce BCIs that augment human functions rather than simply restoring them, previously only a possibility in science fiction.
APA, Harvard, Vancouver, ISO, and other styles
20

Naufal, Mohammad Farid, and Selvia Ferdiana Kusuma. "Natural Language Processing untuk Otomatisasi Pengenalan Pronomina dalam Kalimat Bahasa Indonesia." Jurnal Teknologi Informasi dan Ilmu Komputer 9, no. 5 (October 31, 2022): 1011. http://dx.doi.org/10.25126/jtiik.2022946394.

Full text
Abstract:
A pronoun is a word that can be used to replace a noun or person in a sentence. The use of pronouns is easy to understand if a series of sentences is read in its entirety. However, if only particular sentences in the series are read, sentences containing pronouns become difficult to understand. In natural language processing, the context of a sentence needs to be clear, and the presence of pronouns can make it difficult for computers to understand a sentence. Therefore, processing natural language that contains pronouns requires a pre-processing step that converts each pronoun into the original subject or object it refers to. The method proposed to solve this problem is a syntactic-based approach, which focuses on the structure of the words used and their components. The proposed method has four stages: data collection, rule generation, automated pronoun recognition, and evaluation. It was evaluated on sentences from elementary-school Natural Sciences and Social Sciences material to identify the presence of pronouns. The evaluation results show that the proposed method can be used to change a subject in the form of a pronoun into the original subject or object referred to, with an average accuracy of 81%, computed as the proportion of pronouns whose subject was correctly identified out of all test data. Researchers in Natural Language Processing can use the results of this study to pre-process the text they will work with.
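As a loose illustration of the rule-based direction described above (the rule, names, and data here are invented and far simpler than the paper's syntactic approach), this sketch replaces a pronoun subject with the most recently named subject and could be scored with the same identified-over-total accuracy measure.

```python
# Naive pronoun-to-subject substitution; PRONOUNS, the example sentences and
# the known-subject list are all hypothetical.
PRONOUNS = {"he", "she", "it", "they"}

def resolve(sentences, known_subjects):
    """Replace a pronoun in subject position with the last named subject."""
    last_subject, resolved = None, []
    for sent in sentences:
        words = sent.split()
        if words and words[0].lower() in PRONOUNS and last_subject:
            words[0] = last_subject                 # substitute the referent
        if words and words[0] in known_subjects:
            last_subject = words[0]                 # remember the latest subject
        resolved.append(" ".join(words))
    return resolved

sents = ["Budi reads a book", "He likes the story"]
print(resolve(sents, {"Budi"}))   # ['Budi reads a book', 'Budi likes the story']
```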
APA, Harvard, Vancouver, ISO, and other styles
21

Jagannatha, S., and B. N. Tulasimala. "A Comprehensive Study on Commercial Applications of Cloud Computing." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 4411–18. http://dx.doi.org/10.1166/jctn.2020.9088.

Full text
Abstract:
In the world of information and communication technology (ICT), the term cloud computing has been the buzzword. The definition of cloud computing keeps changing with the way technocrats use it in different environments. As a definition, cloud computing remains very contentious: definitions are stated relative to a particular application, with no unanimous definition, making the term altogether elusive. In spite of this, it is this technology that is revolutionizing the traditional usage of computer hardware, software, data storage media and processing mechanisms, with more benefits to the stakeholders. In the past, the use of autonomous computers and of interconnected nodes forming computer networks with shared software resources minimized hardware and software costs to a certain extent. Evolutionary changes in computing technology over a few decades have thus brought platform and environment changes in machine architecture, operating systems, network connectivity and application workloads. This has made the commercial use of the technology more predominant. Instead of centralized systems, parallel and distributed systems will be preferred to solve computational problems in the business domain. Such hardware is ideal for solving large-scale problems over the internet; this computing model is data-intensive and network-centric. Most organizations using ICT used to find it a challenge to store huge volumes of data, maintain and process them, and communicate through the internet to automate the entire process. In this paper we explore the growth of cloud computing technology over several years: how high-performance computing systems and high-throughput computing systems enhance computational performance, and how, according to various experts, the scientific community and the service providers, cloud computing technology is going to be more cost-effective across different dimensions of business.
APA, Harvard, Vancouver, ISO, and other styles
22

Porwoł, Monika. "NLP ‘RECIPES’ FOR TEXT CORPORA: APPROACHES TO COMPUTING THE PROBABILITY OF A SEQUENCE OF TOKENS." Studia Philologica 2, no. 15 (2020): 6–13. http://dx.doi.org/10.28925/2311-2425.2021.151.

Full text
Abstract:
Investigation into hybrid architectures for Natural Language Processing (NLP) requires overcoming complexity in various intellectual traditions pertaining to computer science, formal linguistics, logic, digital humanities, ethical issues and so on. NLP, as a subfield of computer science and artificial intelligence, is concerned with interactions between computers and human (natural) languages. It is used to apply machine learning algorithms to text (and speech) in order to create systems such as: machine translation (converting from text in a source language to text in a target language), document summarization (converting long texts into short texts), named entity recognition, predictive typing, et cetera. Undoubtedly, NLP phenomena have been implanted in our daily lives; for instance, automatic Machine Translation (MT) is omnipresent in social media (or on the world wide web), virtual assistants (Siri, Cortana, Alexa, and so on) can recognize a natural voice, and e-mail services use detection systems to filter out spam messages. The purpose of this paper, however, is to outline the linguistic and NLP methods of textual processing. Therefore, the bag-of-n-grams concept is discussed here as an approach to extract more detail about the textual data in a string of grouped words. The n-gram language model presented in this paper (which assigns probabilities to sequences of words in text corpora) is based on findings compiled in Sketch Engine, as well as samples of language data processed by means of the NLTK library for Python. Why would one want to compute the probability of a word sequence? The answer is quite obvious – in various systems for performing tasks, the goal is to generate texts that are more fluent. Therefore, a particular component is required, which computes the probability of the output text. The idea is to collect information on how frequently the n-grams occur in a large text corpus and use it to predict the next word. Counting the number of occurrences also involves certain drawbacks, for instance problems with sparsity or storage. Nonetheless, the language models and specific computing ‘recipes’ described in this paper can be used in many applications, such as machine translation, summarization, even dialogue systems, etc. Lastly, it has to be pointed out that this piece of writing is part of an ongoing work tentatively termed LADDER (Linguistic Analysis of Data in the Digital Era of Research) that touches upon the process of datacization, which might help to create an intelligent system of interdisciplinary information.
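A concrete miniature of the n-gram 'recipe' sketched above, using the NLTK library for Python on an invented toy corpus; the model is an unsmoothed maximum-likelihood bigram estimate, whereas a real system would add smoothing to handle sparsity.

```python
# Bigram language model from counts: P(next | prev) and sequence probability.
from collections import Counter
import nltk

corpus = ("natural language processing lets computers process text . "
          "computers process text with language models .").split()

bigram_counts = Counter(nltk.bigrams(corpus))
unigram_counts = Counter(corpus)

def p(next_word, prev_word):
    """Maximum-likelihood estimate of P(next_word | prev_word)."""
    if unigram_counts[prev_word] == 0:
        return 0.0
    return bigram_counts[(prev_word, next_word)] / unigram_counts[prev_word]

def sequence_probability(words):
    """Probability of a word sequence under the bigram model (no smoothing)."""
    prob = 1.0
    for prev_word, next_word in nltk.bigrams(words):
        prob *= p(next_word, prev_word)
    return prob

print(p("process", "computers"))                               # 1.0 in this toy corpus
print(sequence_probability("computers process text .".split()))  # e.g. 0.5 here
```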
APA, Harvard, Vancouver, ISO, and other styles
23

McCarthy, Diana. "Computers getting the drift." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 365, no. 1861 (September 21, 2007): 3019–31. http://dx.doi.org/10.1098/rsta.2007.0010.

Full text
Abstract:
Natural language processing is the study of computer programs that can understand and produce human language. An important goal in the research to produce such technology is identifying the right meaning of words and phrases. In this paper, we give an overview of current research in three areas: (i) inducing word meaning; (ii) distinguishing different meanings of words used in context; and (iii) determining when the meaning of a phrase cannot straightforwardly be obtained from its parts. Manual construction of resources is labour intensive and costly and furthermore may not reflect the meanings that are useful for the task or data at hand. For this reason, we focus particularly on systems that use samples of language data to learn about meanings, rather than examples annotated by humans.
APA, Harvard, Vancouver, ISO, and other styles
24

Preddie, Martha Ingrid. "Canadian Public Library Users are Unaware of Their Information Literacy Deficiencies as Related to Internet Use and Public Libraries are Challenged to Address These Needs." Evidence Based Library and Information Practice 4, no. 4 (December 14, 2009): 58. http://dx.doi.org/10.18438/b8sp7f.

Full text
Abstract:
A Review of: Julien, Heidi and Cameron Hoffman. “Information Literacy Training in Canada’s Public Libraries.” Library Quarterly 78.1 (2008): 19-41. Objective – To examine the role of Canada’s public libraries in information literacy skills training, and to ascertain the perspectives of public library Internet users with regard to their experiences of information literacy. Design – Qualitative research using semi-structured interviews and observations. Setting – Five public libraries in Canada. Subjects – Twenty-eight public library staff members and twenty-five customers. Methods – This study constituted the second phase of a detailed examination of information literacy (IL) training in Canadian public libraries. Five public libraries located throughout Canada were selected for participation. These comprised a large central branch of a public library located in a town with a population of approximately two million, a main branch of a public library in an urban city of about one million people, a public library in a town with a population of about 75,000, a library in a town of 900 people and a public library located in the community center of a Canadian First Nations reserve that housed a population of less than 100 persons. After notifying customers via signage posted in the vicinity of computers and Internet access areas, the researchers observed each patron as they accessed the Internet via library computers. Observations focused on the general physical environment of the Internet access stations, customer activities and use of the Internet, as well as the nature and degree of customer interactions with each other and with staff. Photographs were also taken and observations were recorded via field notes. The former were analyzed via qualitative content analysis while quantitative analysis was applied to the observations. Additionally, each observed participant was interviewed immediately following Internet use. Interview questions focused on a range of issues including the reasons why customers used the Internet in public libraries, customers’ perceptions about their level of information literacy and their feelings with regard to being information literate, the nature of their exposure to IL training, the benefits they derived from such training, and their desire for further training. Public service librarians and other staff were also interviewed in a similar manner. These questions sought to ascertain staff views on the role of the public library with regard to IL training; perceptions of the need for and expected outcomes of such training; as well as the current situation pertinent to the provision of IL skills training in their respective libraries in terms of staff competencies, resource allocation, and the forms of training and evaluation. Interviews were recorded and transcribed. Data were interpreted via qualitative content analysis through the use of NVivo software. Main Results – Men were more frequent users of public library computers than women, outnumbering them by a ratio ranging from 2:1 to 3.4:1. Customers appeared to be mostly under the age of 30 and of diverse ethnicities. The average income of interviewed customers was less than the Canadian average. The site observations revealed that customers were seen using the Internet mainly for the purposes of communication (e.g., e-mail, instant messaging, online dating services). Such use was observed 78 times in four of the libraries. 
Entertainment accounted for 43 observations in all five sites and comprised activities such as online games, music videos, and movie listings. Twenty-eight observations involved business/financial uses (e.g., online shopping, exploration of investment sites, online banking). The use of search engines (25 observations), news information (23), foreign language and forum websites (21), and word processing were less frequently observed. Notably, there were only 20 observed library-specific uses (e.g., searching online catalogues, online database and library websites). Customers reported that they used the Internet mainly for general web searching and for e-mail. It was also observed that in general the physical environment was not conducive to computer use due to uncomfortable or absent seating and a lack of privacy. Additionally, only two sites had areas specifically designated for IL instruction. Of the 25 respondents, 19 reported at least five years experience with the Internet, 9 of whom cited experience of 10 years or more. Self-reported confidence with the Internet was high: 16 individuals claimed to be very confident, 7 somewhat confident, and only 2 lacking in confidence. There was a weak positive correlation between years of use and individuals’ reported levels of confidence. Customers reported interest in improving computer literacy (e.g., keyboarding ability) and IL skills (ability to use more sources of information). Some expressed a desire “to improve certain personal attitudes” (30), such as patience when conducting Internet searches. When presented with the Association of College and Research Libraries’ definition of IL, 13 (52%) of those interviewed claimed to be information literate, 8 were ambivalent, and 4 admitted to being information illiterate. Those who professed to be information literate had no particular feeling about this state of being, however 10 interviewees admitted feeling positive about being able to use the Internet to retrieve information. Most of those interviewed (15) disagreed that a paucity of IL skills is a deterrent to “accessing online information efficiently and effectively” (30). Eleven reported development of information skills through self teaching, while 8 cited secondary schools or tertiary educational institutions. However, such training was more in terms of computer technology education than IL. Eleven of the participants expressed a desire for additional IL training, 5 of whom indicated a preference for the public library to supply such training. Customers identified face-to-face, rather than online, as the ideal training format. Four interviewees identified time as the main barrier to Internet use and online access. As regards library staff, 22 (78.6%) of those interviewed posited IL training as an important role for public libraries. Many stated that customers had been asking for formal IL sessions with interest in training related to use of the catalogue, databases, and productivity software, as well as searching the web. Two roles were identified in the context of the public librarian as a provider of IL: “library staff as teachers/agents of empowerment and library staff as ‘public parents’” (32). The former was defined as supporting independent, lifelong learning through the provision of IL skills, and the latter encompassing assistance, guidance, problem solving, and filtering of unsuitable content. 
Staff identified challenges to IL training as societal challenges (e.g., need for customers to be able to evaluate information provided by the media, the public library’s role in reducing the digital divide), institutional (e.g., marketing of IL programs, staff constraints, lack of budget for IL training), infrastructural (e.g., limited space, poor Internet access in library buildings) and pedagogical challenges, such as differing views pertinent to the philosophy of IL, as well as the low levels of IL training to which Canadian students at all levels had been previously exposed. Despite these challenges library staff acknowledged positive outcomes resulting from IL training in terms of customers achieving a higher level of computer literacy, becoming more skillful at searching, and being able to use a variety of information sources. Affective benefits were also apparent such as increased independence and willingness to learn. Library staff also identified life expanding outcomes, such as the use of IL skills to procure employment. In contrast to customer self-perception, library staff expressed that customers’ IL skills were low, and that this resulted in their avoidance of “higher-level online research” and the inability to “determine appropriate information sources” (36). Several librarians highlighted customers’ incapacity to perform simple activities such as opening an email account. Library staff also alluded to customer’s reluctance to ask them for help. Libraries in the study offered a wide range of training. All provided informal, personalized training as needed. Formal IL sessions on searching the catalogue, online searching, and basic computer skills were conducted by the three bigger libraries. A mix of librarians and paraprofessional staff provided the training in these libraries. However, due to a lack of professional staff, the two smaller libraries offered periodic workshops facilitated by regional librarians. All the libraries lacked a defined training budget. Nonetheless, the largest urban library was well-positioned to offer IL training as it had a training coordinator, a training of trainers program, as well as technologically-equipped training spaces. The other libraries in this study provided no training of trainers programs and varied in terms of the adequacy of spaces allocated for the purpose of training. The libraries also varied in terms of the importance placed on the evaluation of IL training. At the largest library evaluation forms were used to improve training initiatives, while at the small town library “evaluations were done anecdotally” (38). Conclusion – While Internet access is available and utilized by a wide cross section of the population, IL skills are being developed informally and not through formal training offered by public libraries. Canadian public libraries need to work to improve information literacy skills by offering and promoting formal IL training programs.
APA, Harvard, Vancouver, ISO, and other styles
25

Rogers, Michelle, Janice Masud-Paul, and Rania El Desoki. "Understanding the use of health information technology for maternal and child health practitioner training in low and middle income countries." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (November 2019): 743–46. http://dx.doi.org/10.1177/1071181319631521.

Full text
Abstract:
Objectives: To assess the evidence of information communication technology (ICT) use in the training of maternal and child health (MCH) workers, discuss methodological issues present in the identified studies, and identify future work areas. Introduction: The explosive growth of cellphone usage in low and middle-income countries (LMIC) has made mobile technology an increasingly attractive form of information communication technology (ICT) to be used to meet healthcare needs that go unmet, rising due to the paucity of trained clinical workers (O’Donovan, Bersin, & O’Donovan, 2015). The portability and relative low cost of cellphones have made them ubiquitous and efficient to use. For example, subscriptions in Africa have risen from 12.4 per hundred inhabitants in 2005 to per hundred inhabitants in 2015 (ITU, 2017). ICT is an umbrella term that encompasses the hardware, software and networks that provide its users with data and information resources. As far as healthcare is concerned, these resources include access to varied tools and services such as electronic health records, point-of-care databases, decision support systems, clinical guidelines or training modules for continuing education (Machingura et al., 2014). This technology has made healthcare more efficient in affluent countries where funding and infrastructure to build, support and maintain ICT is readily available. However, ICT development is critical to LMIC’s which have the greatest barriers to effective and efficient healthcare systems and fewer resources to overcome challenges. The aims of this paper are to (1) summarize the literature on ICT use in the training of MCH workers, (2) discuss methodological issues present in the identified studies, and (3) identify future work areas. Our specific research questions are: Which ICT tools have been used in developing countries for training the MCH workforce? How successful are the tools for instructing health care workers? A major impediment to health care improvements in underdeveloped countries is the low ratio of health professionals to patients. A developed workforce is critical for sustaining healthcare infrastructure. Because there is an insufficient number of professional practitioners, many MCH health needs are met by community workers with limited or no formal training (Chipps et al., 2015). Since the level of services range from general check-ups to life-saving interventions, training must address a variety of educational requirements. (Agarwal et al., 2015). In addition to primary professional education, health workers require training for re-licensure and continuous professional development (CPD). Training, particularly in remote areas, requires travel, time away from work as well as funding for food and lodging (Chipps et al., 2015). This exacerbates uneven healthcare coverage with the majority of MCH health care workers concentrated in urban centers, leaving rural residents with inadequate services (Middleberg et al., 2013; Modi et al., 2015). ICT reduces costs by enabling personnel to remain in their communities while providing digital access to educational content, mentors, guidelines and decision support systems (Saronga et al., 2015). It is commonly recognized that underdeveloped countries have occasional brown-outs in their urban centers and the power grid may not reach rural or remote areas. Even if seed money is acquired for start-up costs, funding for technology maintenance and technical manpower beyond the pilot stage can be tentative (Achampong, 2012). 
Secondly, while cell phone use across LMICs has exploded in recent years, its use for advancing training has not grown in comparison. A limited number of reports have been published, reporting the use of ICT for communication (Andreatta et al., 2011), tracking health worker behavior (Awoonor-Williams et al., 2013), attitudes towards using ICT (Sukums et al., 2014; Zakane et al., 2014), and the impact of the design of ICT (Valez et. al., 2014). This paucity of studies understanding the impact of ICT on measurable training outcomes leaves a troubling gap in the literature if progress is to be made in addressing the training needs. Finally, government entities, educators and administrators may be reluctant to adopt ICT into health training for practical, fiscal and political reasons. Because health personnel may not have exposure to technology in their daily lives, staff may require basic computer training on operating systems, file management, word processing and databases in conjunction with ICT projects (Sukums, 2014). In addition to a lack of knowledge about computers in general, use of ICT also comes with associated monetary costs. Both of these issues are also exacerbated by resulting government policy changes. We endeavored to fill this gap by completing a literature review to bring the disparate work together, but to our surprise, it did not really exist. This paper reports on (1) what studies have been conducted on the use of ICT in training; (2) what common methods are used and how they are evaluated and (3) what outcomes have been reported. Methods: Medline (OVID), CINAHL and Web of Science were searched for relevant articles published between January 1, 2007 and February 28, 2017. Studies were included if they included training and education in low and middle-income countries using ICT for maternal child health workers. Results: 111 unique articles from electronic searches with seven additional articles discovered through hand-searching reference lists were identified. After review, 15 articles aligned with the necessities to analyze the current environment of the ICT tools. The study designs in the reviewed articles were usually pre- and post-evaluations (n=7). There were also a small number of single cross-sectional studies (n=3) measuring the use of the tool. Two studies also evaluated the use of electronic clinical decision support systems (CDSS) applications or algorithms. The remainder of the studies (n=3) used ICT to provide resources for meeting information needs, as well as repositories of protocols and best practice documents. The outcomes reported ranged from access to medical resources (n=3), accuracy in clinical documentation (n=2), need for remedial computer training (n=2) and an increase in clinical knowledge and proper use of protocols (n=4) Discussion and conclusion: The current evidence-base does not show a clear indication that there were particular initiatives using ICT for the training of health workers. While the majority of projects identified were shown to improve outcomes, there were limited results reported. This lack of documented evidence hinders decisions about the content and methods that should be used to support training. We are missing an opportunity for advancement. The World Health Organization identified community health worker training as a lever to move the improvement of health care in low and middle-income countries (LMICs). 
An understanding of the barriers and facilitators to using ICTs to meet this need provides key directions for policy makers and non-governmental organizations as they apply limited resources to these issues.
APA, Harvard, Vancouver, ISO, and other styles
26

Wadud, Md Anwar Hussen, M. F. Mridha, and Mohammad Motiur Rahman. "Word Embedding Methods for Word Representation in Deep Learning for Natural Language Processing." Iraqi Journal of Science, March 30, 2022, 1349–61. http://dx.doi.org/10.24996/ijs.2022.63.3.37.

Full text
Abstract:
Natural Language Processing (NLP) deals with analysing, understanding and generating language the way humans do. One of the challenges of NLP is training computers to learn and use a language as humans do. Every training session consists of several types of sentences with different contexts and linguistic structures. The meaning of a sentence depends on the meanings of its main words and their positions: the same word can act as a noun, an adjective or another part of speech depending on where it appears. In NLP, word embedding is a powerful method that is trained on large collections of text and encodes general semantic and syntactic information about words. Choosing the right word embedding produces more effective results than others. Most papers use pretrained word embedding vectors in deep learning for NLP. The major issue with pretrained word embedding vectors, however, is that they cannot be used for every type of NLP task. In this paper, a process for building local word embedding vectors is proposed, and a comparison between pretrained and local word embedding vectors is presented for the Bengali language. The Keras framework in Python is used to implement the local word embeddings, and the analysis section of the paper shows that the proposed model produces 87.84% accuracy, better than the 86.75% accuracy obtained with pretrained fastText word embedding vectors. Using the proposed method, Bengali NLP researchers can easily build specific word embedding vectors for word representation in Natural Language Processing.
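As a rough illustration of the kind of setup the abstract describes (learning "local" embeddings with Keras rather than loading pretrained fastText vectors), the sketch below trains a small classifier with a trainable Embedding layer. The corpus, labels, vocabulary size and embedding dimension are placeholders, not the authors' data or architecture.

```python
# Minimal sketch (not the paper's exact model): a trainable Keras Embedding
# layer learns "local" word vectors from the task corpus itself, instead of
# loading pretrained fastText vectors. All data and hyperparameters below
# are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

corpus = ["hypothetical sentence one", "another hypothetical example sentence"]  # placeholder text
labels = np.array([0, 1])                                                        # placeholder labels

# Map words to integer ids and pad/truncate each sentence to a fixed length.
vectorizer = layers.TextVectorization(max_tokens=20000, output_sequence_length=32)
vectorizer.adapt(tf.constant(corpus))
X = vectorizer(tf.constant(corpus))

model = models.Sequential([
    layers.Embedding(input_dim=20000, output_dim=100),  # vectors learned from the local corpus
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=3, verbose=0)

# The learned "local" embedding matrix (20000 x 100) can now be inspected or reused.
local_vectors = model.layers[0].get_weights()[0]
```

The learned embedding matrix can then be reused for other tasks on the same corpus, which is the sense in which the vectors are "local" rather than pretrained.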
APA, Harvard, Vancouver, ISO, and other styles
27

Richards, Blake A., and Timothy P. Lillicrap. "The Brain-Computer Metaphor Debate Is Useless: A Matter of Semantics." Frontiers in Computer Science 4 (February 8, 2022). http://dx.doi.org/10.3389/fcomp.2022.810358.

Full text
Abstract:
It is commonly assumed that usage of the word “computer” in the brain sciences reflects a metaphor. However, there is no single definition of the word “computer” in use. In fact, based on the usage of the word “computer” in computer science, a computer is merely some physical machinery that can in theory compute any computable function. According to this definition the brain is literally a computer; there is no metaphor. But, this deviates from how the word “computer” is used in other academic disciplines. According to the definition used outside of computer science, “computers” are human-made devices that engage in sequential processing of inputs to produce outputs. According to this definition, brains are not computers, and arguably, computers serve as a weak metaphor for brains. Thus, we argue that the recurring brain-computer metaphor debate is actually just a semantic disagreement, because brains are either literally computers or clearly not very much like computers at all, depending on one's definitions. We propose that the best path forward is simply to put the debate to rest, and instead, have researchers be clear about which definition they are using in their work. In some circumstances, one can use the definition from computer science and simply ask, what type of computer is the brain? In other circumstances, it is important to use the other definition, and to clarify the ways in which our brains are radically different from the laptops, smartphones, and servers that surround us in modern life.
APA, Harvard, Vancouver, ISO, and other styles
28

Aparitosh Gahankari, Palak Garhwal, Shubhangi Bagwe, Aditya Gupta, Prajwala Adkane, and Sonal Kawalkar. "A Review on Methods to Solve Polysemy Problem in WSD Occurring while Processing Marathi Text." International Journal of Advanced Research in Science, Communication and Technology, January 31, 2022, 488–94. http://dx.doi.org/10.48175/ijarsct-2463.

Full text
Abstract:
Word Sense Disambiguation (WSD) is an open problem in computational linguistics concerned with identifying which sense of a word is used in a sentence. It resolves the ambiguity that arises when the same word carries different meanings in different contexts. Solving this problem benefits other language-processing tasks such as discourse analysis, improving the relevance of search engines, anaphora resolution, coherence and inference. Marathi is an official language of Maharashtra, India, and a co-official language of Goa and the union territory of Dadra & Nagar Haveli and Daman & Diu, with more than 83 million native speakers and a further 12 million who speak Marathi as a second language. However, the language barrier is impeding the progress of the information technology revolution. There is therefore a need for effective natural language processing (NLP) methods so that computers can interact in languages like Marathi and be handled by users who know only a regional language. This paper provides a review of methods for WSD and presents a modified evaluation. The methods, algorithms and techniques of Word Sense Disambiguation are examined, and we aim to elaborate on resources that will be useful for work on Marathi Word Sense Disambiguation.
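Among the knowledge-based approaches such a review typically covers is the classic Lesk (gloss-overlap) algorithm. The snippet below runs NLTK's Lesk implementation against the English WordNet purely to illustrate the idea; the paper itself targets Marathi, which would need its own sense inventory, and the example sentence is hypothetical.

```python
# Illustrative only: classic Lesk-style WSD with NLTK's English WordNet.
# The reviewed paper concerns Marathi, which would require a Marathi sense
# inventory; this merely demonstrates the gloss-overlap idea.
import nltk
from nltk.wsd import lesk

nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)

context = "I went to the bank to deposit my money".split()  # hypothetical sentence
sense = lesk(context, "bank")  # picks the synset whose gloss overlaps the context most
if sense is not None:
    print(sense, "->", sense.definition())  # e.g. a financial-institution sense of "bank"
```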
APA, Harvard, Vancouver, ISO, and other styles
29

Rajper, Rahmat Ali, Samina Rajper, Abdullah Maitlo, and Ghulam Nabi. "Analysis and Comparative Study of POS Tagging Techniques for National (Urdu) Language and other Regional Languages of Pakistan." SINDH UNIVERSITY RESEARCH JOURNAL -SCIENCE SERIES 53, no. 04 (December 20, 2021). http://dx.doi.org/10.26692/surj.v53i04.4223.

Full text
Abstract:
Natural Language Processing (NLP) defines algorithms and techniques that enable computers to understand human language, and it is an integral part of speech recognition. Part-of-speech (POS) tagging is considered one of the better-understood problems of NLP, in which the words of natural language sentences are assigned grammatical classes; automating this matters because tagging each word by hand is a time-consuming and tedious job. Automating tagging is a step towards automating the lexicons of a language's texts. Many languages have well-developed POS tagging systems. Pakistani regional languages are less developed for a number of reasons, and much work is still needed on their POS tagging systems. Some regional languages have POS taggers but require further refinement, while others need to be developed from scratch; the Balochi language has no POS tagging system at all. This study presents a comparative analysis of POS tagging approaches for the national language (Urdu) and other regional languages of Pakistan. The approaches, the data sets used, and their reported results are presented here.
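For readers unfamiliar with the task, the snippet below shows the form of a POS tagger's output using NLTK's off-the-shelf English tagger. It is only an illustration of what tagged text looks like; it is not one of the Urdu or regional-language taggers compared in the study, and the sentence is hypothetical.

```python
# Illustration of POS tagging output using NLTK's English tagger.
# This is not one of the taggers compared in the study; it only shows
# what (word, tag) output looks like.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tokens = nltk.word_tokenize("The clinic opened a new records office last week.")
print(nltk.pos_tag(tokens))
# e.g. [('The', 'DT'), ('clinic', 'NN'), ('opened', 'VBD'), ...]
```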
APA, Harvard, Vancouver, ISO, and other styles
30

"STAGES OF CREATING PARALLEL CORPUS OF ENGLISH-UZBEK SIMILES." Philology matters, September 25, 2021, 99–112. http://dx.doi.org/10.36078/987654506.

Full text
Abstract:
In recent years, corpus linguistics has been described in the scientific literature as the main tool for the elaboration of dictionaries and grammar manuals, the creation and analysis of corpora, the practical use of corpora, and the statistical study of language using corpus data. Articles, textbooks and manuals on the creation of general and special electronic corpora have been published, but since corpus linguistics is a relatively new field for the Uzbek language, research is still needed on a number of issues. The development of corpus linguistics and the increasing focus on statistical methods of processing linguistic material have led to a number of techniques based on parallel or comparable texts in different languages. This article is devoted to the stages of creating a bilingual parallel corpus and explains the peculiarities of creating parallel corpora of texts. The creation of a parallel corpus has its own research stages. Statistical research in linguistics has a long history; especially since the advent of computers in the fifties of the twentieth century, research in this area has grown rapidly. The purpose of the study is to analyze scientific views on creating the linguistic resources for a program that translates similes on the basis of parallel texts (English-Uzbek, Uzbek-English), to study the linguistic basis, and to examine the lexical-semantic relationships of similes in a bilingual vocabulary. This article deals with the creation of a parallel corpus of literary texts containing English and Uzbek similes, the analysis of the lexical-semantic relations of similes, the formation of a database of text segments based on translation alternatives of similes, the scientific justification of linguistic models of English similes in translation, and the effect of mentality on the translation of similes. A bilingual simile dictionary also plays an important role in creating a parallel corpus of similes. There is a great need for electronic dictionaries in the field of literary translation. The creation of bilingual and multilingual dictionaries, which capture the subtle nuances of words, serves as a very important resource for translators, writers, poets, and language users.
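A minimal sketch of the kind of aligned-segment database the article describes is given below: hypothetical English similes stored next to placeholder Uzbek translation alternatives and written out as a tab-separated parallel corpus. The structure and entries are illustrative assumptions, not material from the study.

```python
# Minimal sketch of an aligned-segment store for a bilingual simile corpus.
# The entries are placeholders; a real corpus would hold attested English
# similes with their Uzbek translation alternatives and source metadata.
import csv

parallel_similes = [
    {"en": "as brave as a lion", "uz": "<uzbek alternative 1>; <uzbek alternative 2>", "source": "novel-01"},
    {"en": "as light as a feather", "uz": "<uzbek alternative 1>", "source": "novel-02"},
]

with open("similes_parallel.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["en", "uz", "source"], delimiter="\t")
    writer.writeheader()
    writer.writerows(parallel_similes)
```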
APA, Harvard, Vancouver, ISO, and other styles
31

"Reading & Writing." Language Teaching 38, no. 4 (October 2005): 216–29. http://dx.doi.org/10.1017/s0261444805253144.

Full text
Abstract:
05–486Balnaves, Edmund (U of Sydney, Australia; ejb@it.usyd.edu.au), Systematic approaches to long term digital collection management. Literary and Linguistic Computing (Oxford, UK) 20.4 (2005), 399–413.05–487Barwell, Graham (U of Wollongong, Australia; gbarwell@uow.edu.au), Original, authentic, copy: conceptual issues in digital texts. Literary and Linguistic Computing (Oxford, UK) 20.4 (2005), 415–424.05–488Beech, John R. & Kate A. Mayall (U of Leicester, UK; JRB@Leicester.ac.uk), The word shape hypothesis re-examined: evidence for an external feature advantage in visual word recognition. Journal of Research in Reading (Oxford, UK) 28.3 (2005), 302–319.05–489Belcher, Diane (Georgia State U, USA; dbelcher1@gsu.edu) & Alan Hirvela, Writing the qualitative dissertation: what motivates and sustains commitment to a fuzzy genre?Journal of English for Academic Purposes (Amsterdam, the Netherlands) 4.3 (2005), 187–205.05–490Bernhardt, Elisabeth (U of Minnesota, USA; ebernhar@stanford.edu), Progress and procrastination in second language reading. Annual Review of Applied Linguistics (Cambridge, UK) 25 (2005), 133–150.05–491Bishop, Dorothy (U of Oxford, UK; dorothy.bishop@psy.ox.ac.uk), Caroline Adams, Annukka Lehtonen & Stuart Rosen, Effectiveness of computerised spelling training in children with language impairments: a comparison of modified and unmodified speech input. Journal of Research in Reading (Oxford, UK) 28.2 (2005), 144–157.05–492Bowey, Judith A., Michaela McGuigan & Annette Ruschena (U of Queensland, Australia; j.bowey@psy.uq.edu.au), On the association between serial naming speed for letters and digits and word-reading skill: towards a developmental account. Journal of Research in Reading (Oxford, UK) 28.4 (2005), 400–422.05–493Bowyer-Crane, Claudine & Margaret J. Snowling (U of York, UK; c.crane@psych.york.ac.uk), Assessing children's inference generation: what do tests of reading comprehension measure?British Journal of Educational Psychology (Leicester, UK) 75.2 (2005), 189–201.05–494Bruce, Ian (U of Waikato, Hamilton, New Zealand; ibruce@waikato.ac.nz), Syllabus design for general EAP writing courses: a cognitive approach. Journal of English for Academic Purposes (Amsterdam, the Netherlands) 4.3 (2005), 239–256.05–495Burrows, John (U of Newcastle, Australia; john.burrows@netcentral.com.au), Who wroteShamela? Verifying the authorship of a parodic text. Literary and Linguistic Computing (Oxford, UK) 20.4 (2005), 437–450.05–496Clarke, Paula, Charles Hulme & Margaret Snowling (U of York, UK; CH1@york.ac.uk), Individual differences in RAN and reading: a response timing analysis. Journal of Research in Reading (Oxford, UK) 28.2 (2005), 73–86.05–497Colledge, Marion (Metropolitan U, London, UK; m.colledge@londonmet.ac.uk), Baby Bear or Mrs Bear? Young English Bengali-speaking children's responses to narrative picture books at school. Literacy (Oxford, UK) 39.1 (2005), 24–30.05–498De Pew, Kevin Eric (Old Dominion U, Norfolk, USA; Kdepew@odu.edu) & Susan Kay Miller, Studying L2 writers' digital writing: an argument for post-critical methods. Computers and Composition (Amsterdam, the Netherlands) 22.3 (2005), 259–278.05–499Dekydtspotter, Laurent (Indiana U, USA; ldekydts@indiana.edu) & Samantha D. Outcalt, A syntactic bias in scope ambiguity resolution in the processing of English French cardinality interrogatives: evidence for informational encapsulation. 
Language Learning (Malden, MA, USA) 55.1 (2005), 1–36.05–500Fernández Toledo, Piedad (Universidad de Murcia, Spain; piedad@um.es), Genre analysis and reading of English as a foreign language: genre schemata beyond text typologies. Journal of Pragmatics37.7 (2005), 1059–1079.05–501French, Gary (Chukyo U, Japan; french@lets.chukyo-u.ac.jp), The cline of errors in the writing of Japanese university students. World Englishes (Oxford, UK) 24.3 (2005), 371–382.05–502Green, Chris (Hong Kong Polytechnic U, Hong Kong, China), Profiles of strategic expertise in second language reading. Hong Kong Journal of Applied Linguistics (Hong Kong, China) 9.2 (2004), 1–16.05–503Groom, Nicholas (U of Birmingham, UK; nick@nicholasgroom.fsnet.co.uk), Pattern and meaning across genres and disciplines: an exploratory study. Journal of English for Academic Purposes (Amsterdam, the Netherlands) 4.3 (2005), 257–277.05–504Harris, Pauline & Barbara McKenzie (U of Wollongong, Australia; pharris@uow.edu.au), Networking aroundThe Waterholeand other tales: the importance of relationships among texts for reading and related instruction. Literacy (Oxford, UK) 39.1 (2005), 31–37.05–505Harrison, Allyson G. & Eva Nichols (Queen's U, Canada; harrisna@post.queensu.ca), A validation of the Dyslexia Adult Screening Test (DAST) in a post-secondary population. Journal of Research in Reading (Oxford, UK) 28.4 (2005), 423–434.05–506Hirvela, Alan (Ohio State U, USA; hirvela.1@osu.edu), Computer-based reading and writing across the curriculum: two case studies of L2 writers. Computers and Composition (Amsterdam, the Netherlands) 22.3 (2005), 337–356.05–507Holdom, Shoshannah (Oxford U, UK; shoshannah.holdom@oucs.ox.ac.uk), E-journal proliferation in emerging economies: the case of Latin America. Literary and Linguistic Computing (Oxford, UK) 20.3 (2005), 351–365.05–508Hopper, Rosemary (U of Exeter, UK; r.hopper@ex.ac.uk), What are teenagers reading? Adolescent fiction reading habits and reading choices. Literacy (Oxford, UK) 39.3 (2005), 113–120.05–509Jarman, Ruth & Billy McClune (Queen's U, Northern Ireland; r.jarman@qub.ac.uk), Space Science News: Special Edition, a resource for extending reading and promoting engagement with newspapers in the science classroom. Literacy (Oxford, UK) 39.3 (2005), 121–128.05–510Jia-ling Charlene Yau (Ming Chuan U, Taiwan; jyau@mcu.edu.tw), Two Mandarin readers in Taiwan: characteristics of children with higher and lower reading proficiency levels. Journal of Research in Reading (Oxford, UK) 28.2 (2005), 108–124.05–511Justice, Laura M, Lori Skibbel, Andrea Canning & Chris Lankford (U of Virginia, USA; ljustice@virginia.edu), Pre-schoolers, print and storybooks: an observational study using eye movement analysis. Journal of Research in Reading (Oxford, UK) 28.3 (2005), 229–243.05–512Kelly, Alison (Roehampton U, UK; a.m.kelly@roehampton.ac.uk), ‘Poetry? Of course we do it. It's in the National Curriculum.’ Primary children's perceptions of poetry. Literacy (Oxford, UK) 39.3 (2005), 129–134.05–513Kern, Richard (U of California, Berkeley, USA; rkern@berkeley.edu) & Jean Marie Schultz, Beyond orality: investigating literacy and the literary in second and foreign language instruction. The Modern Language Journal (Malden, MA, USA) 89.3 (2005), 381–392.05–514Kispal, Anne (National Foundation for Educational Research, UK; a.kispal@nfer.ac.uk), Examining England's National Curriculum assessments: an analysis of the KS2 reading test questions, 1993–2004. 
Literacy (Oxford, UK) 39.3 (2005), 149–157.05–515Kriss, Isla & Bruce J. W. Evans (Institute of Optometry, London, UK), The relationship between dyslexia and Meares-Irlen Syndrome. Journal of Research in Reading (Oxford, UK) 28.3 (2005), 350–364.05–516Lavidor, Michal & Peter J. Bailey (U of Hull, UK; M.Lavidor@hull.ac.uk), Dissociations between serial position and number of letters effects in lateralised visual word recognition. Journal of Research in Reading (Oxford, UK) 28.3 (2005), 258–273.05–517Lee, Sy-ying (Taipei, Taiwan, China; syying.lee@msa.hinet.net), Facilitating and inhibiting factors in English as a foreign language writing performance: a model testing with structural equation modelling. Language Learning (Malden, MA, USA) 55.2 (2005), 335–374.05–518Leppänen, Ulla, Kaisa Aunola & Jari-Erik Nurmi (U of Jyväskylä, Finland; uleppane@psyka.jyu.fi), Beginning readers' reading performance and reading habits. Journal of Research in Reading (Oxford, UK) 28.4 (2005), 383–399.05–519Lingard, Tony (Newquay, Cornwall, UK; tonylingard@awled.co.uk), Literacy Acceleration and the Key Stage 3 English strategy–comparing two approaches for secondary-age pupils with literacy difficulties. British Journal of Special Education32.2, 67–77.05–520Liu, Meihua (Tsinghua U, China; ellenlmh@yahoo.com) & George Braine, Cohesive features in argumentative writing produced by Chinese undergraduates. System (Amsterdam, the Netherlands) 33.4 (2005), 623–636.05–521Masterson, Jackie, Veronica Laxon, Emma Carnegie, Sheila Wright & Janice Horslen (U of Essex; mastj@essex.ac.uk), Nonword recall and phonemic discrimination in four- to six-year-old children. Journal of Research in Reading (Oxford, UK) 28.2 (2005), 183–201.05–522Merttens, Ruth & Catherine Robertson (Hamilton Reading Project, Oxford, UK; ruthmerttens@onetel.net.uk), Rhyme and Ritual: a new approach to teaching children to read and write. Literacy (Oxford, UK) 39.1 (2005), 18–23.05–523Min Wang (U of Maryland, USA; minwang@umd.edu) & Keiko Koda, Commonalities and differences in word identification skills among learners of English as a Second Language. Language Learning (Malden, MA, USA) 55.1 (2005), 71–98.05–524O'Brien, Beth A., J. Stephen Mansfield & Gordon E. Legge (Tufts U, Medford, USA; beth.obrien@tufts.edu), The effect of print size on reading speed in dyslexia. Journal of Research in Reading (Oxford, UK) 28.3 (2005), 332–349.05–525Pisanski Peterlin, Agnes (U of Ljubljana, Slovenia; agnes.pisanski@guest.arnes.si), Text-organising metatext in research articles: an English–Slovene contrastive analysis. English for Specific Purposes (Amsterdam, the Netherlands) 24.3 (2005), 307–319.05–526Rilling, Sarah (Kent State U, Kent, USA; srilling@kent.edu), The development of an ESL OWL, or learning how to tutor writing online. Computers and Composition (Amsterdam, the Netherlands) 22.3 (2005), 357–374.05–527Schacter, John & Jo Booil (Milken Family Foundation, Santa Monica, USA; schacter@sbcglobal.net), Learning when school is not in session: a reading summer day-camp intervention to improve the achievement of exiting First-Grade students who are economically disadvantaged. Journal of Research in Reading (Oxford, UK) 28.2 (2005), 158–169.05–528Shapira, Anat (Gordon College of Education, Israel) & Rachel Hertz-Lazarowitz, Opening windows on Arab and Jewish children's strategies as writers. Language, Culture and Curriculum (Clevedon, UK) 18.1 (2005), 72–90.05–529Shillcock, Richard C. & Scott A. 
McDonald (U of Edinburgh, UK; rcs@inf.ed.ac.uk), Hemispheric division of labour in reading. Journal of Research in Reading (Oxford, UK) 28.3 (2005), 244–257.05–530Singleton, Chris & Susannah Trotter (U of Hull, UK; c.singleton@hull.ac.uk), Visual stress in adults with and without dyslexia. Journal of Research in Reading (Oxford, UK) 28.3 (2005), 365–378.05–531Spelman Miller, Kristyan (Reading U, UK; k.s.miller@reading.ac.uk), Second language writing research and pedagogy: a role for computer logging?Computers and Composition (Amsterdam, the Netherlands) 22.3 (2005), 297–317.05–532Su, Susan Shiou-mai (Chang Gung College of Technology, Taiwan, China) & Huei-mei Chu, Motivations in the code-switching of nursing notes in EFL Taiwan. Hong Kong Journal of Applied Linguistics (Hong Kong, China) 9.2 (2004), 55–71.05–533Taillefer, Gail (Toulouse U, France; gail.taillefer@univ-tlse1.fr), Reading for academic purposes: the literacy practices of British, French and Spanish Law and Economics students as background for study abroad. Journal of Research in Reading (Oxford, UK) 28.4 (2005), 435–451.05–534Tardy, Christine M. (DePaul U, Chicago, USA; ctardy@depaul.edu), Expressions of disciplinarity and individuality in a multimodal genre. Computers and Composition (Amsterdam, the Netherlands) 22.3 (2005), 319–336.05–535Thatcher, Barry (New Mexico State U, USA; bathatch@nmsu.edu), Situating L2 writing in global communication technologies. Computers and Composition (Amsterdam, the Netherlands) 22.3 (2005), 279–295.05–536Topping, Keith & Nancy Ferguson (U of Dundee, UK; k.j.topping@dundee.ac.uk), Effective literacy teaching behaviours. Journal of Research in Reading (Oxford, UK) 28.2 (2005), 125–143.05–537Torgerson, Carole (U of York, UK; cjt3@york.ac.uk), Jill Porthouse & Greg Brooks, A systematic review of controlled trials evaluating interventions in adult literacy and numeracy. Journal of Research in Reading (Oxford, UK) 28.2 (2005), 87–107.05–538Willett, Rebekah (U of London, UK; r.willett@ioe.ac.uk), ‘Baddies’ in the classroom: media education and narrative writing. Literacy (Oxford, UK) 39.3 (2005), 142–148.05–539Wood, Clara, Karen Littleton & Pav Chera (Coventry U, UK; c.wood@coventry.ac.uk), Beginning readers' use of talking books: styles of working. Literacy (Oxford, UK) 39.3 (2005), 135–141.05–540Wood, Clare (The Open U, UK; c.p.wood@open.ac.uk), Beginning readers' use of ‘talking books’ software can affect their reading strategies. Journal of Research in Reading (Oxford, UK) 28.2 (2005), 170–182.05–541Yasuda, Sachiko (Waseda U, Japan), Different activities in the same task: an activity theory approach to ESL students' writing process. JALT Journal (Tokyo, Japan) 27.2 (2005), 139–168.05–542Zelniker, Tamar (Tel-Aviv U, Israel) & Rachel Hertz-Lazarowitz, School–Family Partnership for Coexistence (SFPC) in the city of Acre: promoting Arab and Jewish parents' role as facilitators of children's literacy development and as agents of coexistence. Language, Culture and Curriculum (Clevedon, UK) 18.1 (2005), 114–138.
APA, Harvard, Vancouver, ISO, and other styles
32

Blessed, Guda, Nuhu Bello Kontagora, James Agajo, and Ibrahim Aliyu. "Performance Evaluation of Keyword Extraction Techniques and Stop Word Lists on Speech-To-Text Corpus." International Arab Journal of Information Technology 20, no. 1 (2022). http://dx.doi.org/10.34028/iajit/20/1/14.

Full text
Abstract:
The dawn of conversational user interfaces, through which humans communicate with computers by voice, has arrived. Natural Language Processing (NLP) techniques therefore need to handle not only written text but also speech. Keyword extraction is a technique for pulling key phrases out of a document; these phrases can summarise the document and be used in text classification. Existing keyword extraction techniques have mostly been applied only to typed-text datasets. With the advent of text produced by speech recognition engines, which is less accurate than typed text, the suitability of keyword extraction is in question. This paper evaluates the suitability of conventional keyword extraction methods on a speech-to-text corpus. A new audio dataset for keyword extraction is collected using the World Wide Web (WWW) corpus. The performance of Rapid Automatic Keyword Extraction (RAKE) and TextRank is evaluated with different stoplists on both the originally typed corpus and the corresponding Speech-To-Text (STT) corpus derived from the audio. Precision, recall, and F1 score are used as evaluation metrics. In the obtained results, TextRank with the FOX stoplist showed the highest performance on both the text and audio corpora, with F1 scores of 16.59% and 14.22%, respectively. Although it lags behind the text corpus, the F1 score recorded for TextRank on the audio corpus is high enough for its adoption in audio conversation without much concern. However, the absence of punctuation in the STT output lowered the F1 score for all techniques.
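A minimal sketch of the evaluation the abstract describes: an extractor's keyphrases are compared against a reference list and scored with precision, recall and F1. The keyword sets below are hypothetical placeholders; a real pipeline, like the one in the paper, would normalise phrases (for example lowercasing and stemming) before matching.

```python
# Minimal sketch of keyword-extraction evaluation: set overlap between
# predicted and reference keyphrases, scored with precision/recall/F1.
# The phrase lists are hypothetical; real pipelines would first normalise
# phrases and might allow partial matches.
def prf1(predicted: set[str], reference: set[str]) -> tuple[float, float, float]:
    true_positives = len(predicted & reference)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

predicted = {"keyword extraction", "speech recognition", "stop words"}          # extractor output
reference = {"keyword extraction", "speech-to-text", "stop words", "textrank"}  # gold keyphrases

p, r, f = prf1(predicted, reference)
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```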
APA, Harvard, Vancouver, ISO, and other styles
33

Maras, Steven. "One or Many Media?" M/C Journal 3, no. 6 (December 1, 2000). http://dx.doi.org/10.5204/mcj.1888.

Full text
Abstract:
The theme for this issue of M/C is 'renew'. This is a term that could be approached in numerous ways: as a cultural practice, in terms of broader dynamics of change, in terms of the future of the journal. In this piece, however, I'd like to narrow the focus and think about renewal in the context of the concept of 'media' and media theory. This is not to diminish the importance of looking at media in relation to changing technologies, and changing cultural contexts. Indeed, most readers of M/C will no doubt be aware of the dangers of positing media outside of culture in some kind of deterministic relationship. Indeed, the slash in the title of M/C -- which since its first editorial both links and separates the terms 'media' and 'culture' -- is interesting to think about here precisely because the substitution of the 'and' opens up a questioning of the relation between the two terms. While I too want to keep the space between media / culture filled with possibility, in this piece I want to look mainly at one side of the slash and speculate on renewal in the way we relate to ideas of media. Since its first editorial the slash has also been a marker of M/C's project to bridge academic and popular approaches, and work as a cross-over journal. In the hope of not stretching the cross-over too far, I'd like to bring contemporary philosophy into the picture and keep it in the background while thinking about renewal and the concept of 'media'. A key theme in contemporary philosophy has been the attempt to think difference beyond any opposition of the One and the Many (Patton 29-48; Deleuze 38-47). In an effort to think difference in its own terms, philosophers like Gilles Deleuze and Jacques Derrida have resisted seeing difference as something dependant on, derivative or secondary to a primary point of sameness and identity. In this brief piece, and out of respect of M/C's project, my intention is not to summarise this work in detail. Rather, I want to highlight the existence of this work in order to draw a contrast with the way in which contemporary thinking about media often seems caught up in a dynamic of the One and Many, and to pose the question of a different path for media theory. Having mentioned philosophy, I do want to make the point that 'One or Many Media' is not just an abstract formulation. On the contrary, the present day is a particularly appropriate time to look at this problem. Popular discussion of media issues itself oscillates between an idea of Media dominance (the One) and an idea of multiple media (multimedia). Discussions of convergence frequently invoke a thematics of the One arising out of the Many, or of the Many arising from the One. Medium, Media, the Media. Which one to use? We need only to list these three terms to begin see how the tension between the One and the Multiple has influenced contemporary thinking about media. An obvious tension exists on the level of grammar. 'Media' is the plural of 'Medium'. That is, until we use the term 'the Media' which can be used to refer to the singularity of (a specific area of) the Press. Walter Ong dubs 'medium' "the fugitive singular" to describe this phenomenon (175). To compensate for the increasing use of 'the media' as a singular it is becoming more common to see the term 'mediums' instead of 'media'. A second tension exists on the level of the senses. 'The media', and in some senses a 'medium', conveys the notion of a media form distinct from the senses. 
As Michael Heim notes, Medium meant conceptual awareness in conjunction with the five senses through which we come to understand things present before us in the environment. This natural sense of media was gradually dissipated during the modern period by man-made extensions and enhancements of the human senses ... . Electronic media gave new meaning to the term. We not only perceive directly with five senses aided by concepts and enhanced by instrumentation, but also are surrounded by a panorama of man-made images and symbols far more complex than can be assimilated directly through the senses and thought processes. Media in the electronic sense of acoustic-optic technology ... appear to do more than augment innate human sensory capacities: the electronic media become themselves complex problems; they become facts of life we must take into account as we live; they become, in short, the media. (47) In this passage, Heim shows how through extension and instrumentation 'the media' comes to occupy a different register of existence. 'The media' in this account are distinct from any general artefact that can serve as a means of communication to us. On this register, 'the media' also develops into the idea of the mass media (see Williams 169). In popular usage this incorporates print and broadcasting areas (usually with a strong journalistic emphasis), and is often personified around a notion of 'the media' as an agent in the contemporary political arena (the fourth estate, the instrument of a media baron). This brings us to a third tension, to do with diversity. The difference between the terms 'Medium', 'Media', 'the Media', is clearly bound up with the issue of diversity and concentration of media. Sean Cubitt argues that a different activation of interactive media, intermedia, or video media, is crucial to restoring an electronic ecology that has been destroyed by the marketplace (207). What the work of theorists like Cubitt reveals is that the problem of diversity and concentration has a conceptual dimension. Framed within an opposition between the One and the Multiple, the diversity in question -- of different senses and orders of media -- is constrained by the dominant idea of the Media. Many theorists and commentators on 'the Mass media' barely acknowledge the existence of video media unless it is seen as a marketplace for the distribution of movies. This process of marginalisation has been so thorough that the contemporary discussion of the Internet or interactive digital media often ignores previous critical discussion of the electronic arts -- as if McLuhan had no connection with Fluxus, or convergence had no links to intermedia experimentation. In a different example, it is becoming common to discuss 'personal media' like laptops and intelligent jewellery (see Beniger; Kay and Goldberg). But if media theory has previously failed to look at T-shirts and other personal effects as media then this is in part due to the dominance of the idea of the mass media in conceptual terms. This dominance leads Umberto Eco to propose an idea of the "multiplication of the media" against the idea of mass media, and prompts him to declare that "all the professors of theory of communications, trained by the texts of twenty years ago (this includes me) should be pensioned off" (149). It could be argued that rather than represent a problem the sliding between these terms is enabling not disabling. 
From this perspective, the fact that different senses of media collapse or coalesce with one another is appropriate, since (as I hope I've shown) different senses of media are often grounded in other senses. Indeed, we can agree with this argument, and go further to suggest that renewing our relationship to concepts of media involves affirming the interplay of different senses of media. What needs careful consideration here, however, is how we think of different senses of media. For it is very often the case that this question of difference is blocked from discussion when an order of media is used to secure a territory or a foundation for a particular idea of how things should work. From this foundation particular ideas of One-ness/Same-ness or Many-ness can emerge, each of which involves making assumptions about differences between media, and the nature of difference. Examples might include notions of mainstream and alternative, professional and non-professional, 'industry' and 'artistic' ways of working.1 In each case a dominant idea of the media establishes itself as an order against which other practices are defined as secondary, and other senses of media subordinate. Surveying these tensions (grammatical, sensory and diversity) between the terms 'Medium', 'Media', 'the Media', what becomes apparent is that neither of them is able to stand as 'the' primary conceptual term. Attempting to read contemporary developments in light of the One of the mass media means that theory is often left to discuss the fate of an idea, broadcasting, that represents only one way of organising and articulating a medium. Certainly, this approach can yield important results on the level of audience studies and identity politics, and in respect to government policy. Jock Given's work on broadcasting as a "set of technologies, social and cultural practices, cultural forms, industries, institutional forms, words and an idea" usefully contests the idea that broadcasting is dying or has no place in the digital future (46). However, research of this kind is often constrained by its lack of engagement with different orders of media, and its dependence on an idea of the One medium that is now under erasure.2 Exploring the potential of 'Medium' as a primary term leads again into the problem of the One and the Many. The content of every medium may be, as McLuhan said, another medium (8). But we should search for the hidden One that binds together the Many. Indeed, multimedia can precisely be seen in this way: as a term that facilitates the singularising of multiple media. 
In a historically significant 1977 paper "Personal Dynamic Media" by Alan Kay and Adele Goldberg, we read that the essence of a medium is very much dependent on the way messages are embedded, changed, and viewed. Although digital computers were originally designed to do arithmetic computation, the ability to simulate the details of any descriptive model means that the computer, viewed as a medium itself, can be all other media if the embedding and viewing methods are sufficiently well provided. (255) It is following this passage that Kay and Goldberg use the term "metamedium" to describe this system, which effectively seals the Many into the One, and compromises any sense that 'multimedia' can fully live up to the idea of multiple media. Situating the term 'media' as a primary term is interesting primarily because Heim deems it the "natural sense of media". There is some value in re-asserting the most general understanding of this idea, which is that any artefact can serve to communicate something to the senses. That said, any exploration of this kind needs to keep a critical eye not just on the McLuhanesque extension of the senses that Heim mentions, but also the imperative that these artefacts must mediate, and function as a means of communication. In other words, any celebration of this conception of media needs to be careful not to naturalise the idea that communication is the transmission of ideal contents. As Derrida's work shows, a complex system is required for a media to work in this way. It is only via a particular system of representation that a medium comes to serve as a vehicle for communication (311-2). As such, we should be wary of designating this idea of media as 'natural'. There are of course other reasons to be cautious with the use of the term 'natural' in this context. Contemporary usage of 'media' show that the human sensorium has already entered a complex cyborg future in which human actions, digital files, data, scripts, can be considered 'media' in a performance work or some other assemblage. Contemporary media theory resolves some of the problems of the terms 'Medium', 'Media', 'the Media' serving as a primary conceptual figure by reading them against one another. Thus, the mass media can be criticised from the point of view of the broader potential of the medium, or transformations in a medium can be tracked through developments in interactive media. Various critical or comparative approaches can be adopted within the nexus defined by these three terms. One important path of investigation for media theory is the investigation of hybrid mixed forms of media as they re-emerge out of more or less well defined definitions of a medium. A concern that can be raised with this approach, however, is that it risks avoiding the problem of the One or Many altogether in the way it posits some media as 'pure' or less hybrid in the first instance. In the difficult process of approaching the problem of One or Many media, media theory may find it worthwhile listen in on discussion of the One or Many opposition in contemporary philosophy. Two terms that find a prominent place in Deleuze's discussion of the multiplicity are "differentiation" and "actualisation". I'd want to suggest that both terms should hold interest for media theorists. For example, in terms of the problem of One or Many Media, we can note that differentiation and actualisation have not always been looked at. 
Too often, the starting point for theories of media is to begin with a particular order of media, a conception of the One, and then situate multiple practices in relationship to this One. Thus, 'the media' or 'mass media' is able to take the position of centre, with the rest left subordinate. This gesture allows the plural form of 'media' to be dealt with in a reductive way, at the expense of an analysis of supposed plurality. (It also works to detach the discussion of the order of media in question from other academic and non-academic disciplines that may have a great deal to say about the way media work.) A different approach could be to look at the way this dominant order is actualised in the first place. Recognition that a multiplicity of different senses of media pre-exists any single order of media would seem to be a key step towards renewal in media theory. This piece has sought to disturb the way a notion of the One or Many media often works in the space of media theory. Rather than locate this issue in relation to only one definition of media or medium, this approach attempts to differentiate between different senses of media, ranging from those understandings linked to the human sensorium, those related to craft understandings, and those related to the computerised manipulation of media resources. The virtue of this approach is that it tackles head on the issue that there is no one understanding of media that can function as an over-arching term in the present. The human senses, craft, broadcasting, and digital manipulation are all limited in this respect. Any response to this situation needs to engage with this complexity by recognising that some understandings of media exceed the space of a medium. These other understandings can form useful provisional points of counter-actualisation.4 Footnotes Recent Australian government decisions about the differences between digital television and datacasting would be interesting to examine here. In relation to Given's work I'd suggest that a fuller examination of media's digital future needs to elaborate on the relationship between 'the media' and alternative understandings of the term in computing, for example, such as Kay and Goldberg's. In this way, the issue of future conceptions of media can be opened up alongside the issue of a future for the media. Monaco's, "Mediography: In the Middle of Things" is a rare example. In the section 'Levels of the Game' Monaco usefully distinguishes between different orders of media. My thanks to the anonymous M/C reviewers for their useful comments, and also Anna Munster for her suggestions. References Beniger, James R. "Personalisation of Mass Media and the Growth of Pseudo-Community." Communication Research 14.3 (June 1987): 352-71. Cubitt, Sean. Videography: Video Media as Art and Culture. London: Macmillan, 1993. Deleuze, Gilles. Bergsonism. Trans. Hugh Tomlinson and Barbara Habberjam. New York: Zone, 1991. Derrida, Jacques. Margins of Philosophy. Trans. Alan Bass. Brighton, Sussex: Harvester, 1986. Eco, Umberto. "The Multiplication of the Media." Travels in Hyper-Reality. Trans. William Weaver. London: Pan, 1986. 145-50. Given, Jock. The Death of Broadcasting: Media's Digital Future. Kensington: U of New South Wales P, 1998. Heim, Michael. Electric Language: A Philosophical Study of Word Processing. New Haven and London: Yale UP, 1987. Kay, Alan, and Adele Goldberg. "Personal Dynamic Media." A History of Personal Workstations. Ed. Adele Goldberg. New York: ACM/Addison-Wesley, 1988. 254-63. 
McLuhan, Marshall. Understanding Media: The Extensions of Man. London: Routledge and Kegan Paul, 1964. Monaco, James. "Mediography: In the Middle of Things." Media Culture. Ed. James Monaco. New York: Delta, 1978. 3-21. Ong, Walter. Orality and Literacy: The Technologising of the Word. London: Methuen, 1982. Patton, Paul. Deleuze and the Political. London: Routledge, 2000. Williams, Raymond. Keywords: A Vocabulary of Culture and Society. London: Fontana, 1976.
APA, Harvard, Vancouver, ISO, and other styles
34

Binns, Daniel. "No Free Tickets." M/C Journal 25, no. 2 (April 25, 2022). http://dx.doi.org/10.5204/mcj.2882.

Full text
Abstract:
Introduction 2021 was the year that NFTs got big—not just in value but also in terms of the cultural consciousness. When digital artist Beeple sold the portfolio of his 5,000 daily images at Christie’s for US$69 million, the art world was left intrigued, confused, and outraged in equal measure. Depending on who you asked, non-fungible tokens (NFTs) seemed to be either a quick cash-grab or the future of the art market (Bowden and Jones; Smee). Following the Beeple sale, articles started to appear indicating that the film industry was abuzz for NFTs. Independent filmmaker Kevin Smith was quick to announce that he planned to release his horror film Killroy Was Here as an NFT (Alexander); in September 2021 the James Bond film No Time to Die also unveiled a series of collectibles to coincide with the film’s much-delayed theatrical release (Natalee); the distribution and collectible platforms Vuele, NFT Studios, and Mogul Productions all emerged, and the industry rumour mill suggests more start-ups are en route (CurrencyWorks; NFT Studios; NewsBTC). Blockchain disciples say that the technology will solve all the problems of the Internet (Tewari; Norton; European Business Review); critics say it will only perpetuate existing accessibility and equality issues (Davis and Flatow; Klein). Those more circumspect will doubtless sit back until the dust settles, waiting to see what parts of so-called web3 will be genuinely integrated into the architecture of the Internet. Pamela Hutchinson puts it neatly in terms of the arts sector: “the NFT may revolutionise the art market, film funding and distribution. Or it might be an ecological disaster and a financial bubble, in which few actual movies change hands, and fraudsters get rich from other people’s intellectual property” (Hutchinson). There is an uptick in the literature around NFTs and blockchain (see Quiniou; Gayvoronskaya & Meinel); however, the technology remains unregulated and unstandardised (Yeung 212-14; Dimitropoulos 112-13). Similarly, the sheer amount of funding being put into fundamental technical, data, and security-related issues speaks volumes to the nascency of the space (Ossinger; Livni; Gayvoronskaya & Meinel 52-6). Put very briefly, NFTs are part of a given blockchain system; think of them, like cryptocurrency coins, as “units of value” within that system (Roose). NFTs were initially rolled out on Ethereum, though several other blockchains have now implemented their own NFT frameworks. NFTs are usually not the artwork itself, but rather a unique, un-copyable (hence, non-fungible) piece of code that is attached, linked, or connected to another digital file, be that an image, video, text, or something else entirely. NFTs are often referred to as a digital artwork’s “certificate of authenticity” (Roose). At the time of writing, it remains to be seen how widely blockchain and NFT technology will be implemented across the entertainment industries. However, this article aims to outline the current state of implementation in the film trade specifically, and to attempt to sort true potential from the hype. Beginning with an overview of the core issues around blockchain and NFTs as they apply to film properties and adjacent products, current implementations of the technology are outlined, before finishing with a hesitant glimpse into the potential future applications. 
The Issues and Conversation At the core of current conversations around blockchain are three topics: intellectual property and ownership, concentrations of power and control, and environmental impact. To this I would like to add a consideration of social capital, which I begin with briefly here. Both the film industry and “crypto” — if we take the latter to encompass the various facets of so-called ‘web3’ — are engines of social capital. In the case of cinema, its products are commodified and passed through a model that begins with exclusivity (theatrical release) before progressing to mass availability (home media, streaming). The cinematic object, i.e., an individual copy of a film, is, by virtue of its origins as a mass product of the twentieth century, fungible. The film is captured, copied, stored, distributed, and shared. The film-industrial model has always relied on social phenomena, word of mouth, critical discourse, and latterly on buzz across digital social media platforms. This is perhaps as distinct from fine art, where — at least for dealers — the content of the piece does not necessarily matter so much as verification of ownership and provenance. Similarly, web3, with its decentralised and often-anonymised processes, relies on a kind of social activity, or at least a recorded interaction wherein the chain is stamped and each iteration is updated across the system. Even without the current hype, web3 still relies a great deal on discourse, sharing, and community, particularly as it flattens the existing hierarchies of the Internet that linger from Web 2.0. In terms of NFTs, blockchain systems attach scarcity and uniqueness to digital objects. For now, that scarcity and uniqueness is resulting in financial value, though as Jonathan Beller argues the notion of value could — or perhaps should — be reconsidered as blockchain technology, and especially cryptocurrencies, evolve (Beller 217). Regardless, NFT advocates maintain that this is the future of all online activity. To questions of copyright, the structures of blockchain do permit some level of certainty around where a given piece of intellectual property emerged. This is particularly useful where there are transnational differences in recognition of copyright law, such as in France, for instance (Quiniou 112-13). The Berne Convention stipulates that “the subsistence of copyright does not rest on the compliance with formal requirements: rights will exist if the work meets the requirements for protection set out by national law and treaties” (Guadamuz 1373). However, there are still no legal structures underpinning even the most transparent of transactions, when an originator goes out of their way to transfer rights to the buyer of the accompanying NFT. The minimum requirement — even courtesy — for the assignment of rights is the identification of the work itself; as Guadamuz notes, this is tricky for NFTs as they are written in code (1374). The blockchain’s openness and transparency are its key benefits, but until the code can explicitly include (or concretely and permanently reference) the ‘content’ of an NFT, its utility as a system of ownership is questionable. Decentralisation, too, is raised consistently as a key positive characteristic of blockchain technology. Despite the energy required for this decentralisation (addressed shortly), it is true that, at least in its base code, blockchain is a technology with no centralised source of truth or verification. 
Instead, such verification is performed by every node on the chain. On the surface, for the film industry, this might mean modes of financing, rights management, and distribution chains that are not beholden to multinational media conglomerates, streamers like Netflix, niche intermediaries, or legacy studios. The result here would be a flattening of the terrain: breaking down studio and corporate gatekeeping in favour of a more democratised creative landscape. Creators and creative teams would work peer-to-peer, paying, contracting, servicing, and distribution via the blockchain, with iron-clad, publicly accessible tracking of transactions and ownership. The alternative, though, is that the same imbalances persist, just in a different form: this is outlined in the next section. As Hunter Vaughan writes, the film industry’s environmental impact has long been under-examined. Its practices are diverse, distributed, and hard to quantify. Cinematic images, Vaughan writes, “do not come from nothing, and they do not vanish into the air: they have always been generated by the earth and sun, by fossil fuels and chemical reactions, and our enjoyment of them has material consequences” (3). We believe that by watching a “green” film like Avatar we are doing good, but it implicates us in the dirty secret, an issue of “ignorance and of voluntary psychosis” where “we do not see who we are harming or how these practices are affecting the environment, and we routinely agree to accept the virtual as real” (5). Beyond questions of implication and eco-material conceptualisation, however, there are stark facts. In the 1920s, the Kodak Park Plant in New York drew 12 million gallons of water from Lake Ontario each day to produce film stock. As the twentieth century came to a close, this amount — for a single film plant — had grown to 35-53 million gallons per day. The waste water was perfunctorily “cleaned” and then dumped into surrounding rivers (72-3). This was just one plant, and one part of the filmmaking process. With the shift to digital, this cost might now be calculated in the extraction of precious metals used to make contemporary cameras, computers, or storage devices. Regardless, extrapolate outwards to a global film industry and one quickly realises the impact is almost beyond comprehension. Considering — let alone calculating — the carbon footprint of blockchain requires outlining some fundamentals of the technology. The two primary architectures of blockchain are Proof of Work (PoW) and Proof of Stake (PoS), both of which denote methods of adding and verifying new blocks to a chain. PoW was the first model, employed by Bitcoin and the first iteration of Ethereum. In a PoW model, each new block has a specific cryptographic hash. To confirm the new block, crypto miners use their systems to generate a target hash that is less than or equal to that of the block. The systems process these calculations quickly, as the goal is to be “the first miner with the target hash because that miner is the one who can update the blockchain and receive crypto rewards” (Daly). The race for block confirmation necessitates huge amounts of processing power to make these quick calculations. The PoS model differs in that miners are replaced by validators (or staking services where participants pool validation power). Rather than investing in computer power, validators invest in the blockchain’s coins, staking those coins (tokens) in a smart contract (think of this contract like a bank account or vault). 
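As a purely illustrative aside on the proof-of-work mechanism described above, the toy sketch below searches for a nonce whose SHA-256 hash falls below a difficulty target. It is not the code of any real blockchain; the block data and difficulty are made up, and it only shows why mining consumes computation.

```python
# Toy proof-of-work (illustration only, not any production blockchain):
# a "miner" searches for a nonce whose hash is below the difficulty target.
import hashlib

def mine(block_data: str, difficulty_bits: int = 18) -> tuple[int, str]:
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle = more hashing
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest           # the proof is expensive to find, cheap to verify
        nonce += 1

nonce, digest = mine("block 1: hypothetical transaction data")
print(nonce, digest)
```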
When a new block is proposed, an algorithm chooses a validator based on the size of their stake; if the block is verified, the validator receives further cryptocurrency as a reward (Castor). Given the ubiquity and exponential growth of blockchain technology and its users, an accurate quantification of its carbon footprint is difficult. For some precedent, though, one might consider the impact of the Bitcoin blockchain, which runs on a PoW model. As the New York Times so succinctly puts it: “the process of creating Bitcoin to spend or trade consumes around 91 terawatt-hours of electricity annually, more than is used by Finland, a nation of about 5.5 million” (Huang, O’Neill and Tabuchi). The current Ethereum system (at time of writing), where the majority of NFT transactions take place, also runs on PoW, and it is estimated that a single Ethereum transaction is equivalent to nearly nine days of power consumption by an average US household (Digiconomist). Ethereum always intended to operate on a PoS system, and the transition to this new model is currently underway (Castor). Proof of Stake transactions use significantly less energy — the new Ethereum will supposedly be approximately 2,000 times more energy efficient (Beekhuizen). However, newer systems such as Solana have been explicit about their efficiency goals, stating that a single Solana transaction uses less energy (1,837 Joules, to be precise) than keeping an LED light on for one hour (36,000 J); one Ethereum transaction, for comparison, uses over 692 million J (Solana). In addition to energy usage, however, there is also the question of e-waste as a result of mining and general blockchain operations which, at the time of writing, for Bitcoin sits at around 32 kilotons per year, around the same as the consumer IT wastage of the Netherlands (de Vries and Stoll). How the growth in NFT awareness and adoption amplifies this impact remains to be seen, but depending on which blockchain they use, they may be wasting energy and resources by design. If using a PoW model, the more valuable the cryptocurrency used to make the purchase, the more energy (“gas”) required to authenticate the purchase across the chain. Images abound online of jerry-rigged crypto data centres of varying quality (see also efficiency and safety). With each NFT minted, sold, or traded, these centres draw — and thus waste, for gas — more and more energy. With increased public attention and scrutiny, cryptocurrencies are slowly realising that things could be better. As sustainable alternatives become more desirable and mainstream, it is safe to predict that many NFT marketplaces may migrate to Cardano, Solana, or other more efficient blockchain bases. For now, though, this article considers the existing implementations of NFTs and blockchain technology within the film industry. Current Implementations The current applications of NFTs in film centre around financing and distribution. In terms of the former, NFTs are saleable items that can raise capital for production, distribution, or marketing. As previously mentioned, director Kevin Smith launched Jay & Silent Bob’s Crypto Studio in order to finish and release Killroy Was Here. Smith released over 600 limited edition tokens, including one of the film itself (Moore). In October 2021, renowned Hong Kong director Wong Kar-wai sold an NFT with unreleased footage from his film In the Mood for Love at Sotheby’s for US$550,000 (Raybaud). 
Quentin Tarantino entered the arena in January 2022, auctioning uncut scenes from his 1994 film Pulp Fiction, despite the threat of legal action from the film’s original distributor Miramax (Dailey). In Australia, an early adopter of the technology is director Michael Beets, who works in virtual production and immersive experiences. His immersive 14-minute VR film Nezunoban (2020) was split into seven different chapters, and each chapter was sold as an NFT. Beets also works with artists to develop entry tickets that are their own piece of generative art; with these tickets and the chapters selling for hundreds of dollars at a time, Beets seems to have achieved the impossible: turning a profit on a short film (Fletcher). Another Australian writer-producer, Samuel Wilson, now based in Canada, suggests that the technology does encourage filmmakers to think differently about what they create: At the moment, I’m making NFTs from extra footage of my feature film Miles Away, which will be released early next year. In one way, it’s like a new age of behind-the-scenes/bonus features. I have 14 hours of DV tapes that I’m cutting into a short film which I will then sell in chapters over the coming months. One chapter will feature the dashing KJ Apa (Songbird, Riverdale) without his shirt on. So, hopefully that can turn some heads. (Wilson, in Fletcher) In addition to individual directors, a number of startup companies are also seeking to get in on the action. One of these is Vuele, which is best understood as a blockchain-based streaming service: an NFT Netflix, if you like. In addition to films themselves, the service will offer extra content as NFTs, including “behind the scenes content, bonus features, exclusive Q&As, and memorabilia” (CurrencyWorks). Vuele’s launch title is Zero Contact, directed by Rick Dugdale and starring Anthony Hopkins. The film is marketed as “the World’s First NFT Feature Film” (as at the time of writing, though, both Vuele and its flagship film have yet to launch). Also launching is NFT Studios, a blockchain-based production company that distributes the executive producer role to those buying into the project. NFT Studios is a decentralised administrative organisation (DAO), guided by tech experts, producers, and film industry intermediaries. NFT Studios is launching with A Wing and a Prayer, a biopic of aeronaut Brian Milton (NFT Studios), and will announce their full slate across festivals in 2022. In Australia, Culture Vault states that its aim is to demystify crypto and champion Australian artists’ rights and access to the space. Co-founder and CEO Michelle Grey is well aware of the aforementioned current social capital of NFTs, but is also acutely aware of the space’s opacity and the ubiquity of often machine-generated tat. “The early NFT space was in its infancy, there was a lot of crap around, but don’t forget there’s a lot of garbage in the traditional art world too,” she says (cited in Miller). Grey and her company effectively act like art dealers; intermediaries between the tech and art worlds. These new companies claim to be adhering to the principles of web3, often selling themselves as collectives, DAOs, or distributed administrative systems. But the entrenched tendencies of the film industry — particularly the persistent Hollywood system — are not so easily broken down. Vuele is a joint venture between CurrencyWorks and Enderby Entertainment. 
The former is a financial technology company setting up blockchain systems for businesses, including the establishment of branded digital currencies such as the controversial FreedomCoin (Memoria); the latter, Enderby, is a production company founded by Canadian film producer (and former investor relations expert in the oil and uranium sectors) Rick Dugdale (Wiesner). Similarly, NFT Studios is partnered with consulting and marketing agencies and blockchain venture capitalists (NFT Investments PLC). Depending on how charitable or cynical one is feeling, these start-ups are either helpful intermediaries to facilitate legacy media moving into NFT technology, or the first bricks in the capitalist wall to bar access for entry to other players.

The Future Is… Buffering

Marketplaces like Mintable, OpenSea, and Rarible do indeed make the minting and selling of NFTs fairly straightforward — if you’ve ever listed an item for sale on eBay or Facebook, you can probably mint an NFT. Despite this, the current major barrier for average punters to the NFT space remains technical knowledge. The principles of blockchain remain fairly opaque — even this author, who has been on a deep dive for this article, remains sceptical that widespread adoption across multiple applications and industries is feasible. Even so, as Rennie notes, “the unknown is not what blockchain technology is, or even what it is for (there are countless ‘use cases’), but how it structures the actions of those who use it” (235).

At the time of writing, a great many commentators and a small handful of scholars are speculating about the role of the metaverse in the creative space. If the endgame of the metaverse is realised, i.e., a virtual, interactive space where users can interact, trade, and consume entertainment, the role of creators, dealers, distributors, and other brokers and players will be up-ended and will have to re-settle once again. Film industry practitioners might look to the games space to see what the road might look like, but then again, in an industry that is — at its best — somewhat resistant to change, this may simply be a fad that blows over. Blockchain’s current employment as a get-rich-quick mechanism for the algorithmic literati and as a computational extension of existing power structures suggests nothing more than another techno-bubble primed to burst (Patrickson 591-2; Klein). Despite the aspirational commentary surrounding distributed administrative systems and organisations, the current implementations are restricted, for now, to startups like NFT Studios.

In terms of cinema, it remains to be seen whether the deployment of NFTs will move beyond a kind of “Netflix with tchotchkes” model, or a variant of crowdfunding with perks. Once Vuele and NFT Studios launch properly, we may have a sense of how this all will play out, particularly alongside less corporate-driven, more artistically-minded initiatives like those of Michael Beets and Culture Vault. It is possible, too, that blockchain technology may streamline the mechanics of the industry by automating or simplifying parts of the production process, particularly around contracts, financing, and licensing. This would obviously remove some of the associated labour and fees, but would also de-couple long-established parts and personnel of the industry — would Hollywood and similar industrial-entertainment complexes let this happen?
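As a purely hypothetical illustration of what "automating contracts and licensing" could look like at its simplest, the sketch below models a revenue split that pays out to rights holders according to pre-agreed shares the moment income arrives. It is written in Python rather than an actual smart-contract language, the participants and percentages are invented, and it glosses over everything that makes real on-chain licensing hard (identity, enforcement, currency conversion, dispute resolution).

```python
# Hypothetical, off-chain model of an automated royalty split.
# On a real blockchain this logic would live inside a smart contract.

ROYALTY_SHARES = {            # invented example shares; must sum to 1.0
    "writer_director": 0.40,
    "producer": 0.25,
    "lead_cast_pool": 0.20,
    "distributor": 0.15,
}

def split_revenue(payment: float) -> dict[str, float]:
    """Divide an incoming payment among rights holders by fixed shares."""
    assert abs(sum(ROYALTY_SHARES.values()) - 1.0) < 1e-9, "shares must total 100%"
    return {party: round(payment * share, 2) for party, share in ROYALTY_SHARES.items()}

# e.g. a single US$550,000 sale would be distributed the moment it clears:
print(split_revenue(550_000))
# {'writer_director': 220000.0, 'producer': 137500.0, 'lead_cast_pool': 110000.0, 'distributor': 82500.0}
```

The appeal for the "flattened terrain" argument above is that nothing in this flow requires a studio accounting department; the counter-argument is that whoever writes and deploys the contract becomes the new gatekeeper.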
As with any of the many revolutions that have threatened to kill or resurrect the (allegedly) long-suffering cinematic object, we just have to wait, and watch. References Alexander, Bryan. “Kevin Smith Reveals Why He’s Auctioning Off New His Film ‘Killroy Was Here’ as an NFT.” USA TODAY, 15 Apr. 2021. <https://www.usatoday.com/story/entertainment/movies/2021/04/15/kevin-smith-auctioning-new-film-nft-killroy-here/7244602002/>. Beekhuizen, Carl. “Ethereum’s Energy Usage Will Soon Decrease by ~99.95%.” Ethereum Foundation Blog, 18 May 2021. <https://blog.ethereum.org/2021/05/18/country-power-no-more/>. Beller, Jonathan. “Economic Media: Crypto and the Myth of Total Liquidity.” Australian Humanities Review 66 (2020): 215-225. Beller, Jonathan. The Cinematic Mode of Production: Attention Economy and the Society of the Spectacle. Hanover, NH: Dartmouth College P, 2006. Bowden, James, and Edward Thomas Jones. “NFTs Are Much Bigger than an Art Fad – Here’s How They Could Change the World.” The Conversation, 26 Apr. 2021. <http://theconversation.com/nfts-are-much-bigger-than-an-art-fad-heres-how-they-could-change-the-world-159563>. Cardano. “Cardano, Ouroboros.” 14 Feb. 2022 <https://cardano.org/ouroboros/>. Castor, Amy. “Why Ethereum Is Switching to Proof of Stake and How It Will Work.” MIT Technology Review, 4 Mar. 2022. <https://www.technologyreview.com/2022/03/04/1046636/ethereum-blockchain-proof-of-stake/>. CurrencyWorks. “Vuele - CurrencyWorks™.” 3 Feb. 2022 <https://currencyworks.io/project/vuele/>. Dailey, Natasha. “Quentin Tarantino Will Sell His ‘Pulp Fiction’ NFTs This Month despite a Lawsuit from the Film’s Producer Miramax.” Business Insider, 5 Jan. 2022. <https://www.businessinsider.com.au/quentin-tarantino-to-sell-pulp-fiction-nft-despite-miramax-lawsuit-2022-1>. Daly, Lyle. “What Is Proof of Work (PoW) in Crypto?” The Motley Fool, 27 Sep. 2021. <https://www.fool.com/investing/stock-market/market-sectors/financials/cryptocurrency-stocks/proof-of-work/>. Davis, Kathleen, and Ira Flatow. “Will Blockchain Really Change the Way the Internet Runs?” Science Friday, 23 July 2021. <https://www.sciencefriday.com/segments/blockchain-internet/>. De Vries, Alex, and Christian Stoll. “Bitcoin’s Growing E-Waste Problem.” Resources, Conservation & Recycling 175 (2021): 1-11. Dimitropoulos, Georgios. “Global Currencies and Domestic Regulation: Embedding through Enabling?” In Regulating Blockchain: Techno-Social and Legal Challenges. Eds. Philipp Hacker et al. Oxford: Oxford UP, 2019. 112–139. Edelman, Gilad. “What Is Web3, Anyway?” Wired, Nov. 2021. <https://www.wired.com/story/web3-gavin-wood-interview/>. European Business Review. “Future of Blockchain: How Will It Revolutionize the World in 2022 & Beyond!” The European Business Review, 1 Nov. 2021. <https://www.europeanbusinessreview.com/future-of-blockchain-how-will-it-revolutionize-the-world-in-2022-beyond/>. Fletcher, James. “How I Learned to Stop Worrying and Love the NFT!” FilmInk, 2 Oct. 2021. <https://www.filmink.com.au/how-i-learned-to-stop-worrying-and-love-the-nft/>. Gayvoronskaya, Tatiana, and Christoph Meinel. Blockchain: Hype or Innovation. Cham: Springer. Guadamuz, Andres. “The Treachery of Images: Non-Fungible Tokens and Copyright.” Journal of Intellectual Property Law & Practice 16.12 (2021): 1367–1385. Huang, Jon, Claire O’Neill, and Hiroko Tabuchi. “Bitcoin Uses More Electricity than Many Countries. How Is That Possible?” The New York Times, 3 Sep. 2021. 
<http://www.nytimes.com/interactive/2021/09/03/climate/bitcoin-carbon-footprint-electricity.html>. Hutchinson, Pamela. “Believe the Hype? What NFTs Mean for Film.” BFI, 22 July 2021. <https://www.bfi.org.uk/sight-and-sound/features/nfts-non-fungible-tokens-blockchain-film-funding-revolution-hype>. Klein, Ezra. “A Viral Case against Crypto, Explored.” The Ezra Klein Show, n.d. 7 Apr. 2022 <https://www.nytimes.com/2022/04/05/opinion/ezra-klein-podcast-dan-olson.html>. Livni, Ephrat. “Venture Capital Funding for Crypto Companies Is Surging.” The New York Times, 1 Dec. 2021. <https://www.nytimes.com/2021/12/01/business/dealbook/crypto-venture-capital.html>. Memoria, Francisco. “Popular Firearms Marketplace GunBroker to Launch ‘FreedomCoin’ Stablecoin.” CryptoGlobe, 30 Jan. 2019. <https://www.cryptoglobe.com/latest/2019/01/popular-firearm-marketplace-gunbroker-to-launch-freedomcoin-stablecoin/>. Miller, Nick. “Australian Start-Up Aims to Make the Weird World of NFT Art ‘Less Crap’.” Sydney Morning Herald, 19 Jan. 2022. <https://www.smh.com.au/culture/art-and-design/australian-startup-aims-to-make-the-weird-world-of-nft-art-less-crap-20220119-p59pev.html>. Moore, Kevin. “Kevin Smith Drops an NFT Project Packed with Utility.” One37pm, 27 Apr. 2021. <https://www.one37pm.com/nft/art/kevin-smith-jay-and-silent-bob-nft-killroy-was-here>. Nano. “Press Kit.” 14 Feb. 2022 <https://content.nano.org/Nano-Press-Kit.pdf>. Natalee. “James Bond No Time to Die VeVe NFTs Launch.” NFT Culture, 22 Sep. 2021. <https://www.nftculture.com/nft-marketplaces/4147/>. NewsBTC. “Mogul Productions to Conduct the First Ever Blockchain-Based Voting for Film Financing.” NewsBTC, 22 July 2021. <https://www.newsbtc.com/news/company/mogul-productions-to-conduct-the-first-ever-blockchain-based-voting-for-film-financing/>. NFT Investments PLC. “Approach.” 21 Jan. 2022 <https://www.nftinvest.pro/approach>. NFT Studios. “Projects.” 9 Feb. 2022 <https://nftstudios.dev/projects>. Norton, Robert. “NFTs Have Changed the Art of the Possible.” Wired UK, 14 Feb. 2022. <https://www.wired.co.uk/article/nft-art-world>. Ossinger, Joanna. “Crypto World Hits $3 Trillion Market Cap as Ether, Bitcoin Gain.” Bloomberg.com, 8 Nov. 2021. <https://www.bloomberg.com/news/articles/2021-11-08/crypto-world-hits-3-trillion-market-cap-as-ether-bitcoin-gain>. Patrickson, Bronwin. “What Do Blockchain Technologies Imply for Digital Creative Industries?” Creativity and Innovation Management 30.3 (2021): 585–595. Quiniou, Matthieu. Blockchain: The Advent of Disintermediation, New York: John Wiley, 2019. Raybaud, Sebastien. “First Asian Film NFT Sold, Wong Kar-Wai’s ‘In the Mood for Love’ Fetches US$550k in Sotheby’s Evening Sale, Auctions News.” TheValue.Com, 10 Oct. 2021. <https://en.thevalue.com/articles/sothebys-auction-wong-kar-wai-in-the-mood-for-love-nft>. Rennie, Ellie. “The Challenges of Distributed Administrative Systems.” Australian Humanities Review 66 (2020): 233-239. Roose, Kevin. “What are NFTs?” The New York Times, 18 Mar. 2022. <https://www.nytimes.com/interactive/2022/03/18/technology/nft-guide.html>. Smee, Sebastian. “Will NFTs Transform the Art World? Are They Even Art?” Washington Post, 18 Dec. 2021. <https://www.washingtonpost.com/arts-entertainment/2021/12/18/nft-art-faq/>. Solana. “Solana’s Energy Use Report: November 2021.” Solana, 24 Nov. 2021. <https://solana.com/news/solana-energy-usage-report-november-2021>. Tewari, Hitesh. “Four Ways Blockchain Could Make the Internet Safer, Fairer and More Creative.” The Conversation, 12 July 2019. 
<http://theconversation.com/four-ways-blockchain-could-make-the-internet-safer-fairer-and-more-creative-118706>. Vaughan, Hunter. Hollywood’s Dirtiest Secret: The Hidden Environmental Costs of the Movies. New York: Columbia UP, 2019. Vision and Value. “CurrencyWorks (CWRK): Under-the-Radar, Crypto-Agnostic, Blockchain Pick-and-Shovel Play.” Seeking Alpha, 1 Dec. 2021. <https://seekingalpha.com/article/4472715-currencyworks-under-the-radar-crypto-agnostic-blockchain-pick-and-shovel-play>. Wiesner, Darren. “Exclusive – BC Producer – Rick Dugdale Becomes a Heavyweight.” Hollywood North Magazine, 29 Aug. 2017. <https://hnmag.ca/interview/exclusive-bc-producer-rick-dugdale-becomes-a-heavyweight/>. Yeung, Karen. “Regulation by Blockchain: The Emerging Battle for Supremacy between the Code of Law and Code as Law.” The Modern Law Review 82.2 (2019): 207–239.
35

Ellison, Elizabeth. "The #AustralianBeachspace Project: Examining Opportunities for Research Dissemination Using Instagram." M/C Journal 20, no. 4 (August 16, 2017). http://dx.doi.org/10.5204/mcj.1251.

Full text
Abstract:
Introduction

In late 2016, I undertook a short-term, three-month project to share some of my research through my Instagram account using the categorising hashtag #AustralianBeachspace. Much of this work emerged from my PhD thesis, which is being published in journal articles, but has yet to be published in any accessible or overarching way. I wanted to experiment with the process of using a visual social media tool for research dissemination. I felt that Instagram’s ability to combine text and image allowed for an aesthetically interesting way to curate this particular research project. My research is concerned with representations of the Australian beach, and thus the visual, image-based focus of Instagram seemed ideal. In this article, I briefly examine some of the existing research around academic practices of research dissemination, social media use, and the emerging research around Instagram itself. I will then examine my own experience of using Instagram as a tool for depicting curated, aesthetically driven research dissemination and reflect on whether this use of Instagram is effective for representing and disseminating research.

Research Dissemination

Researchers, especially those backed by public funding, are always bound by the necessity of sharing the findings and transferring the knowledge gained during the research process. Research metrics are linked to workload allocations and promotion pathways for university researchers, providing clear motivation to maintain an active research presence. For most academics, the traditional research dissemination strategies involve academic publications: peer-reviewed scholarly books and journal articles.

For academics working within a higher education policy climate that centres on measuring impact and engagement, peer-reviewed publications remain the gold standard. There are indicators, however, that research dissemination strategies may need to include methods for targeting non-academic outputs. Gunn and Mintrom (21), in their recent research, “anticipate that governments will increasingly question the value of publicly funded research and seek to evaluate research impact”. And this process, they argue, is not without challenges. Education Minister Simon Birmingham supports their claim by suggesting the Turnbull Government is looking for more meaningful ways of evaluating value in higher education research outcomes, “rather than only allocating funding to researchers who spend their time trying to get published in journals” (para 5). It therefore makes sense that academics are investigating ways of using social media to broaden their research dissemination, despite the fact that social media metrics do not yet count towards traditional citations within the university sector.

Research Dissemination via Social Media

There has been an established practice of researchers using social media, especially blogging (Kirkup) and Twitter, as ways of sharing information about their current projects, findings, and most recent publications, or of connecting with colleagues. Gruzd, Staves, and Wilk (2348) investigated social media use by academics, suggesting “scholars are turning to social media tools professionally because they are more convenient for making new connections with peers, collaboration, and research dissemination”.
It is possible to see social media functioning as a new way of representing research – playing an important role in the shaping and developing of ideas, sharing those ideas, and functioning as a dissemination tool after the research has concluded.To provide context for the use of social media in research, this section briefly covers blogging and Twitter, two methods considered somewhat separated from university frameworks, and also professional platforms, such as Academia.edu and The Conversation.Perhaps the tool that has the most history in providing another avenue for academics to share their work is academic blogging. Blogging is considered an avenue that allows for discussion of topics prior to publication (Bukvova, 4; Powell, Jacob, and Chapman, 273), and often uses a more conversational tone than academic publishing. It provides opportunity to share research in long form to an open, online audience. Academic blogs have also become significant parts of online academic communities, such as the highly successful blog, The Thesis Whisperer, targeted for research students. However, many researchers in this space note the stigma attached to blogging (and other forms of social media) as useless or trivial; for instance, in Gruzd, Staves, and Wilk’s survey of academic users of social media, an overwhelming majority of respondents suggested that institutions do not recognise these activities (2343). Because blogging is not counted in publication metrics, it is possible to dismiss this type of activity as unnecessary.Twitter has garnered attention within the academic context because of its proliferation in conference engagement and linking citation practices of scholars (Marht, Weller, and Peters, 401–406). Twitter’s platform lends itself as a place to share citations of recently published material and a way of connecting with academic peers in an informal, yet meaningful way. Veletsianos has undertaken an analysis of academic Twitter practices, and there is a rise in popularity of “Tweetable Abstracts” (Else), or the practice of refining academic abstracts into a shareable Tweet format. According to Powell, Jacob, and Chapman (272), new media (including both Twitter and the academic blog) offer opportunities to engage with an increasingly Internet-literate society in a way that is perhaps more meaningful and certainly more accessible than traditional academic journals. Like blogging, the use of Twitter within the active research phase and pre-publication, means the platform can both represent and disseminate new ideas and research findings.Both academic blogs and Twitter are widely accessible and can be read by Internet users beyond academia. It appears likely, however, that many blogs and academic Twitter profiles are still accessed and consumed primarily by academic audiences. This is more obvious in the increasingly popular specific academic social media platforms such as ResearchGate or Academia.edu.These websites are providing more targeted, niche communication and sharing channels for scholars working in higher education globally, and their use appears to be regularly encouraged by institutions. These sites attempt to mediate between open access and copyright in academic publishing, encouraging users to upload full-text documents of their publications as a means of generating more attention and citations (Academia.edu cites Niyazov et al’s study that suggests articles posted to the site had improved citation counts). 
ResearchGate and Academia.edu function primarily as article repositories, albeit with added social networking opportunities that differentiate them from more traditional university repositories.In comparison, the success of the online platform The Conversation, with its tagline “Academic rigour, journalistic flair”, shows the growing enthusiasm and importance of engaging with more public facing outlets to share forms of academic writing. Many researchers are using The Conversation as a way of sharing their research findings through more accessible, shorter articles designed for the general public; these articles regularly link to the traditional academic publications as well.Research dissemination, and how the uptake of online social networks is changing individual and institution-wide practices, is a continually expanding area of research. It is apparent that while The Conversation has been widely accepted and utilised as a tool of research dissemination, there is still some uncertainty about using social media as representing or disseminating findings and ideas because of the lack of impact metrics. This is perhaps even more notable in regards to Instagram, a platform that has received comparatively little discussion in academic research more broadly.Instagram as Social MediaInstagram is a photo sharing application that launched in 2010 and has seen significant uptake by users in that time, reaching 700 million monthly active users as of April 2017 (Instagram “700 Million”). Recent additions to the service, such as the “Snapchat clone” Instagram Stories, appear to have helped boost growth (Constine, para 4). Instagram then is a major player in the social media user market, and the emergence of academic research into the platform reflect this. Early investigations include Manikonda, Hu and Kambhampati’s analysis social networks, demographics, and activities of users in which they identified some clear differences in usage compared to Flickr (another photo-sharing network) and Twitter (5). Hochman and Manovich and Hochman and Schwartz examined what information visualisations generated from Instagram images can reveal about the “visual rhythms” of geographical locations such as New York City.To provide context for the use of Instagram as a way of disseminating research through a more curated, visual approach, this section will examine professional uses of Instagram, the role of Influencers, and some of the functionalities of the platform.Instagram is now a platform that caters for both personal and professional accounts. The user-interface allows for a streamlined and easily navigable process from taking a photo, adding filters or effects, and sharing the photo instantly. The platform has developed to include web-based access to complement the mobile application, and has also introduced Instagram Business accounts, which provide “real-time metrics”, “insights into your followers”, and the ability to “add information about your company” (Instagram “Instagram Business”). This also comes with the option to pay for advertisements.Despite its name, many users of Instagram, especially those with profiles that are professional or business orientated, do not only produce instant content. While the features of Instagram, such as geotagging, timestamping, and the ability to use the camera from within the app, lend themselves to users capturing their everyday experience in the moment, more and more content is becoming carefully curated. 
As such, some accounts are blurring the line between personal and professional, becoming what Crystal Abidin calls Influencers, identifying the practice as when microcelebrities are able to use the “textual and visual narration of their personal, everyday lives” to generate paid advertorials (86). One effect of this, as Abidin investigates in the context of Singapore and the #OOTD (Outfit of the Day) hashtag, is the way “everyday Instagram users are beginning to model themselves after Influences” and therefore generate advertising content “that is not only encouraged by Influences and brands but also publicly utilised without remuneration” (87). Instagram, then, can be a very powerful platform for businesses to reach wide audiences, and the flexibility of caption length and visual content provides a type of viral curation practice as in the case of the #OOTD hashtag following.Considering the focus of my #AustralianBeachspace project on Australian beaches, many of the Instagram accounts and hashtags I encountered and engaged with were tourism related. Although this will be discussed in more detail below, it is worth noting that individual Influencers exist in these fields as well and often provide advertorial content for companies like accommodation chains or related products. One example is user @katgaskin, an Influencer who both takes photos, features in photos, and provides “organic” adverts for products and services (see image). Not all her photos are adverts; some are beach or ocean images without any advertorial content in the caption. In this instance, the use of distinctive photo editing, iconic imagery (the “salty pineapple” branding), and thematic content of beach and ocean landscapes, makes for a recognisable and curated aesthetic. Figure 1: An example from user @katgaskin's Instagram profile that includes a mention of a product. Image sourced from @katgaskin, uploaded 2 June 2017.@katgaskin’s profile’s aesthetic identity is, as such, linked with the ocean and the beach. Although her physical location regularly changes (her profile includes images from, for example, Nicaragua, Australia, and the United States), the thematic link is geographical. And research suggests the visual focus of Instagram lends itself to place-based content. As Hochman and Manovich state:While Instagram eliminates static timestamps, its interface strongly emphasizes physical place and users’ locations. The application gives a user the option to publicly share a photo’s location in two ways. Users can tag a photo to a specific venue, and then view all other photos that were taken and tagged there. If users do not choose to tag a photo to a venue, they can publically share their photos’ location information on a personal ‘photo-map’, displaying all photos on a zoomable word map. (para 14)This means that the use of place in the app is anchored to the visual content, not the uploader’s location. While it is possible to consider Instagram’s intention was to anchor the content and the uploader’s location together (as in the study conducted by Weilenmann, Hillman, and Jungselius that explored how Instagram was used in the museum), this is no longer always the case. In this way, Instagram is also providing a platform for more serious photographers to share their images after they have processed and edited them and connect the image with the image content rather than the uploader’s position.This place-based focus also shares origins in tourism photography practices. 
For instance, Kibby’s analysis of the use of Instagram as a method for capturing the “tourist gaze” in Monument Valley notes that users mostly wanted to capture the “iconic” elements of the site (most of which were landscape formations made notable through representations in popular culture).

Another area of research into Instagram use is hashtag practice (see, for example, Ferrara, Interdonato, and Tagarelli). Highfield and Leaver have generated a methodology for mapping hashtags and analysing the information this can reveal about user practices. Many Instagram accounts use hashtags to provide temporal or place-based information, some specific (such as #sunrise or #newyorkcity) and some more generic (such as #weekend or #beach). Of particular relevance here is the role hashtags play in generating higher levels of user engagement. It is also worth noting the role of “algorithmic personalization” introduced by Instagram earlier in 2017 and the lukewarm user response identified by Mahnke Skrubbeltrang, Grunnet, and Tarp’s analysis, suggesting “users are concerned with algorithms dominating their experience, resulting in highly commercialised experience” (section 7).

Another key aspect of Instagram’s functionality is linked to the aesthetic of the visual content: photographic filters. Now a mainstay of other platforms such as Facebook and Twitter, Instagram popularised the use of filters by providing easily accessible options directly within the app interface. Now, other apps such as VSCO allow for more detailed editing of images that can then be imported into Instagram; however, the pre-set filters have proven popular with large numbers of users. A 2014 study by Araújo, Corrêa, da Silva, et al. found that 76% of analysed images had been processed in some way.

By considering the professional uses of Instagram and the functionality of the app (geotagging, hashtagging, and filters), it is possible to summarise Instagram as a social media platform that, although initially perhaps intended to capture the everyday visual experiences of amateur photographers using their smartphones, has adapted to become a network for sharing images that can serve both personal and professional purposes. It has a focus on place, with its geotagging capacity and hashtag practices, and can include captions.

The #AustralianBeachspace Project

In October 2016, I began a social media project called #AustralianBeachspace that was designed to showcase content from my PhD thesis and ongoing work into representations of Australian beaches in popular culture (a collection of the project posts only, as opposed to the ongoing Instagram profile, can be found here). The project was envisaged to run for three months; single posts (including an image and caption) were planned and uploaded six times a week (every day except Sundays). Although I have occasionally continued to use the hashtag since the project’s completion (on 24 Dec. 2016), the frequency and planned nature of the posts have since significantly changed. What has not changed is the strong thematic through line of my posts, all of which continue to rely heavily on beach imagery.
This is distinct from other academic social media use, which is often more focused on the everyday activity of academia.

Instagram was my social media choice for this project for two main reasons: I had no existing professional Instagram profile (unlike Twitter) and thus I could curate a complete project in isolation, and the subject of my PhD thesis was representations of Australian beaches in literature and film. As such, my research was appropriate for, and in fact was augmented by, visual depiction. It is also worth noting the tendency reported by myself and others (Huntsman; Booth) of academics not considering the beach an area worthy of focus. This resonates with Bech Albrechtslund and Albrechtslund’s argument that “social media practices associated with leisure and playfulness” are still meaningful and worthy of examination.

Up until this point, my research outputs had been purely textual. I therefore needed to generate a significant number of visual elements to complement the vast amount of textual content already created. I used my PhD thesis to provide the thematic structure (I have detailed this process in more depth here), and then used the online tool Trello to plan, organise, and arrange the intended posts (image and caption). The project includes images taken by myself and my partner, as well as other images with no copyright limitations attached, sourced through photo-sharing sites like Unsplash.com.

The images were all selected because of their visual representation of an Australian beach and the alignment of the image with the themes of the project. For instance, one theme focused on the under-represented negative aspects of the beach. One image used in this theme was a photo of Bondi Beach ocean pool, empty at night. I carefully curated the images and arranged them according to the thematic schedule (as can be seen below) and then wrote the accompanying textual captions.

Figure 2: A sample of the schedule used for the posting of curated images and captions.

While there were some changes to the schedule throughout (for instance, my attendance at the 2016 Sculpture by the Sea exhibition prompted me to create a sixth theme), the process of content curation and creation remained the same. Visual curation of the images was a particularly important aspect of the project, and I did use an external photo processing application to create an aesthetic across the collection. As Kibby notes, “photography is intrinsically linked with tourism” (para 9), and although not inherently a tourism project, #AustralianBeachspace certainly engaged with touristic tropes by focusing on Australian beaches, an iconic part of Australian national and cultural identity (Ellison 2017; Ellison and Hawkes 2016; Fiske, Hodge, and Turner 1987). However, while beaches are perhaps instinctively touristic in their focus on natural landscapes, this project was attempting to illustrate more complexity in this space (which mirrors an intention of my PhD thesis). As such, some images were chosen because of their “ordinariness” or their subversion of the iconic beach images (see below).

Figures 3 and 4: Two images that capture some less iconic images of Australian beaches; one that shows an authentic, ordinary summer’s day and another that shows an empty beach during winter.

I relied on captions to provide the textual information about the image. I also included details about the photographer where possible, and linked all the images with the hashtag #AustralianBeachspace.
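For readers curious about the mechanics of this kind of planning, the sketch below shows one way the thematic schedule described above could be represented programmatically: themes are cycled across a six-posts-a-week calendar and paired with queued images and captions. This is a hypothetical reconstruction for illustration only; the actual project was planned manually in Trello, and the theme names, dates, and file names here are invented.

```python
# Hypothetical sketch of a thematic posting schedule (the real project used Trello).
from datetime import date, timedelta
from itertools import cycle

themes = cycle([
    "iconic beach",        # invented theme labels for illustration
    "negative aspects",
    "beach on screen",
    "beach in literature",
    "ordinary/everyday",
])

post_queue = [
    {"image": "bondi_pool_night.jpg", "caption": "Bondi's ocean pool after dark..."},
    {"image": "winter_beach.jpg", "caption": "An empty beach in winter..."},
    # ...one entry per planned post
]

start = date(2016, 10, 3)  # a Monday; posts ran six days a week (no Sundays)
schedule = []
day = start
for post in post_queue:
    if day.weekday() == 6:          # skip Sundays
        day += timedelta(days=1)
    schedule.append({"date": day.isoformat(), "theme": next(themes), **post})
    day += timedelta(days=1)

for entry in schedule:
    print(entry["date"], "|", entry["theme"], "|", entry["image"])
```

Even at this toy level, the structure makes the curatorial labour visible: every post slot needs a theme, an image, and a caption before anything reaches Instagram.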
The textual content, much of which emerged from ongoing and extensive research into the topic, was somewhat easier to collate. However, it required careful reworking and editing to suit the desired audience and to work in conjunction with the image. I kept captions to the approximate length of a paragraph, each concerned with a single point. This process forced me to distil ideas and concepts into short chunks of writing, which is distinct from other forms of academic output. This textual content was designed to be accessible beyond an academic audience, but still used a relatively formal voice (especially in comparison to more personal users of the platform).

I provided additional hashtags in a first comment, which were intended to generate some engagement. Notably, these hashtags were content-related (such as #beach and #surf; they were not targeting academic hashtags). At the time of writing, my follower count is 70. The most liked (or “favourited”) photo from the project received 50 likes, and the most comments received was 6 (on a number of posts). Some photos published since the end of the project have received higher numbers of likes and comments. This certainly does not suggest enormous impact from this project. Hashtags utilised in this project were adopted from popular and related hashtags identified with the analytics tool Websta.me, as well as from hashtags used in profiles with a similar content style, such as #seeaustralia, #thisisqueensland, #visitNSW, #bondibeach, #sunshinecoast, and so on. Notably, many of the hashtags were place-based. The engagement of this project with users beyond academia was apparent: followers and comments on the posts were more regularly from professional photographers, tourism bodies, or location-based businesses. In fact, because of the content- or place-based hashtagging practices I employed, it was difficult to attract an academic audience at all. However, although the project was intended as an experiment with public-facing research dissemination, I did not actively adopt a stringent engagement strategy and have not kept per-day metrics to track engagement. This is a limitation of the study and undoubtedly allows scope for further research.

Conclusion

Instagram is a platform that does not have clear pathways for reaching academic audiences in targeted ways. At this stage, little research has emerged that investigates Instagram use among academics, although it is possible to presume there are similarities with blogging or Twitter (for example, conference posting and making connections with colleagues). However, the functionality of Instagram does lend itself to creating and curating aesthetically interesting ways of disseminating, and in fact representing, research. Ideas and findings must be depicted as images and captions, and the curatorial process of marrying visual images to complement or support textual information can make for more accessible and palatable content. Perhaps most importantly, the content is freely accessible and not locked behind paywalls or expensive academic publications. It can also be easily archived and shared.

The #AustralianBeachspace project is small-scale and not indicative of widespread academic practice. However, examining the process of creating the project, and the role Instagram may play in potentially reaching a more diverse, public audience for academic research, suggests scope for further investigation.
Although not playing an integral role in publication metrics and traditional measures of research impact, the current changing climate of higher education policy provides motivations to continue exploring non-traditional methods for disseminating research findings and tracking research engagement and impact.Instagram functions as a useful platform for sharing research data through a curated collection of images and captions. Rather than being a space for instant updates on the everyday life of the academic, it can also function in a more aesthetically interesting and dynamic way to share research findings and possibly generate wider, public-facing engagement for topics less likely to emerge from behind the confines of academic journal publications. ReferencesAbidin, Crystal. “Visibility Labour: Engaging with Influencers’ Fashion Brands and #Ootd Advertorial Campaigns on Instagram.” Media International Australia 161.1 (2016): 86–100. <http://journals.sagepub.com/doi/abs/10.1177/1329878X16665177>.Araújo, Camila Souza, Luiz Paulo Damilton Corrêa, Ana Paula Couto da Silva, et al. “It is Not Just a Picture: Revealing Some User Practices in Instagram.” Proceedings of the 9th Latin American Web Congress, Ouro Preto, Brazil, 22–24 October, 2014. <http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7000167>Bech Albrechtslund, Anne-Metter, and Anders Albrechtslund. “Social Media as Leisure Culture.” First Monday 19.4 (2014). <http://firstmonday.org/ojs/index.php/fm/article/view/4877/3867>.Birmingham, Simon. “2017 Pilot to Test Impact, Business Engagement of Researchers.” Media Release. Australian Government: Australian Research Council. 21 Nov. 2016. <http://www.arc.gov.au/news-media/media-releases/2017-pilot-test-impact-business-engagement-researchers>.Booth, Douglas. Australian Beach Cultures: The History of Sun, Sand, and Surf. London, United Kingdom: F. Cass, 2001.Bukvova, Helena. “Taking New Routes: Blogs, Web Sites, and Scientific Publishing.” ScieCom Info 7.2 (2011). 20 May 2017 <http://journals.lub.lu.se/index.php/sciecominfo/article/view/5148>.Constine, Josh. “Instagram’s Growth Speeds Up as It Hits 700 Million Users.” Techcrunch, 26 Apr. 2017. 1 June 2017 <https://techcrunch.com/2017/04/26/instagram-700-million-users/>.drlizellison. “Dr Liz Ellison.” Instagram.com, 2017. 8 June 2017 <http://www.instagram.com/drlizellison>.Ellison, Elizabeth. “The Australian Beachspace: Flagging the Spaces of Australian Beach Texts.” PhD thesis. Brisbane: Queensland U of Technology, 2013. <https://eprints.qut.edu.au/63468/>.Ellison, Elizabeth. “The Gritty Urban: The Australian Beach as City Periphery in Cinema.” Filmburia: Screening the Suburbs. Eds. David Forrest, Graeme Harper and Jonathan Rayner. UK: Palgrave Macmillan, 2017. 79–94.Ellison, Elizabeth, and Lesley Hawkes. “Australian Beachspace: The Plurality of an Iconic Site”. Borderlands e-Journal: New Spaces in the Humanities 15.1 (2016). 4 June 2017 <http://www.borderlands.net.au/vol15no1_2016/ellisonhawkes_beachspace.pdf>.Else, Holly. “Tell Us about Your Paper—and Make It Short and Tweet.” Times Higher Education, 9 July 2015. 1 June 2017 <https://www.timeshighereducation.com/opinion/tell-us-about-your-paper-and-make-it-short-and-tweet>.Ferrara, Emilio, Roberto Interdonato, and Andrea Tagarelli. “Online Popularity and Topical Interests through the Lens of Instagram.” Proceedings of the 25th ACM Conference on Hypertext and Social Media, Santiago, Chile, 1–4 Sep. 2014. 
<http://dx.doi.org/10.1145/2631775.2631808>.Gruzd, Anatoliy, Kathleen Staves, and Amanda Wilk. “Connected Scholars: Examining the Role of Social Media in Research Practices of Faculty Using the Utaut Model.” Computers in Human Behavior 28.6 (2012): 2340–50.Gunn, Andrew, and Michael Mintrom. “Evaluating the Non-Academic Impact of Academic Research: Design Considerations.” Journal of Higher Education Policy and Management 39.1 (2017): 20–30. <http://dx.doi.org/10.1080/1360080X.2016.1254429>.Highfield, Tim, and Tama Leaver. “A Methodology for Mapping Instagram Hashtags”. First Monday 20.1 (2015). 18 Oct. 2016 <http://firstmonday.org/ojs/index.php/fm/article/view/5563/4195>.Hochman, Nadav, and Lev Manovich. “Zooming into an Instagram City: Reading the Local through Social Media.” First Monday 18.7 (2013). <http://firstmonday.org/ojs/index.php/fm/article/view/4711/3698>.Hochman, Nadav, and Raz Schwartz. “Visualizing Instagram: Tracing Cultural Visual Rhythms.” Proceedings of the Workshop on Social Media Visualization (SocMedVis) in Conjunction with the Sixth International AAAI Conference on Weblogs and Social Media (ICWSM–12), 2012. 6–9. 2 June 2017 <http://razschwartz.net/wp-content/uploads/2012/01/Instagram_ICWSM12.pdf>.Huntsman, Leone. Sand in Our Souls: The Beach in Australian History. Carlton South, Victoria: Melbourne U Press, 2001.Instagram. “700 Million.” Instagram Blog, 26 Apr. 2017. 6 June 2017 <http://blog.instagram.com/post/160011713372/170426-700million>.Instagram. “Instagram Business.” 6 June 2017. <https://business.instagram.com/>.katgaskin. “Salty Pineapple”. Instagram.com, 2017. 2 June 2017 <https://www.instagram.com/katgaskin/>.katgaskin. “Salty Hair with a Pineapple Towel…” Instagram.com, 2 June 2017. 6 June 2017 <https://www.instagram.com/p/BU0zSWUF0cm/?taken-by=katgaskin>.Kibby, Marjorie Diane. “Monument Valley, Instagram, and the Closed Circle of Representation.” M/C Journal 19.5 (2016). 20 April 2017 <http://journal.media-culture.org.au/index.php/mcjournal/article/view/1152>.Kirkup, Gill. “Academic Blogging: Academic Practice and Academic Identity.” London Review of Education 8.1 (2010): 75–84.liz_ellison. “#AustralianBeachspace.” Storify.com. 8 June 2017. <https://storify.com/liz_ellison/australianbeachspace>.Mahnke Skrubbeltrang, Martina, Josefine Grunnet, and Nicolar Traasdahl Tarp. “#RIPINSTAGRAM: Examining User’s Counter-Narratives Opposing the Introduction of Algorithmic Personalization on Instagram.” First Monday 22.4 (2017). <http://firstmonday.org/ojs/index.php/fm/article/view/7574/6095>.Mahrt, Merja, Katrin Weller, and Isabella Peters. “Twitter in Scholarly Communication.” Twitter and Society. Eds. Katrin Weller, Axel Bruns, Jean Burgess, Merja Mahrt, and Cornelius Puschmann. New York: Peter Lang, 2014. 399–410. <https://eprints.qut.edu.au/66321/1/Twitter_and_Society_(2014).pdf#page=438>.Manikonda, Lydia, Yuheng Hu, and Subbarao Kambhampati. “Analyzing User Activities, Demographics, Social Network Structure and User-Generated Content on Instagram.” ArXiv (2014). 1 June 2017 <https://arxiv.org/abs/1410.8099>.Niyazov, Yuri, Carl Vogel, Richard Price, et al. “Open Access Meets Discoverability: Citations to Articles Posted to Academia.edu.” PloS One 11.2 (2016): e0148257. <https://doi.org/10.1371/journal.pone.0148257>.Powell, Douglas A., Casey J. Jacob, and Benjamin J. Chapman. “Using Blogs and New Media in Academic Practice: Potential Roles in Research, Teaching, Learning, and Extension.” Innovative Higher Education 37.4 (2012): 271–82. 
<http://dx.doi.org/10.1007/s10755-011-9207-7>.Veletsianos, George. “Higher Education Scholars' Participation and Practices on Twitter.” Journal of Computer Assisted Learning 28.4 (2012): 336–49. <http://dx.doi.org/10.1111/j.1365-2729.2011.00449.x>.Weilenmann, Alexandra, Thomas Hillman, and Beata Jungselius. “Instagram at the Museum: Communicating the Museum Experience through Social Photo Sharing.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Paris: ACM Press, 2013. 1843–52. <dx.doi.org/10.1145/2470654.2466243>.