Journal articles on the topic "HTML5 APIs"

To see the other types of publications on this topic, follow the link: HTML5 APIs.

Cite your source in APA, MLA, Chicago, Harvard, and other styles

Consult the top 21 journal articles for your research on the topic "HTML5 APIs".

Next to every work in the list of references you will find an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these details are present in the source's metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1

Lewis, David, Qun Liu, Leroy Finn, Chris Hokamp, Felix Sasaki, and David Filip. "Open, web-based internationalization and localization tools." Translation Spaces 3 (November 28, 2014): 99–132. http://dx.doi.org/10.1075/ts.3.05lew.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Just as many software applications have moved from a desktop software deployment model to a Software-as-a-Service (SaaS) model, so we have seen tool vendors in the language service industry move to a SaaS model, e.g., for web-based Computer Assisted Translation (CAT) tools. However, many of these offerings fail to take full advantage of the Open Web Platform, i.e., the rich set of web browser-based APIs linked to HTML5. We examine the interoperability landscape that developers of web-based translation tools can benefit from, and in particular the potential offered by the open metadata defined in the W3C’s (World Wide Web Consortium) recent Internationalization Tag Set v2.0 Recommendation. We examine how this can be used in conjunction with the XML Localisation Interchange File Format (XLIFF) standardized by OASIS to exchange translation jobs between servers and JavaScript-based CAT tools running in the web browser. We also explore how such open metadata can support activities in the multilingual web processing chain before and after translation.
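As a minimal illustration of the kind of open metadata the abstract refers to (not code from the paper): ITS 2.0 maps its Translate data category onto the standard HTML5 translate attribute, so a browser-based CAT tool can collect these flags before building an XLIFF job. The function name and usage below are illustrative assumptions.

    // Sketch: collect ITS 2.0 "Translate" flags from an HTML5 document.
    // translate="no" is standard HTML5; ITS 2.0 maps its Translate
    // data category onto it. Hypothetical helper, not the paper's code.
    function collectNonTranslatable(root: Document): string[] {
      const flagged = root.querySelectorAll('[translate="no"]');
      return Array.from(flagged).map(el => el.outerHTML);
    }
    // A web-based CAT tool could exclude these fragments when it
    // serialises page content into XLIFF translation units.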
2

Liu, Shukai, Xuexiong Yan, Qingxian Wang, Xu Zhao, Chuansen Chai, and Yajing Sun. "A Protection Mechanism against Malicious HTML and JavaScript Code in Vulnerable Web Applications." Mathematical Problems in Engineering 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/7107042.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The high-profile attacks of malicious HTML and JavaScript code have seen a dramatic increase in both awareness and exploitation in recent years. Unfortunately, existing security mechanisms do not provide enough protection. We propose a new protection mechanism named PMHJ, based on the support of both web applications and web browsers, against malicious HTML and JavaScript code in vulnerable web applications. PMHJ prevents the injection attack of HTML elements with a random attribute value and the node-split attack by an attribute with the hash value of the HTML element. PMHJ ensures the content security in web pages by verifying HTML elements, confining the insecure HTML usages which can be exploited by attackers, and disabling the JavaScript APIs which may incur injection vulnerabilities. PMHJ provides a flexible way to rein in high-risk, powerful JavaScript APIs according to the principle of least authority. The PMHJ policy is easy to deploy in real-world web applications. The test results show that PMHJ has little influence on the run time and code size of web pages.
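The abstract describes the mechanism only at a high level; the following browser-side sketch is an illustrative reconstruction (not the authors' code) of the core idea that legitimate elements carry a server-issued random token which injected markup cannot guess. The attribute name and token value are assumptions.

    // Illustrative reconstruction of a PMHJ-style element check.
    // Assumption: the server stamps trusted elements with a
    // per-response random token before sending the page.
    const SESSION_TOKEN = 'r4nd0m-per-response-value'; // hypothetical

    function isTrusted(el: Element): boolean {
      // An injected element cannot guess the random attribute value.
      return el.getAttribute('data-integrity') === SESSION_TOKEN;
    }

    document.querySelectorAll('form').forEach(form => {
      if (!isTrusted(form)) {
        form.remove(); // drop elements that fail verification
      }
    });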
3

Dos Santos, Marcio Carneiro. "Métodos digitais e a memória acessada por APIs: desenvolvimento de ferramenta para extração de dados de portais jornalísticos a partir da WayBack Machine." Revista Observatório 1, no. 2 (December 8, 2015): 23. http://dx.doi.org/10.20873/uft.2447-4266.2015v1n2p23.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We explore the possibility of automating data collection from web pages by applying custom code written in the Python programming language, using the specific syntax of HTML (HyperText Markup Language) to locate and extract elements of interest such as links, text and images. Automated data collection, also known as scraping, is an increasingly common resource in journalism. Drawing on access to the digital repository at www.web.archive.org, also known as the WayBack Machine, we developed a proof of concept of an algorithm able to retrieve, list and offer basic analysis tools for data collected from the various versions of news portals over time. KEYWORDS: Scraping. Python. Digital journalism. HTML. Memory.
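The paper's tooling is written in Python; purely as an illustration of the same idea, here is a sketch that queries the Wayback Machine's public CDX API for archived snapshots of a site (the site name is a placeholder; error handling omitted).

    // Sketch: list archived snapshots of a site via the Wayback CDX API.
    async function listSnapshots(site: string): Promise<string[][]> {
      const url = 'https://web.archive.org/cdx/search/cdx' +
        `?url=${encodeURIComponent(site)}&output=json&limit=10`;
      const res = await fetch(url);
      // First row is the header (urlkey, timestamp, original, ...).
      return (await res.json()) as string[][];
    }

    listSnapshots('example-news-portal.com').then(rows =>
      rows.slice(1).forEach(([, timestamp, original]) =>
        console.log(timestamp, original)));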
4

Kancherla, Jayaram, Alexander Zhang, Brian Gottfried, and Hector Corrada Bravo. "Epiviz Web Components: reusable and extensible component library to visualize functional genomic datasets." F1000Research 7 (July 17, 2018): 1096. http://dx.doi.org/10.12688/f1000research.15433.1.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Interactive and integrative data visualization tools and libraries are integral to exploration and analysis of genomic data. Web based genome browsers allow integrative data exploration of a large number of data sets for a specific region in the genome. Currently available web-based genome browsers are developed for specific use cases and datasets; integration and extensibility of the visualizations and the underlying libraries from these tools is therefore a challenging task. Genomic data visualization and software libraries that enable bioinformatic researchers and developers to implement customized genomic data viewers and data analyses for their application are much needed. Using recent advances in core web platform APIs and technologies, including Web Components, we developed the Epiviz Component Library, a reusable and extensible data visualization library and application framework for genomic data. Epiviz Components can be integrated with most JavaScript libraries and frameworks designed for HTML. To demonstrate the ease of integration with other frameworks, we developed the R/Bioconductor package epivizrChart, which provides interactive, shareable and reproducible visualizations of genomic data objects in R and Shiny, and can also create standalone HTML documents. The component library is modular by design, reusable and natively extensible, and therefore simplifies the process of managing and developing bioinformatic applications.
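For readers unfamiliar with the Web Components standard the library builds on, here is a minimal sketch of a custom element; the tag name and rendering are hypothetical and are not the actual Epiviz components.

    // Minimal custom element, the standard mechanism Epiviz builds on.
    class GenomeTrack extends HTMLElement {
      connectedCallback(): void {
        const region = this.getAttribute('region') ?? 'chr1:1-1000';
        this.textContent = `Rendering track for ${region}`; // placeholder
      }
    }
    // Custom element names must contain a hyphen.
    customElements.define('demo-genome-track', GenomeTrack);
    // Usage in HTML: <demo-genome-track region="chr11:100000-200000">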
5

Lalit, Reema, Karun Handa, and Nitin Sharma. "Automated Feedback Collection and Analysis System." International Journal of Distributed Artificial Intelligence 10, no. 1 (January 2018): 43–53. http://dx.doi.org/10.4018/ijdai.2018010104.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
There is widespread appreciation of analysis frameworks as a means of building a rigorous and fairer educational institution system. In a developing country like India, where higher education is expected to develop new resources for serving its people, a highly effective, fair and reliable automated feedback system is essential for assessment. In this article, an automated educator feedback system is proposed, since faculty performance and feedback analysis are essential to help educators achieve effective teaching and learning and to better engage students in classes. The system is aimed at holding teachers accountable for their performance. The proposed system is based on technologies such as PHP, JavaScript, HTML, the XAMPP server, MySQL, and Google APIs. It will help institution administrators to write confidential appraisal reports. The proposed system analyses the feedback class-wise as well as for individual faculty members, and provides the analysis in the form of Google charts.
6

Buhari, Bello A., Aliyu Mubarak, Bello A. Bodinga, and Muazu D. Sifawa. "Design Of A Secure Virtual File Storage System On Cloud Using Hybrid Cryptography." International Journal of Advanced Networking and Applications 13, no. 05 (2022): 5143–51. http://dx.doi.org/10.35444/ijana.2022.13508.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
As security is becoming more and more important in the field of computing, users would like to be sure of how secure their files are on a system; security is one of the most crucial fields in networking and file storage, and dependable file storage and access raise several security issues in cloud computing. This research designed and implemented a secure virtual file storage system on the cloud using hybrid cryptography. The cryptographic methods used for file encryption and decryption are AES and the SHA-2 hash function. The system is implemented using Cloud APIs with REST calls and client libraries in PHP. The system interfaces were developed using HTML, CSS and JavaScript. Back-end development was done using PHP, MySQL and the GCP Cloud Storage library; file encryption and decryption were then achieved through PHP classes, including OpenSSL-based file encryption and decryption (AES) and the MCRYPT function. The proposed virtual system is also compared with some of the latest related works.
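The paper implements encryption in PHP; as a language-neutral illustration of the same AES-plus-SHA-2 pattern, here is a sketch using Node's built-in crypto module. Key management is simplified and this is not the authors' implementation.

    import { createCipheriv, createHash, randomBytes } from 'crypto';

    // Sketch: AES-256-CBC encryption with a SHA-256 digest of the
    // plaintext for integrity checking, mirroring the hybrid pattern
    // described in the abstract. The key must be 32 bytes.
    function encryptFile(plaintext: Buffer, key: Buffer) {
      const iv = randomBytes(16);
      const cipher = createCipheriv('aes-256-cbc', key, iv);
      const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
      const digest = createHash('sha256').update(plaintext).digest('hex');
      return { iv, ciphertext, digest };
    }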
7

Khoir, Syaiful Amrial, Anton Yudhana, and Sunardi S. "Implementasi GPS (Global Positioning System) Pada Presensi Berbasis Android DI BMT Insan Mandiri." J-SAKTI (Jurnal Sains Komputer dan Informatika) 4, no. 1 (March 30, 2020): 9. http://dx.doi.org/10.30645/j-sakti.v4i1.182.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Attendance records document each employee's presence at the company or its branch offices. One of the challenges experienced by KSPPS BMT INSAN MANDIRI is that marketing staff find it difficult to record attendance when they are out of the office, because of increased marketing activities that involve direct contact with members outside the office. The researchers therefore addressed this problem by building an Android-based online attendance application that can connect directly to the server provided by the office. The application is equipped with facial recognition as a safeguard against manipulation by employees, and with monitoring features so that marketing managers can track marketing staff positions in real time. In this study, the researchers used the HTML and PHP programming languages for the web application that serves as the attendance server and data centre; the website uses the Google Maps APIs to obtain marketing staff positions, and attendance reports can be retrieved from the website. The hope of this research is that the online Android mobile attendance application with facial recognition security will run well and meet the needs of KSPPS BMT INSAN MANDIRI.
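The abstract describes obtaining staff positions through the Google Maps APIs; as a hedged browser-side sketch of the position-capture step only, using the standard HTML5 Geolocation API (the upload endpoint is a hypothetical placeholder, not from the paper):

    // Sketch: capture a device position with the HTML5 Geolocation API
    // and post it to an attendance server (URL is a placeholder).
    navigator.geolocation.getCurrentPosition(async pos => {
      const { latitude, longitude } = pos.coords;
      await fetch('/api/attendance/checkin', { // hypothetical endpoint
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ latitude, longitude, at: Date.now() }),
      });
    }, err => console.error('Position unavailable:', err.message));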
8

Skinner, Frances, Iouli Gordon, Christian Hill, Robert Hargreaves, Kelly Lockhart, and Laurence Rothman. "Referencing Sources of Molecular Spectroscopic Data in the Era of Data Science: Application to the HITRAN and AMBDAS Databases." Atoms 8, no. 2 (April 30, 2020): 16. http://dx.doi.org/10.3390/atoms8020016.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The application described has been designed to create bibliographic entries in large databases with diverse sources automatically, which reduces both the frequency of mistakes and the workload for the administrators. This new system uniquely identifies each reference from its digital object identifier (DOI) and retrieves the corresponding bibliographic information from any of several online services, including the SAO/NASA Astrophysics Data Systems (ADS) and CrossRef APIs. Once parsed into a relational database, the software is able to produce bibliographies in any of several formats, including HTML and BibTeX, for use on websites or printed articles. The application is provided free-of-charge for general use by any scientific database. The power of this application is demonstrated when used to populate reference data for the HITRAN and AMBDAS databases as test cases. HITRAN contains data that is provided by researchers and collaborators throughout the spectroscopic community. These contributors are accredited for their contributions through the bibliography produced alongside the data returned by an online search in HITRAN. Prior to the work presented here, HITRAN and AMBDAS created these bibliographies manually, which is a tedious, time-consuming and error-prone process. The complete code for the new referencing system can be found on the HITRANonline GitHub website.
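The DOI-to-bibliography step the abstract describes can be illustrated with DOI content negotiation, a documented feature of doi.org; this sketch is an illustration only, not the HITRAN referencing code (which is published on GitHub). The DOI used is this article's own.

    // Sketch: fetch a BibTeX entry for a DOI via content negotiation.
    async function doiToBibtex(doi: string): Promise<string> {
      const res = await fetch(`https://doi.org/${doi}`, {
        headers: { Accept: 'application/x-bibtex' },
      });
      if (!res.ok) throw new Error(`DOI lookup failed: ${res.status}`);
      return res.text();
    }

    doiToBibtex('10.3390/atoms8020016').then(console.log);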
9

Mahomodally, A. Fatwimah Humairaa, and Geerish Suddul. "An Enhanced Freelancer Management System with Machine Learning-based Hiring." Shanlax International Journal of Arts, Science and Humanities 9, no. 3 (January 1, 2022): 34–41. http://dx.doi.org/10.34293/sijash.v9i3.4405.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Existing Freelancer Management Systems are not adequately efficient, inconveniencing to a certain degree the freelance workforce, which comprises around 1.1 billion freelancers globally. This paper therefore aims to resolve the impediments of similar existing systems. Regarding the methodology, qualitative analysis was adopted: interviews, participant observation, interface analysis, workshop documents, research papers, books and articles were used to draw data about similar applications. A web application was implemented to fulfil the objectives, using WAMP as a local development server, Visual Studio Code as a source code editor, and HTML, PHP, Python, SQL, JavaScript and CSS as programming languages, along with Ajax for request-handling functionality, already available APIs, and jQuery and Python libraries. The contributions brought forth are: providing a shortlist of the best-qualified freelancers for each project via a machine learning technique; generating an automated invoice and payment as soon as an entrepreneur supplies a monetary figure when approving the deliverable of a project; and enabling freelancers to sign contracts electronically to comply with business terms in one centralised repository, unlike existing systems, which do not support these 3 features together on the same platform. The multivariate regression model used for intelligent hiring performs satisfactorily, yielding an R² of around 0.9993.
10

Bristol, Glenn Arwin M. "Integrating of Voice Recognition Email Application System for Visually Impaired Person using Linear Regression Algorithm." Proceedings of The International Halal Science and Technology Conference 14, no. 1 (March 10, 2022): 56–66. http://dx.doi.org/10.31098/ihsatec.v14i1.486.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The outcome of this study will help visually impaired people who face difficulties in accessing computer systems; voice recognition will help them to access email. The study also reduces the cognitive load on visually impaired users of remembering and typing characters on a keyboard. If this system is implemented, the self-esteem and the social and emotional well-being of visually impaired users will be lifted, for they will feel valued in society, with fair treatment and access to technology. The main function of this study is to let the user's keyboard respond through voice. The purpose of this study is to help visually impaired people use a modern application that interacts with voice recognition systems for email on different types of modern gadgets, like computers or mobile phones. For the functionality of the application, the proponents use a set of APIs (Application Programming Interfaces) such as Google speech-to-text and text-to-speech, processed through the email system; SMTP (Simple Mail Transfer Protocol) is used for mailing services. For the programming software, the proponent uses PHP for the back end of the web interface, and HTML and CSS for the creation of the web-based UI, with voice typing and dictation speech-interaction models using the Windows dictation engine. The proponent used a descriptive research design in this study: descriptive research design is used to describe the characteristics of the population of visually impaired persons being studied. Descriptive research is mainly done because the researchers want to gain a better understanding of a topic; it focuses on providing information that is useful in development. The research is based on a mixed method focused on producing informative outcomes that can be used. Based on the results of the surveys, conclusions were drawn: the majority of the respondents were male adults ranging from ages 32 to 41, all working as massage therapists; the majority rated the overall function of the application as Excellent and rated its level of security as Secured.
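As an illustrative sketch of the voice-interaction layer: the study names Google speech services and the Windows dictation engine, so the browser's Web Speech API shown here is an analogous example only, and the element ID is a placeholder.

    // Sketch: dictate an email body with the Web Speech API
    // (vendor-prefixed in Chromium; availability varies by browser).
    const SR = (window as any).SpeechRecognition
      ?? (window as any).webkitSpeechRecognition;
    const recognizer = new SR();
    recognizer.lang = 'en-US';
    recognizer.onresult = (event: any) => {
      const transcript = event.results[0][0].transcript;
      // 'body' is a hypothetical textarea for the message being dictated.
      (document.getElementById('body') as HTMLTextAreaElement).value = transcript;
    };
    recognizer.start();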
11

Bristola, Glenn Arwin M. "Integrating of voice recognition email application system for visually impaired person using linear regression algorithm." South Asian Journal of Engineering and Technology 12, no. 1 (March 31, 2022): 74–83. http://dx.doi.org/10.26524/sajet.2022.12.12.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The outcome of this study will help visually impaired people who face difficulties in accessing computer systems; voice recognition will help them to access e-mail. The study also reduces the cognitive load on visually impaired users of remembering and typing characters on a keyboard. If this system is implemented, the self-esteem and the social and emotional well-being of visually impaired users will be lifted, for they will feel valued in society, with fair treatment and access to technology. The main function of this study is to let the user's keyboard respond through voice. The purpose of this study is to help visually impaired people use a modernized application that interacts with voice recognition systems for email on different types of modern gadgets, like computers or mobile phones. For the functionality of the application, the proponents use a set of APIs (Application Programming Interfaces) such as Google speech-to-text and text-to-speech, processed through the email system; SMTP (Simple Mail Transfer Protocol) is used for mailing services. For the programming software, the proponent uses PHP for the back end of the web interface; HTML and CSS are the front-end languages used for the creation of the web-based user interface, with voice typing and dictation speech-interaction models using the Windows dictation engine. The proponent used a descriptive research design in this study: descriptive research design is used to describe the characteristics of the population of visually impaired persons being studied. Descriptive research is mainly done because the researchers want to gain a better understanding of a topic; it focuses on providing information that is useful in development. The research is based on a mixed method focused on producing informative outcomes that can be used. Based on the results of the surveys, conclusions were drawn: the majority of the respondents were male adults ranging from ages 32 to 41, all working as massage therapists; the majority rated the overall function of the application as Excellent and rated its level of security as Secured.
12

Ратов, Д. В. "Object adaptation of Drag and Drop technology for web-system interface components." ВІСНИК СХІДНОУКРАЇНСЬКОГО НАЦІОНАЛЬНОГО УНІВЕРСИТЕТУ імені Володимира Даля, no. 4(268) (June 10, 2021): 7–12. http://dx.doi.org/10.33216/1998-7927-2021-268-4-7-12.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Today, in the development of information systems, cloud technologies are often used for remote computing and data processing. On the basis of web technologies, libraries and frameworks have been developed for creating web applications and user interfaces that allow information systems to run in browsers. Ready-made JavaScript libraries exist that add drag-and-drop functionality to a web application. However, in some situations such a library may not be available, or it may bring overhead or dependencies that the project does not need. In such situations, an alternative solution is the functionality of the APIs available in modern browsers. The article discusses the current state of the methods of the Drag and Drop mechanism and proposes a programmatic way to improve the interface by creating a class for dragging and dropping elements when organizing work in multi-user information web systems. Drag and Drop is a convenient way to improve the interface. Grabbing an element with the mouse and moving it visually simplifies many operations: from copying and moving documents, as in file managers, to placing orders in online store services. The HTML drag and drop API uses the DOM event model to retrieve information about a dragged element and update that element after the drag. Using JavaScript event handlers, it is possible to turn any element of the web system into a drag-and-drop element or drop target. To solve this problem, a JavaScript object was developed with methods that allow a copy of any object to be created and all events of this object aimed at organizing the Drag and Drop mechanism to be handled. The basic algorithm of Drag and Drop technology is based on processing mouse events. The software implementation is described, and the results of the practical use of the object adaptation of Drag and Drop technology are presented for the interface components of a web system: the medical information system MedSystem, whose application modules implement a dispatcher and an interactive window interface. In the "Outpatient clinic" module, the Drag and Drop mechanism is used when working with the "Appointment sheet". In the "Hospital" module of the MedSystem medical information system, the Drag and Drop mechanism is used in the "List of doctor's appointments". The results of using the object adaptation of Drag and Drop technology have shown that this mechanism fits organically into existing technologies for building web applications and has sufficient potential to facilitate and automate work in multi-user information systems and web services.
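The event flow described above follows the standard HTML Drag and Drop API; a minimal sketch of that wiring is below (the element IDs are illustrative, not taken from MedSystem).

    // Minimal HTML Drag and Drop wiring: dragstart carries the payload,
    // dragover must be cancelled to allow dropping, drop moves the node.
    // The source element needs draggable="true" in the markup.
    const item = document.getElementById('appointment')!;
    const target = document.getElementById('schedule')!;

    item.addEventListener('dragstart', e =>
      e.dataTransfer!.setData('text/plain', item.id));

    target.addEventListener('dragover', e => e.preventDefault());

    target.addEventListener('drop', e => {
      e.preventDefault();
      const id = e.dataTransfer!.getData('text/plain');
      target.appendChild(document.getElementById(id)!);
    });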
13

Garaizar, Pablo, Miguel Ángel Vadillo, and Diego Lopez-de-Ipina. "Benefits and Pitfalls of Using HTML5 APIs for Online Experiments and Simulations." International Journal of Online Engineering (iJOE) 8, S3 (November 30, 2012). http://dx.doi.org/10.3991/ijoe.v8is3.2254.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
14

Baldo, Cristiano, Maria Cristina Zanchim, Vanessa Ramos Kirsten, and Ana Carolina Bertoletti De Marchi. "Diabetes Food Control – Um aplicativo móvel para avaliação do consumo alimentar de pacientes diabéticos." Revista Eletrônica de Comunicação, Informação e Inovação em Saúde 9, no. 3 (October 8, 2015). http://dx.doi.org/10.29397/reciis.v9i3.1000.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Diabetes mellitus (DM) is a chronic disease of high prevalence. Adherence to an adequate diet, knowledge of the nutritional profile, and adherence to treatment are relevant to improving diabetic patients' quality of life and reducing healthcare costs. This article presents the Diabetes Food Control application, developed to assess the food consumption markers of diabetic patients, based on a validated questionnaire. Its development used Application Programming Interfaces (APIs) of Apache Cordova and the HTML5, CSS and JavaScript languages, targeting portable devices on the Android platform. The application was evaluated by nutrition specialists using a questionnaire adapted from the Technology Acceptance Model (TAM) and the thinking-aloud technique. The results indicated satisfactory acceptance of the application, especially regarding its use, as it makes data collection more practical, easier and faster than traditional paper-based methods.
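A minimal sketch of the hybrid-app pattern the article describes: Cordova signals readiness through its deviceready event, after which the HTML5/JavaScript application logic runs. The start-up step and element ID are hypothetical.

    // Sketch: Cordova fires 'deviceready' once native APIs are available;
    // application logic written in HTML5/CSS/JavaScript starts there.
    document.addEventListener('deviceready', () => {
      // Hypothetical start-up step: present the validated questionnaire.
      const app = document.getElementById('app')!;
      app.textContent = 'Diabetes Food Control ready';
    }, false);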
15

"Techniques, for Data Extraction from, Heterogeneous Sources with Data Security." International Journal of Recent Technology and Engineering 8, no. 2 (July 30, 2019): 2152–59. http://dx.doi.org/10.35940/ijrte.b3254.078219.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Data extraction is the process of mining or fetching relevant information from unstructured data or heterogeneous sources of data. This paper mines data from three different sources, namely an online website, flat files and a database, and the extracted data are analysed in terms of precision, recall and accuracy. In an environment of heterogeneous data sources, data extraction is a crucial issue, and in the present scenario heterogeneity is expanding widely. This paper therefore focuses on the different sources for data extraction and provides a single framework to perform the required tasks. Healthcare data are considered in order to show the processing, starting from data extraction using three different sources, through dividing the records into two clusters based on a threshold value calculated using cosine similarity, and finally calculating parameters such as precision, recall and accuracy for analysis. When fetching data online, we cannot fetch a simple string from any website: the back end of each page is HTML, and hence this paper focuses on extracting the HTML of the page while mining data from a web server. A web page contains many HTML tags, and not all of them can be removed, because complex tags cannot be removed by regular expressions; still, around 60% filtered data can be attained, as demonstrated in this paper, since most of the unwanted HTML is removed. During filtration of the data, content containing Google APIs cannot be removed, so the filtered data will contain the content and those tags which do not contain Google APIs. To provide data security during extraction, a connection string is used to avoid tampering with the data. This paper also focuses on one of the debated concepts in the generation of big data, the Data Lake. In origin, the idea of the Data Lake comes from the field of business: a Data Lake is an architectural approach designed to store all potentially relevant data in a central repository. The data stored in this central repository are fetched from public as well as enterprise sources, and are further used for organization, discovery of hidden facts, understanding of new concepts, and analysis of the stored information. Many challenges and privacy concerns are faced during the adoption of the Data Lake, as it is a new concept that brings revolutionary change; this paper also highlights some of the issues imposed by the Data Lake.
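Since the clustering step hinges on cosine similarity compared against a threshold, here is a compact sketch of that computation; the threshold value and cluster labels are assumptions, as the paper derives its own.

    // Cosine similarity between two term-frequency vectors, used to
    // assign a record to one of two clusters via a threshold.
    function cosine(a: number[], b: number[]): number {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    const THRESHOLD = 0.5; // placeholder; the paper computes its own value
    const cluster = (v: number[], centroid: number[]) =>
      cosine(v, centroid) >= THRESHOLD ? 'cluster-1' : 'cluster-2';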
16

Fokkema, Ivo F. A. C., Mark Kroon, Julia A. López Hernández, Daan Asscheman, Ivar Lugtenburg, Jerry Hoogenboom, and Johan T. den Dunnen. "The LOVD3 platform: efficient genome-wide sharing of genetic variants." European Journal of Human Genetics, September 15, 2021. http://dx.doi.org/10.1038/s41431-021-00959-x.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Gene variant databases are the backbone of DNA-based diagnostics. These databases, also called Locus-Specific DataBases (LSDBs), store information on variants in the human genome and the observed phenotypic consequences. The largest collection of public databases uses the free, open-source LOVD software platform. To cope with the current demand for online databases, we have entirely redesigned the LOVD software. LOVD3 is genome-centered and can be used to store summary variant data, as well as full case-level data with information on individuals, phenotypes, screenings, and variants. While built on a standard core, the software is highly flexible and allows personalization to cope with the largely different demands of gene/disease database curators. LOVD3 follows current standards and includes tools to check variant descriptions, generate HTML files of reference sequences, predict the consequences of exon deletions/duplications on the reading frame, and link to genomic views in the different genome browsers. It includes APIs to collect and submit data. The software is used by about 100 databases, of which 56 public LOVD instances are registered on our website and together contain 1,000,000,000 variant observations in 1,500,000 individuals. 42 LOVD instances share data with the federated LOVD data network containing 3,000,000 unique variants in 23,000 genes. This network can be queried directly, quickly identifying LOVD instances containing relevant information on a searched variant.
17

Owen, David, Laurence Livermore, Quentin Groom, Alex Hardisty, Thijs Leegwater, Myriam van Walsum, Noortje Wijkamp, and Irena Spasić. "Towards a scientific workflow featuring Natural Language Processing for the digitisation of natural history collections." Research Ideas and Outcomes 6 (July 3, 2020). http://dx.doi.org/10.3897/rio.6.e55789.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We describe an effective approach to automated text digitisation with respect to natural history specimen labels. These labels contain much useful data about the specimen including its collector, country of origin, and collection date. Our approach to automatically extracting these data takes the form of a pipeline. Recommendations are made for the pipeline's component parts based on some of the state-of-the-art technologies. Optical Character Recognition (OCR) can be used to digitise text on images of specimens. However, recognising text quickly and accurately from these images can be a challenge for OCR. We show that OCR performance can be improved by prior segmentation of specimen images into their component parts. This ensures that only text-bearing labels are submitted for OCR processing as opposed to whole specimen images, which inevitably contain non-textual information that may lead to false positive readings. In our testing Tesseract OCR version 4.0.0 offers promising text recognition accuracy with segmented images. Not all the text on specimen labels is printed. Handwritten text varies much more and does not conform to standard shapes and sizes of individual characters, which poses an additional challenge for OCR. Recently, deep learning has allowed for significant advances in this area. Google's Cloud Vision, which is based on deep learning, is trained on large-scale datasets, and is shown to be quite adept at this task. This may take us some way towards negating the need for humans to routinely transcribe handwritten text. Determining the countries and collectors of specimens has been the goal of previous automated text digitisation research activities. Our approach also focuses on these two pieces of information. An area of Natural Language Processing (NLP) known as Named Entity Recognition (NER) has matured enough to semi-automate this task. Our experiments demonstrated that existing approaches can accurately recognise location and person names within the text extracted from segmented images via Tesseract version 4.0.0. Potentially, NER could be used in conjunction with other online services, such as those of the Biodiversity Heritage Library to map the named entities to entities in the biodiversity literature (https://www.biodiversitylibrary.org/docs/api3.html). We have highlighted the main recommendations for potential pipeline components. The document also provides guidance on selecting appropriate software solutions. These include automatic language identification, terminology extraction, and integrating all pipeline components into a scientific workflow to automate the overall digitisation process.
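As a small sketch of the OCR step using the open-source JavaScript port of Tesseract (tesseract.js): the paper tested the native Tesseract 4.0.0 engine, so this is an analogous illustration only, and the file name is hypothetical.

    import Tesseract from 'tesseract.js';

    // Sketch: recognise text on a pre-segmented label image.
    // Segmenting specimens into label regions first (as the paper
    // recommends) avoids feeding non-textual areas to the OCR engine.
    async function readLabel(imagePath: string): Promise<string> {
      const { data } = await Tesseract.recognize(imagePath, 'eng');
      return data.text;
    }

    readLabel('segmented-label.png').then(console.log);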
18

Sokolowicz, Carolina, Marcus Guidoti, and Donat Agosti. "Discovering Known Biodiversity: Digital accessible knowledge — Getting the community involved." Biodiversity Information Science and Standards 5 (September 14, 2021). http://dx.doi.org/10.3897/biss.5.74369.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Plazi is a non-profit organization focused on the liberation of data from taxonomic publications. As one of Plazi's goals of promoting the accessibility of taxonomic data, our team has developed different ways of getting the outside community involved. The Plazi community on GitHub encourages the scientific community and other contributors to post GGI-related (Golden Gate Imagine document editor) questions, requirements, ideas, and/or suggestions, including bug reports and feature requests. One can contact us via this GitHub community by creating either an Issue (to report problems on our data or related systems) or a Discussion (to post questions, ideas, or suggestions). We use GitHub's built-in label system to actively curate the content posted in this repository in order to facilitate further interaction, including filtering and searching before creating new entries. In the plazi/community repository, there is a Q&A (question & answer) section with selected questions and answers that might help solve the problems encountered. Aiming at increasing external participation in the task of liberating taxonomic data, we are developing training courses with independent learning modules that can be combined in different ways to target different audiences (e.g., undergraduates, researchers, developers) in various formats. This material will include text, print-screens, slides, screencasts, and, eventually to a minor extent, online teaching. Each topic within a module will have one or more 'inline tests', which will be HTML form-based with hard-coded answers to directly assess progress regarding the subject being covered in that particular topic. At the end of each module, we will have a capstone (a form-based test asking questions about the topics covered in the respective module) which the user can access whenever needed. As examples of our independent learning modules we can cite Modules I, II and III and their respective topics. Module I (Biodiversity Taxonomy Basis) includes introductory topics (e.g., Topic I — Why do we classify living things; Topic II — Linnaean binomial; Topic III — How is taxonomic information displayed in the literature) aimed at those who don't have a biology/taxonomy background. Module II (The Plazi way) topics (Topic I — Plazi mission; Topic II — Taxonomic treatments; Topic III — FAIR taxonomic treatments) are designed in a way that course takers can learn about Plazi processes. Module III (The Golden Gate Imagine) includes topics (Topic I — Introduction to GGI; Topic II — Other User Interface-based alternatives to annotate documents) about the document editor for marking up documents in XML. Other modules include subjects such as individual extractions, material and treatment citations, data quality control, and others. On completion of a module, the user will be awarded a certificate. The combination of these certificates will grant badges that will translate into server permissions allowing the user, for instance, to upload newly liberated taxonomic treatments and edit treatments already in the system. Taxonomic treatments are any piece of information about a given taxon concept that involves, includes, or results from an interpretation of the concept of that given taxon. Additionally, the Plazi TreatmentBank APIs (Application Programming Interface) are currently being expanded and redesigned, and the documentation for these long-awaited endpoints will be displayed, for the first time, in this talk.
19

Burgess, Jean, and Axel Bruns. "Twitter Archives and the Challenges of "Big Social Data" for Media and Communication Research." M/C Journal 15, no. 5 (October 11, 2012). http://dx.doi.org/10.5204/mcj.561.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Lists and Social Media

Lists have long been an ordering mechanism for computer-mediated social interaction. While far from being the first such mechanism, blogrolls offered an opportunity for bloggers to provide a list of their peers; the present generation of social media environments similarly provide lists of friends and followers. Where blogrolls and other earlier lists may have been user-generated, the social media lists of today are more likely to have been produced by the platforms themselves, and are of intrinsic value to the platform providers at least as much as to the users themselves; both Facebook and Twitter have highlighted the importance of their respective "social graphs" (their databases of user connections) as fundamental elements of their fledgling business models. This represents what Mejias describes as "nodocentrism," which "renders all human interaction in terms of network dynamics (not just any network, but a digital network with a profit-driven infrastructure)."

The communicative content of social media spaces is also frequently rendered in the form of lists. Famously, blogs are defined in the first place by their reverse-chronological listing of posts (Walker Rettberg), but the same is true for current social media platforms: Twitter, Facebook, and other social media platforms are inherently centred around an infinite, constantly updated and extended list of posts made by individual users and their connections.

The concept of the list implies a certain degree of order, and the orderliness of content lists as provided through the latest generation of centralised social media platforms has also led to the development of more comprehensive and powerful, commercial as well as scholarly, research approaches to the study of social media. Using the example of Twitter, this article discusses the challenges of such "big data" research as it draws on the content lists provided by proprietary social media platforms.

Twitter Archives for Research

Twitter is a particularly useful source of social media data: using the Twitter API (the Application Programming Interface, which provides structured access to communication data in standardised formats) it is possible, with a little effort and sufficient technical resources, for researchers to gather very large archives of public tweets concerned with a particular topic, theme or event. Essentially, the API delivers very long lists of hundreds, thousands, or millions of tweets, and metadata about those tweets; such data can then be sliced, diced and visualised in a wide range of ways, in order to understand the dynamics of social media communication. Such research is frequently oriented around pre-existing research questions, but is typically conducted at unprecedented scale. The projects of media and communication researchers such as Papacharissi and de Fatima Oliveira, Wood and Baughman, or Lotan, et al.—to name just a handful of recent examples—rely fundamentally on Twitter datasets which now routinely comprise millions of tweets and associated metadata, collected according to a wide range of criteria. What is common to all such cases, however, is the need to make new methodological choices in the processing and analysis of such large datasets on mediated social interaction.

Our own work is broadly concerned with understanding the role of social media in the contemporary media ecology, with a focus on the formation and dynamics of interest- and issues-based publics. We have mined and analysed large archives of Twitter data to understand contemporary crisis communication (Bruns et al.), the role of social media in elections (Burgess and Bruns), and the nature of contemporary audience engagement with television entertainment and news media (Harrington, Highfield, and Bruns). Using a custom installation of the open source Twitter archiving tool yourTwapperkeeper, we capture and archive all the available tweets (and their associated metadata) containing a specified keyword (like "Olympics" or "dubstep"), name (Gillard, Bieber, Obama) or hashtag (#ausvotes, #royalwedding, #qldfloods). In their simplest form, such Twitter archives are commonly stored as delimited (e.g. comma- or tab-separated) text files, with each of the following values in a separate column:

text: contents of the tweet itself, in 140 characters or less
to_user_id: numerical ID of the tweet recipient (for @replies)
from_user: screen name of the tweet sender
id: numerical ID of the tweet itself
from_user_id: numerical ID of the tweet sender
iso_language_code: code (e.g. en, de, fr, ...) of the sender's default language
source: client software used to tweet (e.g. Web, Tweetdeck, ...)
profile_image_url: URL of the tweet sender's profile picture
geo_type: format of the sender's geographical coordinates
geo_coordinates_0: first element of the geographical coordinates
geo_coordinates_1: second element of the geographical coordinates
created_at: tweet timestamp in human-readable format
time: tweet timestamp as a numerical Unix timestamp

In order to process the data, we typically run a number of our own scripts (written in the programming language Gawk) which manipulate or filter the records in various ways, and apply a series of temporal, qualitative and categorical metrics to the data, enabling us to discern patterns of activity over time, as well as to identify topics and themes, key actors, and the relations among them; in some circumstances we may also undertake further processes of filtering and close textual analysis of the content of the tweets. Network analysis (of the relationships among actors in a discussion; or among key themes) is undertaken using the open source application Gephi. While a detailed methodological discussion is beyond the scope of this article, further details and examples of our methods and tools for data analysis and visualisation, including copies of our Gawk scripts, are available on our comprehensive project website, Mapping Online Publics.

In this article, we reflect on the technical, epistemological and political challenges of such uses of large-scale Twitter archives within media and communication studies research, positioning this work in the context of the phenomenon that Lev Manovich has called "big social data." In doing so, we recognise that our empirical work on Twitter is concerned with a complex research site that is itself shaped by a complex range of human and non-human actors, within a dynamic, indeed volatile media ecology (Fuller), and using data collection and analysis methods that are in themselves deeply embedded in this ecology.
"Big Social Data"

As Manovich's term implies, the Big Data paradigm has recently arrived in media, communication and cultural studies—significantly later than it did in the hard sciences, in more traditionally computational branches of social science, and perhaps even in the first wave of digital humanities research (which largely applied computational methods to pre-existing, historical "big data" corpora)—and this shift has been provoked in large part by the dramatic quantitative growth and apparently increased cultural importance of social media—hence, "big social data." As Manovich puts it:

For the first time, we can follow [the] imaginations, opinions, ideas, and feelings of hundreds of millions of people. We can see the images and the videos they create and comment on, monitor the conversations they are engaged in, read their blog posts and tweets, navigate their maps, listen to their track lists, and follow their trajectories in physical space. (Manovich 461)

This moment has arrived in media, communication and cultural studies because of the increased scale of social media participation and the textual traces that this participation leaves behind—allowing researchers, equipped with digital tools and methods, to "study social and cultural processes and dynamics in new ways" (Manovich 461). However, and crucially for our purposes in this article, many of these scholarly possibilities would remain latent if it were not for the widespread availability of Open APIs for social software (including social media) platforms. APIs are technical specifications of how one software application should access another, thereby allowing the embedding or cross-publishing of social content across Websites (so that your tweets can appear in your Facebook timeline, for example), or allowing third-party developers to build additional applications on social media platforms (like the Twitter user ranking service Klout), while also allowing platform owners to impose de facto regulation on such third-party uses via the same code. While platform providers do not necessarily have scholarship in mind, the data access affordances of APIs are also available for research purposes. As Manovich notes, until very recently almost all truly "big data" approaches to social media research had been undertaken by computer scientists (464). But as part of a broader "computational turn" in the digital humanities (Berry), and because of the increased availability to non-specialists of data access and analysis tools, media, communication and cultural studies scholars are beginning to catch up. Many of the new, large-scale research projects examining the societal uses and impacts of social media—including our own—which have been initiated by various media, communication, and cultural studies research leaders around the world have begun their work by taking stock of, and often substantially extending through new development, the range of available tools and methods for data analysis. The research infrastructure developed by such projects, therefore, now reflects their own disciplinary backgrounds at least as much as it does the fundamental principles of computer science. In turn, such new and often experimental tools and methods necessarily also provoke new epistemological and methodological challenges.

The Twitter API and Twitter Archives

The Open API was a key aspect of mid-2000s ideas about the value of the open Web and "Web 2.0" business models (O'Reilly), emphasising the open, cross-platform sharing of content as well as promoting innovation at the margins via third-party application development—and it was in this ideological environment that the microblogging service Twitter launched and experienced rapid growth in popularity among users and developers alike. As José van Dijck cogently argues, however, a complex interplay of technical, economic and social dynamics has seen Twitter shift from a relatively open, ad hoc and user-centred platform toward a more formalised media business:

For Twitter, the shift from being primarily a conversational communication tool to being a global, ad-supported followers tool took place in a relatively short time span. This shift did not simply result from the owner's choice for a distinct business model or from the company's decision to change hardware features. Instead, the proliferation of Twitter as a tool has been a complex process in which technological adjustments are intricately intertwined with changes in user base, transformations of content and choices for revenue models. (van Dijck 343)

The specifications of Twitter's API, as well as the written guidelines for its use by developers (Twitter, "Developer Rules") are an excellent example of these "technological adjustments" and the ways they are deeply intertwined with Twitter's search for a viable revenue model. These changes show how the apparent semantic openness or "interpretive flexibility" of the term "platform" allows its meaning to be reshaped over time as the business models of platform owners change (Gillespie).

The release of the API was first announced on the Twitter blog in September 2006 (Stone), not long after the service's launch but after some popular third-party applications (like a mashup of Twitter with Google Maps creating a dynamic display of recently posted tweets around the world) had already been developed. Since then Twitter has seen a flourishing of what the company itself referred to as the "Twitter ecosystem" (Twitter, "Developer Rules"), including third-party developed client software (like Twitterific and TweetDeck), institutional use cases (such as large-scale social media visualisations of the London Riots in The Guardian), and parasitic business models (including social media metrics services like HootSuite and Klout).

While the history of Twitter's API rules and related regulatory instruments (such as its Developer Rules of the Road and Terms of Use) has many twists and turns, there have been two particularly important recent controversies around data access and control. First, the company locked out developers and researchers from direct "firehose" (very high volume) access to the Twitter feed; this was accompanied by a crackdown on free and public Twitter archiving services like 140Kit and the Web version of Twapperkeeper (Sample), and coincided with the establishment of what was at the time a monopoly content licensing arrangement between Twitter and Gnip, a company which charges commercial rates for high-volume API access to tweets (and content from other social media platforms). A second wave of controversy among the developer community occurred in August 2012 in response to Twitter's release of its latest API rules (Sippey), which introduce further, significant limits to API use and usability in certain circumstances.

In essence, the result of these changes to the Twitter API rules, announced without meaningful consultation with the developer community which created the Twitter ecosystem, is a forced rebalancing of development activities: on the one hand, Twitter is explicitly seeking to "limit" (Sippey) the further development of API-based third-party tools which support "consumer engagement activities" (such as end-user clients), in order to boost the use of its own end-user interfaces; on the other hand, it aims to "encourage" the further development of "consumer analytics" and "business analytics" as well as "business engagement" tools. Implicit in these changes is a repositioning of Twitter users (increasingly as content consumers rather than active communicators), but also of commercial and academic researchers investigating the uses of Twitter (as providing a narrow range of existing Twitter "analytics" rather than engaging in a more comprehensive investigation both of how Twitter is used, and of how such uses continue to evolve). The changes represent an attempt by the company to cement a certain, commercially viable and valuable, vision of how Twitter should be used (and analysed), and to prevent or at least delay further evolution beyond this desired stage. Although such attempts to "freeze" development may well be in vain, given the considerable, documented role which the Twitter user base has historically played in exploring new and unforeseen uses of Twitter (Bruns), it undermines scholarly research efforts to examine actual Twitter uses at least temporarily—meaning that researchers are increasingly forced to invest time and resources in finding workarounds for the new restrictions imposed by the Twitter API.

Technical, Political, and Epistemological Issues

In their recent article "Critical Questions for Big Data," danah boyd and Kate Crawford have drawn our attention to the limitations, politics and ethics of big data approaches in the social sciences more broadly, but also touching on social media as a particularly prevalent site of social datamining. In response, we offer the following complementary points specifically related to data-driven Twitter research relying on archives of tweets gathered using the Twitter API.

First, somewhat differently from most digital humanities (where researchers often begin with a large pre-existing textual corpus), in the case of Twitter research we have no access to an original set of texts—we can access only what Twitter's proprietary and frequently changing API will provide. The tools Twitter researchers use rely on various combinations of parts of the Twitter API—or, more accurately, the various Twitter APIs (particularly the Search and Streaming APIs). As discussed above, of course, in providing an API, Twitter is driven not by scholarly concerns but by an attempt to serve a range of potentially value-generating end-users—particularly those with whom Twitter can create business-to-business relationships, as in their recent exclusive partnership with NBC in covering the 2012 London Olympics.

The following section from Twitter's own developer FAQ highlights the potential conflicts between the business-case usage scenarios under which the APIs are provided and the actual uses to which they are often put by academic researchers or other dataminers:

Twitter's search is optimized to serve relevant tweets to end-users in response to direct, non-recurring queries such as #hashtags, URLs, domains, and keywords.
The Search API (which also powers Twitter's search widget) is an interface to this search engine. Our search service is not meant to be an exhaustive archive of public tweets and not all tweets are indexed or returned. Some results are refined to better combat spam and increase relevance. Due to capacity constraints, the index currently only covers about a week's worth of tweets. (Twitter, "Frequently Asked Questions")

Because external researchers do not have access to the full, "raw" data, against which we could compare the retrieved archives which we use in our later analyses, and because our data access regimes rely so heavily on Twitter's APIs—each with its technical quirks and limitations—it is impossible for us to say with any certainty that we are capturing a complete archive or even a "representative" sample (whatever "representative" might mean in a data-driven, textualist paradigm). In other words, the "lists" of tweets delivered to us on the basis of a keyword search are not necessarily complete; and there is no way of knowing how incomplete they are. The total yield of even the most robust capture system (using the Streaming API and not relying only on Search) depends on a number of variables: rate limiting, the filtering and spam-limiting functions of Twitter's search algorithm, server outages and so on; further, because Twitter prohibits the sharing of data sets it is difficult to compare notes with other research teams.

In terms of epistemology, too, the primary reliance on large datasets produces a new mode of scholarship in media, communication and cultural studies: what emerges is a form of data-driven research which tends towards abductive reasoning; in doing so, it highlights tensions between the traditional research questions in discourse or text-based disciplines like media and communication studies, and the assumptions and modes of pattern recognition that are required when working from the "inside out" of a corpus, rather than from the outside in (for an extended discussion of these epistemological issues in the digital humanities more generally, see Dixon).

Finally, even the heuristics of our analyses of Twitter datasets are mediated by the API: the datapoints that are hardwired into the data naturally become the most salient, further shaping the type of analysis that can be done. For example, a common process in our research is to use the syntax of tweets to categorise each tweet as one of the following types of activity:

original tweets: tweets which are neither @replies nor retweets
retweets: tweets which contain RT @user… (or similar)
unedited retweets: retweets which start with RT @user…
edited retweets: retweets which do not start with RT @user…
genuine @replies: tweets which contain @user, but are not retweets
URL sharing: tweets which contain URLs

(Retweets which are made using the Twitter "retweet button," resulting in verbatim passing-along without the RT @user syntax or an opportunity to add further comment during the retweet process, form yet another category, which cannot be tracked particularly effectively using the Twitter API.)

These categories are driven by the textual and technical markers of specific kinds of interactions that are built into the syntax of Twitter itself (@replies or @mentions, RTs); and specific modes of referentiality (URLs).
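The syntax-driven categories listed above translate directly into simple pattern tests; the sketch below is our own approximation of those heuristics, not the authors' Gawk scripts, and the regular expressions are assumptions.

    // Approximate reimplementation of the tweet categories described
    // above; the authors' pipeline uses Gawk scripts, not this code.
    type TweetType = 'retweet' | 'genuine @reply' | 'URL sharing' | 'original';

    function classify(text: string): TweetType {
      if (/(^|\W)RT @\w+/i.test(text)) return 'retweet';
      if (/@\w+/.test(text)) return 'genuine @reply';
      if (/https?:\/\//.test(text)) return 'URL sharing';
      return 'original'; // neither @reply nor retweet nor URL share
    }
    // Edited vs. unedited retweets can be told apart by whether the
    // tweet *starts* with "RT @user", e.g. /^RT @\w+/i.test(text).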
All of these categories focus on (and thereby tend to privilege) more informational modes of communication, rather than the ephemeral, affective, or ambiently intimate uses of Twitter that can be illuminated more easily using ethnographic approaches: approaches that can actually focus on the individual user, their social contexts, and the broader cultural context of the traces they leave on Twitter.

Conclusions

In this article we have described and reflected on some of the sociotechnical, political and economic aspects of the lists of tweets—the structured Twitter data upon which our research relies—which may be gathered using the Twitter API. As we have argued elsewhere (Bruns and Burgess)—and, hopefully, have begun to demonstrate in this paper—media and communication studies scholars who are actually engaged in using computational methods are well-positioned to contribute both to the methodological advances we highlight at the beginning of this paper and to the political debates around computational methods in the “big social data” moment on which the discussion in the second part of the paper focusses. One pressing issue in the area of methodology is to build on current advances to bring together large-scale datamining approaches with ethnographic and other qualitative approaches, especially including close textual analysis. More broadly, in engaging with the “big social data” moment there is a pressing need for the development of code literacy in media, communication and cultural studies. In the first place, such literacy has important instrumental uses: as Manovich argues, much big data research in the humanities requires costly and time-consuming (and sometimes alienating) partnerships with technical experts (typically, computer scientists), because the free tools available to non-programmers are still limited in utility in comparison to what can be achieved using raw data and original code (Manovich 472).

But code literacy is also a requirement of scholarly rigour in the context of what David Berry calls the “computational turn,” representing a “third wave” of Digital Humanities. Berry suggests code and software might increasingly become in themselves objects of, and not only tools for, research:

I suggest that we introduce a humanistic approach to the subject of computer code, paying attention to the wider aspects of code and software, and connecting them to the materiality of this growing digital world. With this in mind, the question of code becomes increasingly important for understanding in the digital humanities, and serves as a condition of possibility for the many new computational forms that mediate our experience of contemporary culture and society. (Berry 17)

A first step here lies in developing a more robust working knowledge of the conceptual models and methodological priorities assumed by the workings of both the tools and the sources we use for “big social data” research. Understanding how something like the Twitter API mediates the cultures of use of the platform, as well as reflexively engaging with its mediating role in data-driven Twitter research, promotes a much more materialist critical understanding of the politics of the social media platforms (Gillespie) that are now such powerful actors in the media ecology.

References

Berry, David M. “Introduction: Understanding Digital Humanities.” Understanding Digital Humanities. Ed. David M. Berry. London: Palgrave Macmillan, 2012. 1-20.

boyd, danah, and Kate Crawford.
“Critical Questions for Big Data.” Information, Communication & Society 15.5 (2012): 662-79.

Bruns, Axel. “Ad Hoc Innovation by Users of Social Networks: The Case of Twitter.” ZSI Discussion Paper 16 (2012). 18 Sep. 2012 ‹https://www.zsi.at/object/publication/2186›.

Bruns, Axel, and Jean Burgess. “Notes towards the Scientific Study of Public Communication on Twitter.” Keynote presented at the Conference on Science and the Internet, Düsseldorf, 4 Aug. 2012. 18 Sep. 2012 ‹http://snurb.info/files/2012/Notes%20towards%20the%20Scientific%20Study%20of%20Public%20Communication%20on%20Twitter.pdf›.

Bruns, Axel, Jean Burgess, Kate Crawford, and Frances Shaw. “#qldfloods and @QPSMedia: Crisis Communication on Twitter in the 2011 South East Queensland Floods.” Brisbane: ARC Centre of Excellence for Creative Industries and Innovation, 2012. 18 Sep. 2012 ‹http://cci.edu.au/floodsreport.pdf›.

Burgess, Jean, and Axel Bruns. “(Not) the Twitter Election: The Dynamics of the #ausvotes Conversation in Relation to the Australian Media Ecology.” Journalism Practice 6.3 (2012): 384-402.

Dixon, Dan. “Analysis Tool or Research Methodology: Is There an Epistemology for Patterns?” Understanding Digital Humanities. Ed. David M. Berry. London: Palgrave Macmillan, 2012. 191-209.

Fuller, Matthew. Media Ecologies: Materialist Energies in Art and Technoculture. Cambridge, Mass.: MIT P, 2005.

Gillespie, Tarleton. “The Politics of ‘Platforms’.” New Media & Society 12.3 (2010): 347-64.

Harrington, Stephen, Timothy J. Highfield, and Axel Bruns. “More than a Backchannel: Twitter and Television.” Audience Interactivity and Participation. Ed. José Manuel Noguera. Brussels: COST Action ISO906 Transforming Audiences, Transforming Societies, 2012. 13-17. 18 Sep. 2012 ‹http://www.cost-transforming-audiences.eu/system/files/essays-and-interview-essays-18-06-12.pdf›.

Lotan, Gilad, Erhardt Graeff, Mike Ananny, Devin Gaffney, Ian Pearce, and danah boyd. “The Arab Spring: The Revolutions Were Tweeted: Information Flows during the 2011 Tunisian and Egyptian Revolutions.” International Journal of Communication 5 (2011): 1375-1405. 18 Sep. 2012 ‹http://ijoc.org/ojs/index.php/ijoc/article/view/1246/613›.

Manovich, Lev. “Trending: The Promises and the Challenges of Big Social Data.” Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: U of Minnesota P, 2012. 460-75.

Mejias, Ulises A. “Liberation Technology and the Arab Spring: From Utopia to Atopia and Beyond.” Fibreculture Journal 20 (2012). 18 Sep. 2012 ‹http://twenty.fibreculturejournal.org/2012/06/20/fcj-147-liberation-technology-and-the-arab-spring-from-utopia-to-atopia-and-beyond/›.

O’Reilly, Tim. “What Is Web 2.0? Design Patterns and Business Models for the Next Generation of Software.” O’Reilly Network 30 Sep. 2005. 18 Sep. 2012 ‹http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html›.

Papacharissi, Zizi, and Maria de Fatima Oliveira. “Affective News and Networked Publics: The Rhythms of News Storytelling on #Egypt.” Journal of Communication 62.2 (2012): 266-82.

Sample, Mark. “The End of Twapperkeeper (and What to Do about It).” ProfHacker. The Chronicle of Higher Education 8 Mar. 2011. 18 Sep. 2012 ‹http://chronicle.com/blogs/profhacker/the-end-of-twapperkeeper-and-what-to-do-about-it/31582›.

Sippey, Michael. “Changes Coming in Version 1.1 of the Twitter API.” Twitter Developers Blog 16 Aug. 2012. 18 Sep. 2012 ‹https://dev.Twitter.com/blog/changes-coming-to-Twitter-api›.

Stone, Biz. “Introducing the Twitter API.” Twitter Blog 20 Sep. 2006.
18 Sep. 2012 ‹http://blog.Twitter.com/2006/09/introducing-Twitter-api.html›.

Twitter. “Developer Rules of the Road.” Twitter Developers Website 17 May 2012. 18 Sep. 2012 ‹https://dev.Twitter.com/terms/api-terms›.

Twitter. “Frequently Asked Questions.” 18 Sep. 2012 ‹https://dev.twitter.com/docs/faq›.

Van Dijck, José. “Tracing Twitter: The Rise of a Microblogging Platform.” International Journal of Media and Cultural Politics 7.3 (2011): 333-48.

Walker Rettberg, Jill. Blogging. Cambridge: Polity, 2008.

Wood, Megan M., and Linda Baughman. “Glee Fandom and Twitter: Something New, or More of the Same Old Thing?” Communication Studies 63.3 (2012): 328-44.
20

Hall, James, Laura Glitsos, and Jess Taylor. "Fungible." M/C Journal 25, no. 2 (April 25, 2022). http://dx.doi.org/10.5204/mcj.2905.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
At its core the quality of being fungible is the quality of being interchangeable, more specifically interchangeable with its likeness. Our currencies, ergo our financial systems, ergo our ways of life have been underpinned by the stability that a $5 note is worth the same as every other $5 note. This is perhaps why the word fungible has never really spilled over into everyday usage: it has traditionally been a word for legal documents and economics texts. However, in the last couple of years the word fungible has made its way out of the lecture theatres of law classes and into the headlines of mainstream news services. On the back of a cryptocurrency boom it seemed only logical that markets that utilised this new form of wealth would emerge, the most prominent of these being the, at times lucrative, NFT (non-fungible token) market.

Defining an NFT is problematic, because it is more about what it isn’t than what it is. People who have searched online looking for a definition will probably find an article or video that starts off with a semantic definition, e.g. it is a digital token with a unique signature making it unlike other tokens that are similar, which is then followed up by a spuriously comprehensible but ultimately ephemeral analogy. These definitions perhaps suffer from their ulterior motive of making NFTs sound more ground-breaking and more revolutionary than they are. If you were to say NFTs are like digital snowflakes, in that no two are the same, that might help, but it doesn’t add anything to their significance, because whilst we may notionally find it interesting that no two snowflakes are the same, we ultimately don’t really care, and this doesn’t make any snowflake more important or valuable than any other. However, imagine a scenario in late capitalism where a certain configuration of snowflake has an exchange value greater than other configurations, or a scenario where a snowflake is worth more because Elon Musk once owned it.

In practice, NFTs are comparable to digital receipts that give the owner exclusive access to a piece of data. This data may be a small digital image, it might be a gif, it might be a high-resolution digital artwork, it might be anything that can be stored digitally. The allure or uniqueness of these pieces of data lies in their non-fungibility. They are acquired through a cryptocurrency exchange (more often than not Ethereum, but not necessarily so) and as such are verified and secure, though it is worth noting that in 2021 cryptocurrency theft totalled A$4.5b and money lost to crypto scams totalled A$11b (Lane). There is an irony that emerges here in that the digital culture that has allowed the proliferation of fungible content has given rise to its own non-fungible counter-culture. It is as if the digital annihilation of Benjamin’s aura has been replaced by an 8-bit digital aura. Every $5 note may still have exactly the same value as another $5 note, and the actual Mona Lisa may be less beguiling now you can own it on a tote bag, but not every Bored Ape (an avatar comprised of a cartoon ape, generated by an algorithm) has the same value as another Bored Ape (see Bored Ape Yacht Club statistics). For example, less than 0.5% of generated Bored Apes have gold fur, making them more desirable, and all of a sudden it begins to feel like a familiar market with familiar characteristics of supply and demand.

2020 was a turbulent year, so it is understandable that the seeds of some culturally significant trends were overlooked.
Amongst these was the boom in the trading card market. This saw trading cards – those things kids buy in packs with their pocket money – become an investor industry. Sale prices skyrocketed during global pandemic lockdowns: for example, a LeBron James 2003-4 Upper Deck Exquisite Rookie Patch Autograph card (numbered 14/23) sold at Golden Auctions for US$1.84m; another version of the same card sold in April of 2021 for US$5.2m. This boom in the trading card market rolled over into the early adoption of NFT technology within the sports trading card market, a development that has been generally glossed over. Well before Beeple’s sale of Everydays: The First 5,000 Days (a collage of 5,000 digital artworks sold as an NFT) at Christie’s for slightly under US$70m (see Guardian), NFTs were breaking new ground in the sports card market in the form of NBA Top Shots (an official NBA product produced by Dapper Labs). When a person opens a digital pack of Top Shots they reveal “moments”: uniquely serial-numbered highlight videos lasting a few seconds. Sales of NBA Top Shots totalled US$230m in 2020 (Young). There is perhaps little surprise in this early adoption of the investor/trading aspects of NFTs, given the crossover between pandemic-era sports card collectors and cryptocurrency speculators (Yahoo! Finance).

Beyond these developments in NFT hobby collectibles, there has also been the development and gamification of NFT gambling in the form of horse-racing platforms like Zed Run. Zed Run allows users to race NFT horses in their virtual stable at the cost of a fee (payable in cryptocurrency), which is ostensibly a wager. Users can breed NFT horses with other NFT horses to create new NFT horses with unique characteristics, and then race them against other horses with comparable attributes. This platform, and ones like it, are playing a role in creating an unregulated gambling market that operates on a global scale, at a time when many states in the USA are only years into a relaxed sports betting environment (in 2018 a Supreme Court ruling opened the door for all states to legalise sports betting; until that point sports betting was only legal in four states). It remains to be seen if the continued gamification of gambling will entrench itself further through means such as Zed Run, or if the practice will remain niche without the existence of a widely populated metasphere.

It is clear that we are currently in the midst of a wave, potentially a flood, of NFT content, and a majority of this content exists as a variation on the theme “how to make money through NFTs”. NFTs are currently considered more for their potential profitability than for their utility. The residue of this is that non-fungible markets seem to be replicating the traditional markets that they are notionally trying to subvert, and the practical uses of NFTs, e.g. as a solution to issues of digital ownership, are being overlooked. Perhaps this is the newest manifestation of neoliberal ideology, or perhaps it is the case in point that future generations will look back upon.

Of course, there is an as yet generally unstated and significant point here: that what is being discussed is fungibility in terms of its non-ness. The mention of the term fungibility in a popular culture context immediately gives way to the consideration of the non-fungible, and the non-fungible is seemingly resolving itself, or at least can be understood, in the context of traditional wealth, with all of its fungible interchangeability.
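The fungible/non-fungible distinction at stake here can be made concrete with a short sketch. The following toy Python model is purely illustrative (it is not a blockchain, not the ERC-721 standard, and all names, identifiers, and values are invented): a fungible ledger only needs to record how much each holder has, whereas a non-fungible registry must record which uniquely identified item each holder has, along with the traits (such as gold fur) that differentiate it.

# Fungible: any unit is interchangeable with any other of its kind,
# so the ledger tracks only quantities per holder.
fungible_ledger = {'alice': 5.0, 'bob': 5.0}  # two $5 balances, identical in kind

# Non-fungible: each token is a distinct, uniquely identified item,
# so the registry tracks which token belongs to whom, plus its traits.
nft_registry = {
    'ape-0001': {'owner': 'alice', 'fur': 'gold'},   # rare trait
    'ape-0002': {'owner': 'bob', 'fur': 'brown'},    # common trait
}

# Swapping fungible balances changes nothing observable about either holder...
fungible_ledger['alice'], fungible_ledger['bob'] = (
    fungible_ledger['bob'], fungible_ledger['alice'])

# ...whereas swapping NFTs changes who owns which unique (and unequally
# valued) item -- the source of the market dynamics described above.
nft_registry['ape-0001']['owner'], nft_registry['ape-0002']['owner'] = (
    nft_registry['ape-0002']['owner'], nft_registry['ape-0001']['owner'])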
This issue of M/C Journal presents a range of insights and perspectives on this word that is increasingly flowing through discourses and practices. NFTs have a range of implications and a spectrum of potential uses depending on their context. But additionally, the usefulness of fungibility as a concept also comes into play here, as terminology traditionally shackled to other disciplines but increasingly pliable in the arts and humanities.

This issue’s feature by Russell, “NFTs and Value”, meets some of the above issues head-on by immediately addressing the dichotomy of NFTs as the start of a new art format or NFTs as Western society’s most recent bubble market. Irrespective of these two positions there is an undeniable reality that these digital artefacts can potentially have real-world wealth. Russell explores the potential underlying factors of this wealth and in turn what creates artistic wealth. Here a combination of factors, such as the discourse around the work itself, or the place that work has in the context of Western art history, are all considered as potential drivers of this new wave/bubble.

Mason takes up the financial gains associated with some NFTs by examining the commodification of memes through the NFT format. In particular Mason considers the broader implications of this phenomenon outside of NFTs themselves by discussing the potential cultural and racial legacies at play. Mason’s work also notes the dominance of non-Black memes in the non-fungible market and the subsequent development of non-Black wealth. Through this case study Mason touches upon an as yet widely overlooked cultural implication of the non-fungible market: that of racial inequality and exploitation.

In a different wing of the art world, Binns focusses on film, noting, after highlighting the significant ecological price and damage that comes with making transactions on prominent blockchains, that the implications of NFTs for the film industry are still emerging. Despite the presence of some emerging marketplaces and vendors, the full utility of NFTs within the film industry remains untapped and unclear. Perhaps NFTs will supplement crowdfunding by offering exclusive memberships or perks (similar to the Bored Ape Yacht Club), or perhaps the fad will fade into the background without ever leaving an impression.

In contrast, Robinson embraces the notion of fungibility as fungibility, stepping away from the contemporary discussion of “fungible” as being inherently “non-fungible” and looking at the interchangeability of identity and experience in online spaces. Through interviews Robinson considers how traditional notions of national and political identity are rendered fungible by digital spaces and how this aspect of fungibility manifests itself in invisibility, efficacy, and antagonism. This work is an important reminder of the suitability of fungible as a term in academic scholarship: Robinson’s notion of fungible citizenship opens up new perspectives on who we are, who we see ourselves to be, and what we might aspire to.

Lyubchenko’s work is concerned with the place that NFT art has within a broader sense of art history. For Lyubchenko, crypto art can be considered as the culmination of the Dada movement, influenced by its various iterations such as Neo-Dadaism and Pop Art.
The result here is not so much a digital embodiment of the anti-art movement, arriving to land the final blow, but rather the newest form of anti-art, whose existence seems only to breathe life into that which it intends to kill. For Lyubchenko, crypto art is not so much a threat to traditional art forms as a call to arms, a catalyst to regroup and reassert art’s timeless values.

The place of the NFT in music is then the focus of Rogers et al., who seek to explore where music sits in the newly framed context of Web3. Whilst this position is not entirely constituted by the integration of NFT technology in music, it is at present a considerable factor, and one that Rogers et al. explore through an examination of functionality and discourse analysis. They note a degree of cynicism in the discourses surrounding popular music’s flirtation with NFTs, emerging largely from the environmental impacts of blockchain ledgers and potential grey areas surrounding the industry’s legitimacy as a whole when it comes to claims of authenticity, security, and capacity. Interestingly, they also note similarities between many of the cases they discuss and the discourses surrounding previously emergent forms of music: even seemingly banal music technologies of the past, such as the jukebox and the player piano, were subjected to comparable scrutiny. In the end, time will give us a greater sense of whether the first few years of music within Web3 represent a cultural touchstone or a commercially driven false start.

Finally, this collection progresses the discussion on how NFTs themselves present new opportunities for art practitioners. As Wilson notes, there is an inevitability that artists will begin to embrace the production of NFTs as part of the artistic process, as opposed to simply porting over existing artworks to the NFT format. Wilson considers his own work and Damien Hirst’s 2021 NFT works as examples of how considered and practical adoption of this new format challenges the neoliberal economic conception of what NFTs are and what they are for.

References

Bored Ape Yacht Club statistics. 16 Apr. 2022 <https://www.nft-stats.com/collection/boredapeyachtclub>.

The Guardian. “Christie’s Auctions 'First Digital-Only Artwork' for $70m.” 12 Mar. 2021. 16 Apr. 2022 <https://www.theguardian.com/artanddesign/2021/mar/11/christies-first-digital-only-artwork-70m-nft-beeple>.

Lane, Aaron M. “Crypto Theft Is on the Rise. Here’s How the Crimes Are Committed, and How You Can Protect Yourself.” The Conversation 3 Feb. 2022. 15 Apr. 2022 <https://theconversation.com/crypto-theft-is-on-the-rise-heres-how-the-crimes-are-committed-and-how-you-can-protect-yourself-176027>.

Yahoo! Finance. “Collector Coin Becomes First and Only Cryptocurrency for Card Collectors.” 30 June 2021. 16 Apr. 2022 <https://finance.yahoo.com/news/collector-coin-becomes-first-only-185000184.html>.

Young, Jabari. “People Have Spent More than $230 Million Buying and Trading Digital Collectibles of NBA Highlights.” CNBC 28 Feb. 2021. 16 Apr. 2022 <https://www.cnbc.com/2021/02/28/230-million-dollars-spent-on-nba-top-shot.html>.
21

Crooks, Juliette. "Recreating Prometheus." M/C Journal 4, no. 4 (August 1, 2001). http://dx.doi.org/10.5204/mcj.1926.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Prometheus, chained to a rock, having his liver pecked out by a great bird only for the organ to grow back again each night so that the torture may be repeated afresh the next day, must be the quintessential image of masculinity in crisis. This paper will consider Promethean myth and the issues it raises regarding 'creation', including: the role of the creator, the relationship between creator and created, the usurping of maternal (creative) power by patriarchy and, not least, the offering of an experimental model in which masculine identity can be recreated. I argue that Promethean myth raises significant issues relating to anxieties associated with notions of masculinity and gender, which are subsequently transposed in Shelley's modernist recasting of the myth, Frankenstein. I then consider 'Promethean' science fiction film, as an area particularly concerned with re-creation, in terms of construction of the self, gender and masculinity.

Prometheus & Creation

Prometheus (whose name means 'forethought') was able to foresee the future and is credited with creating man from mud/clay. As Man was inferior to other creations and unprotected, Prometheus allowed Man to walk upright [1] like the Gods. He also stole from them the gift of fire, to give to Man, and tricked the Gods into allowing Man to keep the best parts of sacrifices (giving the Gods offal, bones and fat). Thus Prometheus is regarded as the father and creator of Mankind, and as Man's benefactor and protector, whose love of Man (or love of trickery and his own cleverness) leads him to deceive the Gods. Prometheus's brother, Epimetheus (whose name means 'afterthought'), was commissioned to make all the other creations, and Prometheus was to overlook his work when it was done. Due to Epimetheus's short-sightedness there were no gifts left (such as fur etc.) to bestow upon Man – the nobler animal which Prometheus was entrusted to make.

Prometheus, a Titan and illegitimate son of Iapetus and the water nymph Clymene (Kirkpatrick, 1991), fought against the Titans on the side of Zeus, helping him seize the throne. More than a simple indication of a rebellious spirit, his illegitimate status (albeit as opposed to an incestuous one – Iapetus was married to his sister Themis) raises the important issues of both legitimacy and filial loyalty, so recurrent within accounts of creation (of man, and of human artifice). Some hold that Prometheus is punished for his deceptions, i.e. over fire and the sacrifices; thus he is punished as much for his brother's failings as for his own ingenuity and initiative. Others maintain he is punished for refusing to tell Zeus which of Zeus's sons would overthrow him, protecting Zeus's half-mortal son and his mortal mother. Zeus's father and grandfather suffered castration and usurpation at the hands of their offspring – for both Zeus and Prometheus, (pro)creation is perilous. Prometheus's punishment here is for withholding a secret which accords power. In possessing knowledge (power) which could have secured his release, Prometheus is often viewed as emblematic of endurance, suffering, resistance, and parental martyrdom. Prometheus, as mentioned previously, was chained to a rock where a great bird came and tore at his liver [2], the liver growing back overnight for the torture to be repeated afresh the following day.
Heracles, a half-mortal son of Zeus, slays the bird and frees Prometheus; thus Man repays his debt by liberating his benefactor, or, in other accounts, he is required to take Prometheus's place, thus liberating his creator but resulting in his own enslavement. Both versions clearly show the strength of the bond between Prometheus and his creation, but the latter account goes further in suggesting that Man and Maker are interchangeable.

Also linked to Promethean myth is the creation of the first woman, Pandora, constructed (by Jupiter at Zeus's command) on the one hand as Man's punishment for Prometheus's tricks, and on the other as a gift to Man from the Gods. Her opening of 'the box', either releasing all man's ills, plagues and woes, or letting all benevolent gifts but hope escape, is seen as disastrous from either perspective. However, what is emphasised is that the creation of Woman is secondary to the creation of Man. Therefore Prometheus is not the creator of humankind but of mankind. The issue of gender is an important aspect of Promethean narrative, which I discuss in the next section.

Gender Issues

Promethean myths raise a number of pertinent issues relating to gender and sexuality. Firstly, they suggest that both Man and Woman are constructed [3], and that they are constructed as distinct entities, regarding Woman as inferior to Man. Secondly, creative power is posited firmly with the masculine (by virtue of the male sex of both Prometheus and Jupiter), negating maternal and asserting patriarchal power. Thirdly, Nature, which is associated with the feminine, is surpassed in that whilst Man is made from the earth (mud/clay), it is Prometheus who creates him (Mother Earth providing only the most basic raw materials for production); and Nature is overcome as Man is made independent of climate through the gift of fire.

Tensions arise in that Prometheus's fate is also linked to childbirth, in so far as that which is internal is painfully rendered external (strongly raising connotations of the abject, which threatens identity boundaries). The intense connection between creation and childbirth indicates that the appropriation of power is of a power resting not with the gods, but with women. The ability to see the future is seen as both frightening and reassuring. Aeschylus uses this to explain Prometheus's tolerance of his fate: he knew he had to endure pain but he knew he would be released, and thus was resigned to his suffering. As the bearer of the bleeding wound, Prometheus is feminised; his punishment represents a rite of passage through which he may earn the status 'Father of Man' and reassert and define his masculine identity, hence a masochistic desire to suffer is also suggested. Confrontations with the abject, the threat posed to identity, and Lacanian notions of desire in relation to the other are subjects which problematise the myth's assertion of masculine power. I will now consider how the Promethean myth is recast in terms of modernity in the story of Frankenstein, and the issues regarding male power this raises.

Frankenstein – A Modern Prometheus

Consistent with the Enlightenment spirit of renewal and reconstruction, the novel Frankenstein emerges in 1818, recasting Promethean myth in terms of science and placing the scientist (i.e. man) as creator. Frankenstein, in both warning against assuming the power of God and placing man as creator, simultaneously expresses the hopes and fears of the transition from theocratic belief to rationality.
One of the strategies Frankenstein gives us through its narrative use of science and technology is a social critique and interrogation of scientific discourse, made explicit through its alignment with gender discourse. In appropriating reproductive power without women, it enacts an appropriation of maternity by patriarchy. In aligning the use of this power by patriarchy with the power of the gods, it attempts to deify and justify use of this power whilst rendering women powerless and indeed superfluous. Yet as it offers the patriarchal constructs of science and technology as devoid of social responsibility, resulting in monstrous productions, it also facilitates a critique of patriarchy (Cranny Francis, 1990, p. 220).

The creature, often called 'Frankenstein' rather than 'Frankenstein's monster', is not the only 'abomination to God'. Victor Frankenstein is portrayed as a 'spoilt brat of a child', whose overindulgence results in his fantasy of omnipotent power over life itself, and leads to neglect of, and lack of care towards, his creation. Indeed he may be regarded as the true 'monster' of the piece, as he is all too clearly lacking Prometheus's vision and pastoral care [4]. "Neither evil nor inhuman, [the creature] comes to seem little more than morally uninformed, poorly 'put together' by a human creator who has ill served both his creation and his fellow humans." (Telotte, 1995, p. 76)

However, the model of the natural – and naturally free – man emerges in the novel from an implied pattern of subjection which demonstrates that the power the man-made constructs of science and technology give us comes at great cost: "[Power] is only made possible by what [Mary Shelley] saw as a pointedly modern devaluation of the self: by affirming that the human is, at base, just a put together thing, with no transcendent origin or purpose and bound to a half vital existence at best by material conditions of its begetting." (ibid.)

Frankenstein's power, expressed through his overcoming of Nature, harnessing of technology, and desire to subject the human body to his will, exhibits the modern world's mastery over the self. However, it also requires the devaluation of self, so that the body is regarded as subject, thus leading to our own subjection. For Telotte (1995, p. 37), one reflection of our Promethean heritage is that as everything comes to seem machine-like and constructed, the human too finally emerges as a kind of marvellous fiction, or perhaps just another empty invention. Access to full creative potential permits entry "into a true 'no man's land'… a wonderland… where any wonder we might conceive, or any wondrous way we might conceive of the self, might be fashioned".

Certainly the modernist recasting of Promethean myth embodies that train of thought which is most consciously aiming to discover the nature of man through (re)creating him. It offers patriarchal power as a power over the self (independent of the gods); a critique of the father; and the fantasy of (re)construction of the self at the cost of deconstruction of the body which, finally, leads to the subjection of the self. The Promethean model, I maintain, serves to illuminate and further our understanding of the endurance, popularity and allure of fantasies of creation, which can be so readily found in cinematic history, and especially within the science fiction genre. This genre stands out as a medium both well suited to, and enamoured with, Promethean reworkings [5].
As religion (of which Greek mythology is a part) and science both attempt to explain the world and make it knowable, they offer reassurance, satisfaction, and the illusion of security and control, whilst tantalising with notions of possible futures. Promethean science fiction film realises the visual nature of these possible futures, providing us, in its future visions, with glimpses of alternative ways of seeing and being.

Promethean Science Fiction Film

Science fiction can be seen as a 'body genre' delineated not by excess of sex, blood or emotion but by excess of control over the body as index of identity (Cook, 1999, p. 193). Science fiction films can be seen to fall broadly into three categories: space flight, alien invaders and futuristic societies (Hayward, 1996, p. 305). Within these, Telotte argues (Replications, 1995), most important are the images of "human artifice", which form a metaphor for our own human selves and have come to dominate the contemporary science fiction film (1995, p. 11). The science fiction film contains a structural tension that constantly rephrases central issues about the self and constructedness. Paradoxically, whilst the science fiction genre profits from visions of a technological future it also displays technophobia – the promises of these fictions represent dangerous illusions with radical and subversive potential, suggesting that nature and the self may be 'reconstructable' rather than stable and unchanging. Whilst some films return us safely to a comforting, stable humanity, others embrace and affirm the subversive possibilities, advocating an evolution or rebirth of the human. Regardless of their conservative (The Iron Giant, 1999; Planet of the Apes, 1968) or subversive tendencies (Metropolis, 1926; Blade Runner, 1982; Terminator, 1984), they offer the opportunity to explore "a space of desire" (Telotte, 1990, p. 153), a place where the self can experience a kind of otherness and possibilities exceed the experience of our normal being (The Stepford Wives, 1974; The Fly, 1986; Gattaca, 1997 [6]).

What I would argue is central to the definition of a Promethean sub-genre of science fiction is the conscious depiction and understanding of the (hu)man subject or artifice as technological or scientific construction rather than natural. Often, as in Promethean myth, there is a mirroring between creator and creation, constructor and constructed, which serves to bind them despite their differences, and may often override them. Power in this genre is revealed as masculine power over the feminine, namely reproductive power; as such, tensions in male identity arise and may be interrogated. Promethean (film) texts have at their centre issues of what it is to be human, and within this, what it is to be a man. There is a focus on hegemonic masculinity within these texts, which serves as a measure of masculinity. Furthermore, these texts are most emphatically concerned with the construction of masculinity and with masculine power. The notion of creation raises questions of paternity, motherhood, parenting, and identification with the father, although the ways in which these issues are portrayed or explored may be quite diverse. As a creation of man, rather than of 'woman', the subjects created are almost invariably 'other' to their creators, whilst often embodying the fantasies, desires and repressed fears of their makers.
That otherness and difference form central organising principles in these texts is indisputable; however, there can also be seen to exist a bond between creator and created which is worthy of exploration, as the progeny of man retains a close likeness (though not always physically) to its maker [7]. Particularly in the Promethean strand of science fiction film we encounter the abject, posing a threat to fragile identity constructions (recalling the plight of Prometheus on his rock and his feminised position). I also maintained that 'lack' formed part of the Promethean heritage. Not only are the desires of the creators often lacking in Promethean care and vision, but their creations are revealed as in some way lacking, falling short of their creators' desire and indeed their own [8].

From the very beginnings of film we see the desire to realise (see) Promethean power accorded to man and to behold his creations. The mad scientists of films such as Frankenstein (1910), Homunculus (1916), Alraune (1918), Orlacs Hande (1925), Metropolis (1926) and Frankenstein (1931) all point to the body as source of subjection and resistance. Whilst metal robots may be made servile, "the flesh by its very nature always rebels" (Telotte, 1995, p. 77). Thus whilst they form a metaphor for the way the modern self is subjugated, they also suggest resistance to that subjugation, pointing to "a tension between body and mind, humanity and its scientific attainments, the self and a cultural subjection" (ibid.).

The films of the 1980s and 90s, such as Blade Runner (1982), Robocop (1987) and Terminator 2: Judgement Day (1991), point towards "the human not as ever more artificial but the artificial as ever more human" (Telotte, 1995, p. 22). However, these cyborg bodies are also gendered bodies, providing metaphors for contemporary anxieties about 'masculinities'. Just as the tale of Prometheus is problematic in that there exist many variations of the myth [9], with varying accounts capable of producing a range of readings, concepts of 'masculinity' are neither stable nor uniform, and are subject to recasting and reconstruction. Likewise, in Promethean science fiction film masculine identities are multiple, fragmented and dynamic. These films do not simply recreate masculinities in the sense that they mirror extant anxieties, but recreate them in the sense that they 'play' with these anxieties and with possibilities of otherness, and permeate boundaries. We may see this 'play' as liberating, in that it offers possible ways of being and understanding difference, or conservative, reinstating hegemonic masculinity by asserting old hierarchies. As versions of the myth are reconstructed, what new types of creator/creature will emerge? What will they say about our understanding and experiences of "masculinities"? What new possibilities and identities may we envision?

Perhaps the most significant aspect of our Promethean heritage is that, as Prometheus is chained to his rock and tortured, through the perpetual regeneration of his liver, almost as if to counterweight or ballast the image of masculinity in crisis, comes the 'reassuring' notion that whatever the strains, cracks or injuries, the patriarchal image endures: 'we can rebuild him' [10]. We not only can but will, for in doing so we are also reconstructing ourselves.

Footnotes

1. According to Bulfinch (web), Prometheus gave Man an upright stature so he could look to the Heavens and gaze on the stars, linking to science fiction narratives of space exploration etc.
2. (Encyclopaedia Mythica [web]) The liver was once regarded as the primary organ of our being (the heart being our contemporary equivalent), where passions and pain were felt.

3. Both physically constructed and sociologically constructed, with woman as an inferior, lesser being, implying gender determinism.

4. This is further articulated to effect in the James Whale film (Frankenstein, 1931), where 'Henry' Frankenstein's creation is regarded as his 'first born' and notions of lineage predominate, ultimately implying he will now pursue more natural methods of (pro)creation.

5. Frankenstein is seen by some as the first cyborg novel in its linking of technology and creation, and its screen adaptation is often cited as the first science fiction film (although there were others).

6. For example, in Andrew Niccol's Gattaca (1997), the creation of man occurs through conscious construction of the self, acknowledging that we are all constructed and that masculinity must be reconstructed if it is to be validated. Patriarchy has worked to mythologise our relationship to (mother) nature, so that the human becomes distinct from the manufactured. What is perhaps the most vital aspect of the character Vincent in Gattaca is his acknowledgement that the body must be altered, restructured, reshaped and defined in order to pass from insignificance to significance in terms of hegemonic masculine identity. It is therefore through a reappraisal of the external that the internal gains validity.

7. See Foucault on resemblance and similitude (in The Gendered Cyborg, 2000).

8. See Scott Bukatman on Blade Runner in Kuhn, 1990.

9. The tale of Prometheus had long existed in oral traditions and folklore before Hesiod wrote of it in Theogony and Works and Days, and Aeschylus elaborated on Hesiod when he wrote Prometheus Bound (460 B.C.).

10. Catchphrase used in the 1970s popular TV series The Six Million Dollar Man in relation to Steve Austin, the 'bionic' character of the title.

References

Bernink, M. & Cook, P. (eds.) The Cinema Book (2nd edition). London: British Film Institute Publishing, 1999.

Clute, J. Science Fiction: The Illustrated Encyclopaedia. London: Dorling Kindersley, 1995.

Cohan, S. & Hark, I.R. (eds.) Screening the Male. London: Routledge, 1993.

Hall, S., Held, D. & McLennan, G. (eds.) Modernity and Its Futures. Cambridge and Oxford: Polity Press in association with The Open University, 1993.

Jancovich, M. Rational Fears: American Horror in the 1950s. Manchester and New York: Manchester University Press, 1996.

Jeffords, S. "Can Masculinity Be Terminated?" In Cohan, S. & Hark, I.R. (eds.) Screening the Male. London and New York: Routledge, 1993.

Kirkup, G., Janes, L., Woodward, K. & Hovenden, F. (eds.) The Gendered Cyborg: A Reader. London: Routledge, 2000.

Kuhn, A. (ed.) Alien Zone: Cultural Theory and Contemporary Science Fiction Cinema. London and New York: Verso, 1990.

Sobchack, V. Screening Space. New Brunswick, New Jersey and London: Rutgers University Press, 1999.

Telotte, J.P. A Distant Technology: Science Fiction Film and the Machine Age. Hanover and London: Wesleyan University Press, 2000.

Telotte, J.P. Replications.
Urbana and Chicago: University of Illinois Press, 1995.

Bulfinch's Mythology, The Age of Fable, Chapter 2: Prometheus and Pandora. (Accessed 21 Mar. 2000.) http://www.bulfinch.org/fables/bull2.html

Bulfinch's Mythology. (Accessed 21 Mar. 2000.) http://www.bulfinch.org.html

Encyclopaedia Mythica: Greek Mythology. (Accessed 15 June 2000.) http://oingo.com/topic/20/20246.html

Encyclopaedia Mythica: Articles. (Accessed 15 June 2000.) http://www.pantheon.org/mythica/articles.html
