Journal articles on the topic 'Incomplete download'

Consult the top 46 journal articles for your research on the topic 'Incomplete download.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Sandvik, Kristin B. "“Smittestopp”: If you want your freedom back, download now." Big Data & Society 7, no. 2 (July 2020): 205395172093998. http://dx.doi.org/10.1177/2053951720939985.

Abstract:
The intervention attempts to engage critically with the Smittestopp app as a specifically Norwegian technofix. Culturally and politically, much of the Covid-19 response and the success of social distancing rules have been organized around the widespread trust in the government and public health authorities, and a focus on the citizens’ duty to contribute to the dugnaðr. The intervention argues that Smittestopp has been co-created by the mobilization of trust and dugnaðr, resulting in the launch of an incomplete and poorly defined data-hoarding product with significant vulnerabilities.
2

Schrag, Janelle, Latha Shivakumar, Monique Dawkins, Leigh Boehmer, and Lorna Lucas. "Exploring online access of an immuno-oncology wallet card among oncology providers." Journal of Clinical Oncology 38, no. 5_suppl (February 10, 2020): 93. http://dx.doi.org/10.1200/jco.2020.38.5_suppl.93.

Abstract:
93 Background: In 2019, the Association of Community Cancer Centers (ACCC) developed an immuno-oncology (IO) wallet card to address the continuous need for immune-related adverse event education and resources, particularly for IO patients and the non-oncology providers from whom they receive care. The wallet card was distributed to ACCC's membership of cancer programs via mailings and online, which included a short survey for users to complete at the time of download. Methods: To better understand the demographics and motivations of individuals who access the IO wallet card, an exploratory analysis was performed on data collected through the download survey. The data included the survey responses from all downloads between March and September 2019 (n = 141), which were then cleaned to remove duplicates, incomplete responses, and responses from ACCC staff, international users, pharmaceutical representatives, consultants, and patients. Analysis was then performed on the resulting data set of downloads from US-based health care providers (n = 86). Results: Cancer program administrators and nurses accounted for the majority of downloads (30% and 20%, respectively), as did individuals from comprehensive community cancer programs (44%) and NCI-designated comprehensive cancer programs (16%). Fewer downloads came from other oncology disciplines (2-9%) and small practices (2-6%). Survey responses indicated that the majority of downloads occurred because the cancer program did not already have the resource (47%), or for comparison with a wallet card developed by the cancer program (15%) or another organization (10%). Patient education materials provided by these institutions included wallet cards (45%), as well as print materials developed by the cancer program (16%) or another professional organization (7%), or distributed by drug companies (5%). Conclusions: These findings shed light on the primary audiences accessing the IO wallet card, how this resource may complement other IO patient education materials, and areas where additional education may be needed. Specifically, IO wallet card dissemination or related education may need to be tailored to better reach specific oncology disciplines as well as those practicing in smaller clinics.
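The cleaning step described here (deduplication, dropping incomplete responses, and filtering to US-based providers) is a routine data-frame operation. A minimal pandas sketch, assuming hypothetical column names (email, role, program_type, country) that the abstract does not specify:

```python
import pandas as pd

# Hypothetical file and column names; the survey's actual schema is not published.
df = pd.read_csv("wallet_card_downloads.csv")  # n = 141 raw download records

df = df.drop_duplicates(subset="email")          # remove duplicate downloads
df = df.dropna(subset=["role", "program_type"])  # remove incomplete responses

# Exclude non-provider respondents, keeping US-based health care providers.
excluded_roles = {"ACCC staff", "pharma representative", "consultant", "patient"}
df = df[~df["role"].isin(excluded_roles) & (df["country"] == "US")]

print(len(df))  # should approach the analyzed subset (n = 86 in the study)
```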
3

Mullen, Deborah M., Richard Bergenstal, Amy Criego, Kathleen Cecilia Arnold, Robin Goland, and Sara Richter. "Time Savings Using a Standardized Glucose Reporting System and Ambulatory Glucose Profile." Journal of Diabetes Science and Technology 12, no. 3 (November 24, 2017): 614–21. http://dx.doi.org/10.1177/1932296817740592.

Abstract:
Background: Diabetes care is predominantly done at home by the patient. When clinics do not have a reliable, easy process for obtaining these patient data, clinical decisions must be made from incomplete verbal recall reports. Unused or inaccessible glucose data represent a large information gap affecting clinical decision making. This study's purpose was to design an optimized glucose device download system with a standardized report and to evaluate its efficiency. Methods: Observations and evaluations of glucose data retrieval occurred at two clinics; an additional clinic utilized the optimized process, performing only post-process timings. Patients/families and clinicians were surveyed about their experiences with the system and the standardized report (the Ambulatory Glucose Profile, AGP). The study was approved by all the sites' IRBs. Results: Optimized systems saved staff at least 3 min per patient. Standardized AGP reports and an optimized data system made the work flow of glucose data easier to complete. The AGP report was preferred by patients, families, and clinicians. Conclusions: An optimized system takes advantage of patient lobby downtime to download glucose devices and ensures that diabetes clinical decisions are made utilizing all available data. Staff and patients liked the software lobby system and found it a valuable time-saving tool.
4

Oliveira, Ricardo, and Rafael Moreno. "HARVESTING, INTEGRATING AND DISTRIBUTING LARGE OPEN GEOSPATIAL DATASETS USING FREE AND OPEN-SOURCE SOFTWARE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 22, 2016): 939–40. http://dx.doi.org/10.5194/isprsarchives-xli-b7-939-2016.

Abstract:
Federal, State and Local government agencies in the USA are investing heavily in the dissemination of the Open Data sets each of them produces. The main driver behind this thrust is to increase agencies' transparency and accountability, as well as to improve citizens' awareness. However, not all Open Data sets are easy to access and integrate with other Open Data sets, even those available from the same agency. The City and County of Denver Open Data Portal distributes several types of geospatial datasets, one of which is the city parcel layer containing 224,256 records. Although this data layer contains many pieces of information, it is incomplete for some custom purposes. Free and open-source software was used to first collect data from diverse City of Denver Open Data sets, then upload them to a repository in the Cloud, where they were processed using a PostgreSQL installation on the Cloud and Python scripts. Our method was able to extract non-spatial information from a 'not-ready-to-download' source that could then be combined with the initial data set to enhance its potential use.
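The abstract names the building blocks of the pipeline (open data downloads, a cloud PostgreSQL instance, Python scripts) without showing them. A minimal sketch of such a harvest-and-load step; the dataset URL, column names, and connection string are placeholders, not the authors' actual configuration:

```python
import csv
import io
import urllib.request

import psycopg2

# Hypothetical dataset URL; the Denver portal's actual layout is not given.
URL = "https://data.denvergov.org/download/parcels.csv"

raw = urllib.request.urlopen(URL).read().decode("utf-8")
rows = list(csv.DictReader(io.StringIO(raw)))

# Placeholder connection string for the cloud PostgreSQL instance.
conn = psycopg2.connect("host=cloud-db dbname=denver user=etl password=secret")
with conn, conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS parcels (schednum TEXT, owner TEXT)")
    for r in rows:
        # Column names are illustrative, not the real portal schema.
        cur.execute("INSERT INTO parcels VALUES (%s, %s)",
                    (r.get("SCHEDNUM"), r.get("OWNER")))
```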
5

Aditya, Aditya, and Retnowati Wahyuning Dyas Tuti. "Analysis of the Quality of Digital Library Services in DKI Jakarta." Transparansi : Jurnal Ilmiah Ilmu Administrasi 3, no. 2 (January 18, 2021): 210–18. http://dx.doi.org/10.31334/transparansi.v3i2.1169.

Abstract:
This research was conducted to analyze in depth the quality of the Jakarta digital library services provided by the DKI Jakarta Provincial Library and Archives Service. The analysis draws on the theory of the five dimensions of service quality: tangibles, reliability, responsiveness, assurance, and empathy. The study used a qualitative approach with descriptive methods. Data were collected through observation, interviews, and documentation, and data validity was tested using triangulation. The results showed that the quality of digital library services at the DKI Jakarta Provincial Library and Archives Service cannot yet be said to be fully good: the book collection is incomplete; the iJakarta application sometimes experiences errors/bugs; the download time for books is relatively long, reaching up to 5 minutes per book; the complaints channel for users is inadequate; and complaint handling still depends heavily on PT. Aksaramaya, which can hinder readers/visitors who need quick complaint handling because the Library and Archives Service must first coordinate with PT. Aksaramaya. In addition, there is no audiobook facility, which should be available for persons with visual impairments.
6

Di Giulio, Giuseppe, Giovanna Cultrera, Cécile Cornou, Pierre-Yves Bard, and Bilal Al Tfaily. "Quality assessment for site characterization at seismic stations." Bulletin of Earthquake Engineering 19, no. 12 (June 16, 2021): 4643–91. http://dx.doi.org/10.1007/s10518-021-01137-6.

Abstract:
Many applications related to ground-motion studies and engineering seismology benefit from the opportunity to easily download large datasets of earthquake recordings with different magnitudes. In such applications, it is important to have a reliable seismic characterization of the stations in order to introduce appropriate correction factors for site amplification. Generally, seismic networks in Europe describe the site properties of a station through geophysical or geological reports, but ad-hoc field surveys are often missing and the characterization is done using indirect proxies. It is then necessary to evaluate the quality of a seismic characterization, accounting for the available site information, the measurement procedures, and the reliability of the methods applied to obtain the site parameters. In this paper, we propose a strategy to evaluate the quality of site characterization, to be included in the station metadata. The idea is that a station with a good site characterization should rank higher than one with poor or incomplete information. The proposed quality metric includes the computation of three indices, which take into account the reliability of the available site indicators, their number and importance, and their consistency, defined through scatter plots for each pair of indicators. For this purpose, we consider the seven indicators identified as most relevant in a companion paper (Cultrera et al. 2021): fundamental resonance frequency, shear-wave velocity profile, time-averaged shear-wave velocity over the first 30 m, depth of the seismological and of the engineering bedrock, surface geology, and soil class.
7

Gill, Bethany, Leila Khoja, Zhu Juan Li, Robert James Hamilton, Marianne Koritzinsky, Linda Penn, Kald Abdallah, Melania Pintilie, and Anthony M. Joshua. "Project data sphere (PDS) in prostate cancer: A first look including concomitant medication use." Journal of Clinical Oncology 33, no. 7_suppl (March 1, 2015): 204. http://dx.doi.org/10.1200/jco.2015.33.7_suppl.204.

Abstract:
204 Background: PDS enables patient-level analyses of the control arms of cancer trials. The interface (www.projectdatasphere.org) allows for both web-based and download-based analyses. We aimed to validate established prostate cancer prognostic models and explore the effect of concomitant medications on survival in mCRPC. Methods: Data were obtained for 2,747 control subjects with mCRPC from 7 Phase III clinical trials, with 1,962 subjects from 5 studies available for OS analyses. Overall survival was estimated using the Kaplan-Meier method. Cox proportional hazards models, stratified by trial, were used to estimate hazard ratios. Results: Metastatic site was significant for overall survival (median: node only 23.69 months, bone 18.17 months, lung 14.72 months, liver 9.43 months; p < 0.001). Of the 23 types of medication examined, after adjusting for metastatic site, patients taking proton pump inhibitors (HR: 1.155, p = 0.017) and erythropoietin (HR: 1.49, p < 0.001) had worse overall survival, whilst patients taking fish oil (HR: 0.68, p = 0.033) and non-lipophilic statins (HR: 0.69, p = 0.00277) had improved overall survival. Within the limits of available data, we validated the prognostic models for overall survival proposed by Templeton et al. and Sonpavde et al., individually and after inclusion of concomitant medication, where patients taking metformin (HR = 0.729, p = 0.0082) and COX-2 inhibitors (HR = 0.708, p = 0.015) had improved OS whilst those taking low-molecular-weight heparin (HR = 1.352, p = 0.004) had worse OS. Conclusions: As a first project utilizing open-source PDS data in prostate cancer, we validated two prostate cancer prognostic models and demonstrated the ability to undertake novel analyses such as the association of concomitant medications with outcome. Limitations of the data relate to incomplete and inconsistent data entry. Future expansion of patient trials and numbers will help to facilitate further analyses.
8

Hidayah, Lutfan Lazuardi, and Wiwin Lismidiati. "RANCANGAN PEMBELAJARAN KASUS BERBASIS E-LEARNING UNTUK ASUHAN KEPERAWATAN MATERNITAS DENGAN PENDEKATAN TAKSONOMI NANDA-I, NIC, NOC." Jurnal Persatuan Perawat Nasional Indonesia (JPPNI) 1, no. 3 (March 16, 2017): 176. http://dx.doi.org/10.32419/jppni.v1i3.28.

Abstract:
THE DEVELOPMENT OF E-LEARNING-BASED CASE LEARNING FOR MATERNITY NURSING CARE USING THE NANDA-I, NIC, NOC TAXONOMY APPROACH. Objective: To describe user needs for the design of e-learning-based case learning using the NANDA-I, NIC, NOC guidelines for maternity nursing care. Methods: This study used a qualitative method with a descriptive analytical approach. Samples were taken using a purposive sampling technique. Participants consisted of two lecturers and five students. The study was conducted in November 2015 over three weeks. Data were analyzed qualitatively and presented descriptively. Results: The results consisted of four major themes: (1) problems in the maternity nursing care learning process using the NANDA-I, NIC, NOC guidelines, arising from students, lecturers, and the learning system: not all nursing care teaching used NANDA-I, NIC, NOC; students did not understand how to apply NANDA-I, NIC, NOC when given a case; and library books related to NANDA-I, NIC, NOC were limited and incomplete; (2) objectives of maternity nursing care learning using the NANDA-I, NIC, and NOC guidelines; (3) objectives of the development of an e-learning prototype; and (4) specifications of the e-learning prototype required by users, including a user-friendly and attractive interface; content in the form of case exercises covering gestational diabetes mellitus, pre-eclampsia, eclampsia, HELLP syndrome, placental abruption, antepartum hemorrhage, and reproductive system problems; a security system with password and account; and a flexible and compatible download facility. Discussion: Problems arose in nursing care learning using NANDA-I, NIC, NOC because the learning process focused on accumulating knowledge without considering the skills needed to perform nursing care. The design of the e-learning-based case learning prototype therefore focuses on the interface and on case scenario options that can accommodate students' needs in case learning. Conclusion: The e-learning-based case learning prototype was developed as a complement to conventional learning, focusing on students' knowledge in applying NANDA-I, NIC, NOC through the case exercises provided. Keywords: e-learning, maternity nursing care, NANDA-I, NIC, NOC
9

Rusmini, Ni Putu. "PERILAKU PENGGUNAAN ALAT PELINDUNG DIRI DAN PENULARAN PENYAKIT KULIT PADA PETUGAS TPS DI KECAMATAN SAWAHAN SURABAYA." Adi Husada Nursing Journal 1, no. 2 (December 15, 2015): 38. http://dx.doi.org/10.37036/ahnj.v1i2.20.

Abstract:
Garbage collection (TPS) workers take or haul garbage from house to house every day; the garbage is collected and sorted at the temporary disposal site (TPS) before being sent to a larger facility, the final disposal site (TPA). Because they work with garbage throughout the day, these workers have a high risk of contracting transmissible skin diseases, with both direct and indirect effects. One effort that can be made to reduce the risk of skin disease transmission is the use of Personal Protective Equipment (PPE); owing to a lack of awareness, compliance, and information about the hazards, some workers do not use PPE. Incomplete PPE can allow direct contact with garbage, causing health problems, one of which is skin disease transmission. This research is an analytic correlation study with a cross-sectional approach. Data were collected through observation, interviews, and questionnaires, using a total sampling method, and tested with the Spearman rank test. Statistical analysis showed p = 0.00 (α < 0.05) and r = 0.761, indicating a strong relationship between PPE-use behavior and skin disease transmission among TPS workers. It is therefore expected that government programs and health workers will support the use of PPE as a preventive measure against skin disease transmission among TPS workers. Keywords: garbage, garbage workers, Personal Protective Equipment (PPE), skin disease transmission.
10

McGuffie, Matthew J., and Jeffrey E. Barrick. "pLannotate: engineered plasmid annotation." Nucleic Acids Research 49, W1 (May 21, 2021): W516–W522. http://dx.doi.org/10.1093/nar/gkab374.

Abstract:
Engineered plasmids are widely used in the biological sciences. Since many plasmids contain DNA sequences that have been reused and remixed by researchers for decades, annotation of their functional elements is often incomplete. Missing information about the presence, location, or precise identity of a plasmid feature can lead to unintended consequences or failed experiments. Many engineered plasmids contain sequences (such as recombinant DNA from all domains of life, wholly synthetic DNA sequences, and engineered gene expression elements) that are not predicted by microbial genome annotation pipelines. Existing plasmid annotation tools have limited feature libraries and do not detect incomplete fragments of features that are present in many plasmids for historical reasons and may impact their newly designed functions. We created the open-source pLannotate web server so users can quickly and comprehensively annotate plasmid features. pLannotate is powered by large databases of genetic parts and proteins. It employs a filtering algorithm to display only the most relevant feature matches and also reports feature fragments. Finally, pLannotate displays a graphical map of the annotated plasmid, explains the provenance of each feature prediction, and allows results to be downloaded in a variety of formats. The web server for pLannotate is accessible at http://plannotate.barricklab.org/
11

Yi, Myongho. "Exploring the quality of government open data." Electronic Library 37, no. 1 (February 4, 2019): 35–48. http://dx.doi.org/10.1108/el-06-2018-0124.

Abstract:
Purpose: The use of "open data" can help the public find value in various areas of interest. Many governments have created and published a huge amount of open data; however, people have a hard time using open data because of data quality issues. The UK, the USA and Korea have created and published open data; however, the rate of open data implementation and the level of open data impact are very low because of data quality issues like incompatible data formats and incomplete data. This study aims to compare the status of data quality on open government sites in the UK, the USA and Korea and also to present guidelines for publishing data formats and enhancing data completeness. Design/methodology/approach: This study uses statistical analysis of different data formats and examination of data completeness to explore key issues of data quality in open government data. Findings: Findings show that the USA and the UK have published more than 50 per cent of their open data at level one. Korea has published 52.8 per cent of its data at level three. Level one data are not machine-readable, so users have a hard time using them. Level one data are published in portable document format (PDF) and hypertext markup language (HTML) and are locked up in documents, so machines cannot extract the data. Findings also show that incomplete data exist in all three governments' open data. Originality/value: Governments should investigate the incompleteness of all open data and correct incomplete data in the most used datasets. Governments can find the most used data easily by monitoring which datasets have been downloaded most frequently over a certain period.
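The "levels" discussed here correspond to machine-readability tiers in the spirit of the five-star open data scheme, with level one covering document formats such as PDF and HTML. A minimal sketch of scoring a data catalogue by file format; the format-to-level mapping is an assumption for illustration:

```python
from collections import Counter

# Assumed mapping in the spirit of the five-star open data scheme:
# 1 = document formats, 2 = proprietary tabular, 3 = open machine-readable.
FORMAT_LEVEL = {"pdf": 1, "html": 1, "doc": 1, "xls": 2, "csv": 3, "json": 3, "xml": 3}

def level_distribution(datasets):
    """Share of datasets at each openness level, given a list of file extensions."""
    levels = Counter(FORMAT_LEVEL.get(fmt.lower(), 0) for fmt in datasets)
    total = sum(levels.values())
    return {lvl: round(100 * n / total, 1) for lvl, n in sorted(levels.items())}

print(level_distribution(["pdf", "html", "csv", "xls", "pdf", "json"]))
# {1: 50.0, 2: 16.7, 3: 33.3}
```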
12

Vanhatalo, Jarno, Zitong Li, and Mikko J. Sillanpää. "A Gaussian process model and Bayesian variable selection for mapping function-valued quantitative traits with incomplete phenotypic data." Bioinformatics 35, no. 19 (March 8, 2019): 3684–92. http://dx.doi.org/10.1093/bioinformatics/btz164.

Abstract:
Motivation: Recent advances in high-dimensional phenotyping bring time as an extra dimension into the phenotypes. This promotes quantitative trait locus (QTL) studies of function-valued traits such as those related to growth and development. Existing approaches for analyzing functional traits utilize either parametric methods or semi-parametric approaches based on splines and wavelets. However, very limited choices of software tools are currently available for practical implementation of functional QTL mapping and variable selection. Results: We propose a Bayesian Gaussian process (GP) approach for functional QTL mapping. We use GPs to model the continuously varying coefficients which describe how the effects of molecular markers on the quantitative trait change over time. We use an efficient gradient-based algorithm to estimate the tuning parameters of the GPs. Notably, the GP approach is directly applicable to incomplete datasets with missing-data rates even larger than 50% (among phenotypes). We further develop a stepwise algorithm to search through the model space in terms of genetic variants, and use a minimal increase of Bayesian posterior probability as a stopping rule to focus on only a small set of putative QTL. We also discuss the connection between GPs and penalized B-splines and wavelets. On two simulated and three real datasets, our GP approach demonstrates great flexibility for modeling different types of phenotypic trajectories with low computational cost. The proposed model selection approach finds the most likely QTL reliably in the tested datasets. Availability and implementation: Software and simulated data are available as a MATLAB package 'GPQTLmapping', and they can be downloaded from GitHub (https://github.com/jpvanhat/GPQTLmapping). Real datasets used in the case studies are publicly available at the QTL Archive. Supplementary information: Supplementary data are available at Bioinformatics online.
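As a toy illustration of the core idea, modeling a smoothly time-varying effect with a GP, the posterior mean under a squared-exponential kernel takes only a few lines of NumPy. This is only a sketch of the modeling idea, not the authors' GPQTLmapping implementation:

```python
import numpy as np

def rbf(t1, t2, ell=2.0, sf=1.0):
    """Squared-exponential kernel between two vectors of time points."""
    return sf**2 * np.exp(-0.5 * (t1[:, None] - t2[None, :])**2 / ell**2)

# Noisy observations of a time-varying marker effect beta(t) (synthetic data).
t = np.linspace(0, 10, 25)
beta_obs = np.sin(t) + 0.1 * np.random.randn(t.size)

# GP posterior mean of beta(t) on a fine grid, with noise variance 0.1**2.
t_star = np.linspace(0, 10, 200)
K = rbf(t, t) + 0.1**2 * np.eye(t.size)
beta_hat = rbf(t_star, t) @ np.linalg.solve(K, beta_obs)
```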
13

Duan, J., K. Flock, M. Zhang, A. K. Jones, S. M. Pillai, M. L. Hoffman, H. Jiang, et al. "109 Dosage Compensation of the X Chromosome in Ovine Embryos, Late Gestation, and Adult Somatic Tissues." Reproduction, Fertility and Development 30, no. 1 (2018): 194. http://dx.doi.org/10.1071/rdv30n1ab109.

Abstract:
Deviations from proper gene dosage of the autosomes range from severe to lethal consequences in mammals. Eutherian males (XY), however, have reduced gene dosage compared with females (XX) due to a single X and a deteriorating Y chromosome. This dosage imbalance is resolved through X chromosome dosage compensation, according to Ohno's hypothesis: X-linked gene expression is doubled in both males and females to balance expression of the X chromosome and autosomes. To compensate for doubled X chromosome expression in females, X chromosome inactivation (XCI) silences a single X chromosome in each cell. Although these mechanisms have been well studied in mice and humans, controversies exist due to the analysis and interpretation of RNA sequencing data. Here we describe X chromosome dosage compensation in the sheep. Twelve ewes were fed 100% (control), 60% (restricted), or 140% (overfed) of the National Research Council requirements for a ewe pregnant with twins (NRC, 1985; Nutrient Requirements of Sheep, 6th ed.). Day 135 brain, lung, and kidney tissues were collected from fetuses of the control, restricted, and overfed groups (n = 7, 4, and 4, respectively). RNA-seq libraries were prepared using the Illumina TruSeq stranded mRNA kit and sequenced on the NextSeq 500 (Illumina Inc., San Diego, CA, USA). Two additional RNA-seq datasets were downloaded from the Sequence Read Archive (SRA), including Day 14 embryos (PRJNA254105), and adult and juvenile heart, brain, liver, muscle, rumen, and female- and male-specific tissues (PRJEB6169). The RNA-seq data were trimmed and mapped to the ovine reference genome assembly Oar_v4.0 using the Hisat2 aligner (version 2.0.5; https://ccb.jhu.edu/software/hisat2/index.shtml). The mRNA level of each gene was estimated as transcripts per kilobase million (TPM) and was quantified using IsoEM (version 1.1.4; http://dna.engr.uconn.edu/). The relative expression of the X chromosome to autosomes (RXE) was calculated as RXE = log2(X expression) - log2(A expression), averaging over 486 X-linked genes and 13,001 autosomal genes after TPM > 1 filtering. RXE ≥ 0 (X:A ratio ≥ 1) indicates complete dosage compensation, RXE < 0 incomplete compensation, and RXE = -1 no compensation. Control, restricted, and overfed ovine fetal somatic tissues displayed incomplete dosage compensation. Incomplete dosage compensation was also observed in juvenile and adult somatic major organs and female-specific tissues. Brain tissues, apart from the cerebellum, displayed complete dosage compensation with an RXE range of 0 to 0.16. An interesting pattern was observed in the male-specific tissues, with complete dosage compensation in the epididymis (RXE = 0.32) and incomplete dosage compensation in the testes (RXE = -0.84). No significant RXE differences were observed between ovine female and male somatic tissues, supporting Ohno's hypothesis of balanced expression of X-linked genes relative to autosomal genes. Our results indicate that a mechanism for dosage compensation exists in the sheep, although it is largely incomplete.
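The RXE statistic quoted above is straightforward to reproduce from TPM tables. A minimal sketch following the formula in the abstract (TPM > 1 filtering, then the log2 ratio of mean X-linked to mean autosomal expression), with made-up TPM values:

```python
import numpy as np

def rxe(x_tpm, auto_tpm):
    """RXE = log2(mean X-linked TPM) - log2(mean autosomal TPM),
    after dropping genes with TPM <= 1, as described in the abstract."""
    x = np.asarray(x_tpm, dtype=float)
    a = np.asarray(auto_tpm, dtype=float)
    x, a = x[x > 1], a[a > 1]
    return np.log2(x.mean()) - np.log2(a.mean())

# RXE >= 0 -> complete, -1 < RXE < 0 -> incomplete, RXE = -1 -> no compensation.
print(rxe([12.0, 8.0, 5.0], [10.0, 9.0, 6.0]))  # illustrative values only
```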
14

Luz Júnior, João da Cruz Rosal da, João Henrique Piauilino Rosal, Vinicius José de Melo Sousa, Débora Karine dos Santos Pacífico, and Luan Kelves Miranda de Souza. "Distúrbios gastrointestinais associados à infecção pelo vírus SARS-COV-2: Uma revisão sistemática de literatura." Research, Society and Development 10, no. 8 (July 5, 2021): e8910816654. http://dx.doi.org/10.33448/rsd-v10i8.16654.

Abstract:
This paper presents some considerations regarding gastrointestinal disorders in patients diagnosed with COVID-19, as well as their relationship with fecal-oral transmission. The study is intended to give readers greater clarity on the research topic. The objective is to analyze research articles and literature reviews on gastrointestinal disorders associated with SARS-CoV-2 infection. To achieve this, a systematic literature review was carried out using 7 articles published in 2020 and 2021 that were complete, freely available for download, and written in Portuguese. Incomplete articles, literature reviews, and articles outside the defined time frame were excluded from the search. The study showed the relationship between gastric symptoms and patients infected with the SARS-CoV-2 virus, and presented data on hygiene care measures for coping with the disease. After analyzing the selected studies, it is concluded that requesting abdominal radiology is of great scientific relevance for patient prognosis.
15

Tavitian, Elizabet, Donna Mastey, Meghan Salcedo, Andrew Zarski, Aisara Chansakul, Victoria Diab, Sham Mailankody, et al. "Continuous Mobile Wearable Bio-Monitoring of Newly Diagnosed Multiple Myeloma Patients Undergoing Initial Chemotherapy." Blood 132, Supplement 1 (November 29, 2018): 4751. http://dx.doi.org/10.1182/blood-2018-99-116545.

Abstract:
Introduction: The current standard for assessing chemotherapy tolerability relies on patient self-reporting. However, as the sole mechanism of managing symptom burden, this may be inconsistent and fraught with bias. Mobile wearable health devices can monitor and aggregate objective activity and sleep data over long periods of time, but have not been systematically used in the oncology clinic. The aim of the study was to assess whether mobile wearable technology can establish patterns of "sleep" and "wake" states in newly diagnosed multiple myeloma (NDMM) patients receiving therapy, and whether these patterns differ over time. Methods: Patients presenting to the myeloma clinic at Memorial Sloan Kettering Cancer Center (MSKCC) with a new diagnosis of multiple myeloma and a smartphone or tablet (iOS or Android) compatible with the Garmin Vivofit device were offered participation in a mobile wearable bio-monitoring study. All eligible participants were required to receive primary chemotherapy treatment at an MSKCC facility. Treatment was determined by the physician. NDMM patients were assigned to one of two cohorts (20 in each; cohort A, patients <65 years; cohort B, patients ≥65 years). Patients were given Garmin Vivofit devices and asked to download the Garmin Vivofit application and the Medidata electronic patient-reported outcome (ePRO) application on their phone or tablet. Patients were bio-monitored for physical activity and sleep during a baseline period (1-7 days prior to chemotherapy initiation) and continuously for up to 6 cycles of chemotherapy. Additionally, patients completed mobile ePRO questionnaires (EORTC QLQ-C30 and MY20, and brief pain inventory (BPI) scales) using the Medidata application at baseline and after each induction cycle. Activity, sleep, and completed ePRO questionnaire data were automatically synced or transferred to the Medidata Rave database through Medidata Sensorlink technology. In this abstract, we report initial results on the prospective collection of activity measurements. Additional data from the health-related quality-of-life questionnaires and clinical outcomes will be presented at a later date. Results: Between February 2017 and March 2018, 37 patients (19 males and 18 females) enrolled in the study, 20 in cohort A and 17 in cohort B. The mean age was 55 years (range 41-64) for cohort A and 72 years (range 65-82) for cohort B. Treatment regimens included Carfilzomib/Revlimid/Dexamethasone 14 (38%), Velcade/Revlimid/Dexamethasone 10 (27%), Daratumumab/Carfilzomib/Revlimid/Dexamethasone 7 (19%), Cyclophosphamide/Velcade/Dexamethasone 3 (8%), Revlimid/Dexamethasone 2 (5%), and Velcade/Revlimid/Dexamethasone-Lite 1 (3%). Twenty-four patients completed the trial, and 7 remain active. Six patients came off study for the following reasons: lost devices (n=4), intolerable rash during cycle 3 (n=1), and failure to complete the baseline activity period (n=1). Three patients were excluded for incomplete data sets with no baseline data collection at the time of analysis. Fifteen patients were available for data review, including 10 in cohort A and 5 in cohort B. Mean activity was 6,437 steps per 24 hr (range 1,002-12,754) for cohort A versus 3,218.37 steps per 24 hr (range 387-6,155) for cohort B (p < 0.05). Comparing pre- and post-therapy, overall mean activity increased from 5,995 to 6,513 steps per 24 hr for cohort A, an 8.6% increase (p = 0.78), and from 2,249 to 3,420 steps per 24 hr for cohort B, a 52% increase (p = 0.214). We assessed the short-term effect of therapy initiation on activity in NDMM patients by comparing percent changes in activity (steps/24 hr) from the baseline period to the cycle 1 period: 3 patients had a >100% increase, 1 patient had a 50-100% increase, and 11 patients remained within +/-50% of baseline. Conclusion: Electronic mobile wearable device monitoring in symptomatic NDMM patients may be a useful tool for assessing a patient's overall wellness and health while they receive chemotherapy. For three patients, we captured a dramatic increase in activity after initiation of treatment. Overall activity in elderly NDMM patients is decreased compared with younger patients. Mobile wearable monitoring may be an even more useful strategy for tracking elderly and unfit patients who are more prone to side effects, where the balance of response versus quality of life is paramount. Disclosures: Mailankody: Physician Education Resource: Honoraria; Janssen: Research Funding; Takeda: Research Funding; Juno: Research Funding. Hassoun: Oncopeptides AB: Research Funding. Lesokhin: Squibb: Consultancy, Honoraria; Serametrix, inc.: Patents & Royalties: Royalties; Janssen: Research Funding; Bristol-Myers Squibb: Consultancy, Honoraria, Research Funding; Genentech: Research Funding; Takeda: Consultancy, Honoraria. Smith: Celgene: Consultancy, Patents & Royalties: CAR T cell therapies for MM, Research Funding. Shah: Amgen: Research Funding; Janssen: Research Funding. Landgren: Takeda: Consultancy, Membership on an entity's Board of Directors or advisory committees, Research Funding; Celgene: Consultancy, Research Funding; Pfizer: Consultancy; Janssen: Consultancy, Membership on an entity's Board of Directors or advisory committees, Research Funding; Amgen: Consultancy, Research Funding; Merck: Membership on an entity's Board of Directors or advisory committees; Karyopharm: Consultancy. Korde: Amgen: Research Funding.
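The short-term activity comparison reported above is plain arithmetic. A minimal sketch of the percent-change computation and banding, using the cohort B means quoted in the abstract:

```python
def activity_change(baseline_steps, cycle1_steps):
    """Percent change in mean daily steps from baseline to cycle 1,
    banded as in the abstract (>100%, 50-100%, within +/-50%)."""
    pct = 100.0 * (cycle1_steps - baseline_steps) / baseline_steps
    if pct > 100:
        band = ">100% increase"
    elif pct >= 50:
        band = "50-100% increase"
    elif abs(pct) <= 50:
        band = "within +/-50%"
    else:
        band = ">50% decrease"
    return round(pct, 1), band

print(activity_change(2249, 3420))  # cohort B means -> (52.1, '50-100% increase')
```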
16

Wade, G. "Implementing SNOMED CT for Quality Reporting: Avoiding Pitfalls." Applied Clinical Informatics 02, no. 04 (2011): 534–44. http://dx.doi.org/10.4338/aci-2011-10-ra-0056.

Abstract:
Objective: To implement the SNOMED CT electronic specifications for reporting quality measures and to identify critical issues that affect implementation. Background: The Centers for Medicare and Medicaid Services (CMS) have issued the electronic specifications for reporting quality measures, requiring vendors and hospital systems to use standardized data elements in order to provide financial incentives for eligible providers. Methods: The electronic specifications from CMS were downloaded and extracted. All SNOMED CT codes were examined individually as part of the creation of a mapping table for distribution by a vendor for incorporation into electronic health record systems. A qualitative and quantitative evaluation of the SNOMED CT codes was done as a follow-up to the mapping project. Results: A total of 10,643 SNOMED codes were examined for the 44 measures. The approved SNOMED CT code sets contain aberrancies in content such as incomplete IDs, the use of description IDs instead of concept IDs, inactive codes, morphology and observable codes for clinical findings, and the inclusion of non-human content. Conclusion: Implementers of these approved specifications must do additional rigorous review and make edits in order to avoid incorporating errors into their EHR products and systems.
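Several of the listed aberrancies (incomplete IDs, description IDs used where concept IDs are expected) can be caught mechanically, because a SNOMED CT identifier encodes its component type in the two partition digits immediately before the final check digit. A minimal screening sketch; the full Verhoeff check-digit test would catch further errors but is omitted here:

```python
# Partition identifiers per the SNOMED CT SCTID format ("00"/"10" = concept,
# "01"/"11" = description, "02"/"12" = relationship).
PARTITION = {"00": "concept", "01": "description", "02": "relationship",
             "10": "concept", "11": "description", "12": "relationship"}

def screen_sctid(sctid: str) -> str:
    """Flag structurally suspect SNOMED CT IDs (6-18 digits required);
    a Verhoeff check-digit test would add further coverage."""
    if not sctid.isdigit() or not 6 <= len(sctid) <= 18:
        return "incomplete or malformed ID"
    kind = PARTITION.get(sctid[-3:-1])
    if kind is None:
        return "unknown partition identifier"
    if kind != "concept":
        return f"{kind} ID used where a concept ID is expected"
    return "ok"

print(screen_sctid("22298006"))  # a concept ID -> ok
print(screen_sctid("37436014"))  # e.g., a description ID -> flagged
```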
17

Mitchell, Scott, and Sheryl N. Hamilton. "Playing at apocalypse: Reading Plague Inc. in pandemic culture." Convergence: The International Journal of Research into New Media Technologies 24, no. 6 (January 17, 2017): 587–606. http://dx.doi.org/10.1177/1354856516687235.

Abstract:
Plague Inc. is an enduringly popular mobile video game in which players create diseases and attempt to eradicate humanity; it has been downloaded more than 60 million times and been met with largely positive critical reception, with many reviews praising the game as a ‘realistic outbreak simulator’. This article explores Plague Inc. as both an artifact, and productive, of ‘pandemic culture’, a social imaginary that describes how the threat of pandemic increasingly shapes our day-to-day life. Ludic and narrative elements of the game were identified and selected for analysis, along with paratexts surrounding the game. Three aspects of Plague Inc. were used to structure the analysis: its politics of global scale, its viral realism, and its visual culture of contagion. The article examines how the ways in which Plague Inc. articulates ideas about pandemic may not only explain the game’s immense success but also provide insights into public perceptions and popular discourses about disease threats. The article argues that the game is an incomplete text that depends on preexisting familiarity with other disease media. It concludes that the popularity and longevity of Plague Inc., as well as its broader social relevance, can be explained by placing it within the context of public anxieties about vulnerability to infectious diseases.
18

Ye, Xinghuo, Zhihong Yang, Yeqin Jiang, Lan Yu, Rongkai Guo, Yijun Meng, and Chaogang Shao. "sRNATargetDigger: A bioinformatics software for bidirectional identification of sRNA-target pairs with co-regulatory sRNAs information." PLOS ONE 15, no. 12 (December 28, 2020): e0244480. http://dx.doi.org/10.1371/journal.pone.0244480.

Abstract:
Identification of the target genes of microRNAs (miRNAs), trans-acting small interfering RNAs (ta-siRNAs), and small interfering RNAs (siRNAs) is an important step toward understanding their regulatory roles in plants. In recent years, many bioinformatics software packages based on the analysis of small RNA (sRNA) high-throughput sequencing (HTS) and degradome sequencing data have provided strong technical support for large-scale mining of sRNA-target pairs. However, sRNA-target regulation is achieved through a complex network of interactions, since one transcript might be co-regulated by multiple sRNAs and one sRNA may also affect multiple targets. Currently available mining software can find multiple unknown targets of a known sRNA, but it cannot rule out the possibility of co-regulation of the same target by other unknown sRNAs. Hence, the obtained regulatory network may be incomplete. We have developed a new mining tool, sRNATargetDigger, that includes two function modules, "Forward Digger" and "Reverse Digger", which can identify regulatory sRNA-target pairs bidirectionally. Moreover, it can identify unknown sRNAs co-regulating the same target, yielding a more authentic and reliable sRNA-target regulatory network. Upon re-examination of the published sRNA-target pairs in Arabidopsis thaliana, sRNATargetDigger found 170 novel co-regulatory sRNA-target pairs. The software can be downloaded from http://www.bioinfolab.cn/sRNATD.html.
19

Chou, Li-Wei, Kang-Ming Chang, and Ira Puspitasari. "Drug Abuse Research Trend Investigation with Text Mining." Computational and Mathematical Methods in Medicine 2020 (February 1, 2020): 1–8. http://dx.doi.org/10.1155/2020/1030815.

Abstract:
Drug abuse poses great physical and psychological harm to humans, thereby attracting scholarly attention. It often requires experience and time for a researcher just entering this field to find an appropriate method to study the drug abuse issue. It is crucial for researchers to rapidly understand the existing research on a particular topic and be able to propose an effective new research method. Text mining analysis has been widely applied in recent years, and this study integrated the text mining method into a review of drug abuse research. Through searches for keywords related to drug abuse, all related publications were identified and downloaded from PubMed. After removing duplicate and incomplete literature, the retained data were imported for analysis through text mining. A total of 19,843 papers were analyzed, and the text mining technique was used to search for keywords and questionnaire types. The results showed the associations between these questionnaires, with the top five being the Addiction Severity Index (16.44%), the Quality of Life survey (5.01%), the Beck Depression Inventory (3.24%), the Addiction Research Center Inventory (2.81%), and the Profile of Mood States (1.10%). Specifically, the Addiction Severity Index was most commonly used in combination with Quality of Life scales. In conclusion, association analysis is useful for extracting core knowledge. Researchers can learn and visualize the latest research trends.
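The association analysis described above amounts to counting how often instrument names co-occur within the same abstract. A minimal sketch with a hypothetical list of abstracts; the instrument list is abbreviated for illustration:

```python
from collections import Counter
from itertools import combinations

INSTRUMENTS = ["Addiction Severity Index", "Quality of Life",
               "Beck Depression Inventory", "Profile of Mood States"]

def cooccurrence(abstracts):
    """Count pairs of instruments that are mentioned in the same abstract."""
    pairs = Counter()
    for text in abstracts:
        found = [name for name in INSTRUMENTS if name.lower() in text.lower()]
        pairs.update(combinations(sorted(found), 2))
    return pairs

docs = ["... the Addiction Severity Index and a Quality of Life survey were used ..."]
print(cooccurrence(docs).most_common(5))
```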
20

Giommi, P., Y. L. Chang, S. Turriziani, T. Glauch, C. Leto, F. Verrecchia, P. Padovani, et al. "Open Universe survey of Swift-XRT GRB fields: Flux-limited sample of HBL blazars." Astronomy & Astrophysics 642 (October 2020): A141. http://dx.doi.org/10.1051/0004-6361/202037921.

Abstract:
Aims. The sample of serendipitous sources detected in all Swift-XRT images pointing at gamma-ray bursts (GRBs) constitutes the largest existing medium-deep survey of the X-ray sky. To build this dataset we analysed all Swift X-ray images centred on GRBs and observed over a period of 15 years, using automatic tools that do not require any expertise in X-ray astronomy. Besides presenting a new large X-ray survey and a complete sample of blazars, this work is a step toward the ultimate goal of the Open Universe Initiative, which is to enable non-expert people to benefit fully from space science data, possibly extending the potential for scientific discovery, currently confined to a small number of highly specialised teams, to a much larger population. Methods. We used the Swift_deepsky pipeline, encapsulated in a Docker container, to build the largest existing flux-limited and unbiased sample of serendipitous X-ray sources. Swift_deepsky runs on any laptop or desktop computer with a modern operating system. The tool automatically downloads the data and calibration files from the archives, runs the official Swift analysis software, and produces a number of results including images, the list of detected sources, X-ray fluxes, spectral energy distribution data, and spectral slope estimations. Results. We used our source list to build the LogN-LogS of extragalactic sources, which perfectly matches that estimated by other satellites. Combining our survey with multi-frequency data, we selected a complete radio-flux-density-limited sample of high-energy-peaked blazars (HBL). The LogN-LogS built with this dataset confirms that previous samples are incomplete below ∼20 mJy.
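A LogN-LogS curve is the cumulative number of sources brighter than each flux, normalized by the surveyed sky area. A minimal sketch with placeholder fluxes and area, not the survey's actual values:

```python
import numpy as np

def log_n_log_s(fluxes, area_deg2):
    """Cumulative source counts N(>S) per square degree.
    Returns the sorted flux grid S and N(>S), ready for log-log plotting."""
    s = np.sort(np.asarray(fluxes, dtype=float))
    n_gt = np.arange(len(s), 0, -1) / area_deg2  # sources at or above each S
    return s, n_gt

fluxes = [3e-14, 5e-14, 1e-13, 2e-13, 8e-13]  # erg/cm^2/s, illustrative only
s, n = log_n_log_s(fluxes, area_deg2=100.0)
```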
21

Lo Duca, Angelica, and Andrea Marchetti. "Open data for tourism: the case of Tourpedia." Journal of Hospitality and Tourism Technology 10, no. 3 (September 17, 2019): 351–68. http://dx.doi.org/10.1108/jhtt-07-2017-0042.

Abstract:
Purpose: This paper aims to describe Tourpedia, a website about tourism built on open data provided by official government agencies. Tourpedia provides data under a public license. Design/methodology/approach: Tourpedia is built upon a modular architecture, which allows a developer to add a new source of data easily. This is achieved through a simple mapping language, namely the Tourpedia mapping language, which maps the original open data set model to the Tourpedia data model. Findings: Tourpedia contains more than 70,000 accommodations, downloaded from open data provided by Italian, French and Spanish regions. Research limitations/implications: Tourpedia presents some limitations. First, extracted data are not homogeneous and are often incomplete or wrong. Second, Tourpedia contains only accommodations. Finally, at the moment Tourpedia covers only some Italian, French and Spanish regions. Practical implications: The most important implication of Tourpedia concerns the construction of a single access point for all Italian, French and Spanish open data about accommodations. In addition, a simple mechanism for the integration of new sources of open data is defined. Social implications: The current version of Tourpedia also opens the road to three new possible social scenarios. First, Tourpedia could be transformed into an open source of updated information about tourism. Second, Tourpedia could be empowered to support tours, which include tourist attractions and/or events, and suggest the nearest accommodations. Finally, Tourpedia may help tourists to discover unknown places. Originality/value: Tourpedia constitutes an access point for dataset providers, application developers and tourists because it provides a unique website.
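The Tourpedia mapping language itself is not specified in the abstract, but its role (renaming a source dataset's fields onto the Tourpedia data model) can be illustrated with a dictionary-driven transform; the field names here are assumptions, not the actual schemas:

```python
# Hypothetical mapping from one regional open data schema to a common model.
MAPPING = {"denominazione": "name", "indirizzo": "address",
           "comune": "city", "tipologia": "category"}

def to_tourpedia(record, mapping=MAPPING):
    """Rename source fields to the target data model, dropping unmapped ones."""
    return {target: record[src] for src, target in mapping.items() if src in record}

src = {"denominazione": "Hotel Roma", "indirizzo": "Via Garibaldi 1",
       "comune": "Pisa", "tipologia": "albergo"}
print(to_tourpedia(src))
```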
22

Zhang, Qiang, Qiangqiang Yuan, Jie Li, Yuan Wang, Fujun Sun, and Liangpei Zhang. "Generating seamless global daily AMSR2 soil moisture (SGD-SM) long-term products for the years 2013–2019." Earth System Science Data 13, no. 3 (March 31, 2021): 1385–401. http://dx.doi.org/10.5194/essd-13-1385-2021.

Abstract:
High-quality, long-term soil moisture products are significant for hydrologic monitoring and agricultural management. However, the acquired daily Advanced Microwave Scanning Radiometer 2 (AMSR2) soil moisture products are incomplete over global land (with only about 30%-80% coverage), due to satellite orbit coverage and the limitations of soil moisture retrieval algorithms. To solve this inevitable problem, we develop a novel spatio-temporal partial convolutional neural network (CNN) for AMSR2 soil moisture product gap-filling. Through the proposed framework, we generate seamless global daily (SGD) AMSR2 long-term soil moisture products from 2013 to 2019. To further validate the effectiveness of these products, three verification methods are used: (1) in situ validation, (2) time-series validation, and (3) simulated missing-region validation. Results show that the seamless global daily soil moisture products are in reliable agreement with the selected in situ values. The evaluation indexes of the reconstructed (original) dataset are a correlation coefficient (R) of 0.685 (0.689), root-mean-squared error (RMSE) of 0.097 (0.093), and mean absolute error (MAE) of 0.079 (0.077). The temporal consistency of the reconstructed daily soil moisture products is ensured with the original time-series distribution of valid values. The spatial continuity of the reconstructed regions is in accordance with the spatial information (R: 0.963-0.974, RMSE: 0.065-0.073, and MAE: 0.044-0.052). This dataset can be downloaded at https://doi.org/10.5281/zenodo.4417458 (Zhang et al., 2021).
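The gap-filling network builds on partial convolutions, which renormalize each convolution window by the fraction of valid (observed) pixels and propagate an updated mask. A minimal single-channel NumPy sketch of that operation, independent of the authors' full spatio-temporal architecture:

```python
import numpy as np

def partial_conv(x, mask, w):
    """One partial-convolution pass: convolve only valid pixels,
    rescale by window validity, and update the mask."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x * mask, pad)   # zero out and pad the missing pixels
    mp = np.pad(mask, pad)
    out = np.zeros_like(x)
    new_mask = np.zeros_like(mask)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            valid = mp[i:i + k, j:j + k].sum()
            if valid > 0:  # renormalize by the share of observed pixels
                out[i, j] = (xp[i:i + k, j:j + k] * w).sum() * (k * k / valid)
                new_mask[i, j] = 1.0
    return out, new_mask

x = np.random.rand(8, 8)
mask = (np.random.rand(8, 8) > 0.4).astype(float)  # 1 = observed, 0 = gap
filled, m = partial_conv(x, mask, np.ones((3, 3)) / 9.0)
```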
23

Reibe, Saskia, Marit Hjorth, Mark A. Febbraio, and Martin Whitham. "GeneXX: an online tool for the exploration of transcript changes in skeletal muscle associated with exercise." Physiological Genomics 50, no. 5 (May 1, 2018): 376–84. http://dx.doi.org/10.1152/physiolgenomics.00127.2017.

Abstract:
Exercise stimulates a wide array of biological processes, but the mechanisms involved are incompletely understood. Many previous studies have adopted transcriptomic analyses of skeletal muscle to address particular research questions, a process that ultimately results in the collection of large amounts of publicly available data that has not been fully integrated or interrogated. To maximize the use of these available transcriptomic exercise data sets, we have downloaded and reanalyzed them and formulated the data into a searchable online tool, geneXX. GeneXX is highly intuitive and free and provides immediate information regarding the response of a transcript of interest to exercise in skeletal muscle. To demonstrate its utility, we carried out a meta-analysis on the included data sets and show transcript changes in skeletal muscle that persist regardless of sex, exercise mode, and duration, some of which have had minimal attention in the context of exercise. We also demonstrate how geneXX can be used to formulate novel hypotheses on the complex effects of exercise, using preliminary data already generated. This resource represents a valuable tool for researchers with interests in human skeletal muscle adaptation to exercise.
24

Ramosaj, Burim, Lubna Amro, and Markus Pauly. "A cautionary tale on using imputation methods for inference in matched-pairs design." Bioinformatics 36, no. 10 (February 12, 2020): 3099–106. http://dx.doi.org/10.1093/bioinformatics/btaa082.

Abstract:
Motivation: Imputation procedures in biomedical fields have become standard statistical practice, since further analyses can be conducted ignoring the former presence of missing values. In particular, non-parametric imputation schemes like the random forest have shown favorable imputation performance compared to the more traditionally used MICE procedure. However, their effect on valid statistical inference has not been analyzed so far. This article closes this gap by investigating their validity for inferring mean differences in incompletely observed pairs, while opposing them to a recent approach that works only with the given observations at hand. Results: Our findings indicate that machine-learning schemes for (multiply) imputing missing values may inflate type I error or result in comparably low power in small-to-moderate matched pairs, even after modifying the test statistics using Rubin's multiple imputation rule. In addition to an extensive simulation study, an illustrative data example from a breast cancer gene study is considered. Availability and implementation: The corresponding R code can be accessed through the authors, and the gene expression data can be downloaded at www.gdac.broadinstitute.org. Supplementary information: Supplementary data are available at Bioinformatics online.
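Rubin's combining rule mentioned above pools the m imputed estimates and inflates the variance by the between-imputation spread. A minimal sketch with illustrative numbers:

```python
import numpy as np

def rubin_pool(estimates, variances):
    """Pool m imputed estimates by Rubin's rules: returns the pooled
    estimate and total variance T = W + (1 + 1/m) * B."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    q_bar = q.mean()       # pooled point estimate
    w = u.mean()           # within-imputation variance
    b = q.var(ddof=1)      # between-imputation variance
    return q_bar, w + (1 + 1 / m) * b

# Mean differences and their variances from m = 5 imputed datasets (made up).
print(rubin_pool([0.42, 0.38, 0.45, 0.40, 0.41],
                 [0.010, 0.012, 0.011, 0.009, 0.010]))
```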
25

Mirus, Benjamin B., Eric S. Jones, Rex L. Baum, Jonathan W. Godt, Stephen Slaughter, Matthew M. Crawford, Jeremy Lancaster, et al. "Landslides across the USA: occurrence, susceptibility, and data limitations." Landslides 17, no. 10 (May 29, 2020): 2271–85. http://dx.doi.org/10.1007/s10346-020-01424-4.

Full text
Abstract:
Abstract Detailed information about landslide occurrence is the foundation for advancing process understanding, susceptibility mapping, and risk reduction. Despite the recent revolution in digital elevation data and remote sensing technologies, landslide mapping remains resource intensive. Consequently, a modern, comprehensive map of landslide occurrence across the United States (USA) has not been compiled. As a first step toward this goal, we present a national-scale compilation of existing, publicly available landslide inventories. This geodatabase can be downloaded in its entirety or viewed through an online, searchable map, with parsimonious attributes and direct links to the contributing sources with additional details. The mapped spatial pattern and concentration of landslides are consistent with prior characterization of susceptibility within the conterminous USA, with some notable exceptions on the West Coast. Although the database is evolving and known to be incomplete in many regions, it confirms that landslides do occur across the country, thus highlighting the importance of our national-scale assessment. The map illustrates regions where high-quality mapping has occurred and, in contrast, where additional resources could improve confidence in landslide characterization. For example, borders between states and other jurisdictions are quite apparent, indicating the variation in approaches to data collection by different agencies and disparity between the resources dedicated to landslide characterization. Further investigations are needed to better assess susceptibility and to determine whether regions with high relief and steep topography, but without mapped landslides, require further landslide inventory mapping. Overall, this map provides a new resource for accessing information about known landslides across the USA.
APA, Harvard, Vancouver, ISO, and other styles
27

Devkota, Kapil, James M. Murphy, and Lenore J. Cowen. "GLIDE: combining local methods and diffusion state embeddings to predict missing interactions in biological networks." Bioinformatics 36, Supplement_1 (July 1, 2020): i464—i473. http://dx.doi.org/10.1093/bioinformatics/btaa459.

Full text
Abstract:
Abstract Motivation One of the core problems in the analysis of biological networks is the link prediction problem. In particular, existing interaction networks are noisy and incomplete snapshots of the true network, with many true links missing because those interactions have not yet been experimentally observed. Methods to predict missing links have been more extensively studied for social than for biological networks; it was recently argued that there is some special structure in protein–protein interaction (PPI) network data that might allow alternative methods to outperform the best methods for social networks. Based on a generalization of the diffusion state distance, we design a new embedding-based link prediction method called global and local integrated diffusion embedding (GLIDE). GLIDE is designed to effectively capture global network structure, combined with alternative network type-specific customized measures that capture local network structure. We test GLIDE on a collection of three recently curated human biological networks derived from the 2016 DREAM disease module identification challenge as well as a classical version of the yeast PPI network in rigorous cross-validation experiments. Results We indeed find that different local network structure is dominant in different types of biological networks. We find that the simple local network measures are dominant in the highly connected network core between hub genes, but that GLIDE’s global embedding measure adds value in the rest of the network. For example, we make GLIDE-based link predictions from genes known to be involved in Crohn’s disease, to genes that are not known to have an association, and make some new predictions, finding support in other network data and the literature. Availability and implementation GLIDE can be downloaded at https://bitbucket.org/kap_devkota/glide. Supplementary information Supplementary data are available at Bioinformatics online.
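GLIDE itself combines a global diffusion-based embedding with local measures; the sketch below illustrates only the local side, using NetworkX's built-in neighborhood-similarity scores to rank candidate missing links in a toy PPI-like graph. It is a generic illustration of local link-prediction scoring under assumed data, not the GLIDE algorithm.

```python
# Sketch: ranking candidate missing edges by local similarity scores.
# Generic link prediction on a toy graph -- not the GLIDE method itself.
import networkx as nx

# A tiny, hypothetical interaction network.
G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"),
              ("B", "D"), ("C", "D"), ("D", "E")])

# Score all non-adjacent pairs with two classic local measures.
candidates = list(nx.non_edges(G))
jaccard = {(u, v): s for u, v, s in nx.jaccard_coefficient(G, candidates)}
adamic = {(u, v): s for u, v, s in nx.adamic_adar_index(G, candidates)}

# Rank candidate links by Jaccard score, breaking ties with Adamic-Adar.
ranked = sorted(candidates, key=lambda e: (jaccard[e], adamic[e]), reverse=True)
for u, v in ranked:
    print(f"{u}-{v}: jaccard={jaccard[(u, v)]:.3f}, adamic_adar={adamic[(u, v)]:.3f}")
```

The abstract's finding can be read in these terms: scores like these work well between hub genes with many shared neighbors, while a global embedding adds value where local neighborhoods are sparse.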
APA, Harvard, Vancouver, ISO, and other styles
28

Zhou, Qiang, Chenglin Jia, Wenxue Ma, Yue Cui, Xiaoyu Jin, Dong Luo, Xueyang Min, and Zhipeng Liu. "MYB transcription factors in alfalfa (Medicago sativa): genome-wide identification and expression analysis under abiotic stresses." PeerJ 7 (September 17, 2019): e7714. http://dx.doi.org/10.7717/peerj.7714.

Full text
Abstract:
Background Alfalfa is the most widely cultivated forage legume and one of the most economically valuable crops in the world. Its survival and production are often hampered by environmental changes. However, there are few studies on stress-resistance genes in alfalfa because of its incomplete genomic information and scarce expression profile data. The MYB proteins are characterized by a highly conserved DNA-binding domain; the family is large, functionally diverse, and represented in all eukaryotes. The role of MYB proteins in plant development is essential; they function in diverse biological processes, including stress and defense responses, and seed and floral development. Studies on the MYB gene family have been reported in several species, but they have not been comprehensively analyzed in alfalfa. Methods To identify more comprehensive MYB transcription factor family genes, the sequences of 168 Arabidopsis thaliana, 430 Glycine max, 185 Medicago truncatula, and 130 Oryza sativa MYB proteins were downloaded from the Plant Transcription Factor Database. These sequences were used as queries in a BLAST search against the M. sativa proteome sequences provided by the Noble Research Institute. Results In the present study, a total of 265 MsMYB proteins were obtained, including 50 R1-MYB, 186 R2R3-MYB, 26 R1R2R3-MYB, and three atypical-MYB proteins. These predicted MsMYB proteins were divided into 12 subgroups by phylogenetic analysis, and gene ontology (GO) analysis indicated that most of the MsMYB genes are involved in various biological processes. The expression profiles and quantitative real-time PCR analysis indicated that some MsMYB genes might play a crucial role in the response to abiotic stresses. Additionally, a total of 170 and 914 predicted protein–protein and protein–DNA interactions were obtained, respectively. The interactions between MsMYB043 and MSAD320162, MsMYB253 and MSAD320162, and MsMYB253 and MSAD308489 were confirmed by a yeast two-hybrid system. This work provides information on the MYB family in alfalfa that was previously lacking and might promote the cultivation of stress-resistant alfalfa.
APA, Harvard, Vancouver, ISO, and other styles
29

Hu, J., H. Zhang, Q. Ying, S. H. Chen, F. Vandenberghe, and M. J. Kleeman. "Long-term particulate matter modeling for health effects studies in California – Part 1: Model performance on temporal and spatial variations." Atmospheric Chemistry and Physics Discussions 14, no. 14 (August 14, 2014): 20997–1036. http://dx.doi.org/10.5194/acpd-14-20997-2014.

Full text
Abstract:
Abstract. For the first time, a decadal (9 years from 2000 to 2008) air quality model simulation with 4 km horizontal resolution and daily time resolution has been conducted in California to provide air quality data for health effects studies. Model predictions are compared to measurements to evaluate the accuracy of the simulation with an emphasis on spatial and temporal variations that could be used in epidemiology studies. Better model performance is found at longer averaging times, suggesting that model results with averaging times ≥ 1 month should be the first to be considered in epidemiological studies. The UCD/CIT model predicts spatial and temporal variations in the concentrations of O3, PM2.5, EC, OC, nitrate, and ammonium that meet standard modeling performance criteria when compared to monthly-averaged measurements. Predicted sulfate concentrations do not meet target performance metrics due to missing sulfur sources in the emissions. Predicted seasonal and annual variations of PM2.5, EC, OC, nitrate, and ammonium have mean fractional biases that meet the model performance criteria in 95%, 100%, 71%, 73%, and 92% of the simulated months, respectively. The base dataset provides an improvement for predicted population exposure to PM concentrations in California compared to exposures estimated by central site monitors operated one day out of every 3 days at a few urban locations. Uncertainties in the model predictions arise from several issues. Incomplete understanding of secondary organic aerosol formation mechanisms leads to OC bias in the model results in summertime but does not affect OC predictions in winter when concentrations are typically highest. The CO and NO (species dominated by mobile emissions) results reveal temporal and spatial uncertainties associated with the mobile emissions generated by the EMFAC 2007 model. The WRF model tends to over-predict wind speed during stagnation events, leading to under-predictions of high PM concentrations, usually in winter months. The WRF model also generally under-predicts relative humidity, resulting in less particulate nitrate formation especially during winter months. These issues will be improved in future studies. All model results included in the current manuscript can be downloaded free of charge at http://faculty.engineering.ucdavis.edu/kleeman/.
APA, Harvard, Vancouver, ISO, and other styles
30

Hu, J., H. Zhang, Q. Ying, S. H. Chen, F. Vandenberghe, and M. J. Kleeman. "Long-term particulate matter modeling for health effect studies in California – Part 1: Model performance on temporal and spatial variations." Atmospheric Chemistry and Physics 15, no. 6 (March 30, 2015): 3445–61. http://dx.doi.org/10.5194/acp-15-3445-2015.

Full text
Abstract:
Abstract. For the first time, a ~decadal (9 years from 2000 to 2008) air quality model simulation with 4 km horizontal resolution over populated regions and daily time resolution has been conducted for California to provide air quality data for health effect studies. Model predictions are compared to measurements to evaluate the accuracy of the simulation with an emphasis on spatial and temporal variations that could be used in epidemiology studies. Better model performance is found at longer averaging times, suggesting that model results with averaging times ≥ 1 month should be the first to be considered in epidemiological studies. The UCD/CIT model predicts spatial and temporal variations in the concentrations of O3, PM2.5, elemental carbon (EC), organic carbon (OC), nitrate, and ammonium that meet standard modeling performance criteria when compared to monthly-averaged measurements. Predicted sulfate concentrations do not meet target performance metrics due to missing sulfur sources in the emissions. Predicted seasonal and annual variations of PM2.5, EC, OC, nitrate, and ammonium have mean fractional biases that meet the model performance criteria in 95, 100, 71, 73, and 92% of the simulated months, respectively. The base data set provides an improvement for predicted population exposure to PM concentrations in California compared to exposures estimated by central site monitors operated 1 day out of every 3 days at a few urban locations. Uncertainties in the model predictions arise from several issues. Incomplete understanding of secondary organic aerosol formation mechanisms leads to OC bias in the model results in summertime but does not affect OC predictions in winter when concentrations are typically highest. The CO and NO (species dominated by mobile emissions) results reveal temporal and spatial uncertainties associated with the mobile emissions generated by the EMFAC 2007 model. The WRF model tends to overpredict wind speed during stagnation events, leading to underpredictions of high PM concentrations, usually in winter months. The WRF model also generally underpredicts relative humidity, resulting in less particulate nitrate formation, especially during winter months. These limitations must be recognized when using data in health studies. All model results included in the current manuscript can be downloaded free of charge at http://faculty.engineering.ucdavis.edu/kleeman/.
APA, Harvard, Vancouver, ISO, and other styles
31

Zhuang, Yan, Hailong Wang, Da Jiang, Ying Li, Caijuan Tian, Zanmei Xu, Meina Su, et al. "Multigene mutation signatures in colorectal cancer patients: Predict for the diagnosis, pathological classification, staging and prognosis." Journal of Clinical Oncology 38, no. 15_suppl (May 20, 2020): e16113-e16113. http://dx.doi.org/10.1200/jco.2020.38.15_suppl.e16113.

Full text
Abstract:
e16113 Background: As with most cancers, early diagnosis of colorectal cancer (CRC) contributes to a good prognosis. Recently, some studies indicated that mutation gene signatures perform well in predicting treatment response and prognosis in CRC patients. However, incomplete understanding of the genotype is not conducive to proper treatment. The objective was to give further genetic insights into CRC and provide more comprehensive references for clinical practice. Methods: In the training group, the whole genome sequencing (WGS), clinical, and demographic data of 531 patients were downloaded from The Cancer Genome Atlas (TCGA). The validation group contained 53 patients, collected at Tianjin Medical University Cancer Institute and Hospital (Tianjin, China) from April 2014 to November 2018. Fresh tissues collected within 24 hours after operation were used to extract genomic DNA, which was then sequenced with targeted next-generation sequencing (NGS) technology to examine somatic mutations and analyze relevant genes with respect to clinical indicators. The correlation between gene variation distribution and various indices such as cancer type, stage, overall survival, sex, age, and race was evaluated. Results: A total of 44 mutant genes with mutation frequencies above 5% were found in both TCGA cases and validation cases. Mutations of TP53, APC, KRAS, BRAF and ATM covered 97.55% of the TCGA population and 83.02% of validation patients, were proved to be associated with the development of CRC, and could be used as a diagnostic signature. Importantly, mutations of TP53, PIK3CA, FAT4, FMN2 and TRRAP had a remarkable difference between early (I/II) and advanced (III/IV) stage patients (P < 0.0001). Besides, we also confirmed that PIK3CA, LRP1B, FAT4 and ROS1 formed a mutated gene signature for prognosis: mutation of LRP1B portended a higher risk of recurrence and shorter progression-free survival (PFS), while mutation of FAT4 portended a lower risk of recurrence and longer PFS, so the signature could predict the recurrence and survival of CRC patients. Conclusions: We have identified gene mutation signatures related to the diagnosis, pathological classification, staging and prognosis of CRC, which provides further insights into the study of the CRC genotype and may help to further improve personalized diagnosis and treatment.
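The first analysis step here — tabulating per-gene mutation frequencies across a cohort — is a simple aggregation over a mutation table. A minimal Python sketch follows, assuming a hypothetical MAF-like file with one row per somatic mutation; the file name is illustrative, while the column names follow the TCGA MAF convention (Hugo_Symbol, Tumor_Sample_Barcode).

```python
# Sketch: per-gene mutation frequency across a cohort from a MAF-like table.
# File name is hypothetical; column names follow the TCGA MAF convention.
import pandas as pd

maf = pd.read_csv("cohort_mutations.maf", sep="\t", comment="#",
                  usecols=["Hugo_Symbol", "Tumor_Sample_Barcode"])

n_patients = maf["Tumor_Sample_Barcode"].nunique()

# Count each gene at most once per patient, then convert to cohort frequency.
per_gene = (maf.drop_duplicates()
               .groupby("Hugo_Symbol")["Tumor_Sample_Barcode"]
               .nunique())
freq = (per_gene / n_patients).sort_values(ascending=False)

# Keep genes mutated in more than 5% of patients, as in the abstract.
signature = freq[freq > 0.05]
print(signature.head(10))
```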
APA, Harvard, Vancouver, ISO, and other styles
32

Tana, Jonas Christoffer, Jyrki Kettunen, Emil Eirola, and Heikki Paakkonen. "Diurnal Variations of Depression-Related Health Information Seeking: Case Study in Finland Using Google Trends Data." JMIR Mental Health 5, no. 2 (May 23, 2018): e43. http://dx.doi.org/10.2196/mental.9152.

Full text
Abstract:
Background Some of the temporal variations and clock-like rhythms that govern several different health-related behaviors can be traced in near real-time with the help of search engine data. This is especially useful when studying phenomena where little or no traditional data exist. One specific area where traditional data are incomplete is the study of diurnal mood variations, or daily changes in individuals’ overall mood state in relation to depression-like symptoms. Objective The objective of this exploratory study was to analyze diurnal variations for interest in depression on the Web to discover hourly patterns of depression interest and help seeking. Methods Hourly query volume data for 6 depression-related queries in Finland were downloaded from Google Trends in March 2017. A continuous wavelet transform (CWT) was applied to the hourly data to focus on the diurnal variation. Longer term trends and noise were also eliminated from the data to extract the diurnal variation for each query term. An analysis of variance was conducted to determine the statistical differences between the distributions of each hour. Data were also trichotomized and analyzed in 3 time blocks to make comparisons between different time periods during the day. Results Search volumes for all depression-related query terms showed a unimodal regular pattern during the 24 hours of the day. All queries feature clear peaks during the nighttime hours around 11 PM to 4 AM and troughs between 5 AM and 10 PM. In the means of the CWT-reconstructed data, the differences in nighttime and daytime interest are evident, with a difference of 37.3 percentage points (pp) for the term “Depression,” 33.5 pp for “Masennustesti,” 30.6 pp for “Masennus,” 12.8 pp for “Depression test,” 12.0 pp for “Masennus testi,” and 11.8 pp for “Masennus oireet.” The trichotomization showed peaks in the first time block (00.00 AM-7.59 AM) for all 6 terms. The search volumes then decreased significantly during the second time block (8.00 AM-3.59 PM) for the terms “Masennus oireet” (P<.001), “Masennus” (P=.001), “Depression” (P=.005), and “Depression test” (P=.004). Higher search volumes for the terms “Masennus” (P=.14), “Masennustesti” (P=.07), and “Depression test” (P=.10) were present between the second and third time blocks. Conclusions Help seeking for depression has clear diurnal patterns, with significant rise in depression-related query volumes toward the evening and night. Thus, search engine query data support the notion of the evening-worse pattern in diurnal mood variation. Information on the timely nature of depression-related interest on an hourly level could improve the chances for early intervention, which is beneficial for positive health outcomes.
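The hourly analysis in this study reduces to a familiar pattern: group query volumes by hour of day, compare the distributions, and test the three time blocks. A minimal Python sketch of that logic follows, assuming a hypothetical CSV of hourly Google Trends volumes; note that the authors additionally detrended the series with a continuous wavelet transform before these comparisons, which is omitted here.

```python
# Sketch: diurnal profile and time-block comparison for hourly search volumes.
# Assumes a hypothetical file 'hourly_trends.csv' with columns: timestamp, volume.
import pandas as pd
from scipy.stats import f_oneway

df = pd.read_csv("hourly_trends.csv", parse_dates=["timestamp"])
df["hour"] = df["timestamp"].dt.hour

# Mean diurnal profile: average volume at each hour of the day.
profile = df.groupby("hour")["volume"].mean()
print(profile.idxmax(), "is the peak hour;", profile.idxmin(), "is the trough.")

# One-way ANOVA across the 24 hourly distributions.
groups = [g["volume"].values for _, g in df.groupby("hour")]
stat, p = f_oneway(*groups)
print(f"ANOVA across hours: F={stat:.2f}, p={p:.4g}")

# Trichotomize into the paper's three 8-hour blocks and compare them.
blocks = pd.cut(df["hour"], bins=[-1, 7, 15, 23],
                labels=["00-08", "08-16", "16-24"])
block_groups = [g["volume"].values for _, g in df.groupby(blocks)]
print(f_oneway(*block_groups))
```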
APA, Harvard, Vancouver, ISO, and other styles
33

Whittle, Alasdair, Don Brothwell, Rachel Cullen, Neville Gardner, and M. P. Kerney. "Wayland's Smithy, Oxfordshire: Excavations at the Neolithic Tomb in 1962–63 by R. J. C. Atkinson and S. Piggott." Proceedings of the Prehistoric Society 57, no. 2 (1991): 61–101. http://dx.doi.org/10.1017/s0079497x00004515.

Full text
Abstract:
Wayland's Smithy, on the north scarp of the downs above the Vale of the White Horse, is a two-phase Neolithic tomb. It has been a recognized feature of the historic landscape since at least the 10th century AD. It was recorded by Aubrey and later antiquaries, and continued to be of interest in the 19th century. It was amongst the first monuments to be protected by scheduling from 1882. The first excavations in 1919–20 were haphazardly organized and poorly recorded, but served to confirm, as suggested by Akerman and Thurnam, that the stone terminal chamber was transepted, to show that it had held burials, and to indicate the likely existence of an earlier structural phase.Further excavations took place in 1962–63 to explore the monument more and restore it for better presentation. The excavations revealed a two-phase monument. Wayland's Smithy I is a small oval barrow, defined by flanking ditches, an oval kerb, and a low chalk and sarsen barrow. It contains a mortuary structure defined by large pits which held posts of split trunks, a pavement, and opposed linear cairns of sarsen. This has been seen as the remains of a pitched and ridged mortuary tent, in the manner proposed also for the structure under the Fussell's Lodge long barrow, but in the light of ensuing debate and of subsequent discoveries elsewhere, it can also be seen as an embanked, box-like structure, perhaps with a flat wooden roof. This structure contained the remains of at least fourteen human skeletons, in varying states of completeness. The burial rite may have included primary burial or exposure elsewhere, but some at least of the bodies could have been deposited directly into the mortuary structure, and subsequent circulation or removal of bones cannot be discounted. Little silt accumulated in the ditches of phase I before the construction of phase II, and a charcoal sample from this interval gave a date of 3700–3390 BC.Wayland's Smithy II consists of a low sarsen-kerbed trapezoidal barrow, with flanking ditches, which follows the north–south alignment of phase I. At the south end there was a façade of larger sarsen stones, from which ran back a short passage leading to a transepted chamber, roofed with substantial capstones. This could have risen above the surrounding barrow. The excavations of 1919–20 revealed the presence of incomplete human burials in the west transept; the chamber had probably already been disturbed. The excavations of 1962–63 revealed further structural detail of the surrounds of the chamber, including a sarsen cairn piled in front and around it; deposits of calcium carbonate well up the walls of the chamber could be taken to suggest the former existence of chalk rubble blocking, in the manner of the West Kennet long barrow.The monuments were built over a thin chalk soil which had been a little disturbed. The molluscan evidence shows open surroundings. Molluscan samples from the ditch of Wayland's Smithy II show subsequent regeneration of woodland.Later activity on the site took the form of field ditches and lynchets, part of locally extensive field systems in the Iron Age and Romano-British period. Molluscan samples show again open country. There is evidence for disturbance of the tomb in late prehistoric and Roman times, and the denudation of the barrow had probably largely been effected by the end of the Roman era.Wayland's Smithy provides important evidence for the sequence and development of Neolithic mortuary structures and burials. 
It is possible to suggest a gradual development for the structures of Wayland's Smithy I, in which opposed pits and substantial posts were incorporated into a box-like, linear mortuary structure, which in turn was incorporated into a small barrow. The subsequent construction of Wayland's Smithy II has become a classic example of the succession from small to large, and fits the late date of tombs with transepted chambers suggested by recent study of other sites. The nature of the circumstances surrounding this transformation remains unclear. The burials of phase I suggest the necessity of revising current notions about the ubiquity of secondary disposal in mortuary structures and tombs. In situ transformations suggest a very active concern with the dead, and offset the non-monumental character of the primary mortuary structure. In the relative absence of other detailed local evidence it is hard to relate the site to its local context, though comparisons can be drawn with the sequences of the neighbouring upper Thames valley and the upper Kennet valley and surrounding downland.
APA, Harvard, Vancouver, ISO, and other styles
34

Boguslav, Mayla R., Nourah M. Salem, Elizabeth K. White, Sonia M. Leach, and Lawrence E. Hunter. "Identifying and classifying goals for scientific knowledge." Bioinformatics Advances 1, no. 1 (January 1, 2021). http://dx.doi.org/10.1093/bioadv/vbab012.

Full text
Abstract:
Abstract Motivation Science progresses by posing good questions, yet work in biomedical text mining has not focused on them much. We propose a novel idea for biomedical natural language processing: identifying and characterizing the questions stated in the biomedical literature. Formally, the task is to identify and characterize statements of ignorance, statements where scientific knowledge is missing or incomplete. The creation of such technology could have many significant impacts, from the training of PhD students to ranking publications and prioritizing funding based on particular questions of interest. The work presented here is intended as the first step towards these goals. Results We present a novel ignorance taxonomy driven by the role statements of ignorance play in research, identifying specific goals for future scientific knowledge. Using this taxonomy and reliable annotation guidelines (inter-annotator agreement above 80%), we created a gold standard ignorance corpus of 60 full-text documents from the prenatal nutrition literature with over 10 000 annotations and used it to train classifiers that achieved over 0.80 F1 scores. Availability and implementation Corpus and source code freely available for download at https://github.com/UCDenver-ccp/Ignorance-Question-Work. The source code is implemented in Python.
APA, Harvard, Vancouver, ISO, and other styles
35

Herliyani, Elly, Jajang Suryana, and Ketut Nala Hari Wardana. "ANALISIS VISUAL GRAPHICAL USER INTERFACE (GUI) WEBSITE UNIVERSITAS NEGERI EKS. IKIP: BAHAN PENGEMBANGAN MATERI AJAR DESAIN KOMUNIKASI VISUAL BERBASIS PENDIDIKAN KARAKTER." PRASI 12, no. 02 (December 26, 2017). http://dx.doi.org/10.23887/prasi.v12i02.13923.

Full text
Abstract:
This is a descriptive analysis study. The data examined were the visual and functional conditions of the interface content (GUI, graphical user interface) of the websites of the state universities that were formerly IKIP: UPI, UNY, UNNES, UNESA, and UM. In the January–June 2014 ranking period of Webometrics and 4ICU, these five universities were in the national top 20. The UNDIKSHA website was used in the comparative analysis of the materials to provide input concerning the condition of the UNDIKSHA website. The comparison results will be used in developing the materials of the Visual Communication Design course. The results of the research showed that the interfaces of the six university websites presented an attractive view for visitors. Menus and submenus are provided on the interface pages, all with links to the respective faculty subsites, web journals, and the database of the lecturers' scientific work. In line with one of the ranking patterns of Webometrics and 4ICU, it seems that the presentation of scientific reports (e.g. research results, scientific articles, and the dissemination of scientific results) still received too little attention from some of the web managers studied. The service provided and the direct-download pattern applied to any information accessed by a visitor were regarded as unsatisfying to visitors, and the information provided was likely incomplete.
APA, Harvard, Vancouver, ISO, and other styles
36

Wu, Hoi-Yan, Kwun-Tin Chan, Grace Wing-Chiu But, and Pang-Chui Shaw. "Assessing the reliability of medicinal Dendrobium sequences in GenBank for botanical species identification." Scientific Reports 11, no. 1 (February 9, 2021). http://dx.doi.org/10.1038/s41598-021-82385-z.

Full text
Abstract:
Abstract DNA-based methods are a promising tool for species identification and are widely used in various fields. The DNA barcoding method has already been included in different pharmacopoeias for the identification of medicinal materials or botanicals. The accuracy and validity of DNA-based methods rely on the accuracy and taxonomic reliability of the DNA sequences in the database they are compared against. Here we evaluated the annotation quality and taxonomic reliability of selected barcode loci (rbcL, matK, psbA-trnH, trnL-trnF and ITS) of 41 medicinal Dendrobium species downloaded from GenBank. Annotations of most accessions are incomplete. Only 53.06% of the 2041 accessions downloaded contain a reference to a voucher specimen. Only 31.60% and 4.8% of the entries are annotated with country of origin and collector or assessor, respectively. The taxonomic reliability of the sequences was evaluated by a Megablast search based on similarity to sequences submitted by other research groups. A small number of sequences (211, 7.14%) were regarded as highly doubtful. Moreover, 10 out of 60 complete chloroplast genomes contain highly doubtful sequences. Our findings suggest that sequences from GenBank should be used with caution for species-level identification. The scientific community should provide more of the important information regarding the identity and traceability of samples when depositing sequences to public databases.
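The annotation audit described here — counting how many GenBank records carry a voucher, country of origin, or collector — can be reproduced with Biopython. Below is a minimal sketch assuming a hypothetical local GenBank-format file of downloaded Dendrobium accessions; the file name is illustrative, and GenBank stores these fields as qualifiers on the 'source' feature.

```python
# Sketch: audit annotation completeness of GenBank records with Biopython.
# Assumes a hypothetical local file 'dendrobium.gb' of downloaded accessions.
from Bio import SeqIO

qualifiers_of_interest = ["specimen_voucher", "country", "collected_by"]
counts = {q: 0 for q in qualifiers_of_interest}
total = 0

for record in SeqIO.parse("dendrobium.gb", "genbank"):
    total += 1
    # Metadata such as voucher and origin live on the 'source' feature.
    for feature in record.features:
        if feature.type == "source":
            for q in qualifiers_of_interest:
                if q in feature.qualifiers:
                    counts[q] += 1
            break  # one source feature per record is typical

for q, n in counts.items():
    print(f"{q}: {n}/{total} records ({100 * n / max(total, 1):.2f}%)")
```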
APA, Harvard, Vancouver, ISO, and other styles
37

Vizcaíno Hebert Leonidas, Atencio, Tintín Perdomo Verónica Paulina, Caiza Caizabuano José Rubén, and Caicedo Altamirano Fernando Sebastián. "Methods used in mobile applications for the diagnosis of hearing loss: A systematic mapping study." KnE Engineering, January 8, 2020. http://dx.doi.org/10.18502/keg.v5i1.5923.

Full text
Abstract:
Hearing loss is one of the most common health problems today; it can appear at any age and its causes are varied. In order to prevent it, or to adapt to the changes brought about by hearing impairment, it is necessary to diagnose it in time. Health-care applications for smartphones have evolved constantly, to the point that today they play an important role and are among the most downloaded from application stores; several of these applications are for the diagnosis of hearing loss and use the pure-tone method. In this study, a Systematic Mapping Study (SMS) of the literature is carried out to look for mobile applications that use other diagnostic methods offering similar or better results. Of the 13 applications found, 11 used the pure-tone method and only 2 of them implemented speech audiometry (word recognition). The study concludes that hearing-loss diagnostic tests based on mobile applications are reliable alternatives to conventional audiometry systems, that pure-tone thresholds alone are an incomplete assessment of hearing, and that there is a need to develop new hearing measurement methods and to combine them with other methods to complement the diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
38

Tan, Nicholas, Mina K. Chung, Jonathan D. Smith, David R. Van Wagoner, and John Barnard. "Abstract 470: Computational Identification of NKX2-5 Binding Sites and Downstream Gene Targets Using Transcription Factor Motif and Human Heart-specific Experimental Data." Circulation Research 119, suppl_1 (July 22, 2016). http://dx.doi.org/10.1161/res.119.suppl_1.470.

Full text
Abstract:
Background: While NKX2-5 plays pivotal roles in human cardiac development and disease, no ChIP-Seq studies of NKX2-5 in human cardiac tissues currently exist, resulting in an incomplete understanding of its direct gene targets. Modern computational methods which identify binding sites using transcription factor motif and tissue-specific experimental data can help to fill this knowledge gap. Objective: To use computational methods to identify likely NKX2-5 binding sites and downstream gene targets using human heart-specific experimental data. Methods: Human cardiomyocyte DNase hypersensitivity data (2 replicates) were downloaded from the Encyclopedia of DNA Elements (ENCODE) database. The position weight matrix (PWM) representing the transcription factor motif of NKX2-5 was obtained from JASPAR. We applied the Protein Interaction Quantification (PIQ) algorithm to detect NKX2-5 binding sites using the PWM and DNase hypersensitivity data as inputs. RNA-Seq data from 108 human heart-specific samples (atrial appendages and left ventricles) were downloaded from the Genotype-Tissue Expression (GTEx) database. Protein-coding genes significantly expressed in the heart (RPKM ≥ 1 based on the GTEx RNA-Seq data) that were within 100 kb of the predicted binding sites were then identified. Pathway analysis of these genes was performed using Ingenuity Pathway Analysis (IPA). Results: 1283 binding sites for NKX2-5 were discovered by PIQ. Of 12698 protein-coding, heart-expressed genes, 625 were within 100 kb of these binding sites. The identified genes were highly enriched in physiologic categories like “Vasculogenesis” and “Development of Cardiovascular Tissue” (p = 2.36 × 10^-9 and 2.31 × 10^-8, respectively). Notable genes included: cardiac transcription factors (MEF2A, TBX20); growth factors (TGFB2, BMP2); muscle and ion channel function (ACTA2, BIN1); and calcium signaling (CALM2, CAMK2D). Conclusion: By using computational analyses of transcription factor motif and human heart-specific experimental data, we have identified candidate downstream targets of NKX2-5. Future work will include validation studies in an external cohort and analysis of associations between these candidate genes and cardiac disease.
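The key spatial step in this abstract — finding expressed genes within 100 kb of predicted binding sites — is an interval-proximity query. A minimal pure-Python sketch under entirely hypothetical coordinates is shown below; in practice one would use a genome-arithmetic tool such as bedtools, and would also match chromosomes and handle strand properly.

```python
# Sketch: find genes whose TSS lies within 100 kb of any predicted binding site.
# Coordinates are hypothetical; a real analysis would also match chromosomes.
from bisect import bisect_left, bisect_right

WINDOW = 100_000

# Hypothetical predicted NKX2-5 binding-site positions (sorted).
sites = sorted([1_250_000, 3_400_000, 7_800_000])

# Hypothetical heart-expressed genes with transcription start sites.
genes = {"MEF2A": 1_300_000, "TBX20": 3_450_000,
         "TGFB2": 9_000_000, "ACTA2": 7_750_000}

def near_a_site(tss: int) -> bool:
    """True if any binding site falls inside [tss - WINDOW, tss + WINDOW]."""
    lo = bisect_left(sites, tss - WINDOW)
    hi = bisect_right(sites, tss + WINDOW)
    return hi > lo

targets = [g for g, tss in genes.items() if near_a_site(tss)]
print("Candidate target genes:", targets)  # ['MEF2A', 'TBX20', 'ACTA2']
```

Sorting the sites once and using binary search keeps each query logarithmic, which matters when scanning thousands of genes against thousands of sites.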
APA, Harvard, Vancouver, ISO, and other styles
39

Xu, Jin, Qing Yan, Chengcheng Song, Jingjia Liang, Liang Zhao, Xin Zhang, Zhenkun Weng, et al. "An Axin2 mutation and perinatal risk factors contribute to sagittal craniosynostosis: evidence from a Chinese female monochorionic diamniotic twin family." Hereditas 158, no. 1 (June 16, 2021). http://dx.doi.org/10.1186/s41065-021-00182-0.

Full text
Abstract:
Abstract Background Craniosynostosis, defined as premature fusion of one or more cranial sutures, affects approximately 1 in every 2000–2500 live births. Sagittal craniosynostosis (CS), the most prevalent form of isolated craniosynostosis, is caused by interplay between genetic and perinatal environmental insults. However, the underlying details remain largely unknown. Methods The proband (a female monochorionic twin diagnosed with CS), her healthy co-twin sister and parents were enrolled. Obstetric history was extracted from medical records. Genetic screening was performed by whole exome sequencing (WES) and confirmed by Sanger sequencing. Functional annotation, conservation and structural analyses were predicted using public databases. Phenotype data of Axin2 knockout mice were downloaded from The International Mouse Phenotyping Consortium (IMPC, http://www.mousephenotype.org). Results Obstetric medical records showed that, except for the perinatal risk factors shared by the twins, the proband additionally suffered persistent breech presentation and intrauterine growth restriction. We identified a heterozygous mutation of Axin2 (c.1181G > A, p.R394H, rs200899695) in the monochorionic twins and their father, but not in the mother. This mutation has not been reported in the Asian population and results in replacement of Arg at residue 394 by His (p.R394H). Arg 394 is located at the GSK3β binding domain of the Axin2 protein, which is highly conserved across species. The mutation was predicted to be potentially deleterious by in silico analysis. Incomplete penetrance of Axin2 haploinsufficiency was found in female mice. Conclusions The Axin2 (c.1181G > A, p.R394H, rs200899695) mutation confers susceptibility, and perinatal risk factors trigger the occurrence of sagittal craniosynostosis. Our findings provide new evidence for gene-environment interplay in understanding the pathogenesis of craniosynostosis in the Chinese population.
APA, Harvard, Vancouver, ISO, and other styles
40

Scotch, Matthew, Arjun Magge, and Matteo Valente. "ZooPhy: A bioinformatics pipeline for virus phylogeography and surveillance." Online Journal of Public Health Informatics 11, no. 1 (May 30, 2019). http://dx.doi.org/10.5210/ojphi.v11i1.9729.

Full text
Abstract:
Objective We will describe the ZooPhy system for virus phylogeography and public health surveillance [1]. ZooPhy is designed for public health personnel who do not have expertise in bioinformatics or phylogeography. We will show its functionality by performing case studies of different viruses of public health concern including influenza and rabies virus. We will also provide its URL for user feedback by ISDS delegates. Introduction Sequence-informed surveillance is now recognized as an important extension to the monitoring of rapidly evolving pathogens [2]. This includes phylogeography, a field that studies the geographical lineages of species including viruses [3] by using sequence data (and relevant metadata such as sampling location). This work relies on bioinformatics knowledge. For example, the user first needs to find a relevant sequence database, navigate through it, and use proper search parameters to obtain the desired data. They also must ensure that there is sufficient metadata such as collection date and sampling location. They then need to align the sequences and integrate everything into specific software for phylogeography. For example, BEAST [4] is a popular tool for discrete phylogeography. For proper use, the software requires knowledge of phylogenetics and utilization of BEAUti, its XML processing software. The user then needs to use other software, like TreeAnnotator [4], to produce a single (“representative”) maximum clade credibility (MCC) tree. Even then, the evolutionary spread of the virus can be difficult to interpret via a simple tree viewer. There is software (such as SpreaD3 [5]) for visualizing a tree within a geographic context, yet for novice users, it might not be easy to use. Currently, there are only a few systems designed to automate these types of tasks for virus surveillance and phylogeography. Methods We have developed ZooPhy, a pipeline for sequence-informed surveillance and phylogeography [1]. It is designed for health agency personnel who do not have expertise in bioinformatics or phylogeography. We created a large database of all virus sequences and metadata from GenBank [6] as well as a smaller database for selected viruses perceived to be of great interest for health agencies including: influenza (A, B, and C), Ebola, rabies, West Nile virus, and Zika virus. In Figure 1A, we show our front-end architecture, created in the style of the Influenza Research Database [7], which enables the user to search by: virus, gene name, host, time-frame, and geography. We also allow users to upload their own list of GenBank accessions or unpublished sequences. Hitting “Search” produces a Results tab which includes the metadata of the sequences. We provide a feature to randomly down-sample by a specified percentage or number. We also allow the user to download the metadata in CSV format or the unaligned sequences in FASTA format. The final tab, "Run", includes a text box for specifying an email in order to send job updates and final results on virus spread. We also enable the user to study the influence of predictors on virus spread (via a generalized linear model). Currently, we have predictors such as temperature, great circle distance, population, and sample size for selected countries. We also offer experts the ability to specify advanced modeling parameters including the molecular clock type (strict vs. relaxed), coalescent tree prior, and chain length and sampling frequency for the Markov-chain Monte Carlo. When the user selects “Start ZooPhy”, a pre-processor eliminates incomplete or non-disjoint record locations and sends the rest for analysis. Results When initiated, the ZooPhy pipeline includes sequence alignment via MAFFT [8] and creation of an XML template via BEASTGen for input into BEAST for discrete phylogeography. It then uses TreeAnnotator [4] to create an MCC tree from the posterior distribution of sampled trees. ZooPhy uses the MCC tree as input into SpreaD3 for a recreation of the time-estimated migration via a map. If the user selects the GLM option, the system runs an R script to calculate the Bayes factor of the inclusion probability for each predictor and draws a plot including the regression coefficient and its 95% Bayesian credible interval. We are currently working on new visualization techniques such as those demonstrated by Dudas et al. that combine time-oriented spread via a map and evolution on a phylogenetic tree annotated by discrete locations [9]. Conclusions Recent advances in phylodynamics, bioinformatics, and visualization have demonstrated the potential of pipelines to support surveillance. One example is NextStrain, which can perform real-time virus phylodynamics [10]. The system has recently been added as an app to the Global Initiative on Sharing Avian Influenza Data (GISAID) database for influenza tracking using DNA sequences [11]. This presentation will highlight a pipeline for virus phylogeography designed for epidemiologists who are not experts in bioinformatics but wish to leverage virus sequence data as part of routine surveillance. We will describe the development and implementation of our system, ZooPhy, and use real-world case studies to demonstrate its functionality. We invite ISDS delegates to use the system via our web portal, https://zodo.asu.edu/zoophy/, and provide feedback on system utilization.
References
1. Scotch, M., et al., At the intersection of public-health informatics and bioinformatics: using advanced Web technologies for phylogeography. Epidemiology, 2010. 21(6): p. 764-768.
2. Gardy, J.L. and N.J. Loman, Towards a genomics-informed, real-time, global pathogen surveillance system. Nat Rev Genet, 2018. 19: p. 9-20.
3. Avise, J.C., Phylogeography: the history and formation of species. 2000, Cambridge, Mass.: Harvard University Press.
4. Suchard, M.A., et al., Bayesian phylogenetic and phylodynamic data integration using BEAST 1.10. Virus Evol, 2018. 4.
5. Bielejec, F., et al., SpreaD3: Interactive Visualization of Spatiotemporal History and Trait Evolutionary Processes. Mol Biol Evol, 2016. 33(8): p. 2167-9.
6. Benson, D.A., et al., GenBank. Nucleic Acids Res, 2018. 46: p. D41-D47.
7. Zhang, Y., et al., Influenza Research Database: An integrated bioinformatics resource for influenza virus research. Nucleic Acids Res, 2017. 45: p. D466-D474.
8. Katoh, K. and D.M. Standley, MAFFT: iterative refinement and additional methods. Methods Mol Biol, 2014. 1079: p. 131-46.
9. Dudas, G., et al., Virus genomes reveal factors that spread and sustained the Ebola epidemic. Nature, 2017. 544(7650): p. 309-315.
10. Hadfield, J., et al., Nextstrain: real-time tracking of pathogen evolution. Bioinformatics, 2018.
11. NextFlu. 2018; Available from: https://www.gisaid.org/epiflu-applications/nextflu-app/.
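As a small illustration of the pipeline's first stage described above: sequence alignment with MAFFT is typically invoked as an external command, along the lines of the minimal Python sketch below. File names are hypothetical, the mafft binary must be on the PATH, and this is generic MAFFT usage rather than ZooPhy's actual wrapper code.

```python
# Sketch: run MAFFT on a FASTA file, as a pipeline like ZooPhy's first stage might.
# Input/output file names are hypothetical; requires the mafft binary on PATH.
import subprocess

# MAFFT writes the alignment to standard output, so redirect it to a file.
with open("aligned.fasta", "w") as out:
    subprocess.run(["mafft", "--auto", "unaligned.fasta"],
                   stdout=out, check=True)
print("Alignment written to aligned.fasta")
```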
APA, Harvard, Vancouver, ISO, and other styles
41

Gao, Yao, Huiliang Zhao, Teng Xu, Junsheng Tian, and Xuemei Qin. "Identification of Crucial Genes and Diagnostic Value Analysis in Major Depressive Disorder Using Bioinformatics Analysis." Combinatorial Chemistry & High Throughput Screening 23 (November 24, 2020). http://dx.doi.org/10.2174/1386207323999201124204413.

Full text
Abstract:
Aim and Objective: Despite the prevalence and burden of major depressive disorder (MDD), our current understanding of the pathophysiology is still incomplete. Therefore, this paper aims to explore genes and evaluate their diagnostic ability in the pathogenesis of MDD. Methods: Firstly, the expression profiles of mRNA and microRNA were downloaded from the gene expression database and analyzed by the GEO2R online tool to identify differentially expressed genes (DEGs) and differentially expressed microRNAs (DEMs). Then, the DAVID tool was used for functional enrichment analysis. Secondly, the comprehensive protein–protein interaction (PPI) network was analyzed using Cytoscape, and the MCODE plugin was applied to the network to explore hub genes. Thirdly, the receiver operating characteristic (ROC) curves of the core genes were drawn to evaluate clinical diagnostic ability. Finally, miRecords was used to predict the target genes of DEMs. Results: A total of 154 genes were identified as DEGs, and 14 microRNAs were identified as DEMs. Pathway enrichment analysis showed that DEGs were mainly involved in hematopoietic cell lineage, the PI3K-Akt signaling pathway, cytokine–cytokine receptor interaction, the chemokine signaling pathway, and the JAK-STAT signaling pathway. Three important modules were identified and selected by the MCODE clustering algorithm. The top 12 hub genes, including CXCL16, CXCL1, GNB5, GNB4, OPRL1, SSTR2, IL7R, MYB, CSF1R, GSTM1, GSTM2, and GSTP1, were identified as important genes for subsequent analysis. Among these important hub genes, GSTM2, GNB4, GSTP1 and CXCL1 have good diagnostic ability. Finally, by combining these four genes, the diagnostic ability for MDD can be improved to 0.905, which is of great significance for the clinical diagnosis of MDD. Conclusion: Our results indicate that GSTM2, GNB4, GSTP1 and CXCL1 are potential diagnostic markers and are of great significance in clinical research and diagnostic application of MDD. These results need to be confirmed in a large-sample study to further clarify their role in the pathogenesis of MDD.
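Two steps of this workflow — differential-expression screening and the ROC-based evaluation of a hub gene — translate into a few lines of Python. The sketch below uses simulated data as a stand-in for a real expression matrix; note that GEO2R itself wraps R/limma's moderated statistics, so a plain per-gene t-test with Benjamini-Hochberg correction is only a rough approximation of that step.

```python
# Sketch: DEG screening (t-test + BH correction) and ROC AUC of one gene.
# Simulated data; GEO2R uses limma's moderated statistics instead.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_genes, n_mdd, n_ctrl = 1000, 40, 40
expr_mdd = rng.normal(size=(n_genes, n_mdd))
expr_ctrl = rng.normal(size=(n_genes, n_ctrl))
expr_mdd[:20] += 1.0  # spike in 20 truly differential genes

# Per-gene two-sample t-tests, then Benjamini-Hochberg FDR control.
pvals = np.array([ttest_ind(expr_mdd[i], expr_ctrl[i]).pvalue
                  for i in range(n_genes)])
rejected, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{rejected.sum()} genes pass FDR < 0.05")

# Diagnostic ability of a single candidate gene via ROC AUC.
gene = 0  # a truly differential gene in this simulation
labels = np.r_[np.ones(n_mdd), np.zeros(n_ctrl)]
scores = np.r_[expr_mdd[gene], expr_ctrl[gene]]
print(f"AUC of gene {gene}: {roc_auc_score(labels, scores):.3f}")
```

Combining several genes, as the abstract does to reach an AUC of 0.905, would typically mean feeding the genes jointly into a classifier such as logistic regression and computing the AUC of its predicted probabilities.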
APA, Harvard, Vancouver, ISO, and other styles
42

Ding, Jianfeng, Xiaobo He, Xiao Cheng, Guodong Cao, Bo Chen, Sihan Chen, and Maoming Xiong. "A 4-gene-based hypoxia signature is associated with tumor immune microenvironment and predicts the prognosis of pancreatic cancer patients." World Journal of Surgical Oncology 19, no. 1 (April 17, 2021). http://dx.doi.org/10.1186/s12957-021-02204-7.

Full text
Abstract:
Abstract Background Pancreatic cancer (PAC) is one of the most devastating cancer types with an extremely poor prognosis, characterized by a hypoxic microenvironment and resistance to most therapeutic drugs. Hypoxia has been found to be one of the factors contributing to chemoresistance in PAC, but also a major driver of the formation of the tumor immunosuppressive microenvironment. However, how to quantify the degree of hypoxia in the tumor microenvironment (TME) remains incompletely understood. Methods The mRNA expression profiles and corresponding clinicopathological information of PAC patients were downloaded from The Cancer Genome Atlas (TCGA) and Gene Expression Omnibus (GEO) databases, respectively. To further explore the effect of hypoxia on the prognosis of patients with PAC as well as the tumor immune microenvironment, we established a hypoxia risk model and divided patients into high- and low-risk groups according to the hypoxia risk score. Results We established a hypoxia risk model based on four hypoxia-related genes, which could be used to demonstrate the immune microenvironment in PAC and predict prognosis. Moreover, the hypoxia risk score can act as an independent prognostic factor in PAC, and a higher hypoxia risk score was correlated with poorer patient prognosis as well as with an immunosuppressive tumor microenvironment. Conclusions In summary, we established and validated a hypoxia risk model that can be considered an independent prognostic indicator and reflects the immune microenvironment of PAC, suggesting the feasibility of hypoxia-targeted therapy for PAC patients.
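A risk signature of this kind is usually a weighted sum of the expression of the signature genes, with Cox-regression coefficients as weights and a median split defining the risk groups. The sketch below illustrates that arithmetic with hypothetical gene names, coefficients, and expression values; the paper's actual four genes and fitted weights would replace them.

```python
# Sketch: compute a 4-gene hypoxia risk score and split patients at the median.
# Gene names, coefficients, and expression values are hypothetical placeholders.
import numpy as np
import pandas as pd

# Cox regression coefficients for the signature genes (illustrative values).
coefs = pd.Series({"GENE1": 0.42, "GENE2": 0.31, "GENE3": -0.18, "GENE4": 0.25})

# Hypothetical expression matrix: patients x signature genes.
rng = np.random.default_rng(7)
expr = pd.DataFrame(rng.normal(size=(100, 4)), columns=coefs.index)

# Risk score = sum over genes of coefficient * expression.
risk = expr.mul(coefs, axis=1).sum(axis=1)

# High- vs. low-risk groups by the median risk score.
group = np.where(risk > risk.median(), "high", "low")
print(pd.Series(group).value_counts())
```

The prognostic claim is then tested by comparing survival between the two groups, for example with a log-rank test or a multivariable Cox model that adjusts for clinical covariates.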
APA, Harvard, Vancouver, ISO, and other styles
43

Reifegerste, Doreen, and Annemarie Wiedicke. "Quality (Health Coverage)." DOCA - Database of Variables for Content Analysis, March 26, 2021. http://dx.doi.org/10.34778/2a.

Full text
Abstract:
To judge the quality of the media coverage of health information, research mostly focuses on ten criteria: adequate discussion of costs, quantification of benefits, adequate explanation and quantification of potential harms, comparison of the new idea with existing alternatives, independence of sources and discussion of potential conflicts of interest, avoidance of disease mongering, review of the methodology or the quality of the evidence, discussion of the true novelty and availability of the idea, approach or product, as well as giving information that goes beyond a news release (Schwitzer, 2008, 2014; Smith et al., 2005). Other quality dimensions applied in content analyses of health news coverage are diversity, completeness, relevance, understandability and objectiveness (Reineck, 2014; Reineck & Hölig, 2013). These criteria are increasingly relevant as people use online health information more frequently, and in addition to the information from their physician, for medical decision making (Wang, Xiu, & Shahzad, 2019). Thus, analyzing the quality of health content in media coverage becomes even more relevant. As Schwitzer (2017) points out, there is a variety of quality problems due to hurried, incomplete, poorly researched news. To measure quality, the content of health news coverage can be compared to the content of the original research paper (e.g., Ashorkhani et al., 2012), or the quality of media content can be continuously judged by journalists, medical experts or independent organizations such as HealthNewsReview with respect to different criteria (e.g., Schwitzer, 2008; Selvaraj et al., 2014).
Field of application/theoretical foundation: Online health information, medical decision making, journalism studies
References/combination with other methods: Focus group discussions with journalists, editors-in-chief and news gatekeepers (Ashorkhani et al., 2012); focus group discussions with consumers of health information (Marshall & Williams, 2006)
Example studies: Anhäuser & Wormer (2012); Schwitzer (2008); Wormer (2014); Reineck & Hölig (2013); Reineck (2014)
Information on Reineck & Hölig, 2013
Authors: Dennis Reineck, Sascha Hölig
Research question: Which factors contribute to the quality of health journalism?
Object of analysis: Sample of all health-related articles in four German newspapers: Süddeutsche Zeitung (n = 167), Die Welt (n = 426), Frankfurter Rundschau (n = 219) and die tageszeitung (n = 84)
Time frame of analysis: March 1, 2010 to February 28, 2011
Info about variables
Variables: variables defining five dimensions of quality for health-related newspaper articles, with a derived quality index: each indicator of the different variables is coded with 0 to 100 points, and a quality index is derived for each article based on these points
Level of analysis: news article
Quality dimensions, variables, and indicators:
Diversity (rH = 0.78): Quantitative diversity (length of the article); Source diversity (number of sources); Opinion diversity (discussion of contrary opinions)
Completeness (rH = 0.86): Journalistic completeness (for diseases: information about prevention, symptoms and remedies); Scientific completeness (for research studies: information about method, sample and results); Risks (for treatment options: addressing of risks and side effects)
Relevance (rH = 0.85): Source credibility (sources with the highest reputation); Usefulness (take-home messages, references to additional information); Newsworthiness (news factors, e.g., topicality)
Understandability (rH = 0.86): Simplicity (simple vs. complex language); Structure (well-structured vs. inadequately structured presentation); Conciseness (concise vs. circuitous presentation); Storytelling (storytelling vs. matter-of-fact presentation)
Objectiveness (rH = 0.95): Emotionalization (emotional language); Dramatization (dramatization of information)
References
Anhäuser, M., & Wormer, H. (2012). A question of quality: Criteria for the evaluation of science and medical reporting and testing their applicability. PCST 2012 Book of Papers: Quality, Honesty and Beauty in Science and Technology Communication. http://www.medien-doktor.de/medizin/wp-content/uploads/sites/3/downloads/2014/04/Paper-Florenz.pdf
Ashorkhani, M., Gholami, J., Maleki, K., Nedjat, S., Mortazavi, J., & Majdzadeh, R. (2012). Quality of health news disseminated in the print media in developing countries: A case study in Iran. BMC Public Health, 12, 627. https://doi.org/10.1186/1471-2458-12-627
Marshall, L. A., & Williams, D. (2006). Health information: does quality count for the consumer? Journal of Librarianship and Information Science, 38(3), 141–156. https://doi.org/10.1177/0961000606066575
Reineck, D. (2014). Placebo oder Aufklärung mit Wirkpotenzial? Eine Diagnose der Qualität der Gesundheitsberichterstattung in überregionalen Tageszeitungen. In V. Lilienthal (Ed.), Qualität im Gesundheitsjournalismus: Perspektiven aus Wissenschaft und Praxis (Vol. 325, pp. 39–60). Springer VS. https://doi.org/10.1007/978-3-658-02427-7_3
Reineck, D., & Hölig, S. (2013). Patient Gesundheitsjournalismus: Eine inhaltsanalytische Untersuchung der Qualität in überregionalen Tageszeitungen. In C. Rossmann & M. R. Hastall (Eds.), Medien + Gesundheit: Band 6. Medien und Gesundheitskommunikation: Befunde, Entwicklungen, Herausforderungen (1st ed., pp. 19–31). Nomos.
Schwitzer, G. (2008). How do US journalists cover treatments, tests, products, and procedures? An evaluation of 500 stories. PLoS Medicine, 5(5), e95.
Schwitzer, G. (2014). A guide to reading health care news stories. JAMA Internal Medicine, 174(7), 1183–1186. https://doi.org/10.1001/jamainternmed.2014.1359
Schwitzer, G. (2017). Pollution of health news. BMJ (Clinical Research Ed.), 356, j1262. https://doi.org/10.1136/bmj.j1262
Selvaraj, S., Borkar, D. S., & Prasad, V. (2014). Media coverage of medical journals: Do the best articles make the news? PloS One, 9(1), e85355. https://doi.org/10.1371/journal.pone.0085355
Smith, D. E., Wilson, A. J., & Henry, D. A. (2005). Monitoring the quality of medical news reporting: Early experience with media doctor. The Medical Journal of Australia, 183(4), 190–193.
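Returning to the coding scheme above: the 0-100 points per indicator and the derived per-article quality index can be expressed as a small aggregation function. The sketch below assumes equal weighting of indicators within a dimension and of dimensions within the index, which is one plausible reading of the description; the authors' exact weighting is not specified here, and all coded values are hypothetical.

```python
# Sketch: aggregate 0-100 indicator scores into a per-article quality index.
# Equal weighting across indicators and dimensions is an assumption.
from statistics import mean

article_scores = {  # hypothetical coded values for one article
    "diversity": {"quantitative": 70, "sources": 50, "opinions": 100},
    "completeness": {"journalistic": 60, "scientific": 40, "risks": 80},
    "relevance": {"credibility": 90, "usefulness": 50, "newsworthiness": 70},
    "understandability": {"simplicity": 80, "structure": 60,
                          "conciseness": 70, "storytelling": 40},
    "objectiveness": {"emotionalization": 90, "dramatization": 100},
}

# Mean per quality dimension, then mean of dimensions = quality index.
dimension_means = {dim: mean(ind.values()) for dim, ind in article_scores.items()}
quality_index = mean(dimension_means.values())

for dim, score in dimension_means.items():
    print(f"{dim}: {score:.1f}")
print(f"Quality index: {quality_index:.1f} / 100")
```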
APA, Harvard, Vancouver, ISO, and other styles
44

Kuntsman, Adi. "“Error: No Such Entry”." M/C Journal 10, no. 5 (October 1, 2007). http://dx.doi.org/10.5204/mcj.2707.

Full text
Abstract:
“Error: no such entry.” “The thread specified does not exist.” These messages appeared every now and then in my cyberethnography – a study of Russian-Israeli queer immigrants and their online social spaces. Anthropological research in cyberspace invites us to rethink the notion of “the field” and the very practice of ethnographic observation. In negotiating my own position as an anthropologist of online sociality, I was particularly inspired by Radhika Gajjala’s notion of “cyberethnography” as an epistemological and methodological practice of examining the relations between self and other, voice and voicelessness, belonging, exclusion, and silencing as they are mediated through information-communication technologies (“Interrupted” 183). The main cyberethnographic site of my research was the queer immigrants’ Website with its news, essays, and photo galleries, as well as the vibrant discussions that took place on the Website’s bulletin board. “The Forum,” as it was known among the participants, was visited daily by dozens, among them newbies, passers-by, and regulars. My study, dedicated to questions of home-making, violence, and belonging, was following the publications that appeared on the Website, as well as the daily discussions on the Forum. Both the publications and the discussions were archived since the Website’s creation in 2001 and throughout my fieldwork that took place in 2003-04. My participant observations of the discussions “in real time” were complemented by archival research, where one would expect to discover an anthropologist’s wildest dreams: the fully-documented everyday life of a community, a word-by-word account of what was said, when, and to whom. Or so I hoped. The “error” messages that appeared when I clicked on some of the links in the archive, or the absence of a thread I knew was there before, raised the question of erasure and deletion, of empty spaces that marked that which used to be, but which had ceased to exist. The “error” messages, in other words, disrupted my cyberethnography through what can be best described as haunting. “Haunting,” writes Avery Gordon in her Ghostly Matters, “describes how that which appears to be not there is often a seething presence, acting on and often meddling with taken-for-granted realities” (8). This essay looks into the seething presence of erasures in online archives. What is the role, I will ask, of online archives in the life of a cybercommunity? How and when are the archives preserved, and by whom? What are the relations between archives, erasure, and home-making in cyberspace? *** Many online communities based on mailing lists, newsgroups, or bulletin boards keep archives of their discussions – archives that at times go on for years. Sometimes they are accessible only to members of lists or communities that created them; other times they are open to all. Archived discussions can act as a form of collective history and as marks of belonging (or exclusion). As the records of everyday conversations remain on the Web, they provide a unique glance into the life of an online collective for a visitor or a newcomer. For those who participated in the discussions browsing through archives can bring nostalgic feelings: memories – pleasurable and/or painful – of times shared with others; memories of themselves in the past. Turning to archived discussions was not an infrequent act in the cybercommunity I studied. 
While there is no way to establish how many participants looked into how many archives, and how often they did so, there is a clear indicator that the archives were visited and reflected on. For one, old threads were sometimes “revived”: technically, a discussion thread is never closed unless the administrator decides to “freeze” it. If the thread is not “frozen,” anyone can go to an old discussion and post there; a new posting would automatically move an archived thread to the list of “recent”/“currently active” ones. As all the postings have times and dates, the reappearance of threads from months ago among the “recent discussions” indicates the use of archives. In addition to such “revivals,” every now and then someone would open a new discussion thread, posting a link to an old discussion and expressing thoughts about it. Sometimes it was a reflection on the Forum itself, or on the changes that took place there; many veteran participants wrote about the archived discussions in a sentimental fashion, remembering “the old days.” Other times it was a reflection on a participant’s life trajectory: looking at one’s old postings, a person would reflect on how s/he changed and sometimes on how the Website and its bulletin board changed his/her life. Looking at old discussions can be seen as performances of belonging: the repetitive reference to the archives constitutes the Forum as home with a multilayered past one can dwell on. Turning to the archives emphasises the importance of preservation, of keeping cyberwords as an object of collective possession and affective attachment. It links the individual and the collective: looking at old threads one can reflect on “how I used to be” and “how the Forum used to be.” Visiting the archives, then, constitutes the Website as simultaneously a site where belonging is performed, and an object of possession that can belong to a collective (Fortier). But the archives preserved on the Forum were never a complete documentation of the discussions. Many postings were edited immediately after appearance or later. In the first two and a half years of the Website’s existence any registered participant, as long as his/her nickname was not banned from the Forum, could browse through his/her messages and edit them. One day in 2003 one person decided to “commit virtual suicide” (as he and others called it). He went through all the postings and, since there was no option for deleting them all at once, he manually erased them one by one. Many participants were shocked to discover his acts, mourning him as well as the archives he damaged. The threads in which he had once taken part still carried signs of his presence: when participants edit their postings, all they can do is delete the text, leaving an empty space in the thread’s framework (only the administrator can modify the framework of a thread and delete text boxes). But the text box with the name and date of each posting is still there. “The old discussions don’t make sense now,” a forum participant lamented, “because parts of the arguments are missing.” Following this “suicide”, the Website’s administrator decided that from that point on participants could only edit their last posting but could not make any retrospective changes to the archives. Both the participants’ mourning of the mutilated threads and the administrator’s decision suggest that there is a desire to preserve the archives as collective possession belonging to all and not to be tampered with by individuals.
But the many conflicts between the administrator and some participants on what could be posted and what should be censored reveal that another form of ownership/possession was at stake. “The Website is private property and I can do anything I like,” the administrator often wrote in response to those who questioned his erasure of other people’s postings, or his own rude and aggressive behaviour towards participants. Thus he broke the very rules of netiquette he had established – the Website’s terms of use prohibit personal attacks and aggressive language. Possession-as-belonging here was figured as simultaneously subjected to a collective “code of practice” and as arbitrary, dependent on one person’s changes of mind. Those who were particularly active in challenging the administrator (for example, by stating that although the Website is indeed privately owned, the interactions on the Forum belong to all; or by pointing out to the administrator that he was contradicting his own rules) were banned from the site or threatened with exclusion, and the threads where the banning was announced were sometimes deleted. Following the Forum’s rules, the administrator was censoring messages of an offensive nature, for example, commercial advertisements or links to pornographic Websites, as well as some personal attacks between participants. But among the threads doomed for erasure were also postings of a political nature, in particular those expressing radical left-wing views and opposing the tone of political loyalty dominating the site (while attacks on those participants who expressed the radical views were tolerated and even encouraged by the administrator). *** The archives that remain on the site, then, are not a full documentation of everyday narratives and conversations but the result of selection and regulation by both individual participants and – predominantly – the administrator. These archives are caught between several contradictory approaches to the Forum. One is embedded within the capitalist notion of payment as conferring ownership: I paid for the domain, says the administrator, therefore I own everything that takes place there. Another, manifested in the possibility of editing one’s postings, views cyberspeech as belonging first and foremost to the speaker, who can modify and erase it as s/he pleases. The third defines the discussions that take place on the Forum as collective property that cannot be ruled by a single individual, precisely because it is the result of collective interaction. But while the second and the third approaches were shared by most participants, it was the idea of private ownership that seemed to dominate and facilitate most of the erasures. Erasure and modification performed by the administrator were not limited to censorship of particular topics, postings, or threads. The archive of the Forum as a whole was occasionally “cleared.” According to the administrator, the limited space on the site required “clearance” of the oldest threads to make room for new ones. Decisions about such clearances were not shared with anyone, nor were the participants notified about them in advance. One day parts of the archive simply disappeared, as I discovered during my fieldwork. When I began daily observations on the Website in December 2003, I looked at the archives page and saw that the General Forum section of the Forum went back for about a year and a half, and the Lesbian Forum section for about a year.
I then decided to follow the discussions as they emerged and unfolded for 5-6 months, saving only the most interesting threads in my field diary, and to download all the archived threads later, for future detailed analysis. But to my great surprise, in May 2004 I discovered that while the General Forum still dated back to September 2002, the oldest thread on the Lesbian Forum was dated December 2003! All earlier threads were removed without any notice to Forum participants; and, as I learned later, no record of the threads was kept on- or offline. These examples of erasure and “clearance” demonstrate the complexity of ownership on the site: a mixture of legal and capitalist power intertwined with social hierarchies that determine which discussions and whose words are (more) valuable. (The administrator noted repeatedly that the discussions on the Lesbian Forum are “just chatter.” Ironically, both the differences in style between the General Forum and the Lesbian Forum and the administrator’s account of them resemble the (stereo)typical heterosexual gendering of talk.) And while the effects and the results of erasure are compound, they undoubtedly point to the complexity – and fragility! – of “home” in cyberspace and to the constant presence of violence in its constitution. During my fieldwork I felt the strange disparity between the narratives of the Website as a homey space (expressed both in the site’s official description and in some participants’ accounts of their experiences), and the frequent acts of erasure – not only of particular participants but more broadly of large parts of its archives. All too often, the neat picture of the “community archive” where one can nostalgically dwell on the collective past was disrupted by the “error” message. Error: no such entry. The thread specified does not exist. It was not only the incompleteness of the archives that indicated fights and erasures. As I gradually learned throughout my fieldwork, the history of the Website itself was based on internal conflicts, omitted contributions, and constantly modified stories of origins. For example, the story of the Website’s establishment, as it was published in the About Us section of the site and reprinted in celebratory texts on the first anniversaries, presents the site as created by “three fathers.” The three were F., the administrator; M., who wrote, edited, and translated most of the material; and a third person whose name was never mentioned. When I asked about him on the site and later in interviews with both M. and F., they repeatedly and steadily ignored the question, and changed the subject of conversation. But the third “father” was not the only one whose name was omitted. In fact, the original Website was created by three women and another man. M. and F. joined later, and soon afterwards F., who acted as the administrator during my fieldwork, took over the material and moved the site to another domain. Not only were the original creators erased from the site’s history; they were gradually ostracised from the new Website. When I interviewed two of the women, I mentioned the narrative of the site as a “child of three fathers.” “More like an adopted child,” chuckled one of them with bitterness, and told me the story of the original Website. Moved by their memories, the two took me to the computer. They went to the Internet Archive’s “WayBack Machine” Website – a mega-archive of sorts, an online server that keeps traces of old Web pages.
One of the women managed to recover several pages of the old Website; sad and nostalgic, she shared with me the few precious traces of what was once her and her friends’ creation. But those, too, were haunted pages – most of the hyperlinks there generated “error” messages instead of actual articles or discussion threads. Error: no such entry. The thread specified does not exist. After a few years of working closely together on their “child,” M. and F. drifted apart, too. The hostility between the two intensified. Old materials (mostly written, translated, or edited by M. over a three-year period) were moved into an archive by F., the administrator. They were made accessible through a small link hidden at the bottom of the homepage. One day they disappeared completely. Shortly afterwards, in September 2006, the Website celebrated its fifth anniversary. For this occasion the administrator wrote “the history of the Website,” where he presented it as his enterprise, noting in passing two other contributors whose involvement was short and marginal. Their names were not mentioned, but the two were described in a defaming and scornful way. *** So where do the “error” messages take us? What do they tell us about homes and communities in cyberspace? In her elaboration on cybercommunities, Radhika Gajjala notes that: Cyberspace provides a very apt site for the production of shifting yet fetishised frozen homes (shifting as more and more people get online and participate, frozen as their narratives remain on Websites and list archives through time in a timeless floating fashion) (“Interrupted”, 178). Gajjala’s notion of shifting yet fetishised and frozen homes is a useful term for capturing the nature of communication on the Forum throughout the 5 years of its existence. It was indeed a shifting home: many people came and participated, leaving parts of themselves in the archives; others were expelled and banned, leaving empty spaces and traces of erasure in the form of “error” messages. The presence of those erased or “cleared” was no longer registered in words – an ultimate sign of existence in text-based online communication. And yet, they were there as ghosts, living through the traces left behind and the “seething presence” of haunting (Gordon 8). The Forum was a fetishised home, too, as the negotiation of ownership and the use of old threads demonstrate. However, Gajjala’s vision of archives suggests their wholeness, as if every word and every discussion is “frozen” in its entirety. The idea of fetishised homes does gesture to the complex and complicated reading of the archives; but what is left unproblematic are the archives themselves. Being attentive to the troubled, incomplete, and haunted archives invites a more careful and critical reading of cyberhomes – as Gajjala herself demonstrates in her discussion of online silences – and of the interrelation of violence and belonging in them (CyberSelves 2, 5). Constituted in cyberspace, the archives are embedded in the particular nature of online sociality, with its fantasy of timeless and floating traces, as well as with its brutality of deletion. Cyberwords do remain on archives and servers, sometimes for years; they can become ghosts of people who died or of collectives that no longer exist. But these ghosts, in turn, are haunted by the words and Webpages that never made it into the archives – words that were said but then deleted.
And of course, cyberwords as fetishised and frozen homes are also haunted by what was never said in the first place, by silences that are as constitutive of homes as the words. References Fortier, Anne-Marie. “Community, Belonging and Intimate Ethnicity.” Modern Italy 1.1 (2006): 63-77. Gajjala, Radhika. “An Interrupted Postcolonial/Feminist Cyberethnography: Complicity and Resistance in the ‘Cyberfield’.” Feminist Media Studies 2.2 (2002): 177-193. Gajjala, Radhika. Cyber Selves: Feminist Ethnographies of South-Asian Women. Oxford: Alta Mira Press, 2004. Gordon, Avery. Ghostly Matters: Haunting and the Sociological Imagination. Minneapolis and London: U of Minnesota P, 1997. Citation reference for this article MLA Style Kuntsman, Adi. "“Error: No Such Entry”: Haunted Ethnographies of Online Archives." M/C Journal 10.5 (2007). <http://journal.media-culture.org.au/0711/05-kuntsman.php>. APA Style Kuntsman, A. (Oct. 2007) "“Error: No Such Entry”: Haunted Ethnographies of Online Archives," M/C Journal, 10(5). Retrieved from <http://journal.media-culture.org.au/0711/05-kuntsman.php>.
APA, Harvard, Vancouver, ISO, and other styles
45

Newman, James. "Save the Videogame! The National Videogame Archive: Preservation, Supersession and Obsolescence." M/C Journal 12, no. 3 (July 15, 2009). http://dx.doi.org/10.5204/mcj.167.

Full text
Abstract:
Introduction In October 2008, the UK’s National Videogame Archive became a reality and, after years of negotiation, preparation and planning, this partnership between Nottingham Trent University’s Centre for Contemporary Play research group and The National Media Museum accepted its first public donations to the collection. These first donations came from Sony Computer Entertainment Europe’s London Studios, who presented the original, pre-production PlayStation 2 EyeToy camera (complete with its hand-written #1 sticker), and Harmonix, who crossed the Atlantic to deliver prototypes of the Rock Band drum kit and guitar controllers along with a slew of games. Since then, we have been inundated with donations, enquiries and volunteers offering their services, and it is clear that we have exciting and challenging times ahead of us at the NVA as we seek to continue our collecting programme and preserve, conserve, display and interpret these vital parts of popular culture. This essay, however, is not so much a document of these possible futures for our research or the challenges we face in moving forward as it is a discussion of some of the issues that make game preservation a vital and timely undertaking. In briefly telling the story of the genesis of the NVA, I hope to draw attention to some of the peculiarities (in both senses) of the situation in which videogames currently exist. While considerable attention has been paid to the preservation and curation of new media arts (e.g. Cook et al.), comparatively little work has been undertaken in relation to games. Surprisingly, the games industry has been similarly neglectful of the histories of gameplay and gamemaking. Throughout our research, it has become abundantly clear that even those individuals and companies most intimately associated with the development of this form do not hold their corporate and personal histories in the high esteem we expected (see also Lowood et al.). And so, despite the well-worn bluster of an industry that proclaims itself as culturally significant as Hollywood, it is surprisingly difficult to find a definitive copy of the boxart of the final release of a Triple-A title, let alone any of the pre-production materials. Through our journeys in the past couple of years, we have encountered shoeboxes under CEOs’ desks and proud parents’ collections of tapes and press cuttings. These are the closest things to a formalised archive that we currently have for many of the biggest British game development and publishing companies. Not only is this problematic in and of itself, as we run the risk of losing titles and documents forever as well as the stories locked up in the memories of key individuals who grow ever older, but it is also symptomatic of an industry that, despite its public proclamations, neither places a high value on its products as popular culture nor truly recognises their impact on that culture. While a few valorised, still-ongoing franchises like the Super Mario and Legend of Zelda series are repackaged and (digitally) re-released so as to provide continuity with current releases, a huge number of games simply disappear from view once their short period of retail limelight passes.
Indeed, my argument in this essay rests to some extent on the admittedly polemical, and maybe even antagonistic, assertion that the past business and marketing practices of the videogames industry are partly to blame for the comparatively underdeveloped state of game preservation and the seemingly low cultural value placed on old games within the mainstream marketplace. Small wonder, then, that archives and formalised collections are not widespread. However antagonistic this point may seem, this essay does not set out merely to criticise the games industry. Indeed, it is important to recognise that the success and viability of projects such as the NVA are derived partly from close collaboration with industry partners. As such, it is my hope that in addition to contributing to the conversation about the importance and need for formalised strategies of game preservation, this essay goes some way to demonstrating the necessity of universities, museums, developers, publishers, advertisers and retailers tackling these issues in partnership. The Best Game Is the Next Game As will be clear from these opening paragraphs, this essay is primarily concerned with ‘old’ games. Perhaps surprisingly, however, we shall see that ‘old’ games are frequently not that old at all, as even the shiniest and newest of interactive experiences soon slip from view under the pressure of a relentless industrial and institutional push towards the forthcoming release and the ‘next generation’. More surprising still is that ‘old’ games are often difficult to come by as they occupy, at best, a marginalised position in the contemporary marketplace, assuming they are even visible at all. This is an odd situation. Videogames are, as any introductory primer on game studies will surely reveal, big business (see Kerr, for instance, as well as trade bodies such as ELSPA and The ESA for up-to-date sales figures). Given the videogame industry seems dedicated to growing its business and broadening its audiences (see Radd on Sony’s ‘Game 3.0’ strategy, for instance), it seems strange, from a commercial perspective if no other, that publishers’ and developers’ back catalogues are not being mercilessly plundered to wring the last pennies of profit from their IPs. Despite being cherished by players and fans, some of whom are actively engaged in their own private collecting and curation regimes (sometimes to apparently obsessive excess, as Jones, among others, has noted), videogames have, nonetheless, been undervalued as part of our national popular cultural heritage by institutions of memory such as museums and archives which, I would suggest, have largely ignored and sometimes misunderstood or misrepresented them. Most of all, however, I wish to draw attention to the harm caused by the videogames industry itself. Consumers’ attentions are focused on ‘products’, on audiovisual (but mainly visual) technicalities and high-definition video specs rather than on the experiences of play and performance, or on games as artworks or artefacts. Most damagingly, however, by constructing and contributing to an advertising, marketing and popular critical discourse that trades almost exclusively in the language of instant obsolescence, videogames have been robbed of their historical value and old platforms and titles are reduced to redundant, legacy systems and easily-marginalised ‘retro’ curiosities.
The vision of inevitable technological progress that the videogames industry trades in reminds us of Paul Duguid’s concept of ‘supersession’ (see also Giddings and Kennedy, on the ‘technological imaginary’). Duguid identifies supersession as one of the key tropes in discussions of new media. The reductive idea that each new form subsumes and replaces its predecessor means that videogames are, to some extent, bound up in the same set of tensions that undermine the longevity of all new media. Chun rightly notes that, in contrast with more open terms like multimedia, ‘new media’ has always been somewhat problematic. Unaccommodating, ‘it portrayed other media as old or dead; it converged rather than multiplied; it did not efface itself in favor of a happy if redundant plurality’ (1). The very newness of new media and of videogames as the apotheosis of the interactivity and multimodality they promise (Newman, "In Search"), their gleam and shine, is quickly tarnished as they are replaced by ever-newer, ever more exciting, capable and ‘revolutionary’ technologies whose promise and moment in the limelight is, in turn, equally fleeting. As Franzen has noted, obsolescence and the trail of abandoned, superseded systems are a natural, even planned-for, product of an infatuation with the newness of new media. For Kline et al., the obsession with obsolescence leads to the characterisation of the videogames industry as a ‘perpetual innovation economy’ whose institutions ‘devote a growing share of their resources to the continual alteration and upgrading of their products’. However, it is my contention here that the supersessionary tendency exerts a more serious impact on videogames than on some other media, partly because the apparently natural logic of obsolescence and technological progress goes largely unchecked and partly because there remain few institutions dedicated to considering and acting upon game preservation. The simple fact, as Lowood et al. have noted, is that material damage is being done as a result of this manufactured sense of continual progress and immediate, irrefutable obsolescence. By focusing on the upcoming new release and the preview of what is yet to come, and by exciting gamers about what is in development while demonstrating the manifest ways in which the sheen of the new inevitably tarnishes the old, that which is replaced is rendered fit only for the bargain bin or the budget-priced collection download. As such, it is my position that we are systematically undermining and perhaps even eradicating the possibility of a thorough and well-documented history for videogames. This is a situation that we at the National Videogame Archive, along with colleagues in the emerging field of game preservation (e.g. the International Game Developers Association Game Preservation Special Interest Group, and the Keeping Emulation Environments Portable project) are, naturally, keen to address. Chief amongst our concerns is better understanding how it has come to be that, in 2009, game studies scholars and colleagues from across the memory and heritage sectors are still only at the beginning of the process of considering game preservation. The IGDA Game Preservation SIG was founded only five years ago and its ‘White Paper’ (Lowood et al.) has only just been published.
Surprisingly, despite the importance of videogames within popular culture and the emergence and consolidation of the industry as a potent creative force, there remains comparatively little academic commentary or investigation into the specific situation and life-cycles of games or the demands that they place upon archivists and scholars of digital histories and cultural heritage. As I hope to demonstrate in this essay, one of the key tasks of the project of game preservation is to draw attention to the consequences of the concentration, even fetishisation, of the next generation, the new and the forthcoming. The focus on what I have termed ‘the lure of the imminent’ (e.g. Newman, Playing), the fixation on not only the present but also the as-yet-unreleased next generation, has contributed to the normalisation of the discourses of technological advancement and the inevitability and finality of obsolescence. The conflation of gameplay pleasure and cultural import with technological – and indeed, usually visual – sophistication gives rise to a context of endless newness, within which there appears to be little space for the ‘outdated’, the ‘superseded’ or the ‘old’. In a commercial and cultural space in which so little value is placed upon anything but the next game, we risk losing touch with the continuities of development and the practices of play while simultaneously robbing players and scholars of the critical tools and resources necessary for contextualised appreciation and analysis of game form and aesthetics, for instance (see Monnens, "Why", for more on the value of preserving ‘old’ games for analysis and scholarship). Moreover, we risk losing specific games, platforms, artefacts and products as they disappear into the bargain bucket or crumble to dust as media decay, deterioration and ‘bit rot’ (Monnens, "Losing") set in. Space does not here permit a discussion of the scope and extent of the preservation work required (for instance, the NVA sets its sights on preserving, documenting, interpreting and exhibiting ‘videogame culture’ in its broadest sense and recognises the importance of videogames as more than just code and as enmeshed within complex networks of productive, consumptive and performative practices). Neither is it my intention to discuss here the specific challenges and numerous issues associated with archival and exhibition tools such as emulation, which seek to rebirth code on up-to-date, manageable, well-supported hardware platforms but which are frequently insensitive to the specificities and nuances of the played experience (see Newman, "On Emulation", for some further notes on videogame emulation, archiving and exhibition and Takeshita’s comments in Nutt on the technologies and aesthetics of glitches, for instance). Each of these issues is vitally important and will doubtless become a part of the forthcoming research agenda for game preservation scholars. My focus here, however, is rather more straightforward and foundational and, though it is deliberately controversial, it is my hope that it casts some light on some ingrained assumptions about videogames and the magnitude and urgency of the game preservation project. Videogames Are Disappearing?
At a time when retailers’ shelves struggle under the weight of newly-released titles and digital distribution systems such as Steam, the PlayStation Network, Xbox Live Marketplace, WiiWare, DSiWare et al. bring new ways to purchase and consume playable content, it might seem strange to suggest that videogames are disappearing. In addition to what we have perhaps come to think of as the ‘usual suspects’ in the hardware and software publishing marketplace, over the past year or so Apple have, unexpectedly and perhaps even surprising themselves, carved out a new gaming platform with the iPhone/iPod Touch and have dramatically simplified the notoriously difficult process of distributing mobile content with the iTunes App Store. In the face of this apparent glut of games and the emergence and (re)discovery of new markets with the iPhone, Wii and Nintendo DS, videogames seem an ever more vital and visible part of popular culture. Yet, for all their commercial success and seeming penetration, the simple fact is that they are disappearing. And at an alarming rate. Addressing the IGDA community of game developers and producers, Henry Lowood makes the point with admirable clarity (see also Ruggill and McAllister): If we fail to address the problems of game preservation, the games you are making will disappear, perhaps within a few decades. You will lose access to your own intellectual property, you will be unable to show new developers the games you designed or that inspired you, and you may even find it necessary to re-invent a bunch of wheels. (Lowood et al. 1) For me, this point hit home most persuasively a few years ago when, along with Iain Simons, I was invited by the British Film Institute to contribute a book to their ‘Screen Guides’ series. 100 Videogames (Newman and Simons) was an intriguing prospect that provided us with the challenge and opportunity to explore some of the key moments in videogaming’s forty-year history. However, although the research and writing processes proved to be an immensely pleasurable and rewarding experience that we hope culminated in an accessible, informative volume offering insight into some well-known (and some less-well-known) games, the project was ultimately tinged with more than a little disappointment and frustration. Assuming our book had successfully piqued the interest of our readers in rediscovering games previously played, or perhaps in investigating games for the first time, what could they then do? Where could they go to find these games in order to experience their delights (or their flaws and problems) at first hand? Had our volume been concerned with television or film, as most of the Screen Guides are, then online and offline retailers, libraries, and even archives for less widely-available materials would have been obvious ports of call. For the student of videogames, however, the choices are not so much limited as practically non-existent. It is only comparatively recently that videogame retailers have shifted away from an almost exclusive focus on new releases and the zeitgeist platforms towards a recognition of old games and systems through the creation of the ‘pre-owned’ marketplace. The ‘pre-owned’ transaction is one in which old titles may be traded in for cash or against the purchase of new releases of hardware or software. Surely, then, this represents the commercial viability of classic games and is a recognition on the part of retail that the new release is not the only game in town.
Yet, if we consider more carefully the ‘pre-owned’ model, we find a few telling points. First, there is cold economic sense to the pre-owned business model. In their financial statements for FY08, GAME revealed that the service ‘isn’t just a key part of its offer to consumers, but it also represents an “attractive” gross margin of 39 per cent’ (French). Second, and most important, the premise of the pre-owned business as it is communicated to consumers still offers nothing but primacy to the new release. That one would trade in one’s old games in order to consume these putatively better new ones speaks eloquently in the language of obsolescence and what Dovey and Kennedy have called the ‘technological imaginary’. The wire mesh buckets of old, pre-owned games are not displayed or coded as treasure troves for the discerning or completist collector but rather are nothing more than bargain bins. These are not classic games. These are cheap games. Cheap because they are old. Cheap because they have had their day. This is a curious situation that affects videogames most unfairly. Of course, my caricature of the videogame retailer is still incomplete, as a good deal of the instantly visible shopfloor space is dedicated neither to pre-owned nor new releases but rather to displays of empty boxes often sporting unfinalised, sometimes mocked-up, boxart flaunting titles available for pre-order. Titles you cannot even buy yet. In the videogames marketplace, even the present is not exciting enough. The best game is always the next game. Importantly, retail is not alone in manufacturing this sense of dissatisfaction with the past and even the present. The specialist videogames press plays at least as important a role in reinforcing and normalising the supersessionary discourse of instant obsolescence by fixing readers’ attentions and expectations on the just-visible horizon. Examining the pages of specialist gaming publications reveals them to be something akin to Futurist paeans, dedicating anything from 70 to 90% of their non-advertising pages to previews and interviews with developers about still-in-development titles (see Newman, Playing, for more on the specialist gaming press’ love affair with the next generation and the NDA scoop). Though a small number of publications specifically address retro titles (e.g. Imagine Publishing’s Retro Gamer), most titles are essentially vehicles to promote current and future product lines, with many magazines operating as delivery devices for cover-mounted CDs/DVDs offering teaser videos or playable demos of forthcoming titles to further whet the appetite. Manufacturing a sense of excitement might seem wholly natural and perhaps even desirable in helping to maintain a keen interest in gaming culture, but the imbalance of popular coverage has a potentially deleterious effect on the status of superseded titles. Xbox World 360’s magnificently-titled ‘Anticip–O–Meter’ ™ does more than simply build anticipation. Like regular features that run under headings such as ‘The Next Best Game in The World Ever is…’, it seeks to author not so much excitement about the imminent release as a dissatisfaction with the present, with which unfavourable comparisons are inevitably drawn. The current or previous crop of (once new, let us not forget) titles are not simply superseded but rather are reinvented as yardsticks to judge the prowess of the even newer and unarguably ‘better’.
As Ashton has noted, the continual promotion of the impressiveness of the next generation requires a delicate balancing act and a selective, institutionalised system of recall and forgetting that recovers the past as a suite of (often technical) benchmarks (twice as many polygons, higher resolution, etc.). In the absence of formalised and systematic collecting, these obsoleted titles run the risk of being forgotten forever once they no longer serve the purpose of demonstrating the comparative advancement of their successors. The Future of Videogaming’s Past Even if we accept the myriad claims of game studies scholars that videogames are worthy of serious interrogation in and of themselves and as part of a multifaceted, transmedial supersystem, we might be tempted to think that the lack of formalised collections, archival resources and readily available ‘old/classic’ titles at retail is of no great significance. After all, as Jones has observed, the videogame player is almost primed to undertake this kind of activity as gaming can, at least partly, be understood as the act and art of collecting. Games such as Animal Crossing make this tendency most manifest by challenging their players to collect objects and artefacts – from natural history through to works of visual art – so as to fill the initially-empty in-game Museum’s cases. While almost all videogames from The Sims to Katamari Damacy can be considered to engage their players in collecting and collection management work to some extent, Animal Crossing is perhaps the most pertinent example of the indivisibility of the gamer/archivist. Moreover, the permeability of the boundary between the fan’s collection of toys, dolls, posters and the other treasured objects of merchandising and the manipulation of inventories, acquisitions and equipment lists that we see in the menus and gameplay imperatives of videogames ensures an extensiveness and scope of fan collecting and archival work. Similarly, the sociality of fan collecting and the value placed on private hoarding, public sharing and the processes of research ‘…bridges to new levels of the game’ (Jones 48). Perhaps we should be as unsurprised that their focus on collecting makes videogames similar to eBay as we are to the realisation that eBay, with its competitiveness, its winning and losing states, and its inexorable countdown timer, is nothing if not a game. We should be mindful, however, of overstating the positive effects of fandom on the fate of old games. Alongside eBay’s veneration of the original object, p2p and bittorrent sites reduce the videogame to its barest. Quite apart from the (il)legality of emulation and videogame ripping and sharing (see Conley et al.), the existence of ‘ROMs’ and the technicalities of their distribution reveal much about the peculiar tension between the interest in old games and their putative cultural and economic value. (St)ripped down to the barest of code, ROMs deny the gamer the paratextuality of the instruction manual or boxart. In fact, divorced from their context and robbed of their materiality, ROMs perhaps serve to make the original game even more distant. More tellingly, ROMs are typically distributed by the thousand in zipped files. And so, in just a few minutes, entire console back-catalogues – every game released in every territory – are available for browsing and playing on a PC or Mac.
The completism of the collections allows detailed scrutiny of differences in Japanese versus European releases, for instance, and can be seen as a vital investigative resource. However, that these ROMs are packaged into collections of many thousands speaks implicitly of these games’ perceived value. In a similar vein, the budget-priced retro re-release collection helps to diminish the value of each constituent game and serves to simultaneously manufacture and highlight the manifestly unfair comparison between these intriguingly retro curios and the legitimately full-priced games of now and next. Customer comments at Amazon.co.uk demonstrate the way in which historical and technological comparisons are now solidly embedded within the popular discourse (see also Newman 2009b). Leaving feedback on Sega’s PS3/Xbox 360 Sega MegaDrive Ultimate Collection, customers berate the publisher for the apparently meagre selection of titles on offer. Interestingly, this charge seems based less on the quality, variety or range of the collection than on jarring technological schisms and a clear sense of these titles being of necessarily and inevitably diminished monetary value. Comments range from outraged consternation, ‘Wtf, only 40 games?’, ‘I wont be getting this as one disc could hold the entire arsenal of consoles and games from commodore to sega saturn(Maybe even Dreamcast’, through to more detailed analyses that draw attention to the number of bits and bytes but that notably neglect any consideration of gameplay, experientiality, cultural significance or, heaven forbid, fun. “Ultimate” Collection? 32Mb of games on a Blu-ray disc?…here are 40 Megadrive games at a total of 31 Megabytes of data. This was taking the Michael on a DVD release for the PS2 (or even on a UMD for the PSP), but for a format that can store 50 Gigabytes of data, it’s an insult. Sega’s entire back catalogue of Megadrive games only comes to around 800 Megabytes - they could fit that several times over on a DVD. The ultimate consequence of these different but complementary attitudes to games that fix attentions on the future and package up decontextualised ROMs by the thousand or even collections of 40 titles on a single disc (selling for less than half the price of one of the original cartridges) is a disregard – perhaps even a disrespect – for ‘old’ games. Indeed, it is this tendency, this dominant discourse of inevitable, natural and unimpeachable obsolescence and supersession, that provided one of the prime motivators for establishing the NVA. As Lowood et al. note in the title of the IGDA Game Preservation SIG’s White Paper, we need to act to preserve and conserve videogames ‘before it’s too late’. References Ashton, D. ‘Digital Gaming Upgrade and Recovery: Enrolling Memories and Technologies as a Strategy for the Future.’ M/C Journal 11.6 (2008). 13 Jun 2009 ‹http://journal.media-culture.org.au/index.php/mcjournal/article/viewArticle/86›. Buffa, C. ‘How to Fix Videogame Journalism.’ GameDaily 20 July 2006. 13 Jun 2009 ‹http://www.gamedaily.com/articles/features/how-to-fix-videogame-journalism/69202/?biz=1›. ———. ‘Opinion: How to Become a Better Videogame Journalist.’ GameDaily 28 July 2006. 13 Jun 2009 ‹http://www.gamedaily.com/articles/features/opinion-how-to-become-a-better-videogame-journalist/69236/?biz=1›. ———. ‘Opinion: The Videogame Review – Problems and Solutions.’ GameDaily 2 Aug. 2006.
13 Jun 2009 ‹http://www.gamedaily.com/articles/features/opinion-the-videogame-review-problems-and-solutions/69257/?biz=1›. ———. ‘Opinion: Why Videogame Journalism Sucks.’ GameDaily 14 July 2006. 13 Jun 2009 ‹http://www.gamedaily.com/articles/features/opinion-why-videogame-journalism-sucks/69180/?biz=1›. Cook, Sarah, Beryl Graham, and Sarah Martin, eds. Curating New Media. Gateshead: BALTIC, 2002. Duguid, Paul. ‘Material Matters: The Past and Futurology of the Book.’ In Geoffrey Nunberg, ed. The Future of the Book. Berkeley, CA: University of California Press, 1996. 63–101. French, Michael. ‘GAME Reveals Pre-Owned Trading Is 18% of Business.’ MCV 22 Apr. 2009. 13 Jun 2009 ‹http://www.mcvuk.com/news/34019/GAME-reveals-pre-owned-trading-is-18-per-cent-of-business›. Giddings, Seth, and Helen Kennedy. ‘Digital Games as New Media.’ In J. Rutter and J. Bryce, eds. Understanding Digital Games. London: Sage. 129–147. Gillen, Kieron. ‘The New Games Journalism.’ Kieron Gillen’s Workblog 2004. 13 June 2009 ‹http://gillen.cream.org/wordpress_html/?page_id=3›. Jones, S. The Meaning of Video Games: Gaming and Textual Strategies. New York: Routledge, 2008. Kerr, A. The Business and Culture of Digital Games. London: Sage, 2006. Lister, Martin, John Dovey, Seth Giddings, Ian Grant and Kevin Kelly. New Media: A Critical Introduction. London and New York: Routledge, 2003. Lowood, Henry, Andrew Armstrong, Devin Monnens, Zach Vowell, Judd Ruggill, Ken McAllister, and Rachel Donahue. Before It's Too Late: A Digital Game Preservation White Paper. IGDA, 2009. 13 June 2009 ‹http://www.igda.org/wiki/images/8/83/IGDA_Game_Preservation_SIG_-_Before_It%27s_Too_Late_-_A_Digital_Game_Preservation_White_Paper.pdf›. Monnens, Devin. ‘Why Are Games Worth Preserving?’ In Before It's Too Late: A Digital Game Preservation White Paper. IGDA, 2009. 13 June 2009 ‹http://www.igda.org/wiki/images/8/83/IGDA_Game_Preservation_SIG_-_Before_It%27s_Too_Late_-_A_Digital_Game_Preservation_White_Paper.pdf›. ———. ‘Losing Digital Game History: Bit by Bit.’ In Before It's Too Late: A Digital Game Preservation White Paper. IGDA, 2009. 13 June 2009 ‹http://www.igda.org/wiki/images/8/83/IGDA_Game_Preservation_SIG_-_Before_It%27s_Too_Late_-_A_Digital_Game_Preservation_White_Paper.pdf›. Newman, J. ‘In Search of the Videogame Player: The Lives of Mario.’ New Media and Society 4.3 (2002): 407-425. ———. ‘On Emulation.’ The National Videogame Archive Research Diary, 2009. 13 June 2009 ‹http://www.nationalvideogamearchive.org/index.php/2009/04/on-emulation/›. ———. ‘Our Cultural Heritage – Available by the Bucketload.’ The National Videogame Archive Research Diary, 2009. 10 Apr. 2009 ‹http://www.nationalvideogamearchive.org/index.php/2009/04/our-cultural-heritage-available-by-the-bucketload/›. ———. Playing with Videogames. London: Routledge, 2008. ———, and I. Simons. 100 Videogames. London: BFI Publishing, 2007. Nutt, C. ‘He Is 8-Bit: Capcom's Hironobu Takeshita Speaks.’ Gamasutra 2008. 13 June 2009 ‹http://www.gamasutra.com/view/feature/3752/›. Radd, D. ‘Gaming 3.0. Sony’s Phil Harrison Explains the PS3 Virtual Community, Home.’ Business Week 9 Mar. 2007. 13 June 2009 ‹http://www.businessweek.com/innovate/content/mar2007/id20070309_764852.htm?chan=innovation_game+room_top+stories›. Ruggill, Judd, and Ken McAllister. ‘What If We Do Nothing?’ Before It's Too Late: A Digital Game Preservation White Paper. IGDA, 2009. 13 June 2009.
‹http://www.igda.org/wiki/images/8/83/IGDA_Game_Preservation_SIG_-_Before_It%27s_Too_Late_-_A_Digital_Game_Preservation_White_Paper.pdf›. 16-19.
APA, Harvard, Vancouver, ISO, and other styles
46

Glitsos, Laura. "From Rivers to Confetti: Reconfigurations of Time through New Media Narratives." M/C Journal 22, no. 6 (December 4, 2019). http://dx.doi.org/10.5204/mcj.1584.

Full text
Abstract:
Introduction In the contemporary West, experiences of time are shaped by—and inextricably linked to—the nature of media production and consumption. In Derrida and Stiegler’s estimation, teletechnologies bring time “into play” and thus produce time as an “artifact”, that is, a knowable product (3). How and why time becomes “artifactually” produced, according to these thinkers, is a result of the various properties of media production; media ensure that “gestures” (which can be understood here as the cultural moments marked as significant in some way, especially public ones) are registered. Being so, time is constrained, “formatted, initialised” by the matrix of the media system (3). Subsequently, because the media apparatus undergirds the Western imaginary, so too, the media apparatus undergirds the Western concept of time. We can say, in the radically changing global mediascape then, digital culture performs and generates ontological shifts that rewrite the relationship between media, time, and experience. This point lends itself to the significance of the role of both new media platforms and new media texts in reconfiguring understandings between past, present, and future timescapes. There are various ways in which new media texts and platforms work upon experiences of time. In the following, I will focus on just one of these ways: narrativity. By examining a ‘new media’ text, I elucidate how new media narratives imagine timescapes that are constructed through metaphors of ‘confetti’ or ‘snow’, as opposed to more traditional lineal metaphors like ‘rivers’ or ‘streams’ (see Augustine Sedgewick’s “Against Flows” for more critical thinking on the relationship between history, narrative, and the ‘flows’ metaphor). I focus on the revisioning of narrative structure in the Netflix series The Haunting of Hill House (2018) from its original form in the 1959 novel by Shirley Jackson. The narrative revisioning from the novel to the televisual both demonstrates and manifests emergent conceptualisations of time through the creative play of temporal multi-flows, which are contemporaneous yet fragmented. The first consideration is the shift in textual format. However, the translocation of the narrative from a novel to a televisual text is important, but not the focus here. Added to this, I deliberately move toward a “general narrative analysis” (Cobley 28), which has the advantage of focusing on mechanisms which may be integral to linguistically or visually-based genres without becoming embroiled in parochial questions to do with the ‘effectiveness’ of given modes, or the relative ‘value’ of different genres. This also allows narrative analysis to track the development of a specified process as well as its embodiment in a range of generic and technological forms. (Cobley 28) It should also be noted from the outset that I am not suggesting that fragmented narrative constructions and representations were never imagined or explored prior to this new media age. Quite the contrary, if we think of Modernist writers such as Virginia Woolf (Lodwick; Haggland). Rather, it is to claim that this abstraction is emerging in the mainstream entertainment media in greater contest with the dominant and more historically entrenched version of ‘time as a construct’ that is characterised through Realist narratology as linear and flowing only one way. As I will explore below, the reasons for this are largely related to shifts in everyday media consumption brought about by digital culture.
There are two reasons why I specifically utilise Netflix’s series The Haunting of Hill House as a fulcrum from which to lever arguments about new media and the contemporary experience of time. First, as a web series, it embodies some of the pertinent conventions of the digital media landscape, both diegetically and also through practices of production and consumption by way of new time-shifting paradigms (see Leaver). I focus on the former in this article, but the latter is fruitful ground for critical consideration. For example, Netflix itself, as a platform, has somewhat destabilised normative temporal routines, such as in the case of ‘binge-watching’ where audiences ‘lose’ time similarly to gamblers in the casino space. Second, the fact that there are two iterations of the same story—one a novel and one a televisual text—provides us with a comparative benchmark from which to make further assertions about the changing nature of media and time from the mid-century to a post-millennium digital mediascape. It should be noted, though, that my discussion will focus on the nature and quality of the contemporary framework, and I use the 1959 novel as a frame of reference only rather than examining its rich tapestry in its own right (for critique on the novel itself, see Wilson; see Roberts). Media and the Production of Time-Sense There is a remarkable canon of literature detailing the relationship between media and the production of time, which can help us place this discussion in a theoretical framework. I am limited by space, but I will engage with some of the most pertinent material to set out a conceptual map. Markedly, from here, I refer to the Western experience of time as a “time-sense”, following E.P. Thompson’s work (80). Following Thompson’s language, I use the term “time-sense” to refer to “our inward notation of time”, characterised by the rhythms of our “technological conditioning” systems, whether those be the forces of labour, media, or otherwise (80). Through the textual analysis of Hill House to follow, I will offer ways in which the technological conditioning of the new media system both constructs and shapes time-sense in terms related to a constellation of moments, or, to use a metaphor from the Netflix series itself, like “confetti” or “snow” (“Silence Lay Steadily”). However, in discussing the production of time-sense through new media mechanisms, note that time-sense is not an abstraction but is still linked to our understandings of the literal nature of time-space. For example, Alvin Toffler explains that, in its most simple construction, “Time can be conceived as the intervals during which events occur” (21). However, we must be reminded that events must first occur within the paradigm of experience. That is to say that matters of ‘duration’ cannot be unhinged from the experiential or phenomenological accounts of those durations, or in Toffler’s words, in an echo of Thompson, “Man’s [sic] perception of time is closely linked with his internal rhythms” (71). In the 1970s, Toffler commented upon the radical expansion of global systems of communications that produces the “twin forces of acceleration and transience”, which “alter the texture of existence, hammering our lives and psyches into new and unfamiliar shapes” (18). This simultaneous ‘speeding up’ (which he calls acceleration) and sense of ‘skipping’ (which he calls transience) manifest in a range of modern experiences which disrupt temporal contingencies.
Nearly two decades after Toffler, David Harvey commented upon the Postmodern’s “total acceptance of ephemerality, fragmentation, discontinuity, and the chaotic” (44). Only a decade ago, Terry Smith emphasised that time-sense had become even more characterised by the “insistent presentness of multiple, often incompatible temporalities” (196). Netflix did not even launch in Australia and New Zealand until 2015, and a host of other time-shifting media technologies have emerged in the past five years. As a result, it behooves us to re-evaluate time-sense with this emergent field of production. That being said, entertainment media have always impressed themselves upon our understanding of temporal flows. Since the dawn of cinema in the late 19th century, entertainment media have been pivotal in constructing, manifesting, and illustrating time-sense. This has largely (but not exclusively) been in relation to the changing nature of narratology and the ways that narrative produces a sense of temporality. Helen Powell points out that the very earliest cinema, such as the Lumière Brothers’ short films screened in Paris, did not embed narrative; rather, “the Lumières’ actualities captured life as it happened with all its contingencies” (2). It is really only with the emergence of classical mainstream Hollywood that narrative became central, and with it new representations of “temporal flow” (2). Powell tells us that “the classical Hollywood narrative embodies a specific representation of temporal flow, rational and linear in its construction”, reflecting “the standardised view of time introduced by the onset of industrialisation” (Powell 2). Of course, as media production and trends change, so does narrative structure. By the late 20th century, new approaches to narrative structure manifested in tropes such as ‘the puzzle film’, which “play with audiences’ expectations of conventional roles and storytelling through the use of the unreliable narrator and the fracturing of linearity. In doing so, they open up wider questions of belief, truth and reliability” (Powell 4). Puzzle films which might be familiar to the reader are Memento (2001) and Run Lola Run (1999), each playing with the relationship between time and memory, and thus experiences of contemporaneity. The issue of narrative in the construction of temporal flow is therefore critically linked to the ways that the mediatic production of narrative reorganises time-sense more broadly. To examine this more closely, I now turn to Netflix’s The Haunting of Hill House. Narratology and Temporal Flow Netflix’s revision of The Haunting of Hill House reveals critical insights into the ways in which media manifest the nature and quality of time-sense. Of course, the main difference between the 1959 novel and the Netflix web series is the change of textual format from a print text to a televisual text distributed on an Internet streaming platform. This change performs what Marie-Laure Ryan calls “transfictionality across media” (385). There are several models through which transfictionality might occur and thus transmogrify the textual and narratival parameters of a text. In the case of The Haunting of Hill House, the Netflix series follows the “displacement” model, which means it “constructs essentially different versions of the protoworld, redesigning its structure and reinventing its story” (Doležel 206).
For example, in the 2018 television remake, the protoworld from the original novel retains integrity in that it conveys the story of a group of people who are brought to a mansion called Hill House. In both versions of the protoworld, the discombobulating effects of the mansion work upon the group dynamics until a final breakdown reveals the supernatural nature of the house. However, in ‘displacing’ the original narrative for adaptation to the web series, the nature of the group is radically reshaped (from a research contingent to a nuclear family unit) and the events follow radically different temporal contingencies. More specifically, the original 1959 novel utilises third-person limited narration and follows a conventional linear temporal flow through which events occur in chronological order. This style of storytelling is often thought about in metaphorical terms by way of ‘rivers’ or ‘streams’, that is, flowing one way and never repeating the same configuration (very much unlike the televisual text, in which some scenes are repeated to punctuate various time-streams). Sean Cubitt has examined the relationship between this conventional narrative structure and time sensibility, stating that the chronological narrative proposes to us a protagonist who always occupies a perpetual present … as a point moving along a line whose dimensions have however already been mapped: the protagonist of the chronological narrative is caught in a story whose beginning and end have already been determined, and which therefore constructs story time as the unfolding of destiny rather than the passage from past certainty into an uncertain future. (4) I would map Cubitt’s characterisation onto the original Hill House novel as representative of a mid-century textual artifact. Although Modernist literature (by way of Joyce, Woolf, Eliot, and so forth) certainly ‘played’ with non-linear or multi-linear narrative structures, in relation to time-sense, Christina Chau reminds us that Modernity, as a general mood, was very much still caught up in the idea of “time that moves in a linear fashion with the future moving through the present and into the past” (26). Additionally, even though flashbacks are utilised in the original novel, they are revealed using the narrative convention of ‘memories’ through the inner dialogue of the central character, thus still occurring in the ‘present’ of the novel’s timescape and still in keeping with a ‘one-way’ trajectory. Most importantly, the original novel follows what I will call one ‘time-stream’, in that events unfold, and are conveyed through, one temporal flow. In the Netflix series, there are obvious (and even cardinal) changes which reorganise the entire cast of characters as well as the narrative structure. In fact, the very process of returning to the original novel in order to produce a televisual remake says something about the nature of time-sense in itself, which is further complicated by the recognition of Netflix as a ‘streaming service’. That is, Netflix encapsulates this notion of ‘rivers-on-demand’ which overlap with each other in the context of the contemporaneous and persistent ‘now’ of digital culture. Marie-Laure Ryan suggests that “the proliferation of rewrites … is easily explained by the sense of pastness that pervades Postmodern culture and by the fixation of contemporary thought with the textual nature of reality” (386).
While the Netflix series remains loyal to the mood and basic premise (i.e., that there is a haunted house in which characters endure strange happenings and enter into psycho-drama), the series instead uses a fractured narrative convention through which three time-streams are simultaneously at work (although one time-stream is embedded in another, and its significance is therefore ‘hidden’ from the viewer until the final episode), which we will examine now.

The Time-Streams of Hill House

In the Netflix series, the central time-stream is, at first, ostensibly located in the characters’ ‘present’. I will call this time-stream A. (As a note to the reader here, there are spoilers for those who have not watched the Netflix series.) The viewer assumes they are, from the very first scene, following the ‘present’ time-stream in which the characters are adults. This is the time-stream in which the series opens; however, only for the first minute of viewing. After around one minute of viewing time, we already enter into a second time-stream. Even though both the original novel and the TV series begin with the same dialogue, the original novel continues to follow one time-stream, while the TV series begins to play with contemporaneous action by manifesting a second time-stream (following a series of events from the characters’ past) running in parallel action to the first. This narrative revisioning resonates with Toffler’s estimation of the shifting nature of time-sense in the later twentieth century, in which he writes that

indeed, not only do contemporary events radiate instantaneously—now we can be said to be feeling the impact of all past events in a new way. For the past is doubling back on us. We are caught in what might be called a ‘time skip’. (16)

In its ‘displacement’ model, the Hill House televisual remake points to this ongoing fascination with, and re-actualisation of, the exaggerated temporal discrepancies in the experience of contemporary everyday life. The Netflix Hill House series constructs a dimensional timescape in which the timeline ‘skips’ back and forth (not only for the viewer but also the characters), and certain spaces (such as the Red Room) are only permeable to some characters at certain times.

If we think about Toffler’s words here—a doubling back, or a time-skip—we might be pulled toward ever more recent incarnations of this effect. In Helen Powell’s investigation of the relationship between narrative and time-sense, she insists that “new media’s temporalities offer up the potential to challenge the chronological mode of temporal experience” (152). Sean Cubitt proposes that with the intensification of new media “we enter a certain, as yet inchoate, mode of time. For all the boasts of instantaneity, our actual relations with one another are mediated and as such subject to delays: slow downloads, periodic crashes, cache clearances and software uploads” (10). As a result, we have myriad temporal contingencies running at any one time—some slow, frustrating, mundane, in ‘real-time’, and others rapid to the point of instantaneity, or even able to pull the past into the present (through the endless trove of archived media on the web) and again into other mediatic dimensions such as virtual reality. To wit, Powell writes that “narrative, in mirroring these new temporal relations must embody fragmentation, discontinuity and incomplete resolution” (153).
Fragmentation, discontinuity, and incompleteness are appropriate ways to think through Hill House’s narrative revision and the ways in which it manifests some of these time-sensibilities.

The notion of a ‘time-skip’ is an appropriate way to describe the transitions between the three temporal flows occurring simultaneously in the Hill House televisual remake. Before being comfortably seated in any one time-stream, the viewer is translocated into a second time-stream that runs parallel to it (almost suggesting a kind of parallel dimension). So, we begin with the characters as adults and then, almost immediately, we are also watching them as children with the rapid emergence of this second time-stream. This second time-stream conveys the events of ‘the past’ in which the central characters are children, so I will call it time-stream B. While time-stream B conveys the scenes in which the characters are children, those scenes are not necessarily in chronological order.

The third time-stream is the spectral stream, or time-stream C. The viewer, however, is not fully aware that a totally separate time-stream is at play (the audience is made to think that this time-stream is the product of mere ghost-sightings) until the final episode, which completes the narrative ‘puzzle’. That is, the third time-stream conveys the events which are occurring simultaneously in both of the two other time-streams. In a sense, time-stream C, the spectral stream, is used to collapse the ontological boundaries of the former two time-streams. Throughout the early episodes, time-stream C weaves in and out of time-streams A and B, like an intrusive time-stream (intruding upon the two others until it manifests on its own in the final episode). Time-stream C is used to create a ‘puzzle’ for the viewer in that the viewer does not fully understand its total significance until the puzzle is completed in the final episode. This convention, too, says something about the nature of time-sense as it shifts and mutates with mediatic production. It echoes back to Powell’s discussion of the ‘puzzle’ trend, which, as I note earlier, plays with “audiences’ expectations of conventional roles and storytelling through the use of the unreliable narrator and the fracturing of linearity” and serves to “open up wider questions of belief, truth and reliability” (4). Similarly, the skipping between three time-streams to build the Hill House puzzle manifests the ever-complicating relationships of time-management in everyday life, in which pasts, presents, and futures impinge upon and interfere with one another.

Critically, in terms of plot, time-stream B (in which the characters are little children) opens with the character Nell as a small child of five or six years of age. She appears to have woken up from a nightmare about The Bent Neck Lady. This vision traumatises Nell, and she is duly comforted in this scene by the characters of the eldest son and the father. This provides crucial exposition for the viewer: we are told that these ‘visitations’ from The Bent Neck Lady are a recurring trauma for the child-Nell character. It is important to note that, while these scenes may be mistaken for simple memory flashbacks, it becomes clearer throughout the series that this time-stream is not tied to any one character’s memory but is a separate storyline, though critical to the functioning of the other two.
Moreover, The Bent Neck Lady recurs as both (apparent) nightmares and waking visions throughout the course of Nell’s life. It is in Episode Five that we realise why.

The reason why The Bent Neck Lady always appears to Nell is that she is Nell. We learn this at the end of Episode Five, when the storyline finally conveys how Nell dies in the house: by hanging from a noose tied to the mezzanine in the Hill House foyer. As Nell drops from the mezzanine attached to this noose, her neck snaps—she is The Bent Neck Lady. However, Nell does not just drop to the end of the noose. She continues to drop five more times back into the other two time-streams. Each time Nell drops, she drops into a different moment in time (and each time the neck-snapping is emphasised). On the first drop, she appears to herself in a basement. On the second, she appears to herself on the road outside the car while she is with her brother. The third occurs during what we have been told is a kind of sleep paralysis. On the fourth and fifth drops, she appears to herself as the small child on two separate occasions—both of which we witness with her in the first episode. So not only is Nell journeying through time, the audience is too. The viewer follows Nell’s journey through her ‘time-skip’. The result of the staggered but now conjoined time-streams is that we come to realise that Nell is, in fact, haunting herself—and the audience now understands they have followed this throughout not as a ghost-sighting but as a ‘future’ time-stream impinging on another.

In the final episode of season one, the siblings are confronted by Ghost-Nell in the Red Room. This is important because it is in the Red Room that all time-streams coalesce. The Red Room exists dimensionally, cutting across disparate spaces and times—it is the spatial representation of the spectral time-stream C. It is in this final episode, and in this spectral dimension, that all three time-streams collapse upon each other and complete the narrative ‘puzzle’ for the viewer. The temporal flow of the spectral dimension, time-stream C, interrupts and interferes with the temporal flow of the former two—for both the characters in the text and the viewing audience.

The collapse of time-streams is produced through a strategic dialogic structure. When Ghost-Nell appears to the siblings in the Red Room, her first line of dialogue is a non sequitur. Luke emerges from his near-death experience and points to Nell, who replies: “I feel a little clearer just now. We have. All of us have” (“Silence Lay Steadily”). Nell’s dialogue continues but, eventually, she returns to the same statement, almost as if she is running through a cyclic piece of text. She states again, “We have. All of us have.” This time around, however, the phrase is preceded by Shirley’s claim that she feels as though she has been in the Red Room before. Nell’s dialogue and the dialogue of the other characters suddenly align in synchronicity. The audience now understands that Nell’s very first statement, “We have. All of us have”, was actually a response to a statement that Shirley had not yet made. This narrative convention emphasises the ‘confetti-like’ nature of the construction of time here. Confetti is, after all, paper that has been cut into pieces, thrown into the air, and left to fall out of order. Similarly, the narrative makes sense as a whole but feels cut into pieces and realigned, if only momentarily.
When Nell then loops back through the same dialogue, it finally appears in sync and thus makes sense. This signifies that the time-streams are now merged.

The Ghost of Nell has travelled through (and in and out of) each separate time-stream. As a result, Ghost-Nell understands the nature of the Red Room—it manifests a slippage of timespace that each of the siblings had entered during their stay at the Hill House mansion. It is with this realisation that Ghost-Nell explains:

Everything’s been out of order. Time, I mean. I thought for so long that time was like a line, that ... our moments were laid out like dominoes, and that they ... fell, one into another and on it went, just days tipping, one into the next, into the next, in a long line between the beginning ... and the end. But I was wrong. It’s not like that at all. Our moments fall around us like rain. Or ... snow. Or confetti. (“Silence Lay Steadily”)

This brings me to the titular concern: the emerging abstraction of time as a mode of layering and fracturing, a mode performed through this analogy of ‘confetti’ or ‘snow’. The Netflix Hill House revision rearranges time constructs so that any one moment of time may be accessed, much like scrolling back and forth (and in and out) through social media feeds, Internet forums, virtual reality programs, and so forth. Each moment, like a flake of ‘snow’ or ‘confetti’, litters the timespace matrix, making an infinite tapestry that exists dimensionally. In the Hill House narrative, all moments exist simultaneously, and accessing each moment at any point in the time-stream is merely a process of perception.

Conclusion

Netflix is optimised as a ‘streaming platform’ which has all but ushered in the era of ‘time-shifting’ predicated on geospatial politics (see Leaver). The current media landscape offers instantaneity and contemporaneity, as well as an arbitrary boundedness on the basis of geopolitics, which Tama Leaver refers to as the “tyranny of digital distance”. It is therefore fitting that Netflix’s revision of the Hill House narrative is preoccupied with time as well as spectrality. Above, I have explored just some of the ways that the televisual remake plays with notions of time through a diegetic analysis.

However, we should take note that even in its production and consumption, this series, to quote Graham Meikle and Sherman Young, is embedded within “the current phase of television [that] suggests contested continuities” (67). Powell problematises the time-sense of this media apparatus further by reminding us that “there are three layers of temporality contained within any film image: the time of registration (production); the time of narration (storytelling); and the time of its consumption (viewing)” (3-4). Each of these aspects produces what Althusser and Balibar have called a “peculiar time”, that is, “different levels of the whole as developing ‘in the same historical time’ … relatively autonomous and hence relatively independent, even in its dependence, of the ‘times’ of the other levels” (99). When we think of the layers upon layers of different time ‘signatures’ which converge in Hill House as a textual artifact—in its production, consumption, distribution, and diegesis—the nature of contemporary time reveals itself as complex but also fleeting—hard to hold onto—much like snow or confetti.

References

Althusser, Louis, and Étienne Balibar. Reading Capital. London: NLB, 1970.

Cobley, Paul. Narrative. Hoboken: Taylor and Francis, 2013.

Cubitt, Sean. “Spreadsheets, Sitemaps and Search Engines.” New Screen Media: Cinema/Art/Narrative. Eds. Martin Rieser and Andrea Zapp. London: BFI, 2002. 3-13.
Derrida, Jacques, and Bernard Stiegler. Echographies of Television: Filmed Interviews. Massachusetts: Polity Press, 2002.

Doležel, Lubomir. Heterocosmica: Fiction and Possible Worlds. Baltimore: Johns Hopkins UP, 1999.

Hägglund, Martin. Dying for Time: Proust, Woolf, Nabokov. Cambridge: Harvard UP, 2012.

Hartley, Lodwick. “Of Time and Mrs. Woolf.” The Sewanee Review 47.2 (1939): 235-241.

Harvey, David. The Condition of Postmodernity: An Enquiry into the Origins of Cultural Change. Oxford: Blackwell, 1989.

Jackson, Shirley. The Haunting of Hill House. New York: Viking, 1959.

Leaver, Tama. “Watching Battlestar Galactica in Australia and the Tyranny of Digital Distance.” Media International Australia 126 (2008): 145-154.

Meikle, Graham, and Sherman Young. “Beyond Broadcasting? TV for the Twenty-First Century.” Media International Australia 126 (2008): 67-70.

Powell, Helen. Stop the Clocks! Time and Narrative in Cinema. London: I.B. Tauris, 2012.

Roberts, Brittany. “Helping Eleanor Come Home: A Reassessment of Shirley Jackson’s The Haunting of Hill House.” The Irish Journal of Gothic and Horror Studies 16 (2017): 67-93.

Ryan, Marie-Laure. “Transfictionality across Media.” Theorizing Narrativity. Eds. John Pier and José Ángel García Landa. Berlin: Walter de Gruyter, 2008. 385-418.

Smith, Terry. What Is Contemporary Art? Chicago: U of Chicago P, 2009.

The Haunting of Hill House. Mike Flanagan. Amblin Entertainment, 2018.

Thompson, E.P. “Time, Work-Discipline, and Industrial Capitalism.” Past and Present 38.1 (1967): 56-97.

Toffler, Alvin. Future Shock. New York: Bantam Books, 1971.

Wilson, Michael T. “‘Absolute Reality’ and the Role of the Ineffable in Shirley Jackson’s The Haunting of Hill House.” Journal of Popular Culture 48.1 (2015): 114-123.