Follow this link to see other types of publications on the topic: Database management. Computational grids (Computer systems).

Journal articles on the topic "Database management. Computational grids (Computer systems)"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Database management. Computational grids (Computer systems)".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Jie, Wei, Tianyi Zang, Terence Hung, Stephen J. Turner, and Wentong Cai. "Information Management for Computational Grids". International Journal of Web Services Research 2, no. 3 (July 2005): 69–82. http://dx.doi.org/10.4018/jwsr.2005070103.

2

Zhang, Mingyi, Patrick Martin, Wendy Powley, and Jianjun Chen. "Workload Management in Database Management Systems: A Taxonomy". IEEE Transactions on Knowledge and Data Engineering 30, no. 7 (July 1, 2018): 1386–402. http://dx.doi.org/10.1109/tkde.2017.2767044.

3

Arias-Londoño, Andrés, Oscar Danilo Montoya, and Luis Fernando Grisales-Noreña. "A Chronological Literature Review of Electric Vehicle Interactions with Power Distribution Systems". Energies 13, no. 11 (June 11, 2020): 3016. http://dx.doi.org/10.3390/en13113016.

Abstract
In the last decade, the deployment of electric vehicles (EVs) has been largely promoted. This development has increased challenges in power systems in the context of planning and operation due to the massive recharging demand of EVs. Furthermore, EVs may also offer new opportunities and can be used to support the grid to provide auxiliary services. In this regard, and considering the research around EVs and power grids, this paper presents a chronological background review of EVs and their interactions with power systems, particularly electric distribution networks, considering publications from the IEEE Xplore database. The review covers the period from 1973 to 2019 and is developed via systematic classification using key categories that describe the types of interactions between EVs and power grids. These interactions are in the framework of power quality, study of scenarios, electricity markets, demand response, demand management, power system stability, the Vehicle-to-Grid (V2G) concept, and optimal location of battery swap and charging stations.
4

Wei, Guiyi, Yun Ling, Athanasios V. Vasilakos, Bin Xiao, and Yao Zheng. "PIVOT: An adaptive information discovery framework for computational grids". Information Sciences 180, no. 23 (December 2010): 4543–56. http://dx.doi.org/10.1016/j.ins.2010.07.022.

5

Puustjärvi, Juha. "Distributed management of transactions in heterogeneous distributed database systems". BIT 31, no. 3 (September 1991): 406–20. http://dx.doi.org/10.1007/bf01933259.

6

Sakauchi, Masao, and Yutaka Ohsawa. "Pattern Data representation and management in image database systems". Systems and Computers in Japan 17, no. 1 (1986): 83–91. http://dx.doi.org/10.1002/scj.4690170110.

7

Sarhan, Amany, Ahmed I. Saleh, and Amr M. Hamed. "A reliable-adaptive scheduler for computational grids with failure recovery and rescheduling mechanisms". International Journal of Grid and Utility Computing 2, no. 1 (2011): 59. http://dx.doi.org/10.1504/ijguc.2011.039981.

8

Hegde, Sujay N., H. K. Krishnappa, M. A. Rajan, and Srinivas D B. "An efficient greedy task scheduling algorithm for heterogeneous inter-dependent tasks on computational grids". International Journal of Grid and Utility Computing 1, no. 1 (2020): 1. http://dx.doi.org/10.1504/ijguc.2020.10026377.

9

Srinivas, D. B., Sujay N. Hegde, M. A. Rajan, and H. K. Krishnappa. "An efficient greedy task scheduling algorithm for heterogeneous inter-dependent tasks on computational grids". International Journal of Grid and Utility Computing 11, no. 5 (2020): 587. http://dx.doi.org/10.1504/ijguc.2020.110059.

10

Ni, Jiacai, Guoliang Li, Lijun Wang, Jianhua Feng, Jun Zhang, and Lei Li. "Adaptive Database Schema Design for Multi-Tenant Data Management". IEEE Transactions on Knowledge and Data Engineering 26, no. 9 (September 2014): 2079–93. http://dx.doi.org/10.1109/tkde.2013.94.

11

Madduri, H., S. S. B. Shi, R. Baker, N. Ayachitula, L. Shwartz, M. Surendra, C. Corley, M. Benantar, and S. Patel. "A configuration management database architecture in support of IBM Service Management". IBM Systems Journal 46, no. 3 (2007): 441–57. http://dx.doi.org/10.1147/sj.463.0441.

12

Wang, Yunsen, and Alexander Kogan. "Cloud-Based In-Memory Columnar Database Architecture for Continuous Audit Analytics". Journal of Information Systems 34, no. 2 (August 2, 2019): 87–107. http://dx.doi.org/10.2308/isys-52531.

Abstract
This study introduces a database architecture that manages data in main physical memory using a columnar format. It proposes a conceptual framework for applying in-memory columnar database systems to support high-speed continuous audit analytics. To evaluate the proposed framework, this study develops a prototype and conducts simulation tests. The test results show the high computational efficiency and effectiveness of the in-memory columnar database relative to the conventional ERP system. Furthermore, the deployment of the in-memory columnar database to the cloud shows the promise of scaling up the in-memory columnar database for continuous audit analytics.
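The performance argument rests on a columnar layout keeping each attribute contiguous, so an audit-style aggregate scans one column instead of whole records. A minimal pure-Python sketch of that contrast (not the paper's prototype; the data and sizes are invented):

```python
# Illustrative only: row-oriented vs. column-oriented layout for one aggregate.
import random
import time

N = 1_000_000
# Row store: one dict per record.
rows = [{"id": i, "amount": random.random(), "dept": i % 10} for i in range(N)]
# Column store: one list per attribute.
cols = {"amount": [r["amount"] for r in rows]}

t0 = time.perf_counter()
total_rows = sum(r["amount"] for r in rows)   # touches every field of every record
t1 = time.perf_counter()
total_cols = sum(cols["amount"])              # scans a single contiguous column
t2 = time.perf_counter()

print(f"row scan: {t1 - t0:.3f}s, column scan: {t2 - t1:.3f}s")
assert total_rows == total_cols
```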
13

Hara, Takahiro, Kaname Harumoto, Masahiko Tsukamoto, and Shojiro Nishio. "Location management for database migration in ATM networks". Systems and Computers in Japan 28, no. 9 (August 1997): 35–45. http://dx.doi.org/10.1002/(sici)1520-684x(199708)28:9<35::aid-scj5>3.0.co;2-l.

14

Lee, Younho, Heeyoul Kim, Yongsu Park, and Hyunsoo Yoon. "An efficient delegation protocol with delegation traceability in the X.509 proxy certificate environment for computational grids". Information Sciences 178, no. 14 (July 2008): 2968–82. http://dx.doi.org/10.1016/j.ins.2008.03.010.

15

Rahman, Ashiqur Md, and Rashedur M. Rahman. "CAPM Indexed Hybrid E-Negotiation for Resource Allocation in Grid Computing". International Journal of Grid and High Performance Computing 5, no. 2 (April 2013): 72–91. http://dx.doi.org/10.4018/jghpc.2013040105.

Abstract
Computational Grids are a promising platform for executing large-scale resource-intensive applications. This paper identifies challenges in managing resources in a Grid computing environment and proposes computational economy as a metaphor for effective management of resources and application scheduling. It identifies distributed resource management challenges and requirements of economy-based Grid systems, and proposes an economy-based negotiation system protocol for cooperative and competitive trading of resources. Dynamic pricing for services and a good level of Pareto optimality make auctions more attractive for resource allocation than other economic models. In a complex Grid environment, the communication demand can become a bottleneck; that is, a number of messages need to be exchanged for matching suitable service providers and consumers. The Fuzzy Trust integrated hybrid Capital Asset Pricing Model (CAPM) shows higher user-centric satisfaction and provides the equilibrium relationship between the expected return and risk on investments. This paper also presents an analysis of the communication requirements and the necessity of the CAPMAuction in a Grid environment.
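For reference, the CAPM equilibrium relationship the abstract invokes, in its standard textbook form (how the paper integrates it with fuzzy trust is not reproduced here):

```latex
\[
  \mathbb{E}[R_i] = R_f + \beta_i \left( \mathbb{E}[R_m] - R_f \right),
  \qquad
  \beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)}
\]
```

Here $R_i$ is the return on resource $i$, $R_m$ the market return, and $R_f$ the risk-free rate.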
16

Nakashima, Yusei, Noriaki Daito, and Satoru Fujita. "Integrated expert system with object-oriented database management system". Systems and Computers in Japan 23, no. 11 (1992): 29–40. http://dx.doi.org/10.1002/scj.4690231103.

17

Lehotay-Kéry, Péter, Tamás Tarczali, and Attila Kiss. "P System–Based Clustering Methods Using NoSQL Databases". Computation 9, no. 10 (September 24, 2021): 102. http://dx.doi.org/10.3390/computation9100102.

Abstract
Models of computation are fundamental notions in computer science; consequently, they have been the subject of countless research papers, with numerous novel models proposed even in recent years. Amongst a multitude of different approaches, many of these methods draw inspiration from the biological processes observed in nature. P systems, or membrane systems, make an analogy between the communication in computing and the flow of information that can be perceived in living organisms. These systems serve as a basis for various concepts, ranging from the fields of computational economics and robotics to the techniques of data clustering. In this paper, such a utilization of these systems, membrane system–based clustering, is taken into focus. Considering the growing amount of data stored worldwide, more and more data have to be handled by clustering algorithms too. To address this, bringing these methods closer to the data, their main element, provides several benefits. Database systems equip their users with, for instance, well-integrated security features and more direct control over the data itself. Our goal is the following: if the type of database management system is given, e.g., NoSQL, but the corporation or research team can choose which specific database management system to use, we give a perspective on how algorithms written in this way behave in such an environment, so that a better-founded decision can be made about which database management system should be connected to the system. For this purpose, we explore the possibilities of a clustering algorithm based on P systems when used alongside NoSQL database systems that are designed to manage big data. Variants over two competing databases, MongoDB and Redis, are evaluated and compared to identify the advantages and limitations of using such a solution in these systems.
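To make the "clustering next to the data" idea concrete, here is a hedged sketch of persisting cluster assignments to the two stores the paper evaluates; the connection details, key names, and the stand-in label list are our assumptions, not the authors' setup:

```python
# The P system clustering itself is stubbed out as a plain label list.
from pymongo import MongoClient
import redis

labels = [0, 1, 0, 2, 1]  # stand-in for membrane-system clustering output

# MongoDB: one document per data point.
mongo = MongoClient("mongodb://localhost:27017")
mongo.clusters.assignments.insert_many(
    [{"point_id": i, "cluster": c} for i, c in enumerate(labels)]
)

# Redis: one key per data point.
r = redis.Redis(host="localhost", port=6379)
for i, c in enumerate(labels):
    r.set(f"cluster:{i}", c)
```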
18

Dadam, P., and V. Linnemann. "Advanced Information Management (AIM): Advanced database technology for integrated applications". IBM Systems Journal 28, no. 4 (1989): 661–81. http://dx.doi.org/10.1147/sj.284.0661.

19

de Almeida Cunha, Jéssica Gabriela, Vinícius Loti de Lima, and Thiago Alves de Queiroz. "Grids for cutting and packing problems: a study in the 2D knapsack problem". 4OR 18, no. 3 (September 24, 2019): 293–339. http://dx.doi.org/10.1007/s10288-019-00419-9.

20

Telford, R., R. Horman, S. Lightstone, N. Markov, S. O'Connell, and G. Lohman. "Usability and design considerations for an autonomic relational database management system". IBM Systems Journal 42, no. 4 (2003): 568–81. http://dx.doi.org/10.1147/sj.424.0568.

21

Koukoutsis, Elias, Constantin Papaodysseus, George Tsavdaridis, Nikolaos V. Karadimas, Athanasios Ballis, Eirini Mamatsi, and Athanasios Rafail Mamatsis. "Design Limitations, Errors and Hazards in Creating Decision Support Platforms with Large- and Very Large-Scale Data and Program Cores". Algorithms 13, no. 12 (December 14, 2020): 341. http://dx.doi.org/10.3390/a13120341.

Abstract
Recently, very large-scale decision support systems (DSSs) have been developed, which tackle very complex problems, associated with very extensive and polymorphic information, which probably is geographically highly dispersed. The management, updating, modification and upgrading of the data and program core of such an information system is, as a rule, a very difficult task, which encompasses many hazards and risks. The purpose of the present work was (a) to list the more significant of these hazards and risks and (b) to introduce a new general methodology for designing decision support (DS) systems that are robust and circumvent these risks. The core of this new approach was the introduction of a meta-database, called teleological, on the basis of which management, updating, modification, reduction, growth and upgrading of the system may be safely and efficiently achieved. The very same teleological meta-database can be used for the construction of a sound decision support system, incorporating elements of a previous one at a future stage.
22

Zoun, Roman, Kay Schallert, David Broneske, Ivayla Trifonova, Xiao Chen, Robert Heyer, Dirk Benndorf, and Gunter Saake. "An Investigation of Alternatives to Transform Protein Sequence Databases to a Columnar Index Schema". Algorithms 14, no. 2 (February 11, 2021): 59. http://dx.doi.org/10.3390/a14020059.

Abstract
Mass spectrometers enable identifying proteins in biological samples, leading to biomarkers for biological process parameters and diseases. However, bioinformatic evaluation of the mass spectrometer data needs a standardized workflow and a system that stores the protein sequences. Due to their standardization and maturity, relational systems are a great fit for storing protein sequences. Hence, in this work, we present a schema for distributed column-based database management systems using a column-oriented index to store sequence data. In order to achieve high storage performance, it was necessary to choose a well-performing strategy for transforming the protein sequence data from the FASTA format to the new schema. Therefore, we applied an in-memory map, HDDmap, a database engine, and an extended radix tree and evaluated their performance. The results show that our proposed extended radix tree performs best regarding memory consumption and runtime. Hence, the radix tree is a suitable data structure for transforming protein sequences into the indexed schema.
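As a rough intuition for why a radix-tree-style structure suits this transformation, the sketch below uses a plain character trie (a simplification of the paper's extended radix tree) to share common prefixes of protein sequences; the sequences are invented:

```python
# Simplified stand-in: a character trie that deduplicates shared prefixes.
class TrieNode:
    def __init__(self):
        self.children = {}   # amino-acid letter -> TrieNode
        self.seq_ids = []    # ids of sequences ending at this node

def insert(root, seq_id, sequence):
    node = root
    for aa in sequence:
        node = node.children.setdefault(aa, TrieNode())
    node.seq_ids.append(seq_id)

def node_count(node):
    return 1 + sum(node_count(c) for c in node.children.values())

root = TrieNode()
for i, seq in enumerate(["MKTAYIAK", "MKTAYQLN", "GSHMKT"]):
    insert(root, i, seq)

# The prefix "MKTAY" is stored once for the first two sequences:
print(node_count(root))  # 18 nodes instead of 23 without sharing
```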
23

Tsuruoka, Kunitoshi, Yutaka Kimura, Misa Namiuchi, and Yoshitaka Yasumura. "Development of the PERCIO object-oriented database management system and future research issues". Systems and Computers in Japan 28, no. 3 (March 1997): 13–23. http://dx.doi.org/10.1002/(sici)1520-684x(199703)28:3<13::aid-scj2>3.0.co;2-t.

24

Meng, Qian, Jianfeng Ma, Kefei Chen, Yinbin Miao, and Tengfei Yang. "Comparable Encryption Scheme over Encrypted Cloud Data in Internet of Everything". Security and Communication Networks 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/6430850.

Abstract
User authentication has been widely deployed to prevent unauthorized access in the new era of the Internet of Everything (IOE). When a user passes legal authentication, he/she can perform a series of operations in the database. We are mainly concerned with issues of data security and comparable queries over ciphertexts in IOE. In traditional databases, a Short Comparable Encryption (SCE) scheme has been widely used by authorized users to conduct comparable queries over ciphertexts, but existing SCE schemes still incur high storage and computational overhead as well as economic burden. In this paper, we first propose a basic Short Comparable Encryption scheme based on the sliding window method (SCESW), which can significantly reduce computational and storage burden as well as enhance work efficiency. Unfortunately, as the cloud service provider is a semitrusted third party, a public auditing mechanism needs to be furnished to protect data integrity. To further protect data integrity and reduce management overhead, we present an enhanced SCESW scheme based on a position-aware Merkle tree, namely, PT-SCESW. Security analysis proves that the PT-SCESW and SCESW schemes can guarantee completeness and weak indistinguishability in the standard model. Performance evaluation indicates that the PT-SCESW scheme is efficient and feasible in practical applications, especially for smarter and smaller computing devices in IOE.
25

Thomer, Andrea K., and Karen M. Wickett. "Relational data paradigms: What do we learn by taking the materiality of databases seriously?" Big Data & Society 7, no. 1 (January 2020): 205395172093483. http://dx.doi.org/10.1177/2053951720934838.

Abstract
Although databases have been well-defined and thoroughly discussed in the computer science literature, the actual users of databases often have varying definitions and expectations of this essential computational infrastructure. Systems administrators and computer science textbooks may expect databases to be instantiated in a small number of technologies (e.g., relational or graph-based database management systems), but there are numerous examples of databases in non-conventional or unexpected technologies, such as spreadsheets or other assemblages of files linked through code. Consequently, we ask: How do the materialities of non-conventional databases differ from or align with the materialities of conventional relational systems? What properties of the database do the creators of these artifacts invoke in their rhetoric describing these systems—or in the data models underlying these digital objects? To answer these questions, we conducted a close analysis of four non-conventional scientific databases. By examining the materialities of information representation in each case, we show how scholarly communication regimes shape database materialities, and how information organization paradigms shape scholarly communication. These cases show abandonment of certain constraints of relational database construction alongside maintenance of some key relational data organization strategies. We discuss the implications that these relational data paradigms have for data use, preservation, and sharing, and discuss the need to support a plurality of data practices and paradigms.
26

PUTZER, ALOIS. "Software Engineering, Data Modeling and Databases in Physics Experiments". International Journal of Modern Physics C 02, no. 01 (March 1991): 115–31. http://dx.doi.org/10.1142/s0129183191000123.

Abstract
Software engineering methods and especially data modeling techniques are key factors in the design of present-day software packages. Since from the software perspective the most important aspect is the data management, the proper design of data structures both in the application programs and for database systems is essential. In the area of database systems we are still in the transition phase between home-grown packages and commercial products. A first series of general purpose packages has been developed to avoid duplication of work. In view of the forthcoming experiments at future colliders a world-wide agreement on a common data definition language and a common database management system will become an important issue.
27

MORGAN, NELSON, HERVÉ BOURLARD, STEVE RENALS, MICHAEL COHEN, and HORACIO FRANCO. "HYBRID NEURAL NETWORK/HIDDEN MARKOV MODEL SYSTEMS FOR CONTINUOUS SPEECH RECOGNITION". International Journal of Pattern Recognition and Artificial Intelligence 07, no. 04 (August 1993): 899–916. http://dx.doi.org/10.1142/s0218001493000455.

Abstract
MultiLayer Perceptrons (MLP) are an effective family of algorithms for the smooth estimation of highly-dimensioned probability density functions that are useful in continuous speech recognition. Hidden Markov Models (HMM) provide a structure for the mapping of a temporal sequence of acoustic vectors to a generating sequence of states. For HMMs that are independent of phonetic context, the MLP approaches have consistently provided significant improvements (once we learned how to use them). Recently, these results have been extended to context-dependent models. In this paper, after having reviewed the basic principles of our hybrid HMM/MLP approach, we describe a series of experiments with continuous speech recognition. The hybrid methods directly trade off computational complexity for reduced requirements of memory and memory bandwidth. Results are presented on the widely used Resource Management speech database that is distributed by the National Institute of Standards and Technology. These results demonstrate performance that is at least as good as any other reported continuous speech recognition system (for this task).
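The key equation behind such hybrids, and the reason MLP outputs can drive an HMM, is the conversion of the network's posterior into a scaled likelihood by dividing out the class prior:

```latex
\[
  p(x_t \mid q_k) \;\propto\; \frac{P(q_k \mid x_t)}{P(q_k)}
\]
```

where $P(q_k \mid x_t)$ is the MLP's output for state $q_k$ given acoustic vector $x_t$, and $P(q_k)$ is the prior probability of that state estimated from the training data.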
28

Pal, Soumitra, and Teresa M. Przytycka. "Bioinformatics pipeline using JUDI: Just Do It!" Bioinformatics 36, no. 8 (December 27, 2019): 2572–74. http://dx.doi.org/10.1093/bioinformatics/btz956.

Abstract
Large-scale data analysis in bioinformatics requires pipelined execution of multiple software tools. Generally, each stage in a pipeline takes considerable computing resources, and several workflow management systems (WMS), e.g., Snakemake, Nextflow, Common Workflow Language, Galaxy, etc., have been developed to ensure optimum execution of the stages across two invocations of the pipeline. However, when the pipeline needs to be executed with different settings of parameters, e.g., thresholds, underlying algorithms, etc., these WMS require significant scripting to ensure an optimal execution. We developed JUDI on top of DoIt, a Python-based WMS, to systematically handle parameter settings based on the principles of database management systems. Using a novel modular approach that encapsulates a parameter database in each task and file associated with a pipeline stage, JUDI simplifies plug-and-play of the pipeline stages. For a typical pipeline with n parameters, JUDI reduces the number of lines of scripting required by a factor of O(n). With properly designed parameter databases, JUDI not only enables reproducing research under published values of parameters but also facilitates exploring newer results under novel parameter settings. Availability and implementation: https://github.com/ncbi/JUDI. Supplementary information: Supplementary data are available at Bioinformatics online.
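JUDI's API is not shown in the abstract, so the following is only an illustration of the underlying "parameter database" idea, one row per combination of settings, with hypothetical parameter names:

```python
# Hypothetical illustration, not JUDI's actual API: enumerate a parameter
# database so each pipeline stage can be driven by rows instead of scripts.
from itertools import product

params = {"threshold": [0.01, 0.05], "aligner": ["bwa", "bowtie2"]}
param_db = [dict(zip(params, combo)) for combo in product(*params.values())]

for row in param_db:
    # each row corresponds to one configuration of a pipeline stage
    print(f"run stage with threshold={row['threshold']}, aligner={row['aligner']}")
```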
29

PANCAKE, CHERRI M. "USABILITY ISSUES IN DEVELOPING TOOLS FOR THE GRID — AND HOW VISUAL REPRESENTATIONS CAN HELP". Parallel Processing Letters 13, no. 02 (June 2003): 189–206. http://dx.doi.org/10.1142/s0129626403001239.

Abstract
Initial tools developed for grid administrators and users have built on the technology and representational techniques of large parallel systems. Like their predecessors, grid tools must cope with extreme variations in scale, rapidly evolving hardware and software environments, and the competing demands of operating systems and middleware. Computational grids present several unique challenges, however, that go well beyond the lessons we have learned from parallel and distributed tools: the volatile nature of grid resources, their extreme heterogeneity, and the lack of coordinated management. Because they define a new and unfamiliar computing environment, there is a significant human challenge as well. Grid users will be extremely diverse, including resource providers, resource managers, users of data and derived data products, etc., as well as application developers. The future usability of the grid will depend on how well grid tools can capture information on grid resources and synthesize a higher-level perspective that helps users make sense of this complex new environment. This article identifies the tool requirements that will have the most impact on usability. It then explores recent advances in information visualization, demonstrating that many of the techniques grid tools will need already exist in preliminary form. A series of examples illustrates how those techniques can be applied to portray grid landscapes (graphical representations of grid resources, activities, behavior, costs, etc.) in useful and meaningful ways.
30

Biibosunov, Bolotbek, and Jenish Beksulanov. "Information technologies for landslides and mudflows research". E3S Web of Conferences 177 (2020): 06005. http://dx.doi.org/10.1051/e3sconf/202017706005.

Abstract
This article presents the results of research using computer technology and mathematical modeling in relation to hydrodynamic processes that determine such natural disasters as landslides and mudflows, common in the territory of the Kyrgyz Republic. A specialized website is proposed, which contains the results of scientific research on natural and man-made disasters and exogenous geological processes (EGP). The following systems were used as the main database management systems (DBMS): MS Access, MySQL and PostgreSQL. The main means of developing computer programs and computational procedures are Delphi, Python, Visual Basic, Java and JavaScript. Web technologies and the following software tools were used to design and create the site: Python, JavaScript, PHP and HTML. The modern level of scientific research presumes and requires the development and use of new information technologies. In this regard, a problem was defined concerning mathematical modelling and the use of information technologies for the research and forecasting of EGP on the territory of Kyrgyzstan. Hydrodynamic models and numerical methods for their solution are proposed. An information system is developed for landslides, mudflows and other EGP types typical for Kyrgyzstan.
31

Algarni, Abdullah M., Vijey Thayananthan, and Yashwant K. Malaiya. "Quantitative Assessment of Cybersecurity Risks for Mitigating Data Breaches in Business Systems". Applied Sciences 11, no. 8 (April 19, 2021): 3678. http://dx.doi.org/10.3390/app11083678.

Abstract
The evaluation of data breaches and cybersecurity risks has not yet been formally addressed in modern business systems. There has been a tremendous increase in the generation, usage and consumption of industrial and business data as a result of smart and computationally intensive software systems. This has increased the attack surface of these cyber systems and, consequently, the associated cybersecurity risks. However, no significant studies have been conducted that examine, compare, and evaluate the approaches used by risk calculators to investigate data breaches. The development of an efficient cybersecurity solution allows us to mitigate the data breaches threatened by cybersecurity risks such as cyber-attacks against database storage, processing and management. In this paper, we develop a comprehensive, formal model that estimates the two components of security risk: breach cost and the likelihood of a data breach within 12 months. The data used in this model are taken from industrial business reports, which provide the necessary information, and from the calculators developed by the major organizations in the field. This model, integrated with the cybersecurity solution, uses consolidated factors that have a significant impact on data breach risk. We propose mathematical models of how the factors impact the cost and the likelihood. These models allow us to conclude that the results obtained through them mitigate data breaches in potential and future business systems dynamically.
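The two estimated components combine in the usual expected-loss arithmetic; the sketch below shows only that combination, with invented numbers, and is not the authors' fitted model:

```python
# Generic expected-loss arithmetic; factor values are illustrative assumptions.
def expected_breach_loss(likelihood_12m: float, breach_cost: float) -> float:
    """Expected loss over 12 months = P(breach within 12 months) x breach cost."""
    return likelihood_12m * breach_cost

print(expected_breach_loss(likelihood_12m=0.27, breach_cost=3_860_000))  # 1042200.0
```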
32

Luscombe, N. M., D. Greenbaum, and M. Gerstein. "What is Bioinformatics? A Proposed Definition and Overview of the Field". Methods of Information in Medicine 40, no. 04 (2001): 346–58. http://dx.doi.org/10.1055/s-0038-1634431.

Abstract
Background: The recent flood of data from genome sequences and functional genomics has given rise to a new field, bioinformatics, which combines elements of biology and computer science. Objectives: Here we propose a definition for this new field and review some of the research that is being pursued, particularly in relation to transcriptional regulatory systems. Methods: Our definition is as follows: Bioinformatics is conceptualizing biology in terms of macromolecules (in the sense of physical chemistry) and then applying “informatics” techniques (derived from disciplines such as applied maths, computer science, and statistics) to understand and organize the information associated with these molecules, on a large scale. Results and Conclusions: Analyses in bioinformatics predominantly focus on three types of large datasets available in molecular biology: macromolecular structures, genome sequences, and the results of functional genomics experiments (e.g., expression data). Additional information includes the text of scientific papers and “relationship data” from metabolic pathways, taxonomy trees, and protein-protein interaction networks. Bioinformatics employs a wide range of computational techniques including sequence and structural alignment, database design and data mining, macromolecular geometry, phylogenetic tree construction, prediction of protein structure and function, gene finding, and expression data clustering. The emphasis is on approaches integrating a variety of computational methods and heterogeneous data sources. Finally, bioinformatics is a practical discipline. We survey some representative applications, such as finding homologues, designing drugs, and performing large-scale censuses. Additional information pertinent to the review is available over the web at http://bioinfo.mbb.yale.edu/what-is-it.
33

Michalewicz, Zbigniew, and Alvin Yeo. "A Good Normal Form for Relational Databases". Fundamenta Informaticae 12, no. 2 (April 1, 1989): 129–38. http://dx.doi.org/10.3233/fi-1989-12202.

Abstract
In the conceptual design of relational databases one of the main goals is to create a conceptual scheme which minimizes redundancies and eliminates deletion and addition anomalies, i.e., to create relation schemes in some good normal form. The study of relational databases has produced a host of normal forms: 2NF, 3NF, BCNF, Elementary-Key Normal Form, 4NF, Weak 4NF, PJ/NF, DK/NF, LTKNF, (3,3)NF, etc. There are two features which characterize these normal forms. First, they consider each relation separately. We believe that a normal form (which reflects the goodness of the conceptual design) should be related to the whole conceptual scheme. Second, the usefulness of all normal forms in relational database design has been based on the assumption that the data definition language (DDL) of a database management system (DBMS) is able to enforce key dependencies. However, different DDLs have different capabilities in defining constraints. In this paper we will discuss the design of conceptual relational schemes in general. We will also define a good normal form (GNF) which requires a minimally rich DDL; this normal form is based only on a primitive concept of constraints. We will not, however, discuss the normalization process itself – how one might, if possible, convert a relation scheme that is not in some normal form into a collection of relation schemes each of which is in that normal form.
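A worked example of the kind of difficulty that motivates GNF (our illustration, not the paper's): a scheme in 3NF but not BCNF, whose standard BCNF decomposition leaves a dependency that no single relation can enforce.

```latex
% A scheme with two functional dependencies:
\[
  R(\mathit{Student},\, \mathit{Course},\, \mathit{Instructor}), \qquad
  \mathit{Instructor} \rightarrow \mathit{Course}, \qquad
  \{\mathit{Student}, \mathit{Course}\} \rightarrow \mathit{Instructor}
\]
% The lossless BCNF decomposition
\[
  R_1(\mathit{Instructor}, \mathit{Course}), \qquad
  R_2(\mathit{Student}, \mathit{Instructor})
\]
% no longer contains {Student, Course} -> Instructor inside any single
% relation, so the DDL must be able to state cross-relation constraints to
% enforce it, which is exactly the capability gap a normal form tied to the
% whole conceptual scheme has to take into account.
```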
34

FERREIRA, RENATO, TAHSIN KURC, MICHAEL BEYNON, CHIALIN CHANG, ALAN SUSSMAN, and JOEL SALTZ. "OBJECT-RELATIONAL QUERIES INTO MULTIDIMENSIONAL DATABASES WITH THE ACTIVE DATA REPOSITORY". Parallel Processing Letters 09, no. 02 (June 1999): 173–95. http://dx.doi.org/10.1142/s0129626499000190.

Abstract
As computational power and storage capacity increase, processing and analyzing large volumes of multi-dimensional datasets play an increasingly important role in many domains of scientific research. Scientific applications that make use of very large scientific datasets have several important characteristics: datasets consist of complex data and are usually multi-dimensional; applications usually retrieve a subset of all the data available in the dataset; and various application-specific operations are performed on the data items retrieved. Such applications can be supported by object-relational database management systems (OR-DBMSs). In addition to providing functionality to define new complex datatypes and user-defined functions, an OR-DBMS for scientific datasets should contain runtime support that will provide optimized storage for very large datasets and an execution environment for user-defined functions involving expensive operations. In this paper we describe an infrastructure, the Active Data Repository (ADR), which provides a framework for building databases that enables the integration of storage, retrieval and processing of multi-dimensional datasets on a parallel machine. The system architecture of ADR provides the functionality required from runtime support for an OR-DBMS that stores and processes scientific multi-dimensional datasets. We present the system architecture of the ADR, and experimental performance results for three applications implemented using ADR.
35

Rashid, Muhammad, Muhammad Attique Khan, Majed Alhaisoni, Shui-Hua Wang, Syed Rameez Naqvi, Amjad Rehman, and Tanzila Saba. "A Sustainable Deep Learning Framework for Object Recognition Using Multi-Layers Deep Features Fusion and Selection". Sustainability 12, no. 12 (June 19, 2020): 5037. http://dx.doi.org/10.3390/su12125037.

Abstract
With an overwhelming increase in the demand of autonomous systems, especially in the applications related to intelligent robotics and visual surveillance, come stringent accuracy requirements for complex object recognition. A system that maintains its performance against a change in the object’s nature is said to be sustainable and it has become a major area of research for the computer vision research community in the past few years. In this work, we present a sustainable deep learning architecture, which utilizes multi-layer deep features fusion and selection, for accurate object classification. The proposed approach comprises three steps: (1) By utilizing two deep learning architectures, Very Deep Convolutional Networks for Large-Scale Image Recognition and Inception V3, it extracts features based on transfer learning, (2) Fusion of all the extracted feature vectors is performed by means of a parallel maximum covariance approach, and (3) The best features are selected using Multi Logistic Regression controlled Entropy-Variances method. For verification of the robust selected features, the Ensemble Learning method named Subspace Discriminant Analysis is utilized as a fitness function. The experimental process is conducted using four publicly available datasets, including Caltech-101, Birds database, Butterflies database and CIFAR-100, and a ten-fold validation process which yields the best accuracies of 95.5%, 100%, 98%, and 68.80% for the datasets respectively. Based on the detailed statistical analysis and comparison with the existing methods, the proposed selection method gives significantly more accuracy. Moreover, the computational time of the proposed selection method is better for real-time implementation.
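A simplified stand-in for steps (2) and (3) of the pipeline, with plain concatenation and a variance ranking replacing the paper's parallel maximum covariance fusion and entropy-variance selection, and random matrices replacing the real network features:

```python
# Illustrative only: fuse two feature matrices and keep the "best" columns.
import numpy as np

rng = np.random.default_rng(0)
feats_vgg = rng.normal(size=(500, 4096))   # stand-in for VGG features
feats_inc = rng.normal(size=(500, 2048))   # stand-in for Inception V3 features

fused = np.concatenate([feats_vgg, feats_inc], axis=1)   # step 2: fusion
top_k = np.argsort(fused.var(axis=0))[::-1][:1000]       # step 3: selection
selected = fused[:, top_k]
print(selected.shape)  # (500, 1000) -> input to the ensemble classifier
```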
36

Nikulchev, Evgeny, Dmitry Ilin, and Alexander Gusev. "Technology Stack Selection Model for Software Design of Digital Platforms". Mathematics 9, no. 4 (February 4, 2021): 308. http://dx.doi.org/10.3390/math9040308.

Abstract
The article is dedicated to the development of a mathematical model and methodology for evaluating the effectiveness of integrating information technology solutions into digital platforms using virtual simulation infrastructures. The task of selecting a stack of technologies is formulated as the task of selecting elements from sets of possible solutions. This allows us to develop a mathematically unified approach to evaluating the effectiveness of different solutions, such as choosing programming languages, choosing a Database Management System (DBMS), choosing operating systems and data technologies, and choosing the frameworks used. The introduced technology-compatibility operation and the decomposition of the evaluation of the technology stack's efficiency over the stages of the digital platform's development life cycle allowed us to reduce the computational complexity of forming the technology stack. A methodology based on performance assessments for experimental research in a virtual software-configurable simulation environment has been proposed. The developed solution allows the evaluation of the performance of the digital platform before its final implementation, while reducing the cost of conducting an experiment to assess the characteristics of the digital platform. It is proposed to compare the characteristics of digital platform efficiency based on the use of fuzzy logic, providing the software developer with an intuitive tool to support decision-making on the inclusion of a solution in the technology stack.
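A toy rendition of the final fuzzy ranking step; the membership shape, stage weights, and scores are invented, and the paper's compatibility constraints are omitted:

```python
# Illustrative fuzzy scoring of candidate stacks; all numbers are made up.
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

stage_weights = {"development": 0.5, "testing": 0.2, "operation": 0.3}
stacks = {
    "Python+PostgreSQL": {"development": 0.8, "testing": 0.7, "operation": 0.6},
    "Java+MongoDB":      {"development": 0.6, "testing": 0.8, "operation": 0.7},
}
for name, scores in stacks.items():
    agg = sum(stage_weights[s] * v for s, v in scores.items())
    # degree to which the weighted score counts as "high effectiveness"
    print(name, round(triangular(agg, 0.3, 1.0, 1.7), 3))
```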
37

Yu, Li, Zaifang Zhang, and Jin Shen. "Dynamic customer preference analysis for product portfolio identification using sequential pattern mining". Industrial Management & Data Systems 117, no. 2 (March 13, 2017): 365–81. http://dx.doi.org/10.1108/imds-12-2015-0496.

Abstract
Purpose: In the initial stage of product design, product portfolio identification (PPI) aims to translate customer needs (CNs) into product specifications (PSs). This is an essential task, since understanding what customers really want is at the center of product design. However, design information is incomplete and design knowledge is minimal during this stage. Furthermore, PPI is often a confusing and frustrating task, especially when customer preferences are changing rapidly. To facilitate the task, the purpose of this paper is to capture the time-sensitive mapping relationship between CNs and PSs. Design/methodology/approach: This paper proposes a design sequential pattern mining model to uncover implicit but valuable knowledge from chronological transaction records. First, CNs and PSs from these records are transformed and connected according to the transaction time. Second, procedures such as litemset generation, data transformation and pattern mining are conducted based on the AprioriAll algorithm. Third, the uncovered patterns are modified and applied by engineers. Findings: Using the retrieved patterns, engineers can keep up with the dynamics of customer preferences with regard to different PSs. Research limitations/implications: Computational experiments on a case study of customization of desktop computers show that the proposed method is capable of extracting useful sequential patterns from a design database. Originality/value: Considering the timestamps of the transactions, a sequential pattern mining-based method is proposed to extract valuable patterns. These patterns can help engineers identify market trends and the correlation among PSs.
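The flavour of the mining step, reduced to counting ordered pairs across customers' chronological purchase sequences (AprioriAll itself generalises this to longer patterns); the sequences and threshold are invented:

```python
# Count how often one specification follows another across customer histories.
from collections import Counter
from itertools import combinations

# each customer's purchases, oldest first
sequences = [
    ["16GB RAM", "SSD", "GPU"],
    ["16GB RAM", "GPU"],
    ["SSD", "GPU"],
]
pair_support = Counter()
for seq in sequences:
    pair_support.update(set(combinations(seq, 2)))  # (earlier, later) pairs

min_support = 2
patterns = {p: c for p, c in pair_support.items() if c >= min_support}
print(patterns)  # {('16GB RAM', 'GPU'): 2, ('SSD', 'GPU'): 2}
```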
38

Wu, Pan, Zilin Huang, Yuzhuang Pian, Lunhui Xu, Jinlong Li, and Kaixun Chen. "A Combined Deep Learning Method with Attention-Based LSTM Model for Short-Term Traffic Speed Forecasting". Journal of Advanced Transportation 2020 (November 23, 2020): 1–15. http://dx.doi.org/10.1155/2020/8863724.

Abstract
Short-term traffic speed prediction is a promising research topic in intelligent transportation systems (ITSs), which also plays an important role in the real-time decision-making of traffic control and guidance systems. However, the urban traffic speed has strong temporal, spatial correlation and the characteristic of complex nonlinearity and randomness, which makes it challenging to accurately and efficiently forecast short-term traffic speeds. We investigate the relevant literature and found that although most methods can achieve good prediction performance with the complete sample data, when there is a certain missing rate in the database, it is difficult to maintain accuracy with these methods. Recent studies have shown that deep learning methods, especially long short-term memory (LSTM) models, have good results in short-term traffic flow prediction. Furthermore, the attention mechanism can properly assign weights to distinguish the importance of traffic time sequences, thereby further improving the computational efficiency of the prediction model. Therefore, we propose a framework for short-term traffic speed prediction, including data preprocessing module and short-term traffic prediction module. In the data preprocessing module, the missing traffic data are repaired to provide a complete dataset for subsequent prediction. In the prediction module, a combined deep learning method that is an attention-based LSTM (ATT-LSTM) model for predicting short-term traffic speed on urban roads is proposed. The proposed framework was applied to the urban road network in Nanshan District, Shenzhen, Guangdong Province, China, with a 30-day traffic speed dataset (floating car data) used as the experimental sample. Results show that the proposed method outperforms other deep learning algorithms (such as recurrent neural network (RNN) and convolutional neural network (CNN)) in terms of both calculating efficiency and prediction accuracy. The attention mechanism can significantly reduce the error of the LSTM model (up to 12.4%) and improves the prediction performance.
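A generic attention-over-LSTM sketch of the kind the abstract describes, written in PyTorch; the layer sizes are illustrative, and the paper's missing-data repair module is omitted:

```python
import torch
import torch.nn as nn

class AttLSTM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)   # one attention score per time step
        self.out = nn.Linear(hidden, 1)     # next-interval speed

    def forward(self, x):                   # x: (batch, time, features)
        h, _ = self.lstm(x)                 # h: (batch, time, hidden)
        w = torch.softmax(self.score(h), dim=1)  # weights over time steps
        context = (w * h).sum(dim=1)        # attention-weighted hidden states
        return self.out(context).squeeze(-1)

model = AttLSTM(n_features=1)
speeds = torch.randn(8, 12, 1)              # 8 road segments x 12 past intervals
print(model(speeds).shape)                  # torch.Size([8])
```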
39

Sokolovska, Zoia, and Oleksii Dudnyk. "Devising a technology for managing outsourcing IT-projects with the application of fuzzy logic". Eastern-European Journal of Enterprise Technologies 2, no. 3 (110) (April 30, 2021): 52–65. http://dx.doi.org/10.15587/1729-4061.2021.224529.

Abstract
An outsourcing IT project management model has been developed. The distinguishing feature of the proposed model is that it takes into account the specifics of project management processes at outsourcing IT companies in terms of the uncertainty of the external and internal environment of their operation. The model is based on the stage-gate project management framework with fuzzy logic tools. The proposed modification of the fuzzy inference mechanism makes it possible to dispense with saving intermediate results, which reduces the load on the database and creates the possibility of using semantic networks. The technology of expert consultations was demonstrated by the example of decision-making regarding the assessment of the current status of the IT projects accepted by the outsourcing company for development. The dynamic nature and cyclical management of the portfolio of IT projects involve constant monitoring of the results of implementation with appropriate regular portfolio reforming. The model was developed to improve the efficiency of the software development sub-process and minimize the negative consequences of financial dependence on the customer. The application software developed on the basis of the model of management of outsourcing IT projects and the modification of the fuzzy inference mechanism has found practical application and was implemented in the computational practice of the HYS Enterprise B.V. outsourcing IT company. Testing of the program shell has shown positive results in the course of solving the tasks peculiar to concrete stages of IT project management. The proposed structure and composition of the fuzzy knowledge base of the expert shell are quite typical in terms of IT outsourcing problems. It is expedient to use the developed model at outsourcing IT companies in the process of project portfolio management.
40

Senthilnayaki, B., K. Venkatalakshmi, and A. Kannan. "An Ontology Based Framework for Intelligent Web Based e-Learning". International Journal of Intelligent Information Technologies 11, no. 2 (April 2015): 23–39. http://dx.doi.org/10.4018/ijiit.2015040102.

Abstract
E-Learning is a fast, just-in-time, and non-linear learning process, which is now widely applied in distributed and dynamic environments such as the World Wide Web. Ontology plays an important role in capturing and disseminating real-world knowledge for effective human-computer interactions. However, the engineering of domain ontologies is very labor intensive and time consuming. Some machine learning methods have been explored for the automatic or semi-automatic discovery of domain ontologies. Nevertheless, both the accuracy and the computational efficiency of these methods need to be improved. While constructing large-scale ontologies for real-world applications such as e-Learning, the ability to monitor the progress of students' learning performance is a critical issue. In this paper, a system is proposed for analyzing students' knowledge level obtained using Kolb's classification, based on the students' level of understanding and their learning style, using cluster analysis. This system uses fuzzy logic and clustering algorithms to arrange their documents according to the level of their performance. Moreover, a new domain ontology discovery method is proposed that uses contextual information about the knowledge sources from the e-Learning domain. The proposed system constructs an ontology to provide effective assistance in e-Learning. The proposed ontology discovery method has been empirically tested in an e-Learning environment for teaching the subject Database Management Systems. The salient contributions of this paper are the use of the Jaccard similarity measure and the K-Means clustering algorithm for the clustering of learners, and the use of ontology for concept understanding and learning style identification. This helps in adaptive e-Learning by providing suitable suggestions for decision making, and it uses decision rules for providing intelligent e-Learning.
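The Jaccard ingredient named in the contribution, shown on toy learner data (grouping the resulting similarities is then a job for any standard K-Means implementation):

```python
# Jaccard similarity between two learners' mastered-concept sets.
def jaccard(a: set, b: set) -> float:
    """|A intersection B| / |A union B|; 1.0 means identical concept sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

learner1 = {"ER model", "SQL joins", "normalization"}
learner2 = {"SQL joins", "normalization", "transactions"}
print(round(jaccard(learner1, learner2), 2))  # 0.5
```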
41

Dogru, A. H., H. A. Sunaidi, L. S. Fung, W. A. Habiballah, N. Al-Zamel, and K. G. Li. "A Parallel Reservoir Simulator for Large-Scale Reservoir Simulation". SPE Reservoir Evaluation & Engineering 5, no. 01 (February 1, 2002): 11–23. http://dx.doi.org/10.2118/75805-pa.

Abstract
Summary: A new parallel, black-oil-production reservoir simulator (Powers) has been developed and fully integrated into the pre- and post-processing graphical environment. Its primary use is to simulate the giant oil and gas reservoirs of the Middle East using millions of cells. The new simulator has been created for parallelism and scalability, with the aim of making megacell simulation a day-to-day reservoir-management tool. Upon its completion, the parallel simulator was validated against published benchmark problems and other industrial simulators. Several giant oil-reservoir studies have been conducted with million-cell descriptions. This paper presents the model formulation, parallel linear solver, parallel locally refined grids, and parallel well management. The benefits of using megacell simulation models are illustrated by a real field example used to confirm bypassed oil zones and obtain a history match in a short time period. With the new technology, preprocessing, construction, running, and post-processing of megacell models is finally practical. A typical history-match run for a field with 30 to 50 years of production takes only a few hours.

Introduction: With the development of early parallel computers, the attractive speed of these computers got the attention of oil industry researchers. Initial questions were concentrated along these lines: Can one develop a truly parallel reservoir-simulator code? What type of hardware and programming languages should be chosen? Contrary to seismic, it is well known that reservoir simulator algorithms are not naturally parallel; they are more recursive, and variables display a strong dependency on each other (strong coupling and nonlinearity). This poses a big challenge for the parallelization. On the other hand, if one could develop a parallel code, the speed of computations would increase by at least an order of magnitude; as a result, many large problems could be handled. This capability would also aid our understanding of the fluid flow in a complex reservoir. Additionally, the proper handling of the reservoir heterogeneities should result in more realistic predictions. The other benefit of megacell description is the minimization of upscaling effects and numerical dispersion. The megacell simulation has a natural application in simulating the world's giant oil and gas reservoirs. For example, a grid size of 50 m or less is used widely for the small and medium-size reservoirs in the world. In contrast, many giant reservoirs in the Middle East use a gridblock size of 250 m or larger; this easily yields a model with more than 1 million cells. Therefore, it is of specific interest to have megacell description and still be able to run fast. Such capability is important for the day-to-day reservoir management of these fields. This paper is organized as follows: the relevant work in the petroleum-reservoir-simulation literature has been reviewed. This will be followed by the description of the new parallel simulator and the presentation of the numerical solution and parallelism strategies. (The details of the data structures, well handling, and parallel input/output operations are placed in the appendices.) The main text also contains a brief description of the parallel linear solver, locally refined grids, and well management. A brief description of megacell pre- and post-processing is presented. Next, we address performance and parallel scalability; this is a key section that demonstrates the degree of parallelization of the simulator. The last section presents four real field simulation examples. These example cases cover all stages of the simulator and provide actual central processing unit (CPU) execution time for each case. As a byproduct, the benefits of megacell simulation are demonstrated by two examples: locating bypassed oil zones, and obtaining a quicker history match. Details of each section can be found in the appendices.

Previous Work: In the 1980s, research on parallel-reservoir simulation had been intensified by the further development of shared-memory and distributed-memory machines. In 1987, Scott et al. [1] presented a Multiple Instruction Multiple Data (MIMD) approach to reservoir simulation. Chien [2] investigated parallel processing on shared-memory computers. In early 1990, Li [3] presented a parallelized version of a commercial simulator on a shared-memory Cray computer. For the distributed-memory machines, Wheeler [4] developed a black-oil simulator on a hypercube in 1989. In the early 1990s, Killough and Bhogeswara [5] presented a compositional simulator on an Intel iPSC/860, and Rutledge et al. [6] developed an Implicit Pressure Explicit Saturation (IMPES) black-oil reservoir simulator for the CM-2 machine. They showed that reservoir models of over 2 million cells could be run on this type of machine with 65,536 processors. That paper stated that computational speeds on the order of 1 gigaflop in the matrix construction and solution were achievable. In mid-1995, more investigators published reservoir-simulation papers that focused on distributed-memory machines. Kaarstad [7] presented a 2D oil/water research simulator running on a 16,384-processor MasPar MP-2 machine. He showed that a model problem using 1 million gridpoints could be solved in a few minutes of computer time. Rame and Delshad [8] parallelized a chemical flooding code (UTCHEM) and tested it on a variety of systems for scalability. This paper also included test results on Intel iPSC/960, CM-5, Kendall Square, and Cray T3D.
42

FICHTE, JOHANNES K., MARKUS HECHER, PATRICK THIER, and STEFAN WOLTRAN. "Exploiting Database Management Systems and Treewidth for Counting". Theory and Practice of Logic Programming, March 12, 2021, 1–30. http://dx.doi.org/10.1017/s147106842100003x.

Abstract
Bounded treewidth is one of the most cited combinatorial invariants in the literature. It was also applied for solving several counting problems efficiently. A canonical counting problem is #Sat, which asks to count the satisfying assignments of a Boolean formula. Recent work shows that benchmarking instances for #Sat often have reasonably small treewidth. This paper deals with counting problems for instances of small treewidth. We introduce a general framework to solve counting questions based on state-of-the-art database management systems (DBMSs). Our framework takes explicitly advantage of small treewidth by solving instances using dynamic programming (DP) on tree decompositions (TD). Therefore, we implement the concept of DP into a DBMS (PostgreSQL), since DP algorithms are already often given in terms of table manipulations in theory. This allows for elegant specifications of DP algorithms and the use of SQL to manipulate records and tables, which gives us a natural approach to bring DP algorithms into practice. To the best of our knowledge, we present the first approach to employ a DBMS for algorithms on TDs. A key advantage of our approach is that DBMSs naturally allow for dealing with huge tables with a limited amount of main memory (RAM).
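A tiny rendition of the core idea, DP tables as database tables, using SQLite in place of PostgreSQL; the formula and its two-bag decomposition are our toy example, not taken from the paper:

```python
# Count satisfying assignments of (x1 OR x2) AND (NOT x2 OR x3) from two
# "bag" tables that share variable x2, joined and aggregated in SQL.
import sqlite3
from itertools import product

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bag1 (x1 INT, x2 INT, cnt INT)")  # models (x1 OR x2)
con.execute("CREATE TABLE bag2 (x2 INT, x3 INT, cnt INT)")  # models (NOT x2 OR x3)
for x1, x2 in product((0, 1), repeat=2):
    if x1 or x2:
        con.execute("INSERT INTO bag1 VALUES (?,?,1)", (x1, x2))
for x2, x3 in product((0, 1), repeat=2):
    if (not x2) or x3:
        con.execute("INSERT INTO bag2 VALUES (?,?,1)", (x2, x3))

# the DP "join node": combine child tables on the shared variable, sum counts
(total,) = con.execute(
    "SELECT SUM(bag1.cnt * bag2.cnt) FROM bag1 JOIN bag2 USING (x2)"
).fetchone()
print(total)  # 4 satisfying assignments over (x1, x2, x3)
```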
43

Abdennebi, Anes, Anıl Elakaş, Fatih Taşyaran, Erdinç Öztürk, Kamer Kaya, and Sinan Yıldırım. "Machine learning‐based load distribution and balancing in heterogeneous database management systems". Concurrency and Computation: Practice and Experience, September 22, 2021. http://dx.doi.org/10.1002/cpe.6641.

44

Palominos, Fredi Edgardo, Felisa Córdova, Claudia Durán, and Bryan Nuñez. "A Simpler and Semantic Multidimensional Database Query Language to Facilitate Access to Information in Decision-making". International Journal of Computers Communications & Control 15, no. 4 (June 8, 2020). http://dx.doi.org/10.15837/ijccc.2020.4.3900.

Abstract
OLAP and multidimensional database technology have contributed significantly to speeding up and building confidence in the effectiveness of methodologies based on the use of management indicators in decision-making, industry, production, and services. Although there is a wide variety of tools related to the OLAP approach, many implementations are performed on relational database systems (R-OLAP), so all querying is performed through queries that must be reinterpreted in the SQL language. This translation has several consequences, because the SQL language is based on a mixture of relational algebra and tuple relational calculus, which conceptually responds to the logic of the relational data model, very different from the needs of multidimensional databases. This paper presents a multidimensional query language that allows expressing multidimensional queries directly over R-OLAP databases. The implementation of the multidimensional query language is done through a middleware that is responsible for mapping the queries, hiding the translation in a layer of software not noticeable to the end user. Currently, progress has been made in the definition of a language in which, through a key statement called aggregate, it is possible to execute the typical multidimensional operators that represent an important part of the most frequent operations in this type of database.
45

Paul, P. K. "Artificial Intelligence & Cloud Computing in Environmental Systems—Towards Healthy & Sustainable Development". International Journal of Inclusive Development 6, no. 1 (June 20, 2020). http://dx.doi.org/10.30954/2454-4132.1.2020.10.

Abstract
The environment is a vast domain of concern and also a field of study and practice, spanning Environmental Science, Environmental Studies, Environmental Engineering, Environmental Management, and related areas. Environmental Informatics is an important emerging subject concerned with IT and computing solutions for the environment; the merging of environmental areas and informatics is commonly known by this name. It offers a way of solving technology-related issues with educated manpower, drawing on various tools, techniques, and sub-technologies of computing and information technology in the environmental, ecological, and biological sciences. Technologies such as database technology, networking technology, multimedia technology, web technology, and software technology are the most common and useful in environment- and ecology-related issues, activities, and problem solving. In the recent past, other emerging technologies such as Cloud Computing, Artificial Intelligence, Big Data Analytics, Computational Intelligence, and Human-Computer Interaction have also come into wide use. This paper is dedicated to a basic review of Environmental Informatics, including its nature, features, and functions, with special reference to the applications of Artificial Intelligence and Cloud Computing.
46

Li, Teng, Hyo-Sang Shin, and Antonios Tsourdos. "A sample decreasing threshold greedy-based algorithm for big data summarisation". Journal of Big Data 8, no. 1 (February 9, 2021). http://dx.doi.org/10.1186/s40537-021-00416-y.

Abstract
As the scale of datasets used for big data applications expands rapidly, there have been increased efforts to develop faster algorithms. This paper addresses big data summarisation problems using the submodular maximisation approach and proposes an efficient algorithm for maximising general non-negative submodular objective functions subject to k-extendible system constraints. Leveraging a random sampling process and a decreasing threshold strategy, this work proposes an algorithm named Sample Decreasing Threshold Greedy (SDTG). The proposed algorithm obtains an expected approximation guarantee of $\frac{1}{1+k}-\epsilon$ for maximising monotone submodular functions and of $\frac{k}{(1+k)^2}-\epsilon$ in non-monotone cases, with expected computational complexity of $O\left(\frac{n}{(1+k)\epsilon}\ln\frac{r}{\epsilon}\right)$. Here, $r$ is the largest size of feasible solutions, and $\epsilon \in \left(0, \frac{1}{1+k}\right)$ is an adjustable design parameter for the trade-off between the approximation ratio and the computational complexity. The performance of the proposed algorithm is validated and compared with that of benchmark algorithms through experiments with a movie recommendation system based on a real database.
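Based only on the abstract's description, a rough sketch of the sampling-plus-decreasing-threshold idea might look as follows; the stopping rule, the feasibility oracle, and the toy coverage objective are our assumptions, not the authors' exact algorithm.

```python
# A rough sketch of the sampling-plus-decreasing-threshold idea from the
# abstract; the stopping rule, feasibility oracle, and toy objective are
# assumptions rather than the authors' exact algorithm.
import random

def sdtg(elements, f, is_feasible, k, eps):
    """Greedily add sampled elements whose marginal gain clears a threshold
    that starts at the best singleton value and decays by a (1 - eps) factor."""
    S = []
    d = max(f([e]) for e in elements)  # largest singleton value
    tau = d
    while tau >= eps * d / len(elements):  # assumed stopping threshold
        for e in elements:
            if e in S or random.random() > 1.0 / (1 + k):
                continue  # sample each candidate with probability 1/(1+k)
            if is_feasible(S + [e]) and f(S + [e]) - f(S) >= tau:
                S.append(e)
        tau *= 1 - eps  # decreasing-threshold step
    return S

# Toy usage: a coverage-style submodular objective under a cardinality
# constraint (a simple instance of a k-extendible system).
universe = [{1, 2}, {2, 3}, {4}, {1, 4, 5}]
f = lambda S: len(set().union(*S)) if S else 0
S = sdtg(universe, f, lambda S: len(S) <= 2, k=1, eps=0.2)
print(S, f(S))
```

The trade-off the abstract describes is visible here: a smaller eps means more threshold rounds (more work) but a finer-grained, better-quality greedy selection.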
47

Huber, Sebastiaan P., Spyros Zoupanos, Martin Uhrin, Leopold Talirz, Leonid Kahle, Rico Häuselmann, Dominik Gresch, et al. "AiiDA 1.0, a scalable computational infrastructure for automated reproducible workflows and data provenance". Scientific Data 7, no. 1 (September 8, 2020). http://dx.doi.org/10.1038/s41597-020-00638-4.

Abstract
The ever-growing availability of computing power and the sustained development of advanced computational methods have contributed much to recent scientific progress. These developments present new challenges driven by the sheer amount of calculations and data to manage. Next-generation exascale supercomputers will harden these challenges, such that automated and scalable solutions become crucial. In recent years, we have been developing AiiDA (aiida.net), a robust open-source high-throughput infrastructure addressing the challenges arising from the needs of automated workflow management and data provenance recording. Here, we introduce the developments and capabilities required to reach sustained performance, with AiiDA supporting throughputs of tens of thousands of processes per hour, while automatically preserving and storing the full data provenance in a relational database, making it queryable and traversable and thus enabling high-performance data analytics. AiiDA's workflow language provides advanced automation, error handling features and a flexible plugin model to allow interfacing with external simulation software. The associated plugin registry enables seamless sharing of extensions, empowering a vibrant user community dedicated to making simulations more robust, user-friendly and reproducible.
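As a taste of the queryable provenance the abstract describes, here is a minimal sketch using AiiDA's documented QueryBuilder interface; it assumes an already configured AiiDA 1.x profile with completed calculation jobs in its database.

```python
# A minimal sketch of querying AiiDA's provenance graph, assuming an already
# configured AiiDA 1.x profile; it relies on the documented QueryBuilder API.
from aiida import load_profile
from aiida.orm import CalcJobNode, Dict, QueryBuilder

load_profile()  # connect to the default profile's database

# Traverse provenance: Dict result nodes produced by successfully finished
# calculation jobs (exit_status == 0).
qb = QueryBuilder()
qb.append(CalcJobNode, filters={"attributes.exit_status": 0}, tag="calc")
qb.append(Dict, with_incoming="calc", project=["uuid"])
print(qb.count(), "result dictionaries from successful calculations")
```

Because every input and output is a node in the same graph, the same query style can walk backwards from any result to the code, inputs, and workflow that produced it.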
48

Jethani, Suneel, and Robbie Fordyce. "Darkness, Datafication, and Provenance as an Illuminating Methodology". M/C Journal 24, no. 2 (April 27, 2021). http://dx.doi.org/10.5204/mcj.2758.

Abstract
Data are generated and employed for many ends, including governing societies, managing organisations, leveraging profit, and regulating places. In all these cases, data are key inputs into systems that paradoxically are implemented in the name of making societies more secure, safe, competitive, productive, efficient, transparent and accountable, yet do so through processes that monitor, discipline, repress, coerce, and exploit people. (Kitchin, 165)

Introduction

Provenance refers to the place of origin or earliest known history of a thing. It refers to the custodial history of objects. It is a term that is commonly used in the art world but has also come into the language of other disciplines such as computer science. It has also been applied in reference to the transactional nature of objects in supply chains and circular economies. In an interview with Scotland’s Institute for Public Policy Research, Adam Greenfield suggests that provenance has a role to play in the “establishment of reliability”, given that if a “transaction or artifact has a specified provenance, then that assertion can be tested and verified to the satisfaction of all parties” (Lawrence). Recent debates on the unrecognised effects of digital media have convincingly argued that data is fully embroiled within capitalism, but it is necessary to remember that data is more than just a transactable commodity. One challenge in bringing processes of datafication into critical light is how we understand what happens to data from its point of acquisition to the point where it becomes instrumental in the production of outcomes that are of ethical concern. All data gather their meaning through relationality, whether acting as a representation of an exterior world or representing relations between other data points. Data objectifies relations, and despite any higher-order complexities, at its core, data is involved in factualising a relation into a binary. Assumptions like these about data shape reasoning, decision-making and evidence-based practice in private, personal and economic contexts. If processes of datafication are to be better understood, then we need to seek out conceptual frameworks that are adequate to the way that data is used and understood by its users. Deborah Lupton suggests that often we give data “other vital capacities because they are about human life itself, have implications for human life opportunities and livelihoods, [and] can have recursive effects on human lives (shaping action and concepts of embodiment ... selfhood [and subjectivity]) and generate economic value”. But when data are afforded such capacities, the analysis of their politics also calls for us to “consider context” and “mak[e] the labour [of datafication] visible” (D’Ignazio and Klein). For Jenny L. Davis, getting beyond simply thinking about what data affords involves bringing to light how technology continually and dynamically requests, demands, encourages, discourages, and refuses certain operations and interpretations. It is in this re-orientation of the question from what to how that “practical analytical tool[s]” (Davis) can be found. Davis writes: requests and demands are bids placed by technological objects, on user-subjects. Encourage, discourage and refuse are the ways technologies respond to bids user-subjects place upon them. Allow pertains equally to bids from technological objects and the object’s response to user-subjects.
(Davis) Building on Lupton, Davis, and D’Ignazio and Klein, we see three principles that we consider crucial for work on data, darkness and light: data is not simply a technological object that exists within sociotechnical systems without having undergone any priming or processing, so as a consequence the data-collecting entity imposes standards and ways of imagining data before it comes into contact with user-subjects; data is not neutral and does not possess qualities that make it equivalent to the things that it comes to represent; data is partial, situated, and contingent on technical processes, but the outcomes of its use afford it properties beyond those that are purely informational. This article builds from these principles and traces a framework for investigating the complications arising when data moves from one context to another. We draw on “data provenance” as it is applied in the computing and informational sciences, where it is used to query the location and accuracy of data in databases. In developing “data provenance”, we adapt provenance from an approach that solely focuses on technical infrastructures and material processes that move data from one place to another and turn to the sociotechnical, institutional, and discursive forces that bring about data acquisition, sharing, interpretation, and re-use. As data passes through open, opaque, and darkened spaces within sociotechnical systems, we argue that provenance can shed light on gaps and overlaps in technical, legal, ethical, and ideological forms of data governance. Whether data becomes exclusive by moving from light to dark (as has happened with the removal of many pages and links from Facebook around the Australian news revenue-sharing bill), or is publicised by shifting from dark to light (such as the Australian government releasing investigative journalist Andie Fox’s welfare history to the press), or even recontextualised from one dark space to another (as with genetic data shifting from medical to legal contexts, or the theft of personal financial data), there is still a process of transmission here that we can assess and critique through provenance. These different modalities, which guide data acquisition, sharing, interpretation, and re-use, cascade and influence different elements and apparatuses within data-driven sociotechnical systems to different extents depending on context. Attempts to illuminate and make sense of these complex forces, we argue, expose data-driven practices as inherently political in terms of whose interests they serve.

Provenance in Darkness and in Light

When processes of data capture, sharing, interpretation, and re-use are obscured, it impacts on the extent to which we might retrospectively examine cases where malpractice in responsible data custodianship and stewardship has occurred, because it makes it difficult to see how things have been rendered real and knowable, changed over time, had causality ascribed to them, and to what degree of confidence a decision has been made based on a given dataset. To borrow from this issue’s concerns, the paradigm of dark spaces covers a range of different kinds of valences on the idea of private, secret, or exclusive contexts. We can parallel it with the idea of ‘light’ spaces, which equally hold a range of different concepts about what is open, public, or accessible.
For instance, in the use of social data garnered from online platforms, the practices of academic researchers and analysts working in the private sector often fall within a grey zone when it comes to consent and transparency. Here the binary notion of public and private is complicated by the passage of data from light to dark (and back to light). Writing in a different context, Michael Warner complicates the notion of publicness. He observes that the idea of something being public is in and of itself always sectioned off, divorced from being fully generalisable, and it is “just whatever people in a given context think it is” (11). Michael Hardt and Antonio Negri argue that publicness is already shadowed by an idea of state ownership, leaving us in a situation where public and private already both sit on the same side of the propertied/commons divide as if the “only alternative to the private is the public, that is, what is managed and regulated by states and other governmental authorities” (vii). The same can be said about the way data is conceived as a public good or common asset. These ideas of light and dark are useful categorisations for deliberately moving past the tensions that arise when trying to qualify different subspecies of privacy and openness. The problem with specific linguistic dyads of private vs. public, or open vs. closed, and so on, is that they are embedded within legal, moral, technical, economic, or rhetorical distinctions that already involve normative judgements on whether such categories are appropriate or valid. Data may be located in a dark space for legal reasons that fall under the legal domain of ‘private’ or it may be dark because it has been stolen. It may simply be inaccessible, encrypted away behind a lost password on a forgotten external drive. Equally, there are distinctions around lightness that can be glossed – the openness of Open Data (see: theodi.org) is of an entirely separate category to the AACS encryption key, which was illegally but enthusiastically shared across the internet in 2007 to the point where it is now accessible on Wikipedia. The language of light and dark spaces allows us to cut across these distinctions and discuss in deliberately loose terms the degree to which something is accessed, with any normative judgments reserved for the cases themselves. Data provenance, in this sense, can be used as a methodology to critique the way that data is recontextualised from light to dark, dark to light, and even within these distinctions. Data provenance critiques the way that data is presented as if it were “there for the taking”. This also suggests that when data is used for some or another secondary purpose – generally for value creation – some form of closure or darkening is to be expected. Data in the public domain is more than simply a specific informational thing: there is always context, and this contextual specificity, we argue, extends far beyond anything that can be captured in a metadata schema or a licensing model. Even the transfer of data from one open, public, or light context to another will evoke new degrees of openness and luminosity that should not be assumed to be straightforward. And with this a new set of relations between data-user-subjects and stewards emerges. 
The movement of data between public and private contexts, by virtue of the growing amount of personal information that is generated through the traces left behind as people make use of increasingly digitised services in going about their everyday lives, means that data-motile processes are constantly occurring behind the scenes – in darkness – where data comes into the view, or possession, of third parties without obvious mechanisms of consent, disclosure, or justification. Given that there are “many hands” (D’Ignazio and Klein) involved in making data portable between light and dark spaces, there can equally be diversity in the approaches taken to generate critical literacies of these relations. There are two complexities that we argue are important for considering the ethics of data motility from light to dark, and this differs from the concerns that we might have when we think about other illuminating tactics such as open data publishing, freedom-of-information requests, or when data is anonymously leaked in the public interest. The first is that the terms of ethics must be communicable to individuals and groups whose data literacy may be low, effectively non-existent, or not oriented around the objective of upholding or generating data-luminosity as an element of a wider, more general form of responsible data stewardship. Historically, a productive approach to data literacy has been finding appropriate metaphors from adjacent fields that can help add depth – by way of analogy – to understanding data motility. Here we return to our earlier assertion that data is more than simply a transactable commodity. Consider the notion of “giving” and “taking” in the context of darkness and light. The analogy of giving and taking is deeply embedded in the notion of data acquisition and sharing by virtue of the etymology of the word data itself: in Latin, “things having been given”; in French, données, a natural gift, perhaps one that is given to those that attempt capture for the purposes of empiricism – representation in quantitative form is a quality that is given to phenomena being brought into the light. However, in the contemporary parlance of “analytics”, data is “taken” in the form of recording, measuring, and tracking. Data is considered to be something valuable enough to give or take because of its capacity to stand in for real things. The empiricist’s preferred method is to take rather than to accept what is given (Kitchin, 2); the data-capitalist’s is to incentivise the act of giving or to take what is already given (or yet to be taken). Because data-motile processes are not simply passive forms of reading what is contained within a dataset, the materiality and subjectivity of data extraction and interpretation is something that should not be ignored. These processes represent the recontextualisation of data from one space to another and are expressed in the landmark case of Cambridge Analytica, where a private research company extracted data from Facebook and used it to engage in psychometric analysis of unknowing users.

Table 1: Mechanisms of Data Capture (mechanism: characteristics and approach to data stewardship)
- Historical: information created, recorded, or gathered about people or things directly from the source or a delegate but accessed for secondary purposes.
- Observational: represents patterns and realities of everyday life, collected by subjects by their own choice and with some degree of discretion over the methods. Third parties access this data through a reciprocal arrangement with the subject (e.g., in exchange for providing a digital service such as online shopping, banking, healthcare, or social networking).
- Purposeful: data gathered with a specific purpose in mind and collected with the objective to manipulate its analysis to achieve certain ends.
- Integrative: places less emphasis on specific data types but rather looks towards the social and cultural factors that afford access to and facilitate the integration and linkage of disparate datasets.

There are ethical challenges associated with data that has been sourced from pre-existing sets or that has been extracted from websites and online platforms through scraping and then enriched through cleaning, annotation, de-identification, aggregation, or linking to other data sources (tab. 1). As a way to address this challenge, our suggestion of “data provenance” can be defined as where a data point comes from, how it came into being, and how it became valuable for some or another purpose. In developing this idea, we borrow from both the computational and biological sciences (Buneman et al.), where provenance, as a form of qualitative inquiry into data-motile processes, centres around understanding the origin of a data point as part of a broader, almost forensic, analysis of quality and error-potential in datasets. Provenance is an evaluation of a priori computational inputs and outputs from the results of database queries and audits. Provenance can also be applied to other contexts where data passes through sociotechnical systems, such as behavioural analytics, targeted advertising, machine learning, and algorithmic decision-making. Conventionally, data provenance is based on understanding where data has come from and why it was collected. Both these questions are concerned with the evaluation of the nature of a data point within the wider context of a database that is itself situated within a larger sociotechnical system where the data is made available for use. In its conventional sense, provenance is a means of ensuring that a data point is maintained as a single source of truth (Buneman, 89), and by way of a reproducible mechanism which allows for its path through a set of technical processes, it affords the assessment of how reliable a system’s output might be by sheer virtue of the ability for one to retrace the steps from point A to B. “Where” and “why” questions are illuminating because they offer an ends-and-means view of the relation between the origins and ultimate uses of a given data point or set. Provenance is interesting when studying data luminosity because means and ends have much to tell us about the origins and uses of data in ways that gesture towards a more accurate and structured research agenda for data ethics that takes the emphasis away from individual moral patients and reorients it towards practices that occur within information management environments. Provenance offers researchers seeking to study data-driven practices a heuristic similar to a journalist’s line of questioning: who, what, when, where, why, and how? This last question of how is something that can be incorporated into conventional models of provenance to make them useful in data ethics. The question of how data comes into being extends questions of power, legality, literacy, permission-seeking, and harm in an entangled way and notes how these factors shape the nature of personal data as it moves between contexts.
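To make the who/what/where/how/why heuristic concrete, a practitioner might operationalise it as a machine-readable record attached to a dataset. The sketch below is purely illustrative; its field names and example values are ours, not the authors'.

```python
# Purely illustrative: one way to operationalise the who/what/where/how/why
# questions as a machine-readable record attached to a dataset. All field
# names and the example values are invented for this sketch.
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    what: str   # what was collected, and for what stated purpose
    where: str  # parameters of inclusion/exclusion in the dataset
    who: str    # who collected the data and who has or will access it
    how: str    # how the data is used: discursively, practically, instrumentally
    why: str    # the rationality and trade-offs shaping its governance
    history: list[str] = field(default_factory=list)  # cascading transactions

    def recontextualise(self, note: str) -> None:
        """Log each movement of the data between 'light' and 'dark' contexts."""
        self.history.append(note)

record = ProvenanceRecord(
    what="public social media posts; stated purpose: academic research",
    where="public accounts only; private accounts excluded",
    who="university research group, via platform API",
    how="aggregate sentiment analysis, no individual profiling",
    why="consent terms and trade-offs disclosed at collection time",
)
record.recontextualise("shared with a commercial partner (light -> dark)")
```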
Forms of provenance accumulate from transaction to transaction, cascading along, as a dataset ‘picks up’ the types of provenance that have led to its creation. This may involve multiple forms of overlapping provenance – methodological and epistemological, legal and illegal – which modulate different elements and apparatuses. Provenance, we argue, is an important methodological consideration for workers in the humanities and social sciences. Provenance provides a set of shared questions on which models of transparency, accountability, and trust may be established. It points us towards tactics that might help data-subjects understand privacy in a contextual manner (Nissenbaum) and even establish practices of obfuscation and “informational self-defence” against regimes of datafication (Brunton and Nissenbaum). Here provenance is not just a declaration of what the means and ends of data capture, sharing, linkage, and analysis are. We sketch the outlines of a provenance model in Table 2 below.

Table 2: Forms of Data Provenance
- What? Metaphorical frame: the epistemological structure of a database determines the accuracy of subsequent decisions; data must be consistent. Dark: what data is asked of a person beyond what is strictly needed for service delivery. Light: data that is collected for a specific stated purpose with informed consent from the data-subject. How does the decision about what to collect disrupt existing polities and communities? What demands for conformity does the database make of its subjects?
- Where? Metaphorical frame: the contents of a database are important for making informed decisions; data must be represented. Dark: the parameters of inclusion/exclusion that create unjust risks or costs to people because of their inclusion or exclusion in a dataset. Light: the parameters of inclusion or exclusion that afford individuals representation or acknowledgement by being included or excluded from a dataset. How are populations recruited into a dataset? What divides exist that systematically exclude individuals?
- Who? Metaphorical frame: who has access to data, and how privacy is framed, is important for the security of data-subjects; data access is political. Dark: access to the data by parties not disclosed to the data-subject. Light: who has collected the data and who has or will access it? How is the data made available to those beyond the data-subjects?
- How? Metaphorical frame: data is created with a purpose and is never neutral; data is instrumental. Dark: how the data is used, to what ends, discursively, practically, instrumentally – is it a private record, a source of value creation, the subject of extortion or blackmail? Light: how the data was intended to be used at the time that it was collected.
- Why? Metaphorical frame: data is created by people who are shaped by ideological factors; data has potential. Dark: the political rationality that shapes data governance with regard to technological innovation. Light: the trade-offs that are made known to individuals when they contribute data into sociotechnical systems over which they have limited control.

Conclusion

As an illuminating methodology, provenance offers a specific line of questioning practices that take information through darkness and light.
The emphasis that it places on a narrative for data assets themselves (asking what, when, who, how, and why) offers a mechanism for traceability and has potential for application across contexts and cases that allows us to see data malpractice as something that can be productively generalised and understood as a series of ideologically driven technical events with social and political consequences, without being marred by perceptions of the exceptionality of individual, localised cases of data harm or data violence.

References

Brunton, Finn, and Helen Nissenbaum. "Political and Ethical Perspectives on Data Obfuscation." Privacy, Due Process and the Computational Turn: The Philosophy of Law Meets the Philosophy of Technology. Eds. Mireille Hildebrandt and Katja de Vries. New York: Routledge, 2013. 171-195.
Buneman, Peter, Sanjeev Khanna, and Wang-Chiew Tan. "Data Provenance: Some Basic Issues." International Conference on Foundations of Software Technology and Theoretical Computer Science. Berlin: Springer, 2000.
Davis, Jenny L. How Artifacts Afford: The Power and Politics of Everyday Things. Cambridge: MIT Press, 2020.
D'Ignazio, Catherine, and Lauren F. Klein. Data Feminism. Cambridge: MIT Press, 2020.
Hardt, Michael, and Antonio Negri. Commonwealth. Cambridge: Harvard UP, 2009.
Kitchin, Rob. "Big Data, New Epistemologies and Paradigm Shifts." Big Data & Society 1.1 (2014).
Lawrence, Matthew. "Emerging Technology: An Interview with Adam Greenfield: 'God Forbid That Anyone Stopped to Ask What Harm This Might Do to Us'." Institute for Public Policy Research, 13 Oct. 2017. <https://www.ippr.org/juncture-item/emerging-technology-an-interview-with-adam-greenfield-god-forbid-that-anyone-stopped-to-ask-what-harm-this-might-do-us>.
Lupton, Deborah. "Vital Materialism and the Thing-Power of Lively Digital Data." Social Theory, Health and Education. Eds. Deana Leahy, Katie Fitzpatrick, and Jan Wright. London: Routledge, 2018.
Nissenbaum, Helen F. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford: Stanford Law Books, 2020.
Warner, Michael. "Publics and Counterpublics." Public Culture 14.1 (2002): 49-90.
49

Anaya, Ananya. "Minimalist Design in the Age of Archive Fever". M/C Journal 24, no. 4 (August 24, 2021). http://dx.doi.org/10.5204/mcj.2794.

Abstract
In a listicle on becomingminimalist.com, Joshua Becker argues that advances in personal computing have contributed to the growing popularity of the minimalist lifestyle. Becker explains that computational media can efficiently absorb physical artefacts like books, photo albums, newspapers, clocks, calendars, and more. In Nawapol Thamrongrattanarit’s Happy Old Year (2019, ฮาวทูทิ้ง ทิ้งอย่างไร..ไม่ให้เหลือเธอ) the protagonist Jean also argues that material possessions are wasteful and unnecessary in the era of cloud storage. In the film, she redesigns her old-fashioned and messy childhood home to create a minimalist home office. In decluttering their material possessions through a partial reliance on computational storage, Jean and Becker conveniently dispense with the materiality of informational infrastructures and digital archives. Information technology’s ever-growing capacity for storage and circulation also intensifies anxieties about clutter. During our online interactions, we inadvertently leave an amassing trail of metadata behind that allows algorithms to “personalise” our interfaces. Consequently, our interfaces are “cluttered” with recommendations that range from toothpaste to news, movies, clothes, and more, based on a narrow and homophilic comparison of datasets. Notably, this hypertrophic trail of digital clutter threatens to overrepresent and blur personal identities. By mindfully reducing excessive consumption and discarding wasteful possessions, our personal spaces can become tidy and coherent. On the other hand, there is little that individuals can do to control nonhuman forms of digital accumulation and the datafied archives that meticulously record and store our activities on a micro-temporal scale. In this essay, I explore archive fever as the prosthetic externalisation of memory across physical and digital spaces. Paying close attention to Sianne Ngai’s work on vernacular aesthetic categories and Susanna Paasonen’s exploration of equivocal affective sensations, I study how advocates of minimalist design seek to recuperate our fraught capacities for affective experience in the digital era. In particular, I examine how Thamrongrattanarit problematises minimalist design, prosthetic memory, and the precarious materiality of digital media in Happy Old Year and Mary Is Happy, Mary Is Happy (2013, แมรี่ อีส แฮปปี้, แมรี่ อีส แฮปปี้).

Transmedial Minimalist Networks and Empty Spaces

Marie Kondo famously teaches us how to segregate objects that spark joy from material possessions that can be discarded (Kondo). The KonMari method has a strong transmedial presence with Kondo’s bestselling books, her blog and online store, a Netflix series, and sticky memes that feature her talking about objects that do not spark joy. It is interesting to note the rising popularity of prescriptive minimalist lifestyle blogs that utilise podcasts, video essays, tutorials, apps, and more to guide the mindful selection of essential material possessions from waste. Personal minimalism is presented as an antidote to late capitalist clutter as self-help gurus appear across our computational devices to teach us how we can curb our carbon footprints and reduce consumerist excess. Yet, as noted by Katherine Hayles, maximal networked media demands a form of hyper-attention that implicates us in multiple information streams at once.
There is a tension between the overwhelming simultaneity in the viewing experience of transmedial minimalist lifestyle networks and the rhetoric of therapeutic selection espoused in their content. In their ethnographic work with minimalists, Eun Jeong Cheon and Norman Makoto Su explore how mindfully constructed empty spaces can serve as a resource for technological design (Cheon and Su). Cheon and Su note how empty spaces possess a symbolic and functional value for their respondents. Decluttered empty spaces offer a sensuous experience for minimalists in coherently representing their identity and serve as a respite from congested and busy cities. Furthermore, empty spaces transform the home into a meaningful site of reflection about people’s objects and values as minimalists actively work to reduce their ownership of physical artefacts and the space that material possessions occupy in their homes and minds: the notion of gazing upon empty spaces is not simply about reading or processing information for minimalists. Rather, gazing gives minimalists a visual indicator of their identity, progress, and values. (Cheon and Su 10) Instead of seeking to fill and augment empty space, Cheon and Su ask what it might mean to design technology that appreciates the absence of information and the limitation of space.

The Interestingness of “Total Design and Internet Plenitude”

Sianne Ngai argues that in a world where we are constantly hailed as aesthetic subjects, our aesthetic experiences grow increasingly fragile and ineffectual (Ngai 2015). Ngai further contends that late capitalism makes the elite exaggeration of the autonomy of art (at auction houses, mega-exhibitions, biennales, and more) concurrently possible with the hyper-aestheticisation of everyday life. The increase in inconsequential aesthetic experiences mirrors a larger habituation to aesthetic novelty along with the loss of the traditional friction between art and the commodity form: in tandem with these seismic changes to longstanding ideas of art’s vocation, weaker aesthetic categories crop up everywhere, testifying in their very proliferation to how, in a world of “total design and Internet plenitude”, aesthetic experience, while less rarefied, also becomes less intense. (Ngai 21) Ngai offers us the cute, interesting, and zany as the key vernacular categories that describe aesthetic experience in “the hyper-commodified, information-saturated, and performance-driven conditions of late-capitalist culture” (1). Aesthetic experience no longer subscribes to a single exceptional feeling but is located in the ambiguous mixture of mundane affect. Susanna Paasonen notes how Ngai’s analysis of an everyday aesthetic experience that is complex and equivocal helps explain how seemingly contradictory and irreconcilable affective tensions might in fact be mutually co-dependent with each other (Paasonen). By critiquing the broad and binary generalisations about addiction and networked technologies, Paasonen emphasises the ambivalent and fleeting nature of affective formation in the era of networked media. Significantly, Paasonen explores how ubiquitous networked infrastructures bind us in dynamic sensations of attention and distraction, control and helplessness, and boredom and interest. For Ngai, the interesting is a “low, often hard-to-register flicker of affect accompanying our recognition of minor differences from a norm” (18). There is a discord between knowledge and feeling (and cognition and perception) at the heart of the interesting.
We are drawn to the interesting object after noticing something peculiar about it and yet, we are simultaneously at a loss of knowledge about the exact contents of that peculiarity. The “interesting” is embodied in the seriality of constant circulation and a temporal experience of in-betweenness and anticipation in a paradoxical era of routinised novelty. Ngai notes how in the 1960s, many minimalist conceptual artists were preoccupied with tracking the movement of objects and information by transport and communication technologies. In offering a representation of networks of circulation, “merely interesting” conceptual art disseminates information about itself and makes technologies of distribution central to its process of production. The interesting is a pervasive aesthetic judgment that also explains our affectively complex rapport with information in the context of networked technologies. Acclimatised to the repetitive tempos of internet browsing and circular refreshing, Paasonen notes we often oscillate between boredom and interest during our usage of networked media. As Ngai explains, the interesting is “a discursive aesthetic about difference in the form of information and the pathways of its movement and exchange” (1). It is then “interesting” to explore how Thamrongrattanarit tracks the circulation of information and the pathways of transmedial exchange across Twitter and cinema in Mary Is Happy, Mary Is Happy.

Digital Memory in MIHMIH

Mary Is Happy, Mary Is Happy is adapted from a set of 410 consecutive tweets by Twitter user @marymaloney. The film instantiates the phatic, ephemeral flow of a Twitter feed through its deadpan and episodic narrative. The titular protagonist Mary is a fickle-headed high-school senior trying to design a minimalist yearbook for her school to preserve their important memories. Yet, the sudden entry of an autocratic principal forces her to follow the school administration’s arbitrary demands and curtail her artistic instincts. Ultimately, Mary produces a thick yearbook that is filled with hagiographic information about the anonymous principal. Thamrongrattanarit offers cheeky commentary about Thailand’s authoritarian royalist democracy where the combination of sudden coups and unquestioning obedience has fostered a peculiar environment of political amnesia. Hagiographic and bureaucratic informational overload is presented as an important means to sustain this combination of veneration and paranoia. @marymaloney’s haphazard tweets are superimposed in the film as intertitles and every scene also draws inspiration from the tweet displayed in an offhand manner. We see Mary swiftly do several random and unexplained things like purchase jellyfishes, sleep through a sudden trip to Paris, rob a restaurant, and more in rapid succession. The viewer is overwhelmed because of a synchronised engagement with two different informational currents. We simultaneously read the tweet and watch the scene. The durational tension between knowing and feeling draws our attention to the friction between conceptual interpretation and sensory perception. Like the conceptual artists of the 1960s, Thamrongrattanarit also shows “information in the act of being circulated” (Ngai 157). Throughout the film, we see Mary and her best friend Suri walk along emptied railway tracks that figuratively represent the routes of informational circulation across networked technologies.
With its quirky vignettes and episodic narrative progression, MIHMIH closely mirrors Paasonen’s description of microevents and microflow-like movement on social media. The film also features several abrupt and spectacular “microshocks” that interrupt the narrative’s linear flow. For example, there is a running gag about Mary’s cheap and malfunctioning phone frequently exploding in the film while she is on a call. The repetitive explosions provide sudden jolts of deadpan humour. Notably, Mary also mentions how she uses bills of past purchases to document her daily thoughts rather than a notebook to save paper. The tweets are visually represented through the overwhelming accumulation of tiny bills that Mary often struggles to arrange in a coherent pattern. Thamrongrattanarit draws our attention to the fraught materiality of digital memory and microblogging that does not align with neat and orderly narrativisation. By encouraging a constant expression of thoughts within its distinctive character limit, Twitter promotes minimal writing and maximal fragmentation. Paasonen argues that our networked technologies take on a prosthetic function by externalising memory in their databases. This prosthetic reserve of datafied memory is utilised by the algorithmic unconscious of networked media for data mining. Our capacities for simultaneous multichannel attention and distraction are increasingly subsumed by capital’s novel forms of value extraction. Mary’s use of bills to document her diary takes on another “interesting” valence here as Thamrongrattanarit connects the circulation of information on social media with monetary transactions and the accumulation of debt. While memory in common parlance is normally associated with acts of remembrance and commemoration, digital memory refers to an address for storage and retrieval. Wendy Chun argues that software conflates storage with memory as the computer stores files in its memory (Chun). Furthermore, digital memory only endures through ephemeral processes of regeneration and degeneration. Even as our computational devices move towards planned obsolescence, digital memory paradoxically promises perpetual storage. The images of dusty and obsolete computers in MIHMIH recall the materiality of the devices whose databases formerly stored many prosthetic memories. For Wolfgang Ernst, digital archives displace cultural memory from a literary-based narrativised framework to a calculative and mathematical one as digital media environments increasingly control how a culture remembers. As Jussi Parikka notes, “we are miniarchivists ourselves in this information society, which could be more aptly called an information management society” (2). While traditional archives required the prudent selection and curation of important objects that will be preserved for future use on a macro temporal scale, the Internet is an agglomerative storage and retrieval database that records information on a micro temporal scale. The proliferation of agglomerative mini archives also creates anxieties about clutter where the miniarchivists of the “information-management society” must contend with the effects of our ever-expanding digital trail. It is useful to note how processes of selection and curation that remain central to minimalist decluttering can be connected with the design of a personal archive.
Ernst further argues that digital memory cannot be visualised as a place where objects lie in static rest but is better understood as a collection of mini archives in motion that become perceptible because of dynamic signal-based processing. In MIHMIH, memory inscription is associated with the “minimalist” yearbook that Mary was trying to create along with the bills where she documents her tweets/thoughts. At one point, Mary tries to carefully arrange her overflowing bills across her wall in a pattern to make sense of her growing emotional crisis. Yet, she is overwhelmed by the impossibility of this task. Networked media’s storage of prosthetic memory also makes self-representation ambiguous and messy. As a result, Mary’s story does not align with cathartic and linear narrativisation but with a messy agglomerative database.

Happy Old Year: Decluttering to Mend Prosthetic Memories

Kylie Cardell argues that the KonMari method connects tidiness to the self-conscious design of a curated personal archive. Marie Kondo associates decluttering with self-representation. "As Kondo is acutely aware, making memories is not simply about recuperating and preserving symbolic objects of the past, but is a future-oriented process that positions subjects in a peculiar way" (Cardell 2). This narrative formation of personal identity involves carefully storing a limited number of physical artefacts that will spark joy for the future self. Yet, we must segregate these affectively charged objects from clutter. Kondo encourages us to make intuitive judgments of conviction by overcoming ambivalent feelings and attachments about the past that are distributed over a wide set of material possessions. Notably, this form of decluttering involves archiving the prosthetic memories that dwell in our (analogue) material possessions. In Happy Old Year, Jean struggles to curate her personal archive as she becomes painfully aware of the memories that reside in her belongings. Interestingly, the film’s Thai title loosely translates as “How to Dump”. Jean has an urgent deadline to declutter her home so that it can be designed into a minimalist home office. Nevertheless, she gradually realises that she cannot coldly “dump” all her things and decides to return some of the borrowed objects to her estranged friends. This form of decluttering helps assuage her guilt about letting go of the past and allows her to (awkwardly and) elegantly honour her prosthetic memories. HOY reverses the clichéd before-after progression of events since we begin with the minimalist home and go back in flashbacks to observe its inundated and messy state. HOY’s after-before narrative, along with its peculiar title that substitutes ‘new’ with ‘old’, alludes to the clashing temporalities that Jean is caught up within. She is conflicted between deceptive nostalgic remembrance and her desire to start over with a minimalist blank slate that is purged of her past regrets. In many remarkable moments, HOY instantiates movement on computational screens to mirror digital media’s dizzying speeds of circulation and storage. Significantly, the film begins with the machinic perspective of a phone screen capturing a set of minimalist designs from a book. Jean refuses to purchase (and store) the whole book since she only requires a few images that can be preserved in her phone’s memory. As noted in the introduction, minimalist organisation can effectively draw on computational storage to declutter physical spaces.
In another subplot, Jean is forced to retrieve a photo that she took years ago for a friend. She grudgingly searches through a box of CDs (a cumbersome storage device in the era of clouds) but ultimately finds the image in her ex-boyfriend Aim’s hard disk. As she browses through a folder titled 2013, her hesitant clicks display a montage of happy and intimate moments that the couple shared together. Aim notes how the computer often behaves like a time machine. Unlike Aim, Jean did not carefully organise and store her prosthetic memories and was even willing to discard the box of CDs that were emblematic of defunct and wasteful accumulation. Speaking about how memory is externalised in digital storage, Thamrongrattanarit notes: for me, in the digital era, we just changed the medium, but human relationships stay the same. … It’s just more complicated because we can communicate from a distance, we can store a ton of memories, which couldn’t have ever happened in the past. (emphasis added) When Jean “dumped” Aim to move to Sweden, she blocked him across channels of networked communicational media to avoid any sense of ambient intimacy between them. In digitising our prosthetic memories and maintaining a sense of “connected presence” across social media, micro temporal databases have made it nearly impossible to erase and forget our past actions. Minimalist organisation might help us craft a coherent and stable representation of personal identity through meticulous decluttering. Yet, late-capitalist clutter takes on a different character in our digital archives where the algorithmic unconscious of networked media capitalises on prosthetic storage to make personal identity ambiguous and untidy. It is interesting to note that Jean initially gets in touch with Aim to return his old camera and apologise for their sudden breakup. The camera can record events to “freeze” them in time and space. Later in the film, Jean discovers a happy family photo that makes her reconsider whether she has been too harsh on her father because of how he “dumped” her family. Yet, Jean bitterly finds that her re-evaluation of her material possessions and their dated prosthetic memories is deceptive. In overidentifying with the frozen images and her affectively charged material possessions, she is misled by the overwhelming plenitude of nostalgic remembrance. Ultimately, Jean must “dump” all her things instead of trying to tidy up the jumbled temporal frictions. In the final sequence of HOY, Jean lies to her friend Pink about her relationship with Aim. She states that they are on good terms. Jean then unfriends Aim on Facebook, yet again rupturing any possibility of phatic and ambient intimacy between them. As they sit before her newly emptied house, Pink notes how Jean can do a lot with this expanded space. In a tight close-up, Jean gazes at her empty space with an ambiguous yet pained expression. Her plan to cathartically purge her regrets and fraught memories by recuperating her prosthetic memories failed. With the remnants of her past self expunged as clutter, Jean is left with a set of empty spaces that will eventually resemble the blank slate that we see at the beginning of the film. The new year and blank slate signify a fresh beginning for her future self. However, this reverse transition from a minimalist blank slate to her chaotically inundated childhood home frames a set of deeply equivocal affective sensations. 
Nonetheless, Jean must mislead Pink to sustain the notion of tidy and narrativised coherence that equivocally masks her fragmented sense of an indefinable loss.

Conclusion

MIHMIH and HOY explore the unresolvable and conflicting affective tensions that arise in an ecosystem of all-pervasive networked media. Paasonen argues that our ability to control networked technologies concurrently fosters our mundane and prosthetic dependency on them. Both Jean and Mary seek refuge in the simplicity of minimalist design to wrest control over their overstimulating spaces and to tidy up their personal narratives. It is important to examine contemporary minimalist networks in conjunction with affective formation and aesthetic experience in the era of “total design and internet plenitude”. In an information-management society where prosthetic memories haunt our physical and digital spaces, minimalist decluttering becomes a form of personal archiving that simultaneously empowers unambiguous aesthetic feeling and linear, stable autobiographical representation. The neatness of minimalist decluttering conjugates with an ideal self that can resolve ambivalent affective attachments about the past and have a coherent vision for the future. Yet, we cannot sort the clutter that resides in digital memory’s micro temporal archives and that drastically complicates our personal narratives. Significantly, the digital self is not compatible with neat and orderly narrativisation but instead resembles an unstable and agglomerative database.

References

Cardell, Kylie. “Modern Memory-Making: Marie Kondo, Online Journaling, and the Excavation, Curation, and Control of Personal Digital Data.” a/b: Auto/Biography Studies 32.3 (2017): 499–517. DOI: 10.1080/08989575.2017.1337993.
Cheon, Eun Jeong, and Norman Makoto Su. “The Value of Empty Space for Design.” Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018. DOI: 10.1145/3173574.3173623.
Chun, Wendy Hui Kyong. Programmed Visions: Software and Memory. MIT P, 2013.
Ernst, Wolfgang, and Jussi Parikka. Digital Memory and the Archive. U of Minnesota P, 2013.
Happy Old Year. Dir. Nawapol Thamrongrattanarit. Happy Ending Film, 2019.
Hayles, N. Katherine. “How We Read: Close, Hyper, Machine.” ADE Bulletin (2010): 62-79. DOI: 10.1632/ade.150.62.
Kondo, Marie. The Life-Changing Magic of Tidying Up. Ten Speed Press, 2010.
Mankowski, Lukasz. “Interview with Nawapol Thamrongrattanarit: Happy Old Year Is Me in 100% for the First Time.” Asian Movie Pulse, 9 Feb. 2020. <http://asianmoviepulse.com/2020/02/interview-with-nawapol-thamrongrattanarit-2/>.
Mary Is Happy, Mary Is Happy. Dir. Nawapol Thamrongrattanarit. Pop Pictures, 2013.
Ngai, Sianne. Our Aesthetic Categories: Zany, Cute, Interesting. Harvard UP, 2015.
Paasonen, Susanna. Dependent, Distracted, Bored: Affective Formations in Networked Media. MIT P, 2021.
Stephens, Paul. The Poetics of Information Overload: From Gertrude Stein to Conceptual Writing. U of Minnesota P, 2015.
50

Arnold, Bruce, and Margalit Levin. "Ambient Anomie in the Virtualised Landscape? Autonomy, Surveillance and Flows in the 2020 Streetscape". M/C Journal 13, no. 2 (May 3, 2010). http://dx.doi.org/10.5204/mcj.221.

Abstract
Our thesis is that the city’s ambience is now an unstable dialectic in which we are watchers and watched, mirrored and refracted in a landscape of iPhone auteurs, eTags, CCTV and sousveillance. Embrace ambience! Invoking Benjamin’s spirit, this article does not seek to limit understanding through restriction to a particular theme or theoretical construct (Buck-Morss 253). Instead, it offers snapshots of interactions at the dawn of the postmodern city. That bricolage also engages how people appropriate, manipulate, disrupt and divert urban spaces and strategies of power in their everyday life. Ambient information can both liberate and disenfranchise the individual. This article asks whether our era’s dialectics result in a new personhood or merely restate the traditional spectacle of ‘bright lights, big city’. Does the virtualized city result in ambient anomie and satiation or in surprise, autonomy and serendipity? (Gumpert 36) Since the steam age, ambience has been characterised in terms of urban sound, particularly the alienation attributable to the individual’s experience as a passive receptor of a cacophony of sounds – now soft, now loud, random and recurrent–from the hubbub of crowds, the crash and grind of traffic, the noise of industrial processes and domestic activity, factory whistles, fire alarms, radio, television and gramophones (Merchant 111; Thompson 6). In the age of the internet, personal devices such as digital cameras and iPhones, and urban informatics such as CCTV networks and e-Tags, ambience is interactivity, monitoring and signalling across multiple media, rather than just sound. It is an interactivity in which watchers observe the watched observing them and the watched reshape the fabric of virtualized cities merely by traversing urban precincts (Hillier 295; De Certeau 163). It is also about pervasive although unevenly distributed monitoring of individuals, using sensors that are remote to the individual (for example cameras or tag-readers mounted above highways) or are borne by the individual (for example mobile phones or badges that systematically report the location to a parent, employer or sex offender register) (Holmes 176; Savitch 130). That monitoring reflects what Doel and Clark characterized as a pervasive sense of ambient fear in the postmodern city, albeit fear that like much contemporary anxiety is misplaced–you are more at risk from intimates than from strangers, from car accidents than terrorists or stalkers–and that is ahistorical (Doel 13; Scheingold 33). Finally, it is about cooption, with individuals signalling their identity through ambient advertising: wearing tshirts, sweatshirts, caps and other apparel that display iconic faces such as Obama and Monroe or that embody corporate imagery such as the Nike ‘Swoosh’, Coca-Cola ‘Ribbon’, Linux Penguin and Hello Kitty feline (Sayre 82; Maynard 97). In the postmodern global village much advertising is ambient, rather than merely delivered to a device or fixed on a billboard. Australian cities are now seas of information, phantasmagoric environments in which the ambient noise encountered by residents and visitors comprises corporate signage, intelligent traffic signs, displays at public transport nodes, shop-window video screens displaying us watching them, and a plethora of personal devices showing everything from the weather to snaps of people in the street or neighborhood satellite maps. 
They are environments through which people traverse both as persons and abstractions, virtual presences on volatile digital maps and in online social networks.

Spectacle, Anomie or Personhood

The spectacular city of modernity is a meme of communication, cultural and urban development theory. It is spectacular in the sense of being large, artificial, even sublime. It is also spectacular because it is built around the gaze, whether the vistas of Hausmann’s boulevards, the towers of Manhattan and Chicago, the shopfront ‘sea of light’ and advertising pillars noted by visitors to Weimar Berlin or the neon ‘neo-baroque’ of Las Vegas (Schivelbusch 114; Fritzsche 164; Ndalianis 535). In the year 2010 it aspires to 2020 vision, a panoptic and panspectric gaze on the part of governors and governed alike (Kullenberg 38). In contrast to the timelessness of Heidegger’s hut and the ‘fixity’ of rural backwaters, spectacular cities are volatile domains where all that is solid continues to melt into air with the aid of jackhammers, and the latest ‘new media’ potentially result in a hyperreality that makes it difficult to determine what is real and what is not (Wark 22; Berman 19). The spectacular city embodies a dialectic. It is anomic because it induces an alienation in the spectator, a fatigue attributable to media satiation and to a sense of being a mere cog in a wheel, a disempowered and readily-replaceable entity that is denied personhood – recognition as an autonomous individual – through subjection to a Fordist and post-Fordist industrial discipline or the more insidious imprisonment of being ‘a housewife’, one ant in a very large ant hill (Dyer-Witheford 58). People, however, are not automatons: they experience media, modernity and urbanism in different ways. The same attributes that erode the selfhood of some people enhance the autonomy and personhood of others. The spectacular city, now a matrix of digits, information flows and opportunities, is a realm in which people can subvert expectations and find scope for self-fulfillment, whether by wearing a hoodie that defeats CCTV or by using digital technologies to find and associate with other members of stigmatized affinity groups. One person’s anomie is another’s opportunity.

Ambience and Virtualisation

Eighty years after Fritz Lang’s Metropolis forecast a cyber-sociality, digital technologies are resulting in a ‘virtualisation’ of social interactions and cities. In post-modern cityscapes, the space of flows comprises an increasing number of electronic exchanges through physically disjointed places (Castells 2002). Virtualisation involves supplementation or replacement of face-to-face contact with hypersocial communication via new media, including SMS, email, blogging and Facebook. In 2010 your friends (or your boss or a bully) may always be just a few keystrokes away, irrespective of whether it is raining outside, there is a public transport strike or the car is in for repairs (Hassan 69; Baron 215). Virtualisation also involves an abstraction of bodies and physical movements, with the information that represents individual identities or vehicles traversing the virtual spaces comprised of CCTV networks (where viewers never encounter the person or crowd face to face), rail ticketing systems and road management systems (x e-Tag passed by this tag reader, y camera logged a specific vehicle onto a database using automated number-plate recognition software) (Wood 93; Lyon 253).
Surveillant Cities
Pervasive anxiety is a permanent and recurrent feature of urban experience. History is punctuated by attempts to dissolve public debate and infringe minority freedoms, often driven by an urgency to control perceived disorder both physically and through a cultivated dominant theory (early twentieth-century gendered discourses that pushed women back into the private sphere; ethno-racial closure and control in the Black Metropolis of 1940s Chicago) (Wilson 1991). In the postmodern city, unprecedented technological capacity generates a totalising media vector whose plausible by-product is the perception of an ambient menace (Wark 3). Concurrent faith in technology as a cost-effective mechanism for public management (policing, traffic, planning, revenue generation) has resulted in the emergence of the surveillant city. It is both a social and architectural fabric whose infrastructure is dotted with sensors and whose people assume that they will be monitored by private/public sector entities and directed by interactive traffic management systems – from electronic speed signs and congestion indicators through to rail schedule displays – leveraging data collected through those sensors. The fabric embodies tensions between governance (at its crudest, enforcement of law by police and their surrogates in private security services) and the soft cage of digital governmentality, with people being disciplined through knowledge that they are being watched and that the observation may be shared with others in an official or non-official shaming (Parenti 51; Staples 41). Encounters with railway station CCTV might thus result in exhibition of the individual in court or on broadcast television, whether in the nightly news or in a ‘reality TV’ crime exposé built around ‘most wanted’ footage (Jermyn 109). Misbehaviour by a partner might merely result in scrutiny of mobile phone bills or web browser histories (what illicit content has the partner consumed, which parts of cyberspace have been visited), followed by a visit to the family court. It might instead result in digital vigilantism, with private offences being named and shamed on electronic walls across the global village, such as Facebook.
iPhone Auteurism
Activists have responded to pervasive surveillance by turning the cameras on ‘the watchers’ in an exercise of ‘sousveillance’ (Bennett 13; Huey 158). That mirroring might involve the meticulous documentation, often using the same geospatial tools deployed by public/private security agents, of the location of closed-circuit television cameras and other surveillance devices. One outcome is the production of maps identifying who is watching and where that watching is taking place. As a corollary, people with anxieties about being surveilled, with a taste for street theatre or with a receptiveness to a new form of urban adventure, have used those maps to traverse cities via routes along which they cannot be identified by cameras, tags and other tools of the panoptic sort, or simply to adopt masks at particular locations. In 2020 can anyone aspire to be a protagonist in V for Vendetta? (iSee) Mirroring might take more visceral forms, with protestors, for example, increasingly making a practice of capturing images of police and private security services dealing with marches, riots and pickets.
The advent of 3G mobile phones with a still/video image capability and the ongoing ‘dematerialisation’ of traditional video cameras (i.e. progressively cheaper, lighter, more robust and less visible) mean that those engaged in political action can document interaction with authority. So can passers-by. That ambient imaging, turning the public gaze on power and thereby potentially redefining the ‘public’ (given that in Australia the community has been embodied by the state and discourse has been mediated by state-sanctioned media), poses challenges for media scholars and exponents of an invigorated civil society in which we are looking together – and looking at each other – rather than bowling alone. One challenge for consumers in construing ambient media is trust. Can we believe what we see, particularly when few audiences have forensic skills and intermediaries such as commercial broadcasters may privilege immediacy (the ‘breaking news’ snippet from participants) over context and verification? Social critics such as Baudelaire and Benjamin exalted the flâneur, the free spirit who gazed on the street, a street that was as much a spectacle as the theatre and as vibrant as the circus. In 2010 the same technologies that empower citizen journalism and foster a succession of velvet revolutions feed flâneurs whose streetwalking doesn’t extend beyond a keyboard and a modem. The US and UK have thus seen the emergence of gawker services, with new media entrepreneurs attempting to build sustainable businesses by encouraging fans to report the location of celebrities (and ideally provide images of those encounters) for the delectation of people who are web surfing or receiving a tweet (Burns 24). In the age of ambient cameras, where the media are everywhere and nowhere (and micro-stock photo services challenge agencies such as Magnum), everyone can join the paparazzi. Anyone can deploy that ambient surveillance to become a stalker. The enthusiasm with which fans publish sightings of celebrities will presumably facilitate attacks on bodies rather than images. Information may want to be free but so, inconveniently, do iconoclasts and practitioners of participatory panopticism (Dodge 431; Dennis 348). Rhetoric about ‘citizen journalism’ has been co-opted by ‘old media’, with national broadcasters and commercial enterprises soliciting still images and video from non-professionals, whether for free or on a commercial basis. It is a world where ‘journalists’ are everywhere and where responsibility resides uncertainly at the editorial desk, able to reject or accept offerings from people with cameras but without the industrial discipline formerly exercised through professional training and adherence to formal codes of practice. It is thus unsurprising that South Australia’s Government, echoed by some peers, has mooted anti-gawker legislation aimed at would-be auteurs who impede emergency services by stopping their cars to take photos of bushfires, road accidents or other disasters. The flipside of that iPhone auteurism is anxiety about the public gaze, expressed through moral panics regarding street photography and sexting. Apart from a handful of exceptions (notably photography in the Sydney Opera House precinct, in the immediate vicinity of defence facilities and in some national parks), Australian law does not prohibit ‘street photography’, which includes photographs or videos of streetscapes or public places.
Despite periodic assertions that it is a criminal offence to take photographs of people – particularly minors – without permission from an official, parent/guardian or individual, there is no general restriction on ambient photography in public spaces. Moral panics about photographs of children (or adults) on beaches or in the street reflect an ambient anxiety in which danger is associated with strangers and strangers are everywhere (Marr 7; Bauman 93). That conceptualisation, in which the gaze (ever pervasive, ever powerful) is tantamount to a violation, would delight people who are wholly innocent of Judith Butler or Andrea Dworkin. The reality is more prosaic: most child sex offences involve intimates, rather than the ‘monstrous other’ with the telephoto lens or collection of nastiness on his iPod (Cossins 435; Ingebretsen 190). Recognition of that reality is important in considering moves that would egregiously restrict legitimate photography in public spaces or happy snaps made by doting relatives. An ambient image – unposed, unpremeditated, uncoerced – of an intimate may empower both authors and subjects when little is solid and memory is fleeting. The same caution might usefully be applied in considering alarms about sexting, i.e. the creation using mobile phones (and access by phone or computer monitor) of intimate images of teenagers by teenagers. Australian governments have moved to emulate their US peers, treating such photography as a criminal offence that can be conceptualised as child pornography and addressed through permanent inclusion in sex offender registers. Lifelong stigmatisation is inappropriate in dealing with naïve or brash 12- and 16-year-olds who have been exchanging intimate images without an awareness of legal frameworks or an understanding of consequences (Shafron-Perez 432). Cameras may be everywhere among the e-generation but legal knowledge, like the future, is unevenly distributed.
Digital Handcuffs
Generations prior to 2008 lost themselves in the streets, gaining individuality or personhood by escaping the surveillance inherent in living at home, being observed by neighbours or simply being surrounded by colleagues. Streets offered anonymity and autonomy (Simmel 1903), one reason why heterodox sexuality has traditionally been negotiated in parks and other beats and on kerbs where sex workers ply their trade (Dalton 375). Recent decades have seen a privatisation of those public spaces, with urban planning and digital technologies imposing a new governmentality on hitherto ambient ‘deviance’ and on voyeuristic-exhibitionist practice such as heterosexual ‘dogging’ (Bell 387). That governmentality has been enforced through mechanisms such as the replacement of traditional public toilets with ‘pods’ that are conveniently maintained by global service providers such as Veolia (the unromantic but profitable rump of former media & sewers conglomerate Vivendi) and function as billboards for advertising groups such as JC Decaux. Faces encountered in the vicinity of the twenty-first-century pissoir are thus likely to be those of supermodels selling yoghurt, low-interest loans or sportsgear – the same faces sighted at other venues across the nation and across the globe. Visiting ‘the men’s’ gives new meaning to the word ambience when you are more likely to encounter Louis Vuitton and a CCTV camera than George Michael. George’s face, or that of Madonna, Barack Obama, Kevin 07 or Homer Simpson, might instead be sighted on the t-shirts or hoodies mentioned above.
George’s music might also be borne on the bodies of people you see in the park, on the street or on the bus. This is the age of ambient performance, taken out of concert halls and virtualised on iPods, Walkmen and other personal devices, music at the demand of the consumer rather than rationed by concert managers (Bull 85). The cost of that ambience, the liberation of performance from time and space constraints, may be a Weberian disenchantment (Steiner 434). Technology has also removed anonymity by offering digital handcuffs to employees, partners, friends and children. The same mobile phones used in the past to offer excuses or otherwise disguise the bearer’s movement may now be tied to an observer through location services that plot the person’s movement across Google Maps or the geospatial information of similar services. That tracking is an extension into the private realm of the identification we now take for granted when using taxis or logistics services, with corporate Australia for example investing in systems that allow accurate determination of where a shipment is located (on Sydney Harbour Bridge? the loading dock? accompanying the truck driver on unauthorised visits to the pub?) and a forecast of when it will arrive (Monmonier 76). Such technologies are being used on a smaller scale to enforce digital Fordism among the binary proletariat in corporate buildings and campuses, with ‘smart badges’ and biometric gateways logging an individual’s movement across institutional terrain (so many minutes in the conference room, so many minutes in the bathroom or lingering among the faux rainforest near the Vice Chancellery) (Bolt).
Bright Lights, Blog City
It is a truth universally acknowledged, at least by right-thinking Foucauldians, that modernity is a matter of coercion and anomie as all that is solid melts into air. If we are living in an age of hypersocialisation and hypercapitalism – movies and friends on tap, along with the panoptic sorting by marketers and pervasive scrutiny by both the ‘information state’ and public audiences (the million people or one person reading your blog) that is an inevitable accompaniment of the digital cornucopia – we might ask whether everyone is or should be unhappy. This article began by highlighting traditional responses to the bright lights, brashness and excitement of the big city. One conclusion might be that in 2010 not much has changed. Some people experience ambient information as liberating; others as threatening, productive of physical danger or of a more insidious anomie in which personal identity is blurred by an ineluctable electro-smog. There is disagreement about the professionalism (for which read ethics and inhibitions) of ‘citizen media’ and about a culture in which, as in the 1920s, audiences believe that they ‘own the image’ embodying the celebrity or public malefactor. Digital technologies allow you to navigate through the urban maze and allow officials, marketers or the hostile to track you. Those same technologies allow you to subvert both the governmentality and governance. You are free: Be ambient!
References
Baron, Naomi. Always On: Language in an Online and Mobile World. New York: Oxford UP, 2008.
Bauman, Zygmunt. Liquid Modernity. Oxford: Polity Press, 2000.
Bell, David. “Bodies, Technologies, Spaces: On ‘Dogging’.” Sexualities 9.4 (2006): 387-408.
Bennett, Colin. The Privacy Advocates: Resisting the Spread of Surveillance. Cambridge: MIT Press, 2008.
Berman, Marshall. All That Is Solid Melts into Air: The Experience of Modernity. London: Verso, 2001.
Bolt, Nate. “The Binary Proletariat.” First Monday 5.5 (2000). 25 Feb 2010 ‹http://131.193.153.231/www/issues/issue5_5/bolt/index.html›.
Buck-Morss, Susan. The Dialectics of Seeing: Walter Benjamin and the Arcades Project. Cambridge: MIT Press, 1991.
Bull, Michael. Sounding Out the City: Personal Stereos and the Management of Everyday Life. Oxford: Berg, 2003.
Bull, Michael. Sound Moves: iPod Culture and the Urban Experience. London: Routledge, 2008.
Burns, Kelli. Celeb 2.0: How Social Media Foster Our Fascination with Popular Culture. Santa Barbara: ABC-CLIO, 2009.
Castells, Manuel. “The Urban Ideology.” The Castells Reader on Cities and Social Theory. Ed. Ida Susser. Malden: Blackwell, 2002. 34-70.
Cossins, Anne, Jane Goodman-Delahunty, and Kate O’Brien. “Uncertainty and Misconceptions about Child Sexual Abuse: Implications for the Criminal Justice System.” Psychiatry, Psychology and the Law 16.4 (2009): 435-452.
Dalton, David. “Policing Outlawed Desire: ‘Homocriminality’ in Beat Spaces in Australia.” Law & Critique 18.3 (2007): 375-405.
De Certeau, Michel. The Practice of Everyday Life. Berkeley: U of California P, 1984.
Dennis, Kingsley. “Keeping a Close Watch: The Rise of Self-Surveillance and the Threat of Digital Exposure.” The Sociological Review 56.3 (2008): 347-357.
Dodge, Martin, and Rob Kitchin. “Outlines of a World Coming into Existence: Pervasive Computing and the Ethics of Forgetting.” Environment & Planning B: Planning & Design 34.3 (2007): 431-445.
Doel, Marcus, and David Clarke. “Transpolitical Urbanism: Suburban Anomaly and Ambient Fear.” Space & Culture 1.2 (1998): 13-36.
Dyer-Witheford, Nick. Cyber-Marx: Cycles and Circuits of Struggle in High Technology Capitalism. Champaign: U of Illinois P, 1999.
Fritzsche, Peter. Reading Berlin 1900. Cambridge: Harvard UP, 1998.
Gumpert, Gary, and Susan Drucker. “Privacy, Predictability or Serendipity and Digital Cities.” Digital Cities II: Computational and Sociological Approaches. Berlin: Springer, 2002. 26-40.
Hassan, Robert. The Information Society. Cambridge: Polity Press, 2008.
Hillier, Bill. “Cities as Movement Economies.” Intelligent Environments: Spatial Aspects of the Information Revolution. Ed. Peter Droege. Amsterdam: Elsevier, 1997. 295-342.
Holmes, David. “Cybercommuting on an Information Superhighway: The Case of Melbourne’s CityLink.” The Cybercities Reader. Ed. Stephen Graham. London: Routledge, 2004. 173-178.
Huey, Laura, Kevin Walby, and Aaron Doyle. “Cop Watching in the Downtown Eastside: Exploring the Use of CounterSurveillance as a Tool of Resistance.” Surveillance and Security: Technological Politics and Power in Everyday Life. Ed. Torin Monahan. London: Routledge, 2006. 149-166.
Ingebretsen, Edward. At Stake: Monsters and the Rhetoric of Fear in Public Culture. Chicago: U of Chicago P, 2001.
iSee. “Now More Than Ever.” 20 Feb 2010 ‹http://www.appliedautonomy.com/isee/info.html›.
Jackson, Margaret, and Julian Ligertwood. “Identity Management: Is an Identity Card the Solution for Australia?” Prometheus 24.4 (2006): 379-387.
Jermyn, Deborah. Crime Watching: Investigating Real Crime TV. London: IB Tauris, 2007.
Kullenberg, Christopher. “The Social Impact of IT: Surveillance and Resistance in Present-Day Conflicts.” FIfF-Kommunikation 1 (2009): 37-40.
Lyon, David. Surveillance as Social Sorting: Privacy, Risk and Digital Discrimination. London: Routledge, 2003.
Marr, David. The Henson Case. Melbourne: Text, 2008.
Maynard, Margaret. Dress and Globalisation. Manchester: Manchester UP, 2004.
Merchant, Carolyn. The Columbia Guide to American Environmental History. New York: Columbia UP, 2002.
Monmonier, Mark. “Geolocation and Locational Privacy: The ‘Inside’ Story on Geospatial Tracking.” Privacy and Technologies of Identity: A Cross-disciplinary Conversation. Ed. Katherine Strandburg and Daniela Raicu. Berlin: Springer, 2006. 75-92.
Ndalianis, Angela. “Architecture of the Senses: Neo-Baroque Entertainment Spectacles.” Rethinking Media Change: The Aesthetics of Tradition. Ed. David Thorburn and Henry Jenkins. Cambridge: MIT Press, 2004. 355-374.
Parenti, Christian. The Soft Cage: Surveillance in America. New York: Basic Books, 2003.
Savitch, Henry. Cities in a Time of Terror: Space, Territory and Local Resilience. Armonk: Sharpe, 2008.
Sayre, Shay. “T-shirt Messages: Fortune or Folly for Advertisers.” Advertising and Popular Culture: Studies in Variety and Versatility. Ed. Sammy Danna. New York: Popular Press, 1992. 73-82.
Scheingold, Stuart. The Politics of Street Crime: Criminal Process and Cultural Obsession. Philadelphia: Temple UP, 1992.
Schivelbusch, Wolfgang. Disenchanted Night: The Industrialization of Light in the Nineteenth Century. Berkeley: U of California P, 1995.
Shafron-Perez, Sharon. “Average Teenager or Sex Offender: Solutions to the Legal Dilemma Caused by Sexting.” John Marshall Journal of Computer & Information Law 26.3 (2009): 431-487.
Simmel, Georg. “The Metropolis and Mental Life.” Individuality and Social Forms. Ed. Donald Levine. Chicago: U of Chicago P, 1971.
Staples, William. Everyday Surveillance: Vigilance and Visibility in Postmodern Life. Lanham: Rowman & Littlefield, 2000.
Steiner, George. George Steiner: A Reader. New York: Oxford UP, 1987.
Thompson, Emily. The Soundscape of Modernity: Architectural Acoustics and the Culture of Listening in America. Cambridge: MIT Press, 2004.
Wark, McKenzie. Virtual Geography: Living with Global Media Events. Bloomington: Indiana UP, 1994.
Wilson, Elizabeth. The Sphinx in the City: Urban Life, the Control of Disorder and Women. Berkeley: U of California P, 1991.
Wood, David. “Towards Spatial Protocol: The Topologies of the Pervasive Surveillance Society.” Augmenting Urban Spaces: Articulating the Physical and Electronic City. Ed. Alessandro Aurigi and Fiorella de Cindio. Aldershot: Ashgate, 2008. 93-106.