Journal articles on the topic 'On-the-fly code generation'

Consult the top 44 journal articles for your research on the topic 'On-the-fly code generation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Eckert, Candice, Brian Cham, Pengyi Li, Jing Sun, and Gillian Dobbie. "Linking Design Model with Code." International Journal of Software Engineering and Knowledge Engineering 26, no. 09n10 (2016): 1473–91. http://dx.doi.org/10.1142/s0218194016400131.

Full text
Abstract:
With the growth in size and complexity of modern computer systems, improving quality at all stages of software development has become a critical issue. Current software production is still largely dependent on manual code development. Besides slowing the development process, the errors introduced by programmers contribute a substantial portion of the defects in the final software product. Model-driven engineering (MDE), despite having many advantages, is often overlooked by programmers due to a lack of proper understanding and training in the matter. This paper investigates the advantages and disadvantages of MDE and looks at research results showing the adoption rates of design models. It analyzes different tools used for automated code generation and presents the reasons behind technical decisions such as the choice of programming language or design model. In light of the findings, an educational tool, namely Lorini, was developed to provide automated code generation from design models. The implemented tool consists of a plug-in for the Astah framework aimed at teaching Java programming to students through UML diagrams. It features instantaneous code generation from three types of UML diagrams, code-diagram matching, a feedback panel for error display, and on-the-fly compilation and execution of the resulting program. We also explore the possibility of generating assertion constraints from the design model and using them to verify the implementation. Evaluation indicated the tool to be successful, offering unique educational features and being intuitive to use.
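To make the core mechanism of such tools concrete, the following Python sketch turns a hand-written description of a single UML class into a Java class skeleton. It is purely illustrative and is not Lorini's implementation; the dictionary layout and the emit_java_class helper are assumptions made for the example.

    # Illustrative sketch (not Lorini's implementation): render Java source text
    # from a minimal, hand-written description of a UML class.
    def emit_java_class(cls):
        """Render a Java class skeleton from a simple dict describing a UML class."""
        lines = [f"public class {cls['name']} {{"]
        for name, jtype in cls.get("attributes", []):
            lines.append(f"    private {jtype} {name};")
        for name, jtype in cls.get("attributes", []):
            getter = "get" + name[0].upper() + name[1:]
            lines.append(f"    public {jtype} {getter}() {{ return {name}; }}")
        for name, ret, params in cls.get("operations", []):
            args = ", ".join(f"{t} {p}" for p, t in params)
            lines.append(f"    public {ret} {name}({args}) {{")
            lines.append("        // TODO: method body supplied by the student")
            lines.append("        throw new UnsupportedOperationException();")
            lines.append("    }")
        lines.append("}")
        return "\n".join(lines)

    account = {
        "name": "Account",
        "attributes": [("owner", "String"), ("balance", "double")],
        "operations": [("deposit", "void", [("amount", "double")])],
    }
    print(emit_java_class(account))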
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Jingjing, Zichao Li, Tao Ge, Irwin King, and Michael R. Lyu. "Text Revision By On-the-Fly Representation Optimization." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (2022): 10956–64. http://dx.doi.org/10.1609/aaai.v36i10.21343.

Full text
Abstract:
Text revision refers to a family of natural language generation tasks, where the source and target sequences share moderate resemblance in surface form but differentiate in attributes, such as text formality and simplicity. Current state-of-the-art methods formulate these tasks as sequence-to-sequence learning problems, which rely on large-scale parallel training corpus. In this paper, we present an iterative in-place editing approach for text revision, which requires no parallel data. In this approach, we simply fine-tune a pre-trained Transformer with masked language modeling and attribute classification. During inference, the editing at each iteration is realized by two-step span replacement. At the first step, the distributed representation of the text optimizes on the fly towards an attribute function. At the second step, a text span is masked and another new one is proposed conditioned on the optimized representation. The empirical experiments on two typical and important text revision tasks, text formalization and text simplification, show the effectiveness of our approach. It achieves competitive and even better performance than state-of-the-art supervised methods on text simplification, and gains better performance than strong unsupervised methods on text formalization. Our code and model are released at https://github.com/jingjingli01/OREO.
APA, Harvard, Vancouver, ISO, and other styles
3

Franz, Michael. "Open Standards Beyond Java: On the Future of Mobile Code for the Internet." JUCS - Journal of Universal Computer Science 4, no. 5 (1998): 522–33. https://doi.org/10.3217/jucs-004-05-0522.

Full text
Abstract:
At first sight, Java's position as the de-facto standard for portable software distributed across the Internet seems virtually unassailable. Interestingly enough, however, it is surprisingly simple to provide alternatives to the Java platform, using the plug-in mechanism supported by the major commercial World Wide Web browsers. We are currently developing a comprehensive infrastructure for mobile software components. This is a long-term research activity whose primary objectives are not directly related to today's World Wide Web, but which targets future high-performance component-software systems. However, purely as a technology demonstration, we have recently started a small spin-off project called "Juice" with the intent of extending our experimental mobile-code platform into the realm of the commercial Internet. Juice is implemented in the form of a browser plug-in that generates native code on-the-fly. Although our software distribution format and run-time architecture are fundamentally different from Java's, and arguably more advanced, once the appropriate Juice plug-in has been installed on a Windows PC or a Macintosh computer, end-users can no longer distinguish between applets that are based on Java and those that are based on Juice. The two kinds of applets can even coexist on the same Web-page. This, however, means that Java can in principle be complemented by alternative technologies (or even gradually be displaced by something better) with far fewer complications than most people seem to assume. As dynamic code generation technology matures further, it will become less important which code-distribution format has the largest "market share"; many such formats can be supported concurrently. Future executable-content developers may well be able to choose from a wide range of platforms, probably including several dialects of Java itself. Hence, a pattern of "open standards" for mobile code is likely to eventually emerge, in spite of Java's current dominance.
APA, Harvard, Vancouver, ISO, and other styles
4

Jo, Yunki, and Deokjung Lee. "Current capabilities and future developments of Monte Carlo code MCS." EPJ Nuclear Sciences & Technologies 11 (2025): 7. https://doi.org/10.1051/epjn/2025001.

Full text
Abstract:
The Monte Carlo code MCS was developed at the Ulsan National Institute of Science and Technology (UNIST) in 2011. In the initial development phase, the primary focus was on developing a Monte Carlo code for the high-fidelity multicycle analysis of large-scale power reactors, especially pressurized water reactors. For power reactor analysis, capabilities including refueling and shuffling of fuel assemblies, on-the-fly Doppler broadening of neutron cross-sections, and multiphysics coupling were implemented in MCS. Beyond power reactor analysis, MCS has been developed to extend its applications: it has been used for radiation shielding, group constant generation, and sensitivity, uncertainty, and transient analyses. This study provides a general overview of MCS capabilities.
APA, Harvard, Vancouver, ISO, and other styles
5

Kuznetsov, A. A., and D. O. Zakharov. "Deep learning-based models’ application to generating a cryptographic key from a face image." Radiotekhnika, no. 213 (June 16, 2023): 31–40. http://dx.doi.org/10.30837/rt.2023.2.213.03.

Full text
Abstract:
Generating cryptographic keys, such as passwords or pin codes, involves utilizing specialized algorithms that rely on complex mathematical transformations. These keys necessitate secure storage measures and complex distribution and processing mechanisms, which often incur substantial costs. However, an alternative approach emerges, proposing the generation of cryptographic keys based on the user's biometric data. Since one can generate keys "on the fly," there is no longer a requirement for key storage or distribution. These generated keys, derived from biometric information, can be effectively employed for biometric authentication, offering numerous advantages. Additionally, this alternative approach unlocks new possibilities for constructing information infrastructure. By utilizing biometric keys, the initiation of cryptographic algorithms like encryption and digital signatures becomes more streamlined and less burdensome in storing and processing procedures. This paper explores biometric key generation technologies, focusing on applying deep learning models. In particular, we employ convolutional neural networks to extract significant biometric features from human face images as the foundation for subsequent key generation processes. A comprehensive analysis involves extensive experimentation with various deep-learning models. We achieve remarkable results by optimizing the algorithm's parameters, with the False Reject Rate (FRR) and False Acceptance Rate (FAR) approximately equal and less than 10%. With code-based cryptographic extractors’ post-quantum level of security, we ensure the continued protection and integrity of sensitive information within the cryptographic framework.
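As a concrete illustration of the general idea, the Python sketch below derives a repeatable 256-bit key from a feature vector by quantization with public helper data, then hashes the bin indices. It is a toy stand-in, not the paper's scheme: real systems, including the code-based extractors mentioned above, rely on error-correcting codes, and the embeddings here are synthetic random vectors rather than CNN face features.

    import hashlib
    import numpy as np

    STEP = 0.5  # quantization step; an assumption for the demo, not a value from the paper

    def enroll(embedding):
        """Quantize each feature, publish helper data, and hash the bin indices into a key."""
        x = np.asarray(embedding, dtype=float)
        bins = np.floor(x / STEP).astype(np.int64)
        helper = (bins + 0.5) * STEP - x          # public offsets that centre each value in its bin
        key = hashlib.sha256(bins.tobytes()).hexdigest()
        return key, helper

    def reproduce(embedding, helper):
        """Re-derive the key from a fresh, noisy measurement plus the stored helper data."""
        x = np.asarray(embedding, dtype=float) + helper
        bins = np.floor(x / STEP).astype(np.int64)
        return hashlib.sha256(bins.tobytes()).hexdigest()

    rng = np.random.default_rng(1)
    enrolled = rng.normal(size=128)                            # stand-in for a CNN face embedding
    key, helper = enroll(enrolled)
    same_user = enrolled + rng.normal(scale=0.05, size=128)    # small intra-user noise
    other_user = rng.normal(size=128)
    print(reproduce(same_user, helper) == key)    # True while per-feature noise stays below STEP/2
    print(reproduce(other_user, helper) == key)   # False: a different face yields different bins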
APA, Harvard, Vancouver, ISO, and other styles
6

Kempeneers, Pieter, Ondrej Pesek, Davide De Marchi, and Pierre Soille. "pyjeo: A Python Package for the Analysis of Geospatial Data." ISPRS International Journal of Geo-Information 8, no. 10 (2019): 461. http://dx.doi.org/10.3390/ijgi8100461.

Full text
Abstract:
A new Python package, pyjeo, that deals with the analysis of geospatial data has been created by the Joint Research Centre (JRC). Adopting the principles of open science, the JRC strives for transparency and reproducibility of results. In this view, it has been decided to release pyjeo as free and open software. This paper describes the design of pyjeo and how its underlying C/C++ library was ported to Python. Strengths and limitations of the design choices are discussed. In particular, the data model that allows the generation of on-the-fly data cubes is of importance. Two use cases illustrate how pyjeo can contribute to open science. The first is an example of large-scale processing, where pyjeo was used to create a global composite of Sentinel-2 data. The second shows how pyjeo can be imported within an interactive platform for image analysis and visualization. Using an innovative mechanism that interprets Python code within a C++ library on-the-fly, users can benefit from all functions in the pyjeo package. Images are processed in deferred mode, which is ideal for prototyping new algorithms on geospatial data, and for assessing the suitability of the results created on the fly at any scale and location.
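The deferred ("on-the-fly") processing model mentioned above can be illustrated with a small, generic Python sketch: operations are recorded when requested and only executed when a result is actually needed. The LazyPipeline class below is a hypothetical illustration of the concept and is not the pyjeo API.

    # Generic illustration of deferred processing; not the pyjeo API.
    import numpy as np

    class LazyPipeline:
        """Record operations and execute them only when the result is requested."""
        def __init__(self, loader):
            self._loader = loader          # callable that produces the raw array
            self._ops = []                 # deferred operations

        def map(self, fn, name):
            self._ops.append((name, fn))
            return self                    # allow chaining without computing anything

        def compute(self):
            data = self._loader()
            for name, fn in self._ops:
                print(f"applying {name}")
                data = fn(data)
            return data

    pipeline = (LazyPipeline(lambda: np.random.rand(512, 512))
                .map(lambda a: np.clip(a, 0.1, 0.9), "clip")
                .map(lambda a: (a - a.mean()) / a.std(), "standardise"))
    result = pipeline.compute()            # work happens only here
    print(result.shape)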
APA, Harvard, Vancouver, ISO, and other styles
7

Popa, Cristina-Elena, Constantin-Cristian Damian, and Daniela Coltuc. "Joint Data Hiding and Partial Encryption of Compressive Sensed Streams." Information 16, no. 7 (2025): 513. https://doi.org/10.3390/info16070513.

Full text
Abstract:
This paper proposes a method to secure Compressive Sensing (CS) streams. It involves protecting part of the measurements with a secret key and inserting code into the remaining measurements. The secret key is generated via a cryptographically secure pseudorandom number generator (CSPRNG) and XORed with the measurements to be inserted. For insertion, we use a reversible data hiding (RDH) scheme, which is a prediction error expansion algorithm modified to match the statistics of CS measurements. The reconstruction from the embedded stream results in a visibly distorted image. The image distortion is controlled by the number of embedded levels. In our tests, embedding on 10 levels results in ≈18 dB distortion for images of 256×256 pixels reconstructed with the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). A particularity of the presented method is on-the-fly insertion, which makes it appropriate for the sequential acquisition of measurements with a single-pixel camera. On-the-fly insertion avoids the buffering of CS measurements for the subsequent standard encryption and generation of a thumbnail image.
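The two building blocks named in the abstract can be sketched on toy integer data in Python: masking selected values with a keystream derived from a secret, and reversible prediction-error-expansion (PEE) embedding of one bit per carrier sample. The parameter choices and the single-sample embedding below are illustrative simplifications, not the paper's exact scheme.

    import hashlib
    import numpy as np

    def keystream(secret: bytes, n: int) -> np.ndarray:
        """Deterministic byte stream derived from a secret (SHAKE-256 as the expander)."""
        return np.frombuffer(hashlib.shake_256(secret).digest(n), dtype=np.uint8)

    def pee_embed(sample: int, predictor: int, bit: int) -> int:
        """Expand the prediction error to hide one bit (reversible)."""
        error = sample - predictor
        return predictor + 2 * error + bit

    def pee_extract(marked: int, predictor: int):
        """Recover the hidden bit and the original sample."""
        expanded = marked - predictor
        return expanded & 1, predictor + (expanded >> 1)

    secret = b"demo secret"
    protected = np.array([37, 41, 52], dtype=np.uint8)           # toy stand-in for protected measurements
    masked = protected ^ keystream(secret, protected.size)       # XOR-mask with the keystream

    carrier, predictor = 120, 118                                # one carrier sample and its prediction
    marked = pee_embed(carrier, predictor, bit=int(masked[0] & 1))
    bit, restored = pee_extract(marked, predictor)
    assert restored == carrier and bit == int(masked[0] & 1)     # embedding is exactly reversible
    print(masked, marked, bit, restored)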
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Bin, Fan Wu, Xiao Han, et al. "VIGC: Visual Instruction Generation and Correction." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 6 (2024): 5309–17. http://dx.doi.org/10.1609/aaai.v38i6.28338.

Full text
Abstract:
The integration of visual encoders and large language models (LLMs) has driven recent progress in multimodal large language models (MLLMs). However, the scarcity of high-quality instruction-tuning data for vision-language tasks remains a challenge. The current leading paradigm, such as LLaVA, relies on language-only GPT-4 to generate data, which requires pre-annotated image captions and detection bounding boxes, suffering from understanding image details. A practical solution to this problem would be to utilize the available multimodal large language models to generate instruction data for vision-language tasks. However, it's worth noting that the currently accessible MLLMs are not as powerful as their LLM counterparts, as they tend to produce inadequate responses and generate false information. As a solution for addressing the current issue, this paper proposes the Visual Instruction Generation and Correction (VIGC) framework that enables multimodal large language models to generate instruction-tuning data and progressively enhance its quality on-the-fly. Specifically, Visual Instruction Generation (VIG) guides the vision-language model to generate diverse instruction-tuning data. To ensure generation quality, Visual Instruction Correction (VIC) adopts an iterative update mechanism to correct any inaccuracies in data produced by VIG, effectively reducing the risk of hallucination. Leveraging the diverse, high-quality data generated by VIGC, we finetune mainstream models and validate data quality based on various evaluations. Experimental results demonstrate that VIGC not only compensates for the shortcomings of language-only data generation methods, but also effectively enhances the benchmark performance. The models, datasets, and code are available at https://opendatalab.github.io/VIGC
APA, Harvard, Vancouver, ISO, and other styles
9

Yang, Chenyuan, Yinlin Deng, Runyu Lu, et al. "WhiteFox: White-Box Compiler Fuzzing Empowered by Large Language Models." Proceedings of the ACM on Programming Languages 8, OOPSLA2 (2024): 709–35. http://dx.doi.org/10.1145/3689736.

Full text
Abstract:
Compiler correctness is crucial, as miscompilation can falsify program behaviors, leading to serious consequences over the software supply chain. In the literature, fuzzing has been extensively studied to uncover compiler defects. However, compiler fuzzing remains challenging: Existing arts focus on black- and grey-box fuzzing, which generates test programs without sufficient understanding of internal compiler behaviors. As such, they often fail to construct test programs to exercise intricate optimizations. Meanwhile, traditional white-box techniques, such as symbolic execution, are computationally inapplicable to the giant codebase of compiler systems. Recent advances demonstrate that Large Language Models (LLMs) excel in code generation/understanding tasks and even have achieved state-of-the-art performance in black-box fuzzing. Nonetheless, guiding LLMs with compiler source-code information remains a missing piece of research in compiler testing. To this end, we propose WhiteFox, the first white-box compiler fuzzer using LLMs with source-code information to test compiler optimization, with a spotlight on detecting deep logic bugs in the emerging deep learning (DL) compilers. WhiteFox adopts a multi-agent framework: (i) an LLM-based analysis agent examines the low-level optimization source code and produces requirements on the high-level test programs that can trigger the optimization; (ii) an LLM-based generation agent produces test programs based on the summarized requirements. Additionally, optimization-triggering tests are also used as feedback to further enhance the test generation prompt on the fly. Our evaluation on the three most popular DL compilers (i.e., PyTorch Inductor, TensorFlow-XLA, and TensorFlow Lite) shows that WhiteFox can generate high-quality test programs to exercise deep optimizations requiring intricate conditions, practicing up to 8 times more optimizations than state-of-the-art fuzzers. To date, WhiteFox has found in total 101 bugs for the compilers under test, with 92 confirmed as previously unknown and 70 already fixed. Notably, WhiteFox has been recently acknowledged by the PyTorch team, and is in the process of being incorporated into its development workflow. Finally, beyond DL compilers, WhiteFox can also be adapted for compilers in different domains, such as LLVM, where WhiteFox has already found multiple bugs.
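The analyse-generate-feedback loop described above reduces to a small control flow, sketched below in Python. The three helper functions are hypothetical placeholders standing in for LLM calls and the compiler-instrumentation oracle; they are not WhiteFox's real interfaces.

    # Schematic sketch of the analyse -> generate -> feedback loop. The helpers
    # llm_summarize_optimization, llm_generate_test and triggers_optimization are
    # hypothetical placeholders, not WhiteFox's actual components.
    import random

    def llm_summarize_optimization(source_code: str) -> str:
        return "requirement: a program with a fusable elementwise chain"   # placeholder

    def llm_generate_test(requirement: str, examples: list) -> str:
        return f"# test derived from: {requirement} ({len(examples)} triggering examples seen)"

    def triggers_optimization(test_program: str) -> bool:
        return random.random() < 0.3                                       # placeholder oracle

    def fuzz_optimization(source_code: str, budget: int = 10) -> list:
        requirement = llm_summarize_optimization(source_code)
        triggering = []                         # optimization-triggering tests fed back as examples
        for _ in range(budget):
            test = llm_generate_test(requirement, triggering)
            if triggers_optimization(test):
                triggering.append(test)         # feedback sharpens later generations
        return triggering

    print(len(fuzz_optimization("... compiler pass source ...")))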
APA, Harvard, Vancouver, ISO, and other styles
10

Jalili, Vahid, Matteo Matteucci, Marco Masseroli, and Stefano Ceri. "Explorative visual analytics on interval-based genomic data and their metadata." BMC Bioinformatics 18, no. 1 (2017): 536. https://doi.org/10.1186/s12859-017-1945-9.

Full text
Abstract:
Background: With the wide spread of public repositories of NGS processed data, the availability of user-friendly and effective tools for data exploration, analysis and visualization is becoming very relevant. These tools enable interactive analytics, an exploratory approach for the seamless "sense-making" of data through on-the-fly integration of analysis and visualization phases, suggested not only for evaluating processing results, but also for designing and adapting NGS data analysis pipelines. Results: This paper presents abstractions for supporting the early analysis of NGS processed data and their implementation in an associated tool, named GenoMetric Space Explorer (GeMSE). This tool serves the needs of the GenoMetric Query Language, an innovative cloud-based system for computing complex queries over heterogeneous processed data. It can also be used starting from any text files in standard BED, BroadPeak, NarrowPeak, GTF, or general tab-delimited format, containing numerical features of genomic regions; metadata can be provided as text files in tab-delimited attribute-value format. GeMSE allows interactive analytics, consisting of on-the-fly cycling among steps of data exploration, analysis and visualization that help biologists and bioinformaticians in making sense of heterogeneous genomic datasets. By means of an explorative interaction support, users can trace past activities and quickly recover their results, seamlessly going backward and forward in the analysis steps and comparative visualizations of heatmaps. Conclusions: GeMSE's effective application and practical usefulness are demonstrated through significant use cases of biological interest. GeMSE is available at http://www.bioinformatics.deib.polimi.it/GeMSE/, and its source code is available at https://github.com/Genometric/GeMSE under the GPLv3 open-source license.
APA, Harvard, Vancouver, ISO, and other styles
11

Feng, Tao, and Jinkun Liu. "Optimization Research of Directed Fuzzing Based on AFL." Electronics 11, no. 24 (2022): 4066. http://dx.doi.org/10.3390/electronics11244066.

Full text
Abstract:
Fuzz testing is the process of testing programs by continually producing unique inputs in order to detect and identify security flaws. It is often used in vulnerability mining. The most prevalent fuzzing approach is grey-box fuzzing, which combines lightweight code instrumentation with data-feedback-driven generation of fresh program input seeds. AFL (American Fuzzy Lop) is an outstanding grey-box fuzzing tool that is well known for its quick fork server execution, dependable genetic algorithm, and numerous mutation techniques. AFLGO proposes and executes power scheduling based on a simulated annealing process for a more appropriate energy allocation to seeds; however, it is neither reliable nor successful. To tackle this issue, we offer a dynamic energy scheduling strategy based on the fruit fly optimization algorithm. Adjusting the energy of the seeds dynamically controls the production of test cases. The findings demonstrate that the approach suggested in this research can test the target region more rapidly and thoroughly and has a high application value for patch testing and vulnerability replication.
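For readers unfamiliar with it, the fruit fly optimization algorithm is a simple swarm search, sketched generically in Python below. The objective function and parameters are stand-ins; the paper adapts this kind of search to seed-energy scheduling rather than to the toy quadratic used here.

    # Generic fruit fly optimization algorithm (FOA) sketch on a stand-in objective.
    import numpy as np

    def foa_minimize(objective, iterations=50, swarm=20, seed=0):
        rng = np.random.default_rng(seed)
        best_pos = rng.uniform(-1, 1, size=2)             # swarm centre ("smell" source guess)
        best_val = objective(best_pos)
        for _ in range(iterations):
            candidates = best_pos + rng.uniform(-0.5, 0.5, size=(swarm, 2))  # random flights
            values = np.array([objective(c) for c in candidates])
            i = values.argmin()
            if values[i] < best_val:                       # swarm flies toward the best smell
                best_val, best_pos = values[i], candidates[i]
        return best_pos, best_val

    # Stand-in objective: pretend low values mean "the energy schedule reaches targets faster".
    pos, val = foa_minimize(lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.7) ** 2)
    print(pos.round(3), round(val, 6))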
APA, Harvard, Vancouver, ISO, and other styles
12

Bhavana, K. Y., S. Usha, T. V. Mallesh, and G. D. Shiva Raju. "Numerical Simulation of Eco Sustainable Construction and Demolition Waste (CDW) based Geopolymer Concrete Blocks as Coarse Aggregates by using ANSYS Software." IOP Conference Series: Earth and Environmental Science 1327, no. 1 (2024): 012016. http://dx.doi.org/10.1088/1755-1315/1327/1/012016.

Full text
Abstract:
Recycled Aggregate (RA) reduces the carbon footprint and disposal problem associated with construction and demolition debris. As India is a lower-middle-income country, demolition of existing buildings and constructions is taking place at a rapid pace. According to the Building Materials Promotion Board, India generates 150 million tons of construction and demolition waste annually. A study commissioned by the BBMP estimated CDW generation in Bengaluru at 2,500 to 3,000 tons per day in 2022, leading to the problem of CDW disposal. The rapid manufacture of cement for construction activities, which generates millions of tons of carbon dioxide annually, is the greatest environmental concern. Coarse aggregate was made from demolished buildings and other construction debris. The construction and demolition waste was collected from C&D Waste Traders, Sirjala, Bengaluru, and recycled aggregate of 10 mm size was used. As the crushed aggregate is porous in nature due to residual slurry, these aggregates were treated in two stages: abrasion to remove residual loose slurry, and then immersion of the aggregate in a fly ash solution of concentration 1:6. Additionally, these cured aggregates are mixed with a strong geopolymer precursor in a mix ratio of 1:4:5 to form 400×200×150 mm blocks. To evaluate the masonry behavior of the concrete blocks, tests were conducted as per the ASTM codes. The Prism Test on Masonry was conducted as per ASTM C 1314, the Diagonal Tension (Shear) test on Masonry Assemblages as per ASTM E519, and the experimental study was compared with ANSYS software simulations.
APA, Harvard, Vancouver, ISO, and other styles
13

Gregory, Drobaha, Lisitsin Vladimir, Safoshkina Lyudmila, Morozov Ihor, and Poberezhnyi Andrey. "CONSTRUCTION OF THE EXPERT SYSTEM OF GEOSPATIAL ANALYSIS THAT EMPLOYS SCENARIOS FOR THE AUTOMATED DATA GENERATION FOR A DIGITAL MAP." Eastern-European Journal of Enterprise Technologies 3, no. 2(99) (2019): 43–50. https://doi.org/10.15587/1729-4061.2019.170620.

Full text
Abstract:
This paper reports a study into the formalization of algorithms for solving problems, the generation of data for digital maps, as well as their implementation, through a set of simple operations that would be intuitively clear to a user who is not a specialist in the field of geoinformation technologies. The approach that has been proposed is based on the construction of typical scenarios for model execution. Such scenarios are edited and adapted to the use of alternative electronic terrain maps. The result of scenario operation is a set of data ‒ layers of a digital map based on the input parameters for the model and the problem-solving algorithms, compiled by an expert. That makes it possible to construct libraries of typical scenarios, to store them centralized, as well as provide a common access to the scenarios, and to exchange data among applications. The result of running a scenario is that the user is provided with a possibility, without writing a programming code, to perform complex operations on processing geographical data and to simulate various processes at an electronic terrain map. A geospatial analysis expert system has been developed, containing both the basic functions for geographical data processing and the high-level specialized models. A tree of decisions is built under a mode of visual construction of a problem-solving algorithm. We have implemented a conveyor of operations at which the data sources in an expert system derived when performing any operation are sent to the input of the next operation. The results of this research could be used in simulation models of military activities, the tasks on photogrammetry in designing the optimal routes to fly over a territory, and as an additional tool for analysis of terrain in geoinformation systems. There is a possibility to expand the functionality of an expert system and to add new types of operations. Thus, there is reason to assert that the process of automatic construction of data for digital maps requires specialized software and highly skilled users of geoinformation systems.
APA, Harvard, Vancouver, ISO, and other styles
14

Gilks, William P., Tanya M. Pennell, Ilona Flis, Matthew T. Webster, and Edward H. Morrow. "Whole genome resequencing of a laboratory-adapted Drosophila melanogaster population sample." F1000Research 5 (November 7, 2016): 2644. http://dx.doi.org/10.12688/f1000research.9912.1.

Full text
Abstract:
As part of a study into the molecular genetics of sexually dimorphic complex traits, we used next-generation sequencing to obtain data on genomic variation in an outbred laboratory-adapted fruit fly (Drosophila melanogaster) population. We successfully resequenced the whole genome of 220 hemiclonal females that were heterozygous for the same Berkeley reference line genome (BDGP6/dm6), and a unique haplotype from the outbred base population (LHM). The use of a static and known genetic background enabled us to obtain sequences from whole genome phased haplotypes. We used a BWA-Picard-GATK pipeline for mapping sequence reads to the dm6 reference genome assembly, at a median depth of coverage of 31X, and have made the resulting data publicly-available in the NCBI Short Read Archive (Accession number SRP058502). We used Haplotype Caller to discover and genotype 1,726,931 small genomic variants (SNPs and indels, <200 bp). Additionally we detected and genotyped 167 large structural variants (1-100Kb in size) using GenomeStrip/2.0. Sequence and genotype data are publicly-available at the corresponding NCBI databases: Short Read Archive, dbSNP and dbVar (BioProject PRJNA282591). We have also released the unfiltered genotype data, and the code and logs for data processing and summary statistics (https://zenodo.org/communities/sussex_drosophila_sequencing/).
APA, Harvard, Vancouver, ISO, and other styles
15

Constable, Thomas W., and Geoff Ross. "Trace element leaching in bench-scale recirculating ash transport systems." Canadian Journal of Civil Engineering 13, no. 2 (1986): 233–40. http://dx.doi.org/10.1139/l86-031.

Full text
Abstract:
Fly and bottom ash from coal-fired power generating stations are commonly disposed by transporting the ash in a water slurry to a lagoon. The recently developed "Environmental codes of practice for steam electric power generation" recommend the use of recycled lagoon decant water rather than fresh makeup water for these sluicing operations. To provide background information during the development of these environmental codes of practice, bench-scale studies were conducted to simulate the operation of recirculating bottom ash and combined fly/bottom ash lagoon systems, and data were collected on the concentrations of trace elements in the ash sluice waters. The ashes were obtained from seven Canadian coal-fuelled power generating stations. For most ash systems, the pH of the slurry water remained relatively constant after the first two recirculation cycles, and generally was lower in a bottom ash system than in the corresponding fly/bottom ash system. The major dissolved species in the slurry waters were sulphate, calcium, and sodium. Concentrations in bottom ash systems usually increased linearly with increasing cycles of concentration, whereas concentrations in fly/bottom ash systems generally increased during the first several cycles, then either remained constant or decreased. Scaling was observed only in studies involving fly/bottom ash from stations burning western Canadian coal or a mixture of western Canadian and U.S. bituminous coals. Key words: ash handling, fly ash, lagoons, leaching.
APA, Harvard, Vancouver, ISO, and other styles
16

Patkar, Uday, Priyanshu Singh, Harshit Panse, Shubham Bhavsar, and Chandramani Pandey. "PYTHON FOR WEB DEVELOPMENT." International Journal of Computer Science and Mobile Computing 11, no. 4 (2022): 36–48. http://dx.doi.org/10.47760/ijcsmc.2022.v11i04.006.

Full text
Abstract:
With the evolution of the web, several competing languages such as Java, PHP, Python, and Ruby are catching the attention of developers. Recently, Python has emerged as a popular and preferred web programming language because of its simplicity and ease of learning. Being a flexible language, it offers fast development of web-based applications, and it offers development using CGI and WSGI. Web development in Python is aided by powerful frameworks such as Django, web2py, Pyramid, and Flask. Thus, Python promises to emerge as one of the preferred languages for web applications. The web is a rapidly growing repository of resources, and the Internet is used as a medium for accessing these resources. Web architecture mainly comprises two entities, namely the client and the server. The web client is an application (browser) on a host machine that requests these resources, and the web server is a machine on the web that is responsible for fulfilling the requests issued by clients. Hypertext Transfer Protocol (HTTP) is the most popular protocol used by the client and server for web communication. In a static web, the browser issues an HTTP request to the HTTP server, which searches for the requisite resource in its database and returns it as an HTTP response. To avoid any compatibility issues, every request issued by the browser is in the form of a URL (Uniform Resource Locator). The URL protocol defines the rules for communication between the client and server. It comprises the host name (IP address), which helps in identifying the server system on the web; the port number, which determines the service (for example, FTP or email) on the server that should respond to the request; and the access path of the resource (web page) on the server. The web where responses are already stored in the server database in the form of static web pages is termed the static web. However, the response returned by the server to the client may also be generated on the fly, depending upon the request of the client. Web applications offer several benefits over traditional applications, which are required to be installed at each host computer that wishes to use them. Web applications do not incur publishing and distribution costs, as opposed to traditional applications where the software was published on CDs and distributed. They need not be installed at each client host; rather, they are placed at a central server and accessed by a large number of clients. Since a web application is managed centrally, application updates and data backups can be performed easily. Web applications are easily accessible irrespective of the boundaries of space and time. Since they are accessed through a browser, the platform accessing them is not an issue, and thus they provide cross-platform compatibility. In spite of the above-mentioned advantages, web applications have a few limitations. Internet connectivity and server availability are required for accessing a web application through a browser. However, accessing them over the Internet may take more time compared to applications installed on host systems. Also, web applications require compatible web browsers, and since they are deployed on the web, they are vulnerable to several Internet attacks. Web programming using CGI and WSGI requires building web applications from scratch using Python standard libraries. Python provides web frameworks in the form of packages/modules that simplify the task of writing application programs. These frameworks lighten the tedious job of developers.
They support server- and client-side programming by providing support for several activities, such as request interpretation (getting form parameters, handling cookies and sessions), response generation (producing data in HTML or other formats such as PDF or Excel), and data storage. The web frameworks are further categorized as full-stack and non-full-stack frameworks. Full-stack frameworks provide components for every phase of programming, in contrast to non-full-stack frameworks. All the frameworks include templates and data persistence as key ingredients for constructing web applications. Templates are used to avoid the complex code that results when HTML and Python code are mixed in a single file. Templates are HTML files with placeholders for data that depend upon user input. Data persistence deals with storing and retrieving data and maintaining consistency. The data can be stored and maintained using plain text files, relational database engines such as MySQL and Oracle, or object-oriented databases. A web framework providing support for WSGI should be preferred, as this makes deploying an application easier.
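Since the abstract recommends preferring frameworks with WSGI support, the following self-contained Python sketch shows what the WSGI contract itself looks like, using only the standard library's wsgiref module. The route and response are illustrative; frameworks such as Flask and Django build on this same callable interface.

    # Minimal WSGI application served with the standard library's wsgiref module.
    from wsgiref.simple_server import make_server

    def application(environ, start_response):
        """A WSGI callable: receives the request environment, returns an iterable of bytes."""
        path = environ.get("PATH_INFO", "/")
        body = f"Hello from {path}".encode("utf-8")
        start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8"),
                                  ("Content-Length", str(len(body)))])
        return [body]

    if __name__ == "__main__":
        with make_server("127.0.0.1", 8000, application) as httpd:
            print("Serving on http://127.0.0.1:8000 ...")
            httpd.serve_forever()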
APA, Harvard, Vancouver, ISO, and other styles
17

Davidson, Eva E., and William R. Martin. "On-the-Fly Generation of Differential Resonance Scattering Probability Distribution Functions for Monte Carlo Codes." Nuclear Science and Engineering 187, no. 1 (2017): 1–26. http://dx.doi.org/10.1080/00295639.2017.1294931.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Terlizzi, S., and D. Kotlyar. "ON THE FLY PREDICTION OF TH-DEPENDENT SPATIAL MACROSCOPIC CROSS-SECTIONS USING FFT." EPJ Web of Conferences 247 (2021): 02036. http://dx.doi.org/10.1051/epjconf/202124702036.

Full text
Abstract:
Monte Carlo (MC) codes can accurately model neutron transport in nuclear reactors. However, the efficient inclusion of thermal-hydraulic (TH) feedback within the MC calculation sequence is still an open problem, particularly when burnup time-evolution must be included in the analysis. For this reason, deterministic codes, leveraging the use of macroscopic cross-sections generated with higher-order methods from 2D lattice calculations, are still widely used to perform reduced-order multiphysics analyses. However, traditional cross-section generation procedures typically decompose the large core problem into multiple assembly-level problems, and thus cannot capture inter-nodal effects. Moreover, the pre-generation procedure requires additional pre-computational time to perturb/branch the problem for various operational conditions (e.g. fuel temperature), which, again, is decoupled from the core. In this paper, we propose a new method leveraging Fourier transfer functions to predict the cross-section distribution due to a variation in TH conditions. The method was tested against a 3D BWR unit-cell problem with a realistic density profile and axial fuel heterogeneity. The method was able to compute the mono-energetic cross-section distribution with a maximum error lower than 2%. Insights on the influence of the statistics used to generate the cross-sections on the accuracy of the results are also provided.
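The transfer-function idea can be illustrated schematically with numpy: estimate H(f) from one reference pair of TH perturbation and cross-section response, then reuse it to predict the response to a different perturbation. The synthetic axial profiles and the Gaussian response kernel below are assumptions for the demonstration only and do not reproduce the paper's procedure.

    import numpy as np

    z = np.linspace(0.0, 1.0, 256)                      # normalised axial coordinate
    kernel = np.exp(-0.5 * ((z - 0.5) / 0.05) ** 2)     # assumed spatial response of the cross-section
    kernel /= kernel.sum()

    def response(perturbation):
        """Stand-in 'true' model: cross-section change = smoothed TH perturbation."""
        return np.real(np.fft.ifft(np.fft.fft(perturbation) * np.fft.fft(np.fft.ifftshift(kernel))))

    ref_pert = 1.0 + np.sin(2 * np.pi * z)              # reference coolant-density perturbation
    ref_resp = response(ref_pert)
    H = np.fft.fft(ref_resp) / np.fft.fft(ref_pert)     # estimated transfer function

    new_pert = np.sin(6 * np.pi * z) * np.exp(-z)       # a different TH perturbation
    predicted = np.real(np.fft.ifft(H * np.fft.fft(new_pert)))
    actual = response(new_pert)
    print(float(np.max(np.abs(predicted - actual))))    # tiny here, because the stand-in model is linear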
APA, Harvard, Vancouver, ISO, and other styles
19

Abu-Hamdeh, Nidal H., Khaled A. Alnefaie, and Majed K. Al-Hajjaj. "Conceptual Design of Solar Powered Unmanned Arial Vehicle." Applied Mechanics and Materials 225 (November 2012): 299–304. http://dx.doi.org/10.4028/www.scientific.net/amm.225.299.

Full text
Abstract:
The solar-powered aircraft represents a major step forward in environmentally friendly vehicle technology. An unmanned aerial vehicle (UAV) was designed to fly for 24 hours continuously to achieve surveillance at low altitude. It is a lightweight, solar-powered, remotely piloted flying wing aircraft that is demonstrating the technology of applying solar power for long-duration and low-altitude flight. Several programs and codes were used in the design process of the UAV and in generating its layout. A MATLAB code was written to optimize over various values of aspect ratio (AR) and wingspan (b) after setting the mission requirements and estimating the technological parameters. A program called Java Foil was used to calculate the lift. Another program called RDS was used to obtain the final layout of the aircraft. The great benefit is that the design is general enough to be applied to different values of aspect ratio and wingspan. Moreover, the analytical form of the method allows some general principles to be clearly identified, such as the optimization over various values of aspect ratio and wingspan and the calculation of the lift.
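The kind of sweep over wingspan and aspect ratio described above can be sketched in a few lines of Python. All numerical values (mass, cruise speed, drag coefficients, usable solar power per square metre, propulsive efficiency) are illustrative assumptions, not figures from the paper.

    import numpy as np

    RHO, G = 1.112, 9.81            # air density (kg/m^3) and gravity; assumed conditions
    MASS, V = 8.0, 10.0             # assumed aircraft mass (kg) and cruise speed (m/s)
    CD0, E_OSW = 0.025, 0.85        # assumed parasite drag coefficient and Oswald efficiency
    SOLAR_W_PER_M2 = 60.0           # assumed day-averaged usable solar power per m^2 of wing
    PROP_EFF = 0.65                 # assumed propulsive efficiency

    best = None
    for b in np.arange(4.0, 8.1, 0.5):                 # wingspan (m)
        for ar in np.arange(10.0, 25.1, 1.0):          # aspect ratio
            S = b ** 2 / ar                            # wing area from b and AR
            cl = MASS * G / (0.5 * RHO * V ** 2 * S)   # lift coefficient for level flight
            cd = CD0 + cl ** 2 / (np.pi * E_OSW * ar)  # parasite + induced drag
            p_req = 0.5 * RHO * V ** 3 * S * cd / PROP_EFF   # electrical power to cruise
            p_solar = SOLAR_W_PER_M2 * S               # power harvested by wing-mounted cells
            if p_solar >= p_req and (best is None or p_req < best[0]):
                best = (p_req, b, ar)

    print("best feasible design (P_req, b, AR):", best)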
APA, Harvard, Vancouver, ISO, and other styles
20

Demir, Cansu, Ülkü Yetiş, and Kahraman Ünlü. "Identification of waste management strategies and waste generation factors for thermal power plant sector wastes in Turkey." Waste Management & Research: The Journal for a Sustainable Circular Economy 37, no. 3 (2018): 210–18. http://dx.doi.org/10.1177/0734242x18806995.

Full text
Abstract:
Thermal power plants are of great environmental importance in terms of the huge amounts of wastes that they produce. Although there are process-wise differences among these energy production systems, they all depend on the logic of burning out a fuel and obtaining thermal energy to rotate the turbines. Depending on the process modification and the type of fuel burned, the wastes produced in each step of the overall process may change. In this study, the most expected process and non-process wastes stemming from different power generation processes have been identified and given their European Waste Codes. Giving priority to the waste minimization options for the most problematic wastes from thermal power plants, waste management strategies have been defined. In addition, by using the data collected from site visits, from the literature and provided by the Turkish Republic Ministry of Environment and Urbanization, waste generation factor ranges expressed in terms of kilogram of waste per energy produced annually (kg/MWh) have been estimated. As a result, the highest generation was found to be in fly ash (24–63 for imported coal, 200–270 for native coal), bottom ash (1.3–6 for imported coal, 42–87 for native coal) and the desulfurization wastes (7.3–32) produced in coal combustion power plants. The estimated waste generation factors carry an important role in that they aid the authorities to monitor the production wastes declared by the industries.
APA, Harvard, Vancouver, ISO, and other styles
21

Ma, Shuming, Lei Cui, Damai Dai, Furu Wei, and Xu Sun. "LiveBot: Generating Live Video Comments Based on Visual and Textual Contexts." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 6810–17. http://dx.doi.org/10.1609/aaai.v33i01.33016810.

Full text
Abstract:
We introduce the task of automatic live commenting. Live commenting, which is also called “video barrage”, is an emerging feature on online video sites that allows real-time comments from viewers to fly across the screen like bullets or roll at the right side of the screen. The live comments are a mixture of opinions for the video and the chit chats with other comments. Automatic live commenting requires AI agents to comprehend the videos and interact with human viewers who also make the comments, so it is a good testbed of an AI agent’s ability to deal with both dynamic vision and language. In this work, we construct a large-scale live comment dataset with 2,361 videos and 895,929 live comments. Then, we introduce two neural models to generate live comments based on the visual and textual contexts, which achieve better performance than previous neural baselines such as the sequence-to-sequence model. Finally, we provide a retrieval-based evaluation protocol for automatic live commenting where the model is asked to sort a set of candidate comments based on the log-likelihood score, and evaluated on metrics such as mean-reciprocal-rank. Putting it all together, we demonstrate the first “LiveBot”. The datasets and the codes can be found at https://github.com/lancopku/livebot.
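The retrieval-based evaluation protocol is easy to make concrete: rank a set of candidate comments by a model score and measure the mean reciprocal rank of the true comment. The scores in this Python sketch are made up; a real evaluation would use the model's log-likelihoods.

    def mean_reciprocal_rank(ranked_lists, ground_truths):
        """ranked_lists[i] is a list of candidate ids sorted best-first for query i."""
        total = 0.0
        for ranking, truth in zip(ranked_lists, ground_truths):
            rank = ranking.index(truth) + 1          # 1-based position of the true comment
            total += 1.0 / rank
        return total / len(ranked_lists)

    # Two videos, four candidate comments each, scored by a (fictional) model.
    scores = [{"c1": -3.2, "c2": -1.1, "c3": -5.0, "c4": -2.7},
              {"c1": -0.9, "c2": -4.4, "c3": -2.0, "c4": -3.3}]
    truths = ["c2", "c3"]
    rankings = [sorted(s, key=s.get, reverse=True) for s in scores]   # highest log-likelihood first
    print(mean_reciprocal_rank(rankings, truths))    # (1/1 + 1/2) / 2 = 0.75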
APA, Harvard, Vancouver, ISO, and other styles
22

Aliaga, Jose I., Maribel Castillo, Sergio Iserte, Iker Martín-Álvarez, and Rafael Mayo. "A Survey on Malleability Solutions for High-Performance Distributed Computing." Applied Sciences 12, no. 10 (2022): 5231. http://dx.doi.org/10.3390/app12105231.

Full text
Abstract:
Maintaining a high rate of productivity, in terms of completed jobs per unit of time, in High-Performance Computing (HPC) facilities is a cornerstone in the next generation of exascale supercomputers. Process malleability is presented as a straightforward mechanism to address that issue. Nowadays, the vast majority of HPC facilities are intended for distributed-memory applications based on the Message Passing (MP) paradigm. For this reason, many efforts are based on the Message Passing Interface (MPI), the de facto standard programming model. Malleability aims to rescale executions on the fly; in other words, to reconfigure the number and layout of processes in running applications. Process malleability involves resource reallocation within the HPC system, handling the processes of the application, and redistributing data among those processes to resume the execution. This manuscript compiles how different frameworks address process malleability, their main features, their integration in resource management systems, and how they may be used in user codes. This paper is a detailed state-of-the-art survey devised as an entry point for researchers who are interested in process malleability.
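One MPI mechanism that malleability frameworks commonly build on is dynamic process creation (MPI_Comm_spawn). The mpi4py sketch below shows the bare mechanism only; "worker.py" is a hypothetical script, the broadcast payload is illustrative, and the code must be launched under an MPI runtime.

    import sys
    from mpi4py import MPI

    # Parent side: grow the job by four worker processes at runtime.
    workers = MPI.COMM_SELF.Spawn(sys.executable, args=["worker.py"], maxprocs=4)
    workers.bcast({"chunk_size": 1024}, root=MPI.ROOT)   # hand a work description to the new ranks
    workers.Disconnect()

    # worker.py would contain roughly the following:
    # from mpi4py import MPI
    # parent = MPI.Comm.Get_parent()
    # config = parent.bcast(None, root=0)    # receive the work description
    # ...do the redistributed work...
    # parent.Disconnect()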
APA, Harvard, Vancouver, ISO, and other styles
23

Lopes, João D., Mário P. Véstias, Rui Policarpo Duarte, Horácio C. Neto, and José T. de Sousa. "Coarse-Grained Reconfigurable Computing with the Versat Architecture." Electronics 10, no. 6 (2021): 669. http://dx.doi.org/10.3390/electronics10060669.

Full text
Abstract:
Reconfigurable computing architectures allow the adaptation of the underlying datapath to the algorithm. The granularity of the datapath elements and data width determines the granularity of the architecture and its programming flexibility. Coarse-grained architectures have shown the right balance between programmability and performance. This paper provides an overview of coarse-grained reconfigurable architectures and describes Versat, a Coarse-Grained Reconfigurable Array (CGRA) with self-generated partial reconfiguration, presented as a case study for better understanding these architectures. Unlike most of the existing approaches, which mainly use pre-compiled configurations, a Versat program can generate and apply myriads of on-the-fly configurations. Partial reconfiguration plays a central role in this approach, as it speeds up the generation of incrementally different configurations. The reconfigurable array has a complete graph topology, which yields unprecedented programmability, including assembly programming. Besides being useful for optimising programs, assembly programming is invaluable for working around post-silicon hardware, software, or compiler issues. Results on core area, frequency, power, and performance running different codes are presented and compared to other implementations.
APA, Harvard, Vancouver, ISO, and other styles
24

Chefer, Hila, Yuval Alaluf, Yael Vinker, Lior Wolf, and Daniel Cohen-Or. "Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models." ACM Transactions on Graphics 42, no. 4 (2023): 1–10. http://dx.doi.org/10.1145/3592116.

Full text
Abstract:
Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. While revolutionary, current state-of-the-art diffusion models may still fail in generating images that fully convey the semantics in the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of catastrophic neglect, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed Attend-and-Excite, we guide the model to refine the cross-attention units to attend to all subject tokens in the text prompt and strengthen, or excite, their activations, encouraging the model to generate all subjects described in the text prompt. We compare our approach to alternative approaches and demonstrate that it conveys the desired concepts more faithfully across a range of text prompts. Code is available at our project page: https://attendandexcite.github.io/Attend-and-Excite/.
APA, Harvard, Vancouver, ISO, and other styles
25

Glover, D. M. "New doors to open…and so many!" Journal of Cell Science 113, no. 3 (2000): 359–60. http://dx.doi.org/10.1242/jcs.113.3.359.

Full text
Abstract:
The pursuit of science is a wonderful journey of discovery along which there are a myriad of avenues to be explored. There have always been so many objects of fascination, so many questions to ask along the way, so many possibilities to understand new principles, that making the decision about which problem to address and then having the self-discipline to explore it in depth challenge all who practice the art. How then are we, as cell biologists, to cope with the mountain of information that is accumulating as we enter the twenty-first century? We now have the potential to decipher the primary sequences of every single cellular protein for several model organisms. Just how are we to put this information into an intelligible framework for understanding cell physiology? The turn of a century is a time at which we can permit ourselves the luxury of looking backwards as well as forwards. Where were we a century ago, what were the challenges that faced us then and how do these questions relate to our future goals? As a cell biologist standing on the threshold of the twentieth century, one must have had a similar feeling of elation and expectation to that which we have at the present time. The Theory of Cells had been established by Schleiden and Schwan in 1838–1839, and in the following fifty years it had led to unifying ideas about the nature of plants and animals, an understanding of embryonic development, and the mysteries of the fertilisation of the egg and genetic continuity in terms of ‘cellular immortality’. These were truly halcyon days. By the end of the nineteenth century many of the central principles of cell biology were firmly established. Virchow had maintained in 1855 that every cell is the offspring of a pre-existing parent cell, but the realisation that the cell nucleus is essential for this continuity had to wait another 30 years. By this time, Miecher had already made in 1871 his famous discovery of nuclein, a phosphorus-rich substance extracted from preparations of nuclei from sperm and pus cells, and over the next twenty years a spectrum of sophisticated dyes became available that facilitated the visualisation of not only nuclein but also asters, spindle fibres, and microsomal components of cytoplasm in fixed preparations of cells. The centrosome, discovered independently by Flemming in 1875 and Van Beneden in 1876, and named by Boveri in 1888, was already considered to be an autonomous organelle with a central role in cell division. The behaviour of chromosomes, centrosomes, astral fibres and spindle fibres throughout mitosis and meiosis had been described in exquisite detail. Galeotti had even concluded by 1893 that the unequal distribution of chromatin in cancer cells correlates with an inequality of the centrosomes and the development of abnormal spindles - a conclusion reinforced by others over a century later (Pihan et al., 1998; Lingle et al., 1998). It had taken 200 years following Leuwenhoek's first observation of sperm to Hertwig's demonstration in 1875 that fertilisation of the egg is accomplished by its union with one spermatozoon. This demonstration was rapidly followed by Van Beneden's discovery - eventually to unify genetics and cell biology - that the nuclei of germ cells each contain one half the number of chromosomes characteristic of body cells. By 1902, both Sutton and Boveri had realised that the behaviour of chromosomes in meiosis precisely parallels the behaviour of Mendel's genetic particles described some 35 years earlier. 
In many ways we have witnessed during the past 50 years, and particularly in the last quarter century, a series of exciting breakthroughs in establishing an understanding of genetic function and continuity that are comparable to those of the previous century in demonstrating cellular function and continuity. The determination of the structure of DNA in 1953 and the elucidation of the genetic code throughout the 1960s led to the rapid realisation of the code's universality. The parallel development of sophisticated techniques for studying the genetics of the model bacterium Escherichia coli and its plasmids and viruses paved the way for a new era in biology. We were soon to construct recombinant DNA molecules in vitro, propagate them and eventually express them in E. coli, taking full advantage of the universality of the code. The principles of cloning DNA molecules had been clearly enunciated by Berg and Hogness in the early 1970s, and I myself had the great fortune as a young post-doc to share in this excitement and participate in putting some of these principles into their early practice. By the end of that decade, genes had been cloned from a multitude of eukaryotes and, moreover, technologies had been developed by Maxam and Gilbert and by Sanger that enabled these cloned genes to be sequenced. The accelerating accumulation of knowledge enabled by these simple technical breakthroughs has been astounding, leading to the determination of the complete genome sequences of budding yeast, the nematode Caenorhabditis elegans and the fruit fly, Drosophila melanogaster, and the prospect of the complete human sequence within a few years. To date we have managed this accumulating wealth reasonably well. Cloned genes have allowed cell biologists access to the encoded proteins, and as a consequence we have a working knowledge of many cellular processes. The sub-cellular meanderings of molecules have been charted with increasing accuracy, and gene products have been positioned in regulatory pathways. The concerted application of genetic and molecular approaches has given new insights into cell biology. This is particularly evident from work on the yeasts, which have come into their own as model systems with our realisation of the extent to which cell biological processes have been conserved. Nevertheless, the resulting regulatory pathways that emerge from our current ways of looking at the cell are rather unidimensional, gene products being placed into linear pathways as a result of either molecular or genetic analyses. Our current views are often blind to the fact that the cell is a multidimensional structure whose components are arranged in space, have multiple contacts that change with time and can respond simultaneously to a multitude of signals. Glimpses of such complexity are emerging from studies in which microarrays of all the identified open reading frames (ORFs) from the complete budding yeast genome have been screened for changes in patterns of gene expression throughout the cell cycle or upon sporulation. Cell-cycle-dependent periodicity was found for 416 of the 6220 monitored ORFs, and over 25% of these genes were found to be clustered at particular chromosomal sites, which suggesting there are global chromosomal responses in transcriptional control (Cho et al., 1998). The study of sporulation is perhaps the first example of the application of this type of technology to a developmental process. 
It revealed that, of the 6220 genes, about 500 undergo repression and 500 induction in seven temporally distinct patterns during the sporulation process, identifying potential functions for many previously uncharacterised genes (Chu et al., 1998). These studies already reveal layers of complexity in the regulation of the levels of transcripts as cells prepare for and pass through the different stages of meiosis. How much more complex are these patterns likely to be when viewed in terms of proteins, and their interactions, locations and functions within the cell? It seems clear, however, that a wonderful molecular description of the events of meiosis that can match the cytological understanding revealed by the work of Van Beneden and Boveri one hundred years ago is within our grasp. The cataloguing of all cellular proteins is now feasible through a combination of 2D-gel analysis and mass spectrometry, from which molecular mass data can be correlated with the fragment sizes of peptides predicted from whole genome sequence data (the emerging field of proteomics). It is not an easy task, but it seems just a matter of time before we have all this information at our fingertips. Yet how can we know the functions of all these proteins and have a full 3D picture of how they interact within a cell and the dynamics with which they do so? Yeast may be the first eukaryote for which some of these problems can be approached. Its genome is six times smaller than that of C. elegans and 200 times smaller than the human genome, and has the further advantage that the genes can be easily disrupted through homologous recombination. Thus the prospect of systematic gene deletion to study the function of the 3700 novel ORFs identified in the whole genome sequence is feasible for this organism (Winzeler et al., 1999). One group in particular has devised a multifaceted approach for doing this: the affected gene is simultaneously tagged with an in-frame transcriptional reporter and further modified to epitope-tag the affected protein, which thus allows the latter to be immunolocalised within cells (Ross-MacDonald et al., 1999). We can thus see the glimmerings of a holistic, genome-wide, cell-wide unravelling of cellular physiology. Some of these approaches will be easily adaptable to higher organisms. We will soon have read-outs of RNA expression patterns in cells undergoing a variety of developmental and physiological programmes in normal and diseased states. The analysis of function and the identification of ORFs in higher eukaryotes are likely to be more problematic. However, solutions for the rapid assessment of the functions of novel genes are already emerging. New insights are coming from labs using double-stranded RNA to interfere with cellular processes in C. elegans. It was originally found in this organism that the injection of double-stranded RNA corresponding to part of the mRNA of a gene prevents the expression of that gene through a mechanism that currently remains mysterious (Fire, 1999). The technique works extremely well in the nematode and even in the fruit fly, but doubts had been cast as to whether it would ever be valuable in mammals. The recent finding that the technique does indeed work in the mouse may well accelerate programmes to identify gene function by circumventing the particularly lengthy procedures for disruption of mouse genes (Wianny and Zernicka-Goetz, 2000). 
The multiple layers of complexity revealed by these emerging studies give some indication of the computational power that will be needed to model the cell. Is it now time for a new breed of mathematical biologists to emerge? Our present generation of cellular and molecular biologists has lost sight of some of the basic principles of physical chemistry, and quantitative analyses are done poorly if at all. Should the quantification of reaction kinetics now come out of the traditional domain of enzymology and be applied to multiple cellular processes - if we are truly to understand the dynamics of the living cell? If the yeast cell is complex, then how much greater complexity will we find in multicellular eukaryotes, given all the potential for cell-cell interactions? These problems are perhaps most alluring in the field of development, in which many phenomena are now demanding attention at the cellular level. In recent decades we have seen classical embryological approaches supplemented by genetic analyses to define the components of many developmental signalling pathways. This has demonstrated the existence of a conserved collection of molecular switches that can be used in a variety of different developmental circumstances. We are perhaps reaching the limits at which conventional genetic analyses can interpret these processes: often the precise relationships between components of regulatory pathways are not clear. We require a better grasp of how the molecules within the pathways interact, which will require the concerted application of sub-cellular fractionation, to identify molecular complexes, and proteomics. This has to be achieved in a way that allows us to interpret the consequences of multiple signalling events between different cell types. In the introduction to his famous text The Cell in Development and Inheritance, E. B. Wilson wrote almost a century ago: ‘It has only recently become possible adequately to formulate the great problems of development and heredity in terms of cellular biology - indeed we can as yet do little more than so formulate them.’ Has our perspective changed during the past one hundred years? Are not these the same challenges that lie ahead for the twenty-first century? It is now rather like being Alice in Wonderland in a room with many doors, each of which marks the onset of a new journey. Undoubtedly, any of the doors will lead to remarkable opportunities, but to what extent can we, as Alice, rely upon drinking from the bottle, or eating the biscuit, that happens to be at hand? We will have to use the existing resources, but it will be fascinating to see what new ingenuities we can bring to bear to help us on our journey through Wonderland. I have the feeling that we are to witness conceptual challenges to the way we think about cell biology that we cannot yet begin to appreciate…but what I would give to be around in one hundred years' time to witness the progress we have made on our journeys!
APA, Harvard, Vancouver, ISO, and other styles
27

Demirhan, Osman. "Genotoxic Effects of Radiofrequency-Electromagnetic Fields." Journal of Toxicology and Environmental Sciences 1, no. 1 (2021): 9–12. http://dx.doi.org/10.55124/jtes.v1i1.50.

Full text
Abstract:
Introduction: Radiation is the emission of energy in the form of electromagnetic waves from the solar system and from natural sources on Earth. Currents produced by moving charged particles create magnetic fields. Earth's surface is under the influence of the magnetic field emanating from the sun, and the planet's liquid outer core generates a further magnetic field as a result of heat transfer from the core. Therefore, all living organisms on Earth live under the influence of electromagnetic fields (EMF). Today, besides these natural energy sources, rapid technological developments provide much of the convenience in our lives and expose people to artificial electromagnetic fields, so the human body is also under the influence of the other natural and artificial magnetic fields around it. In particular, ionizing radiation carries enough energy to break down genetic material; cells die as a result of DNA damage, and other diseases, especially cancer, can develop as a result of tissue damage. Electromagnetic Fields in Our Lives: Today, apart from natural geomagnetic fields, radiation is emitted from many technological devices. The spectrum of these fields includes many different types of radiation, from subatomic radiation such as gamma and X-rays to radio waves, depending on their wavelengths. As a result of rapid technological growth, the duration and amount of exposure to EMF are also steadily increasing. Wireless gadgets such as computers, smartphones and medical radiological devices have become a necessity for humans, and almost everyone is exposed to radiofrequency electromagnetic fields (RF-EMF) from cell phone and base station antennas or other sources. The damage caused by radiation to the environment therefore affects living organisms even many kilometres away. All organisms in the world live under the influence of these negative environmental changes, and a large part of the world population is exposed to radiofrequency (RF) radiation for long periods in their daily lives. So, though we are not aware of it, our organs and tissues are constantly exposed to radiation. Radiation thus adversely affects human, animal and plant health and disrupts the environment and the ecological balance. As an example of these negative effects, radiation can cause genetic changes in the body (Figure 1). Radiation is divided into ionizing and non-ionizing. Ionizing radiation causes electron loss or gain in an atom or group of atoms in the medium it passes through, so that positively or negatively charged ions are formed. High-energy X-rays, gamma rays, ultraviolet and some visible rays fall within the ionizing region of the electromagnetic spectrum. Since gamma rays, X-rays and ultraviolet rays readily ionize the molecules in living things, they can easily disrupt the chemical structure of tissues, cells and DNA molecules in living organisms and can therefore be very dangerous and deadly. The energy of the waves in the non-ionizing region of the electromagnetic spectrum is low, and these energy levels are insufficient to ionize molecules. Electricity, radio and TV waves, microwaves, and infrared rays are not ionizing because they have low energy. Waves emitted from electronic devices (cell phones, computers, microwave ovens, etc.) are absorbed by the human and animal body. 
The amount of energy absorbed per unit mass of biological tissue per unit time is called the specific absorption rate (SAR), and its unit is W/kg. Risks of Electromagnetic Fields on Living Things: Depending on the structure of the tissues and organs, the radiation must reach a certain threshold dose for an effect to occur; radiation levels below the threshold dose are not effective. The effects of small doses of waves are negligible, but the clinical effects of waves above a certain threshold may increase, and high-dose waves can cause cell death in tissues. Damage in the cell may increase the risk of cancer and hereditary damage over time, and somatic effects in people exposed to radiation may cause cancer to appear years later. There is much research on the effects of RF fields. In vitro and in vivo studies on rats, plants and different human tissues suggest that RF fields are not genotoxic and that any harmful effect is due to heating. The contradictory results on this issue have brought about discussions, and there are still concerns about the potential adverse effects of RF radiation on human health. A good understanding of the biological effects of RF radiation will protect against potential damage. Due to these uncertainties, experimental and modelling studies on the biological effects of RF radiation have been accelerated under the electromagnetic field project of the World Health Organization. In 2011, the International Agency for Research on Cancer decided that RF-EMR waves could be potentially carcinogenic to humans (2). Considering that almost everyone, including young children, uses mobile phones in addition to other technological devices, social concern about the danger of electromagnetic waves has increased. Genotoxic Effects of EMF: In addition to stimulating apoptosis and changes in ion channels, RF-EMF waves also have a potential effect on genetic material. The radiation absorbed by organisms causes the ionization of target molecules. In particular, biological damage may occur as a result of excitation/ionization of atoms and disruption of molecular structures as ionizing radiation passes through tissue. As a result of ionization in the cell, the number of free electrons increases, and these free electrons cause damage, especially to macromolecules and DNA. Free electrons act directly or indirectly: they directly attack the phosphodiester or hydrogen bonds of DNA, so that the phosphodiester bonds of DNA in the cell are broken and single- or double-strand breaks and chemical toxins increase. DNA double-strand breaks are the most relevant biological damage induced by ionizing radiation (3,4). There are no cells that are resistant to radiation. The nucleus of the cell, and especially the chromosomes of dividing cells, are very sensitive to radiation. One of the most important effects of radiation on the cell is to suppress cell growth; in particular, growth is impaired in cells exposed to radiation during cell division (mitosis). Consequently, cells with a high division rate are more sensitive to radiation. DNA damage in somatic cells can lead to cancer development or cell death. Cell death can occur as a result of the breakdown of DNA, because ionizing radiation has enough energy to break down the cell's genetic material; tissues are thus damaged and cancer development may be triggered. 
If the breaks in DNA caused by radiation in cells are not too large, they can be repaired by metabolic repair processes. Still, errors may occur during this repair, and chromosomes carrying altered genetic codes and information may arise. In the cell, the released electrons interact with water molecules, indirectly causing the water to be split into reactive parts. Free radicals carry an unpaired electron in their orbitals and can cause genetic damage in DNA such as nucleotide changes and double- and single-strand breaks. Radiation can cause chromosomes to break, stick together and rearrange. All these changes can lead to mutations or, going further, to the death of the cell. In addition to ionizing radiation, extracellular genotoxic chemicals and intracellular oxidative metabolic residues can also create stress in cells during DNA replication and cell division, and damage may occur during DNA replication under such environmental stress conditions. To date, conflicting results have been reported regarding the genotoxic effects of RF-EMF waves on genetic material. It has been reported that the energy of low EM fields is not sufficient to break the chemical bonds of DNA, but that increased exposure time promotes the formation of oxygen radicals and disruption of the DNA repair process. The absorption of microwaves can cause significant local warming in cells; for example, an increase in temperature has been observed in cells in culture media exposed to waves at high SAR levels. However, there is evidence that reactive oxygen species are formed indirectly in cells experimentally exposed to RF-EMF waves. Free oxygen radicals can modify nucleotides in DNA as well as bind cellular components to DNA bases (5). The frequency of polymorphisms observed in DNA repair genes in children with acute leukaemia living close to high-voltage power lines reveals the effect of this energy on the repair process. Significant evidence has been reported that genotoxic effects occur in various cell types exposed to RF-EMF waves (6-10). Exposure to RF-EMF waves (1800 MHz, SAR 2 W/kg) has been reported to cause oxidative damage to mitochondrial DNA, DNA breaks in neurons and DNA breaks in amniotic cells (6,10); similar damage has been reported in lymphocytes exposed to various RF-EMF waves (8). Exposure to RF-EMF waves is also known to cause chromosome imbalance, changes in gene expression, and gene mutations. Such deleterious genetic effects have been reported in neurons, blood lymphocytes, sperm, red blood cells, epithelial cells, hematopoietic tissue, lung cells, and bone marrow (1,11,12). Exposure to RF-EMF radiation has also been found to increase numerical chromosome aberrations (6,13). Abnormal chromosome segregation has been reported in mouse oocytes exposed to EMF, as have increased DNA fragmentation and apoptosis in fly egg cells (14,15), and increased DNA breaks in the blastomeres of embryos of pregnant mice exposed to a 50 Hz field, together with a decrease in the number of blastocysts (16). Genetic damage to germ cells can lead to persistent genetic diseases in subsequent generations. Today, X-ray devices used for medical diagnosis have become one of the largest sources of radiation. 
These radiological procedures used for diagnosis constitute an important part of ionizing radiation exposure. During these processes, the human body is visibly or invisibly affected by X-rays. Indeed, X-rays can disrupt the structure and biochemical activities of DNA, RNA, proteins and enzymes that are vital to the organism (17). Many studies on this subject have revealed that radiation has suppressive and mutational effects on DNA synthesis; these effects can cause serious damage to the cell as well as DNA and chromosome damage. In a recent study, chromosome damage was investigated in patients undergoing X-ray angiography and in personnel working in radiological procedures (18). Our findings showed that the beams used in interventional radiological procedures caused chromosomal damage, that the rate of chromosomal abnormalities (CAs) increased significantly in patients after the procedure, and that this damage increased with the radiation dose. Therefore, the radiation dose given to the patient should be chosen carefully. In addition, our findings showed that the frequency of CAs is significantly higher in personnel working in radiological procedures, revealing that interventional cardiologists are exposed to high levels of radiation. For this reason, we can say that personnel working in radiological procedures (physicians, health technicians and nurses) are very likely to develop disease in later years because they are exposed to low-dose but long-term X-rays. Both the potential risks and the safety of exposure to medical radiological devices must therefore be continuously monitored. Furthermore, the fact that chromatid and chromosome breaks are very common among the structural CAs in our findings suggests that they may be a cause of malignancy, because these chromosomes carry many cancer genes, tumour suppressor genes, enzyme genes involved in DNA repair and important or candidate genes responsible for apoptosis. All this information shows that patients are more susceptible to DNA damage and that inappropriate radiological examinations should be avoided. X-ray and other diagnostic imaging techniques should therefore not be applied unless necessary, and physicians and patients should be more careful in this regard. It has been reported that RF-EMR waves emitted from mobile phones, as wireless communication devices, have a genotoxic effect on human and mammalian cells (6,19). In a recent study, the effects of the 900 and 1800 MHz cell phone frequencies on human chromosomes were investigated in amniotic cell cultures (6). Chromosome packing delays, damage and breaks were reported in amniotic cells exposed to 900 and 1800 MHz for 3, 6 and 12 hours per day for twelve days. The 1800 MHz frequency caused more CAs than 900 MHz, and the amount of damage increased with increasing exposure time. These results confirm that GSM-like RF-EMR causes direct genotoxic effects in human in vitro cultures and has adverse effects on human chromosomes, and that these effects increase in parallel with exposure time. This shows that the mobile phone carries a risk for human health and that this genetic damage can cause cancer. Therefore, necessary precautions should be taken against these harmful effects of mobile phones. 
Among these measures, periods of mobile phone use should be kept short, the exposure of developing children and infants to mobile phones should in particular be prevented, and avoiding excessive use of mobile phones may be one of the precautions against cancer. However, in order to evaluate this in more detail, the combined effects of mobile phones and environmental mutagens and/or carcinogens should be considered in subsequent research. Conclusion: Today, in parallel with increasing technological developments, society's demand for electronic devices and phones and the frequency ranges of electronic devices are constantly increasing. Waves emitted from electronic devices are absorbed by human and animal bodies. In particular, the use of phones in contact with the body and the increase in usage time affect not only adults but also young children. There is therefore increasing concern in society about the negative biological effects of EM waves emitted from phones and other electronic devices. Results from all studies show that RF-EMF waves may be carcinogenic due to their genotoxic effect, since cancer is a disease that occurs as a result of genetic damage. Considering these negative and harmful effects, regulations following international standards regarding the use of electronic devices should be made, and society should be made aware of the risks. References:
1. Kim JH.; Lee K.; Kim HG.; Kim KB.; Kim HR. Possible Effects of Radiofrequency Electromagnetic Field Exposure on Central Nerve System. Biomol Ther. 2019, 27(3), 265-275.
2. Baan R.; Grosse Y.; Lauby-Secretan B.; et al. WHO International Agency for Research on Cancer Monograph Working Group. Carcinogenicity of radiofrequency electromagnetic fields. Lancet Oncol. 2011, 12, 624–626.
3. Berrington De Gonzalez A.; Darby S. Risk of cancer from diagnostic X-rays: estimates for the UK and 14 other countries. Lancet. 2004, 363, 345-351.
4. Löbrich M.; Jeggo PA. The impact of a negligent G2/M checkpoint on genomic instability and cancer induction. Nat Rev Cancer. 2007, 861–869.
5. Valko M.; Izakovic M.; Mazur M.; Rhodes CJ.; Telser J. Role of oxygen radicals in DNA damage and cancer incidence. Cell. Biochem. 2004, 266, 37–56.
6. Uslu N.; Demirhan O.; Emre M.; Seydaoğlu G. The chromosomal effects of GSM-like electromagnetic radiation exposure on human fetal cells. Biomed Res Clin Prac. 2019, 4, 1-6.
7. Lee S.; Johnson D.; Dunbar K.; Dong H.; Ge X.; Kim YC.; Wing C.; Jayathilaka N.; Emmanuel N.; Zhou CQ.; Gerber HL.; Tseng CC.; Wang SM. 2.45 GHz radiofrequency fields alter gene expression in cultured human cells. FEBS Lett. 2005, 579, 4829-4836.
8. Phillips JL.; Singh NP.; Lai H. Electromagnetic fields and DNA damage. Pathophysiology. 2009, 16, 79-88.
9. Ruediger HW. Genotoxic effects of radiofrequency electromagnetic fields. Pathophysiology. 2009, 16, 89-102.
10. Xu S.; Zhou Z.; Zhang L.; Yu Z.; Zhang W.; Wang Y.; Wang X.; Li M.; Chen Y.; Chen C.; He M.; Zhang G.; Zhong M. Exposure to 1800 MHz radiofrequency radiation induces oxidative damage to mitochondrial DNA in primary cultured neurons. Brain Res. 2010, 1311, 189-196.
11. Demsia G.; Vlastos D.; Matthopoulos DP. Effect of 910-MHz electromagnetic field on rat bone marrow. 2004, 2, 48-54.
12. Zhao TY.; Zou SP.; Knapp PE. Exposure to cell phone radiation up-regulates apoptosis genes in primary cultures of neurons and astrocytes. Lett. 2007, 412, 34-38.
13. Mashevich M.; Folkman D.; Kesar A.; Barbul A.; Korenstein R.; Jerby E.; Avivi L. Exposure of human peripheral blood lymphocytes to electromagnetic fields associated with cellular phones leads to chromosomal instability. Bioelectromagnetics. 2003, 24, 82-90.
14. Panagopoulos DJ.; Chavdoula ED.; Nezis IP.; Margaritis LH. Cell death induced by GSM 900-MHz and DCS 1800-MHz mobile telephony radiation. Mutat Res. 2007, 626(1–2), 69–78.
15. Sagioglou NE.; Manta AK.; Giannarakis IK.; Skouroliakou AS.; Margaritis LH. Apoptotic cell death during Drosophila oogenesis is differentially increased by electromagnetic radiation depending on modulation, intensity and duration of exposure. Electromagn Biol Med. 2015, 1-14.
16. Borhani N.; Rajaei F.; Salehi Z.; Javadi A. Analysis of DNA fragmentation in mouse embryos exposed to an extremely low-frequency electromagnetic field. Electromagn Biol Med. 2011, 30(4), 246–252.
17. Rowley R.; Phillips EN.; Schroeder AL. Effects of ionizing radiation on DNA synthesis in eukaryotic cells. Int J Radiat Biol. 1999, 75(3), 267-283.
18. Çetinel N.; Demirhan O.; Demirtaş M.; Çağlıyan ÇE.; Cüreoğlu A.; Uslu IN.; Sertdemir Y. The Genotoxic Effect of Interventional Cardiac Radiologic Procedures on Human Chromosomes. Clinical Medical Reviews and Reports. 2020, 3(1), 1-10.
19. Aitken RJ.; Bennetts LE.; Sawyer D.; Wiklendt AM.; King BV. Impact of radio frequency electromagnetic radiation on DNA integrity in the male germline. Int J Androl. 2005, 28(3), 171–179.
APA, Harvard, Vancouver, ISO, and other styles
28

Lee, Jooyong. "A Case for Dynamic Reverse-code Generation." BRICS Report Series 14, no. 15 (2007). http://dx.doi.org/10.7146/brics.v14i15.22179.

Full text
Abstract:
Backtracking (i.e. reverse execution) helps the user of a debugger to naturally think backwards along the execution path of a program, and thinking backwards makes it easy to locate the origin of a bug. So far backtracking has been implemented mostly by state saving or by checkpointing. These implementations, however, inherently do not scale. As has often been said, the ultimate solution for backtracking is to use reverse code: executing the reverse code restores the previous states of a program. In our earlier work, we presented a method to generate reverse code on the fly while running a debugger. This article presents a case study of dynamic reverse-code generation. We compare the memory usage of various backtracking methods in a simple but nontrivial example, a bounded-buffer program. In the case of non-deterministic programs such as this bounded-buffer program, our dynamic reverse-code generation can outperform the existing backtracking methods in terms of memory efficiency.
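For readers coming from the code-generation angle, the core idea lends itself to a few lines of illustration. The following Python fragment is a toy sketch, not the report's algorithm: each forward update emits a small piece of reverse code, and constructive updates such as x += c are inverted analytically rather than by saving state, which is where the memory advantage over full state saving or checkpointing comes from.

```python
# Toy sketch of dynamic reverse-code generation (illustrative only, not the
# method from the report): every forward update pushes an inverse operation,
# and backtracking simply executes those inverses in reverse order.

class ReverseExecutor:
    def __init__(self):
        self.env = {}            # variable -> current value
        self.reverse_code = []   # stack of generated inverse operations

    def add(self, var, c):
        """Forward: var += c.  Reverse code: var -= c (no state saved)."""
        self.env[var] = self.env.get(var, 0) + c
        self.reverse_code.append(("add", var, -c))

    def set(self, var, value):
        """Forward: var = value.  Destructive, so the reverse code must restore the old value."""
        self.reverse_code.append(("set", var, self.env.get(var)))
        self.env[var] = value

    def backtrack(self, steps=1):
        """Undo the last `steps` updates by executing the generated reverse code."""
        for _ in range(min(steps, len(self.reverse_code))):
            op, var, arg = self.reverse_code.pop()
            if op == "add":
                self.env[var] += arg
            else:
                self.env[var] = arg

ex = ReverseExecutor()
ex.set("x", 10)
ex.add("x", 5)      # x == 15
ex.backtrack()      # reverse of the add: x -= 5
assert ex.env["x"] == 10
```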
APA, Harvard, Vancouver, ISO, and other styles
29

Garrison, Lehman H., Daniel J. Eisenstein, Douglas Ferrer, Nina A. Maksimova, and Philip A. Pinto. "The Abacus cosmological N-body code." Monthly Notices of the Royal Astronomical Society, September 7, 2021. http://dx.doi.org/10.1093/mnras/stab2482.

Full text
Abstract:
Abstract We present Abacus, a fast and accurate cosmological N-body code based on a new method for calculating the gravitational potential from a static multipole mesh. The method analytically separates the near- and far-field forces, reducing the former to direct 1/r² summation and the latter to a discrete convolution over multipoles. The method achieves 70 million particle updates per second per node of the Summit supercomputer, while maintaining a median fractional force error of 10⁻⁵. We express the simulation time step as an event-driven “pipeline”, incorporating asynchronous events such as completion of co-processor work, Input/Output, and network communication. Abacus has been used to produce the largest suite of N-body simulations to date, the AbacusSummit suite of 60 trillion particles (Maksimova et al., 2021), incorporating on-the-fly halo finding. Abacus enables the production of mock catalogs of the volume and resolution required by the coming generation of cosmological surveys.
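The near-field half of the force split is easy to picture. The sketch below is a deliberately naive Python/NumPy toy, not the Abacus implementation: it performs the direct 1/r² pair summation inside a cutoff radius, while the far-field contribution that Abacus obtains from a discrete multipole convolution is omitted.

```python
# Naive O(N^2) illustration of the near-field direct summation; Abacus handles the
# far field separately via a multipole convolution and is vastly more efficient.
import numpy as np

def near_field_accel(pos, mass, r_cut, G=1.0, soft=1e-3):
    """Acceleration on each particle from neighbours closer than r_cut."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]                              # displacement vectors
        r2 = np.einsum("ij,ij->i", d, d) + soft**2    # softened squared distances
        near = r2 < r_cut**2
        near[i] = False                               # skip self-interaction
        inv_r3 = r2[near] ** -1.5
        acc[i] = G * np.sum((mass[near] * inv_r3)[:, None] * d[near], axis=0)
    return acc

pos = np.random.rand(100, 3)
mass = np.ones(100)
a = near_field_accel(pos, mass, r_cut=0.2)
```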
APA, Harvard, Vancouver, ISO, and other styles
30

Kempeneers, Pieter, Ondrej Pesek, Davide De Marchi, and Pierre Soille. "pyjeo: A Python Package for the Analysis of Geospatial Data." October 17, 2019. https://doi.org/10.3390/ijgi8100461.

Full text
Abstract:
A new Python package, pyjeo, that deals with the analysis of geospatial data has been created by the Joint Research Centre (JRC). Adopting the principles of open science, the JRC strives for transparency and reproducibility of results. In view of this, it has been decided to release pyjeo as free and open software. This paper describes the design of pyjeo and how its underlying C/C++ library was ported to Python. Strengths and limitations of the design choices are discussed. In particular, the data model that allows the generation of on-the-fly data cubes is of importance. Two use cases illustrate how pyjeo can contribute to open science. The first is an example of large-scale processing, where pyjeo was used to create a global composite of Sentinel-2 data. The second shows how pyjeo can be imported within an interactive platform for image analysis and visualization. Using an innovative mechanism that interprets Python code within a C++ library on-the-fly, users can benefit from all functions in the pyjeo package. Images are processed in deferred mode, which is ideal for prototyping new algorithms on geospatial data and assessing the suitability of the results created on the fly at any scale and location.
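The deferred-processing idea can be illustrated independently of the package. The following Python sketch is not pyjeo's actual API (the class and method names are invented for illustration); it only shows why recording operations and evaluating them per requested window makes on-the-fly prototyping at any scale and location cheap.

```python
# Generic illustration of deferred ("lazy") raster processing -- NOT pyjeo's API.
# Operations are recorded, not executed; evaluation happens only for the window
# that is actually requested.
import numpy as np

class DeferredImage:
    def __init__(self, loader):
        self.loader = loader   # callable: window -> numpy array
        self.ops = []          # recorded, not yet executed, operations

    def map(self, fn):
        self.ops.append(fn)    # defer the operation
        return self

    def compute(self, window):
        """Run the whole recorded pipeline, but only for the requested window."""
        data = self.loader(window)
        for fn in self.ops:
            data = fn(data)
        return data

# Hypothetical loader that would read a window (x, y, width, height) of a large raster.
img = DeferredImage(lambda w: np.ones((w[2], w[3])) * 42.0)
scaled = img.map(lambda a: a / 100.0).map(lambda a: np.clip(a, 0, 1))
tile = scaled.compute(window=(0, 0, 256, 256))   # nothing was evaluated until here
```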
APA, Harvard, Vancouver, ISO, and other styles
31

Lenticchia, Erica, and Marialorenza Vescovi. "Flexural design of alkali‐activated reinforced concrete beams: Evaluating model errors using standards for Portland concrete." Structural Concrete, January 23, 2025. https://doi.org/10.1002/suco.202400716.

Full text
Abstract:
Abstract The article examines the potential of utilizing existing standard codes, although originally designed for Portland cement, in the flexural design of geopolymer (GP) concrete beams. In particular, the article collects experimental data from the literature on the cracking and ultimate moment of reinforced GP concrete beams, with and without steel fibers (SFs). The experimental moments are compared with those calculated using different standard codes for Portland cement (2nd generation EC2, ACI318, ACI363, AS3600) and GP concrete (SATS199). The purpose of this comparison is to evaluate the model error obtained with the different codes. The same procedure is applied to experimental data from RC beams made with Portland cement. To study the model error, the results obtained with different precursor materials (granulated blast furnace slag or fly ash), concrete compressive strengths, and reinforcement percentages are analyzed. The different codes have different levels of conservatism, resulting in different average model errors. However, within the same code, the average model errors for GP and Portland concretes are similar. Therefore, the existing codes can be used to calculate the cracking moment and ultimate moment of GP concrete beams. However, some uncertainty remains for the ultimate moment of over‐reinforced beams, for which experimental data are still limited.
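The model-error comparison itself is simple to reproduce in outline. The snippet below is a hedged sketch under stated assumptions: the cracking moment is predicted with the usual elastic relation M_cr = f_r·I_g/y_t using an ACI 318-style modulus of rupture, the input numbers are placeholders rather than data from the paper, and other codes would simply substitute their own expression for f_r.

```python
# Sketch of the cracking-moment prediction and model-error ratio (placeholder inputs).
import math

def cracking_moment(fc_MPa, b_mm, h_mm):
    """Predicted cracking moment (kNm) of a rectangular gross section."""
    f_r = 0.62 * math.sqrt(fc_MPa)    # modulus of rupture, MPa (assumed ACI 318-style)
    I_g = b_mm * h_mm**3 / 12.0       # gross moment of inertia, mm^4
    y_t = h_mm / 2.0                  # distance to the extreme tension fibre, mm
    return f_r * I_g / y_t / 1e6      # N*mm -> kNm

def model_error(M_exp_kNm, M_pred_kNm):
    """Model error expressed as the experimental-to-predicted ratio."""
    return M_exp_kNm / M_pred_kNm

M_pred = cracking_moment(fc_MPa=40.0, b_mm=200.0, h_mm=300.0)
print(model_error(M_exp_kNm=12.5, M_pred_kNm=M_pred))   # > 1 means the code prediction is conservative
```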
APA, Harvard, Vancouver, ISO, and other styles
32

"Making the Web 2.0 Faster for Next Generation." International Journal of Engineering and Advanced Technology 9, no. 1 (2019): 2922–24. http://dx.doi.org/10.35940/ijeat.a1237.109119.

Full text
Abstract:
Undeniably the most favored web scripting language is PHP. Almost 80% of the internet's server-side web applications are written in PHP, including giants like WordPress, Wikipedia, and Facebook. At present, the quantity of digital content is burgeoning at an accelerating pace. This content is consumed on a heterogeneous set of user devices, and administering it manually is infeasible, engendering an increasing set of problems. A solution to this problem would be to switch to a web programming language that can be compiled. We describe an easy-to-deploy, continuous conversion mechanism for converting existing Web 2.0 PHP application systems into Hack server-side application systems supported by Facebook's HHVM. We aim to use the power of the Hack language to amplify the performance of existing PHP server-side applications. Instead of interpreting all of your code, Hack translates it to assembly and runs that instead, which can lead to an immense increase in performance. We use Hacktificator, a tool developed by Facebook developers, together with our demo web application running on HHVM, to test and convert a user's existing PHP codebase to the Hack language. With this proposed methodology we do not have to change the existing codebase manually or hire new engineers for the conversion, nor do we have to take down live systems. Conversion can be done on the fly and results in approximately 2x to 20x better performance. The availability of this tool can save the costs of manual conversion, save time, and improve the user experience of websites through better performance.
APA, Harvard, Vancouver, ISO, and other styles
33

Haisch, Ulrich, Luc Schnell, and Stefan Schulte. "Drell-Yan production in third-generation gauge vector leptoquark models at NLO+PS in QCD." Journal of High Energy Physics 2023, no. 2 (2023). http://dx.doi.org/10.1007/jhep02(2023)070.

Full text
Abstract:
Abstract Motivated by the long-standing hints of lepton-flavour non-universality in the b → cℓν and b → sℓ⁺ℓ⁻ channels, we study Drell-Yan ditau production at the Large Hadron Collider (LHC). In the context of models with third-generation gauge vector leptoquarks (LQs), we calculate the complete O(αs) corrections to the pp → τ⁺τ⁻ process, achieving next-to-leading order (NLO) plus parton shower (NLO+PS) accuracy using the POWHEG method. We provide a dedicated Monte Carlo code that evaluates the NLO QCD corrections on-the-fly in the event generation and use it to study the numerical impact of NLO+PS corrections on the kinematic distributions that enter the existing experimental searches for non-resonant ditau final states. Based on our phenomenological analysis we derive NLO accurate constraints on the masses and couplings of third-generation gauge vector LQs using the latest LHC ditau search results corresponding to an integrated luminosity of around 140 fb⁻¹ of proton-proton collisions at √s = 13 TeV. The presented NLO+PS generator allows for an improved signal modelling, making it an essential tool for future ATLAS and CMS searches for vector LQs in τ⁺τ⁻ final states at LHC Run III and beyond.
APA, Harvard, Vancouver, ISO, and other styles
34

Borgbjerg, Jens, and Arne Hørlyck. "Web-Based GPU-Accelerated Application for Multiplanar Reconstructions from Conventional 2D Ultrasound." Ultraschall in der Medizin - European Journal of Ultrasound, September 5, 2019. http://dx.doi.org/10.1055/a-0999-5347.

Full text
Abstract:
Abstract Purpose: In ultrasound education there is a need for interactive web-based learning resources. The purpose of this project was to develop a web-based application that enables the generation and exploration of volumetric datasets from cine loops obtained with conventional 2D ultrasound. Materials and Methods: JavaScript code for ultrasound video loading and the generation of volumetric datasets was created and merged with an existing web-based imaging viewer based on JavaScript and HTML5. The Web Graphics Library was utilized to enable hardware-accelerated image rendering. Results: The result is a web application that works in most major browsers without any plug-ins. It allows users to load a conventional 2D ultrasound cine loop, which can subsequently be manipulated with on-the-fly multiplanar reconstructions as in a Digital Imaging and Communications in Medicine (DICOM) viewer. The application is freely accessible at http://www.castlemountain.dk/atlas/index.php?page=mulrecon&mulreconPage=sonoviewer, where a demonstration of web-based sharing of generated cases can also be found. Conclusion: The developed web-based application is unique in its ability to easily load one's own ultrasound clips and conduct multiplanar reconstructions, and interactive cases can be shared on the Internet.
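The underlying reconstruction step is conceptually simple. The Python/NumPy sketch below illustrates the idea rather than the application's JavaScript/WebGL code: frames from a steady probe sweep are stacked into a volume, and an orthogonal slice through that stack yields a multiplanar reconstruction; a real implementation must additionally handle probe geometry, spacing and interpolation.

```python
# Conceptual sketch of multiplanar reconstruction from a 2D cine loop (illustrative only).
import numpy as np

def frames_to_volume(frames):
    """Stack cine-loop frames (list of 2D arrays) into a (sweep, height, width) volume."""
    return np.stack(frames, axis=0)

def mpr_slice(volume, plane, index):
    """Extract an orthogonal reconstruction: 'acquired', 'coronal' or 'sagittal'."""
    if plane == "acquired":   # the original frame orientation
        return volume[index, :, :]
    if plane == "coronal":    # cut across the sweep direction
        return volume[:, index, :]
    if plane == "sagittal":
        return volume[:, :, index]
    raise ValueError(plane)

frames = [np.random.rand(480, 640) for _ in range(200)]   # stand-in for a 200-frame cine loop
vol = frames_to_volume(frames)
recon = mpr_slice(vol, "coronal", 240)    # a plane the probe never imaged directly
```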
APA, Harvard, Vancouver, ISO, and other styles
35

Haisch, Ulrich, Luc Schnell, and Stefan Schulte. "On Drell-Yan production of scalar leptoquarks coupling to heavy-quark flavours." Journal of High Energy Physics 2022, no. 11 (2022). http://dx.doi.org/10.1007/jhep11(2022)106.

Full text
Abstract:
Abstract Given the hints of lepton-flavour non-universality in semi-leptonic B decays, leptoquark (LQ) models with sizeable couplings to heavy-quark flavours are enjoying a renaissance. While such models are subject to stringent constraints from low-energy experiments, bounds from non-resonant dilepton searches at the Large Hadron Collider (LHC) also turn out to be phenomenologically relevant. Based on the latest LHC dilepton analyses corresponding to an integrated luminosity of around 140 fb⁻¹ of proton-proton collisions at √s = 13 TeV, we present improved limits on the scalar LQ couplings that involve heavy-quark flavours and light or heavy dileptons. In particular, we show that effects beyond the leading order that are related to real QCD emissions are relevant in this context, since the inclusion of additional heavy-flavoured jets notably improves the exclusion limits that derive from the high-mass dilepton tails. The impact of electroweak corrections and interference effects between signal and background is also analysed. Within the POWHEG-BOX framework we provide a dedicated Monte Carlo code that allows for an on-the-fly signal event generation including all the LQ corrections considered in this article.
APA, Harvard, Vancouver, ISO, and other styles
36

Brown, Andrew R. "Code Jamming." M/C Journal 9, no. 6 (2006). http://dx.doi.org/10.5204/mcj.2681.

Full text
Abstract:
Jamming culture has become associated with digital manipulation and reuse of materials. As well, the term jamming has long been used by musicians (and other performers) to mean improvisation, especially in collaborative situations. A practice that gets to the heart of both these meanings is live coding, where digital content (music and/or visuals predominantly) is created through computer programming as a performance. During live coding performances digital content is created and presented in real time. Normally the code from the performer's screen is displayed via data projection so that the audience can see the unfolding process as well as see or hear the artistic outcome. This article will focus on live coding of music, but the issues it raises for jamming culture apply to other mediums also. Live coding of music uses the computer as an instrument, which is “played” by the direct construction and manipulation of sonic and musical processes. Gestural control involves typing at the computer keyboard but, unlike traditional “keyboard” instruments, these key gestures are usually indirect in their effect on the sonic result because they result in programming language text which is then interpreted by the computer. Some live coding performers, notably Amy Alexander, have played on the duality of the keyboard as direct and indirect input source by using it as both a text entry device, audio trigger, and performance prop. In most cases, keyboard typing produces notational description during live coding performances as a form of indirect music making, related to what may previously have been called composing or conducting, where sound generation is controlled rather than triggered. The computer system becomes a performer, and the degree of interpretive autonomy allocated to the computer can vary widely, but is typically limited to probabilistic choices, structural processes and use of pre-established sound generators. In live coding practices, the code is a medium of expression through which creative ideas are articulated. The code acts as a notational representation of computational processes. It not only leads to the sonic outcome but also is available for reflection, reuse and modification. The aspects of music described by the code are open to some variation, especially in relation to choices about music or sonic granularity. This granularity continuum ranges from a focus on sound synthesis at one end of the scale to the structural organisation of musical events or sections at the other end. Regardless of the level of content granularity being controlled, when jamming with code the time constraints of the live performance environment force the performer to develop succinct and parsimonious expressions and to create processes that sustain activity (often using repetition, iteration and evolution) in order to maintain a coherent and developing musical structure during the performance. As a result, live coding requires not only new performance skills but also new ways of describing the structures and processes that create music. Jamming activities are additionally complex when they are collaborative. Live coding performances can often be collaborative, either between several musicians and/or between music and visual live coders. Issues that arise in collaborative settings are both creative and technical. When collaborating between performers in the same output medium (e.g., two musicians) the roles of each performer need to be defined. 
When a pianist and a vocalist improvise, the harmonic and melodic roles are relatively obvious, but two laptop performers are more like a guitar duo where each can take any lead, supportive, rhythmic, harmonic, melodic, textual or other function. Prior organisation and sensitivity to the needs of the unfolding performance are required, as they have always been in musical improvisations. At the technical level it may be necessary for computers to be networked so that timing information, at least, is shared. Various network protocols, most commonly Open Sound Control (OSC), are used for this purpose. Another collaboration takes place in live coding, the one between the performer and the computer, especially where the computational processes are generative (as is often the case). This real-time interaction between musician and algorithmic process has been termed Hyperimprovisation by Roger Dean. Jamming cultures that focus on remixing often value the sharing of resources, especially through the movement and treatment of content artefacts such as audio samples and digital images. In live coding circles there is a similarly strong culture of resource sharing, but live coders are mostly concerned with sharing techniques, processes and tools. In recognition of this, it is quite common that when distributing works live coding artists will include descriptions of the processes used to create work and even share the code. This practice is also common in the broader computational arts community, as evident in the sharing of flash code on sites such as Levitated by Jared Tarbell, in the Processing site (Reas & Fry), or in publications such as Flash Math Creativity (Peters et al.). Also underscoring this culture of sharing is a prioritising of reputation above (or prior to) profit. As a result of these social factors most live coding tools are freely distributed. Live coding tools have become more common in the past few years. There are a number of personalised systems that utilise various different programming languages and environments. Some of the more polished programs that can be used widely include SuperCollider (McCartney), ChucK (Wang & Cook) and Impromptu (Sorensen). While these environments all use different languages and varying ways of dealing with sound structure granularity, they do share some common aspects that reveal the priorities and requirements of live coding. Firstly, they are dynamic environments where the musical/sonic processes are not interrupted by modifications to the code; changes can be made on the fly and code is modifiable at runtime. Secondly, they are text-based and quite general programming environments, which means that the full leverage of abstract coding structures can be applied during live coding performances. Thirdly, they all prioritise time, both at architectural and syntactic levels. They are designed for real-time performance where events need to occur reliably. The text-based nature of these tools means that using them in live performance is barely distinguishable from any other computer task, such as writing an email, and thus the practice of projecting the environment to reveal the live process has become standard in the live coding community as a way of communicating with an audience (Collins). It is interesting to reflect on how audiences respond to the projection of code as part of live coding performances. In the author’s experience as both an audience member and live coding performer, the reception has varied widely. 
Most people seem to find it curious and comforting. Even if they cannot follow the code, they understand or are reassured that the performance is being generated by the code. Those who understand the code often report a sense of increased anticipation as they see structures emerge, and sometimes opportunities missed. Some people dislike the projection of the code, and see it as a distasteful display of virtuosity or as a distraction to their listening experience. The live coding practitioners tend to see the projection of code as a way of revealing the underlying generative and gestural nature of their performance. For some, such as Julian Rohrhuber, code projection is a way of revealing ideas and their development during the performance. “The incremental process of livecoding really is what makes it an act of public reasoning” (Rohrhuber). For both audience and performer, live coding is an explicitly risky venture and this element of public risk taking has long been central to the appreciation of the performing arts (not to mention sport and other cultural activities). The place of live coding in the broader cultural setting is still being established. It certainly is a form of jamming, or improvisation; it also involves the generation of digital content and the remixing of cultural ideas and materials. In some ways it is also connected to instrument building. Live coding practices prioritise process and therefore have a link with conceptual visual art and serial music composition movements from the 20th century. Much of the music produced by live coding has aesthetic links, naturally enough, to electronic music genres including musique concrète, electronic dance music, glitch music, noise art and minimalism - a grouping that is not overly coherent beyond a shared concern for processes and systems. Live coding is receiving greater popular and academic attention as evident in recent articles in Wired (Andrews), ABC Online (Martin) and media culture blogs including The Teeming Void (Whitelaw 2006). Whatever its future profile in the broader cultural sector, the live coding community continues to grow and flourish amongst enthusiasts. The TOPLAP site is a hub of live coding activities and links prominent practitioners including Alex McLean, Nick Collins, Adrian Ward, Julian Rohrhuber, Amy Alexander, Frederick Olofsson, Ge Wang, and Andrew Sorensen. These people and many others are exploring live coding as a form of jamming in digital media and as a way of creating new cultural practices and works. References Andrews, R. “Real DJs Code Live.” Wired: Technology News 6 July 2006. http://www.wired.com/news/technology/0,71248-0.html. Collins, N. “Generative Music and Laptop Performance.” Contemporary Music Review 22.4 (2004): 67-79. Fry, Ben, and Casey Reas. Processing. http://processing.org/. Martin, R. “The Sound of Invention.” Catapult. ABC Online 2006. http://www.abc.net.au/catapult/indepth/s1725739.htm. McCartney, J. “SuperCollider: A New Real-Time Sound Synthesis Language.” The International Computer Music Conference. San Francisco: International Computer Music Association, 1996. 257-258. Peters, K., M. Tan, and M. Jamie. Flash Math Creativity. Berkeley, CA: Friends of ED, 2004. Reas, Casey, and Ben Fry. “Processing: A Learning Environment for Creating Interactive Web Graphics.” International Conference on Computer Graphics and Interactive Techniques. San Diego: ACM SIGGRAPH, 2003. 1. Rohrhuber, J. Post to a Live Coding email list. livecode@slab.org. 10 Sep. 2006. Sorensen, A. 
“Impromptu: An Interactive Programming Environment for Composition and Performance.” In Proceedings of the Australasian Computer Music Conference 2005. Eds. A. R. Brown and T. Opie. Brisbane: ACMA, 2005. 149-153. Tarbell, Jared. Levitated. http://www.levitated.net/daily/index.html. TOPLAP. http://toplap.org/. Wang, G., and P.R. Cook. “ChucK: A Concurrent, On-the-fly, Audio Programming Language.” International Computer Music Conference. ICMA, 2003. 219-226. Whitelaw, M. “Data, Code & Performance.” The Teeming Void 21 Sep. 2006. http://teemingvoid.blogspot.com/2006/09/data-code-performance.html. Citation reference for this article: MLA Style: Brown, Andrew R. “Code Jamming.” M/C Journal 9.6 (2006). <http://journal.media-culture.org.au/0612/03-brown.php>. APA Style: Brown, A. (Dec. 2006) “Code Jamming,” M/C Journal, 9(6). Retrieved from <http://journal.media-culture.org.au/0612/03-brown.php>.
APA, Harvard, Vancouver, ISO, and other styles
37

Petek, Marko, Maja Zagorščak, Andrej Blejec, et al. "pISA-tree - a data management framework for life science research projects using a standardised directory tree." Scientific Data 9, no. 1 (2022). http://dx.doi.org/10.1038/s41597-022-01805-5.

Full text
Abstract:
Abstract We developed pISA-tree, a straightforward and flexible data management solution for organisation of life science project-associated research data and metadata. pISA-tree was initiated by end-user requirements; thus its strong points are practicality and low maintenance cost. It enables on-the-fly creation of an enriched directory tree structure (project/Investigation/Study/Assay) based on the ISA model, in a standardised manner via consecutive batch files. Template-based metadata is generated in parallel at each level, enabling guided submission of experiment metadata. pISA-tree is complemented by two R packages, pisar and seekr. pisar facilitates integration of pISA-tree datasets into bioinformatic pipelines and generation of ISA-Tab exports. seekr enables synchronisation with the FAIRDOMHub repository. Applicability of pISA-tree was demonstrated in several national and international multi-partner projects. The system thus supports findable, accessible, interoperable and reusable (FAIR) research and is in accordance with the Open Science initiative. Source code and documentation of pISA-tree are available at https://github.com/NIB-SI/pISA-tree.
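The directory-tree idea is easy to picture with a small sketch. The Python fragment below is a simplified illustration, not the pISA-tree batch files themselves, and the metadata file names are assumptions: it merely creates the nested project/Investigation/Study/Assay levels and drops a key-value metadata stub at each level so that experiment metadata lives next to the data it describes.

```python
# Simplified illustration of an ISA-style directory tree with per-level metadata
# stubs (names and file layout are assumptions, not the tool's exact conventions).
from pathlib import Path

LEVELS = ["project", "Investigation", "Study", "Assay"]

def create_isa_branch(root, names):
    """names maps each level to a directory name, e.g. {'project': 'p_Demo', ...}."""
    path = Path(root)
    for level in LEVELS:
        path = path / names[level]
        path.mkdir(parents=True, exist_ok=True)
        # minimal tab-separated metadata stub, one per level
        (path / f"_{level.upper()}_METADATA.TXT").write_text(
            f"Short Name:\t{names[level]}\nLevel:\t{level}\nDescription:\t\n"
        )
    return path

assay_dir = create_isa_branch(".", {"project": "p_Demo", "Investigation": "I_01",
                                    "Study": "S_01", "Assay": "A_01"})
```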
APA, Harvard, Vancouver, ISO, and other styles
38

Haisch, Ulrich, Darren J. Scott, Marius Wiesemann, Giulia Zanderighi, and Silvia Zanoli. "NNLO event generation for pp → Zh → ℓ⁺ℓ⁻bb̄ production in the SM effective field theory." Journal of High Energy Physics 2022, no. 7 (2022). http://dx.doi.org/10.1007/jhep07(2022)054.

Full text
Abstract:
Abstract We consider associated Zh production with Z → ℓ⁺ℓ⁻ and h → bb̄ decays in hadronic collisions. In the framework of the Standard Model effective field theory (SMEFT) we calculate the QCD corrections to this process and achieve next-to-next-to-leading order plus parton shower (NNLO+PS) accuracy using the MiNNLOPS method. This precision is obtained for a subset of six SMEFT operators, including the corrections from effective Yukawa- and chromomagnetic dipole-type interactions. Missing higher-order QCD effects associated with the considered dimension-six operators are estimated to have a relative numerical impact of less than a percent on the total rate once existing experimental limits on the relevant Wilson coefficients are taken into account. We provide a dedicated Monte Carlo (MC) code that evaluates the NNLO SMEFT corrections on-the-fly in the event generation. This MC generator is used to study the numerical impact of NNLO+PS corrections on the kinematic distributions in pp → Zh → ℓ⁺ℓ⁻bb̄ production, employing simple SMEFT benchmark scenarios. We identify the invariant mass of the two b-tagged jets, m(bb̄), as well as the three-jet invariant mass, m(bb̄j), as particularly interesting observables for studying SMEFT effects. These distributions receive contributions that change both their normalisation and shape, with the latter modifications depending on the exact jet definition. To our knowledge SMEFT effects of this type have so far not been discussed in the literature. The presented MC generator can also serve as a starting point to obtain NNLO+PS accuracy for a suitably enlarged set of effective operators in the future.
APA, Harvard, Vancouver, ISO, and other styles
39

Lee, W., A. Pillepich, J. ZuHone, et al. "Radio relics in massive galaxy cluster mergers in the TNG-Cluster simulation." Astronomy & Astrophysics, March 1, 2024. http://dx.doi.org/10.1051/0004-6361/202348194.

Full text
Abstract:
Radio relics are diffuse synchrotron sources in the outskirts of merging galaxy clusters energized by the merger shocks. In this paper, we present an overview of the radio relics in massive cluster mergers identified in the new TNG-Cluster simulation. This is a suite of magnetohydrodynamical cosmological zoom-in simulations of 352 massive galaxy clusters sampled from a 1 Gpc-sized cosmological box. The simulations were performed using the moving-mesh code AREPO with the galaxy formation model and high numerical resolution consistent with the TNG300 run of the IllustrisTNG series. We post-processed the shock properties obtained from the on-the-fly shock finder to estimate the diffuse radio emission generated by cosmological shock waves for the radio relics identified at redshift z = 0–1. TNG-Cluster returned a variety of radio relics with diverse morphologies, encompassing classical examples of double radio relics, single relics, and "inverted" radio relics that are convex to the cluster center. Moreover, the simulated radio relics reproduced both the abundance and statistical relations of observed relics. We find that extremely large radio relics (> 2 Mpc) are predominantly produced in massive cluster mergers. This underscores the significance of simulating massive mergers to study giant radio relics similar to those found in observations. We released a library of radio relics from the TNG-Cluster simulation, which will serve as a crucial reference for upcoming next-generation surveys.
APA, Harvard, Vancouver, ISO, and other styles
40

Borgbjerg, Jens, John D. Thompson, Ivar Mjøland Salte, and Jens Brøndum Frøkjær. "Towards AI-augmented radiology education: a web-based application for perception training in chest X-ray nodule detection." British Journal of Radiology, September 26, 2023. http://dx.doi.org/10.1259/bjr.20230299.

Full text
Abstract:
Objectives: Artificial intelligence (AI)-based applications for augmenting radiological education are underexplored. Prior studies have demonstrated the effectiveness of simulation in radiological perception training. This study aimed to develop and make available a pure web-based application called Perception Trainer for perception training in lung nodule detection in chest X-rays. Methods: Based on open-access data, we trained a deep-learning model for lung segmentation in chest X-rays. Subsequently, an algorithm for artificial lung nodule generation was implemented and combined with the segmentation model to allow on-the-fly procedural insertion of lung nodules in chest X-rays. This functionality was integrated into an existing zero-footprint web-based DICOM viewer, and a dynamic HTML page was created to specify case generation parameters. Results: The result is an easily accessible platform-agnostic web application available at: https://castlemountain.dk/mulrecon/perceptionTrainer.html. The application allows the user to specify the characteristics of lung nodules to be inserted into chest X-rays, and it produces automated feedback regarding nodule detection performance. Generated cases can be shared through a uniform resource locator. Conclusion: We anticipate that the description and availability of our developed solution with open-sourced codes may help facilitate radiological education and stimulate the development of similar AI-augmented educational tools. Advances in knowledge: A web-based application applying artificial intelligence-based techniques for radiological perception training was developed. The application demonstrates a novel approach for on-the-fly generation of cases in chest x-ray lung nodule detection employing deep learning-based segmentation and lung nodule simulation.
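The on-the-fly nodule insertion can be pictured with a small sketch. The Python/NumPy fragment below is illustrative only and not the Perception Trainer code: it adds a Gaussian-shaped opacity at a random position inside a lung segmentation mask, with diameter and contrast standing in for the kind of user-specified case-generation parameters the abstract describes.

```python
# Illustrative procedural "nodule" insertion into a chest radiograph (toy sketch).
import numpy as np

def insert_nodule(image, lung_mask, diameter_px=20, contrast=0.15, rng=None):
    """Return a copy of `image` with one synthetic nodule placed inside `lung_mask`."""
    rng = rng or np.random.default_rng()
    ys, xs = np.nonzero(lung_mask)
    cy, cx = (int(v) for v in rng.choice(np.stack([ys, xs], axis=1)))  # random centre in the mask
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    sigma = diameter_px / 4.0
    blob = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma**2))
    out = image + contrast * image.max() * blob        # additive opacity
    return np.clip(out, image.min(), image.max()), (cy, cx)

img = np.random.rand(512, 512).astype(np.float32)      # stand-in for a chest X-ray
mask = np.zeros_like(img, dtype=bool)
mask[100:400, 80:230] = True                           # fake lung segmentation
augmented, centre = insert_nodule(img, mask)
```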
APA, Harvard, Vancouver, ISO, and other styles
41

Harel, Re’em, Tal Kadosh, Niranjan Hasabnis, Timothy Mattson, Yuval Pinter, and Gal Oren. "PragFormer: Data-Driven Parallel Source Code Classification with Transformers." International Journal of Parallel Programming 53, no. 1 (2024). http://dx.doi.org/10.1007/s10766-024-00778-9.

Full text
Abstract:
Multi-core shared memory architectures have become ubiquitous in computing hardware nowadays. As a result, there is a growing need to fully utilize these architectures by introducing appropriate parallelization schemes, such as OpenMP worksharing-loop constructs, to applications. However, most developers find introducing OpenMP directives to their code hard due to pervasive pitfalls in managing parallel shared memory. To assist developers in this process, many compilers, as well as source-to-source (S2S) translation tools, have been developed over the years, tasked with inserting OpenMP directives into code automatically. In addition to having limited robustness to their input format, these compilers still do not achieve satisfactory coverage and precision in locating parallelizable code and generating appropriate directives. Recently, many data-driven AI-based code completion (CC) tools, such as GitHub CoPilot, have been developed to ease and improve programming productivity. Leveraging the insights from existing AI-based programming-assistance tools, this work presents a novel AI model that can serve as a parallel-programming assistant. Specifically, our model, named PragFormer, is tasked with identifying loops that can benefit from conversion to a parallel worksharing-loop construct (OpenMP directive) and even predicting the need for specific data-sharing attribute clauses on the fly. We created a unique database, named Open-OMP, specifically for this goal. Open-OMP contains over 32,000 unique code snippets from different domains, half of which contain OpenMP directives, while the other half do not. We experimented with different model design parameters for these tasks and showed that our best-performing model outperforms a statistically-trained baseline as well as a state-of-the-art S2S compiler. In fact, it even outperforms the popular generative AI model of ChatGPT. In the spirit of advancing research on this topic, we have already released the source code for PragFormer as well as the Open-OMP dataset to the public. Moreover, an interactive demo of our tool, as well as a Hugging Face webpage to experiment with our tool, are already available.
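As a rough illustration of how such a classifier can be queried once a fine-tuned checkpoint is available, the sketch below uses the standard Hugging Face sequence-classification API on a candidate loop. The model identifier and the two-label ordering are placeholders chosen for this example, not the authors' published artifacts; substitute the checkpoint advertised on their Hugging Face page.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Placeholder model id -- NOT the real checkpoint; replace with the id
# published by the PragFormer authors on the Hugging Face Hub.
MODEL_ID = "your-org/pragformer-openmp-classifier"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

loop = """
for (int i = 0; i < n; i++) {
    c[i] = a[i] + b[i];
}
"""

# Treat the source snippet as plain text and ask the classifier whether the
# loop would benefit from an OpenMP worksharing-loop directive.
inputs = tokenizer(loop, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1).squeeze()

# The label order below is an assumption for illustration; check the
# checkpoint's id2label mapping before relying on it.
print({"no_pragma": float(probs[0]), "needs_omp_for": float(probs[1])})
```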
APA, Harvard, Vancouver, ISO, and other styles
42

Rushkoff, Douglas. "Coercion." M/C Journal 6, no. 3 (2003). http://dx.doi.org/10.5204/mcj.2193.

Full text
Abstract:
The brand began, quite literally, as a method for ranchers to identify their cattle. By burning a distinct symbol into the hide of a baby calf, the owner could insure that if it one day wandered off his property or was stolen by a competitor, he’d be able to point to that logo and claim the animal as his rightful property. When the manufacturers of products adopted the brand as a way of guaranteeing the quality of their goods, its function remained pretty much the same. Buying a package of oats with the Quaker label meant the customer could trace back these otherwise generic oats to their source. If there was a problem, he knew where he could turn. More important, if the oats were of satisfactory or superior quality, he knew where he could get them again. Trademarking a brand meant that no one else could call his oats Quaker. Advertising in this innocent age simply meant publicizing the existence of one’s brand. The sole objective was to increase consumers’ awareness of the product or company that made it. Those who even thought to employ specialists for the exclusive purpose of writing ad copy hired newspaper reporters and travelling salesmen, who knew how to explain the attributes of an item in words that people tended to remember. It wasn’t until 1922 that a preacher and travelling “medicine show” salesman-turned-copywriter named Claude Hopkins decided that advertising should be systematized into a science. His short but groundbreaking book Scientific Advertising proposed that the advertisement is merely a printed extension of the salesman’s pitch and should follow the same rules. Hopkins believed in using hard descriptions over hype, and text over image: “The more you tell, the more you sell” and “White space is wasted space” were his mantras. Hopkins believed that any illustrations used in an ad should be directly relevant to the product itself, not just a loose or emotional association. He insisted on avoiding “frivolity” at all costs, arguing that “no one ever bought from a clown.” Although some images did appear in advertisements and on packaging as early as the 1800s - the Quaker Oats man showed up in 1877 - these weren’t consciously crafted to induce psychological states in customers. They were meant just to help people remember one brand over another. How better to recall the brand Quaker than to see a picture of one? It wasn’t until the 1930s, 1940s, and 1950s, as Americans turned toward movies and television and away from newspapers and radio, that advertisers’ focus shifted away from describing their brands and to creating images for them. During these decades, Midwestern adman Leo Burnett concocted what is often called the Chicago school of advertising, in which lovable characters are used to represent products. Green Giant, which was originally just the Minnesota Valley Canning Company’s code name for an experimental pea, became the Jolly Green Giant in young Burnett’s world of animated characters. He understood that the figure would make a perfect and enticing brand image for an otherwise boring product and could also serve as a mnemonic device for consumers. As he watched his character grow in popularity, Burnett discovered that the mythical figure of a green giant had resonance in many different cultures around the world. It became a kind of archetype and managed to penetrate the psyche in more ways than one. Burnett was responsible for dozens of character-based brand images, including Tony the Tiger, Charlie the Tuna, Morris the Cat, and the Marlboro Man. 
In each case, the character creates a sense of drama, which engages the audience in the pitch. This was Burnett’s great insight. He still wanted to sell a product based on its attributes, but he knew he had to draw in his audience using characters. Brand images were also based on places, like Hidden Valley Ranch salad dressing, or on recognizable situations, such as the significant childhood memories labelled “Kodak moments” or a mother nurturing her son on a cold day, a defining image for Campbell’s soup. In all these cases, however, the moment, location, or character went only so far as to draw the audience into the ad, after which they would be subjected to a standard pitch: ‘Soup is good food’, or ‘Sorry, Charlie, only the best tuna get to be Starkist’. Burnett saw himself as a homespun Midwesterner who was contributing to American folklore while speaking in the plain language of the people. He took pride in the fact that his ads used words like “ain’t”; not because they had some calculated psychological effect on the audience, but because they communicated in a natural, plainspoken style. As these methods found their way to Madison Avenue and came to be practiced much more self-consciously, Burnett’s love for American values and his focus on brand attributes were left behind. Branding became much more ethereal and image-based, and ads only occasionally nodded to a product’s attributes. In the 1960s, advertising gurus like David Ogilvy came up with rules about television advertising that would have made Claude Hopkins shudder. “Food in motion” dictated that food should always be shot by a moving camera. “Open with fire” meant that ads should start in a very exciting and captivating way. Ogilvy told his creatives to use supers - text superimposed on the screen to emphasize important phrases and taglines. All these techniques were devised to promote brand image, not the product. Ogilvy didn’t believe consumers could distinguish between products were it not for their images. In Ogilvy on Advertising, he explains that most people cannot tell the difference between their own “favourite” whiskey and the closest two competitors’: ‘Have they tried all three and compared the taste? Don’t make me laugh. The reality is that these three brands have different images which appeal to different kinds of people. It isn’t the whiskey they choose, it’s the image. The brand image is ninety percent of what the distiller has to sell.’ (Ogilvy, 1983). Thus, we learned to “trust our car to the man who wears the star” not because Texaco had better gasoline than Shell, but because the company’s advertisers had created a better brand image. While Burnett and his disciples were building brand myths, another school of advertisers was busy learning about its audience. Back in the 1920s, Raymond Rubicam, who eventually founded the agency Young and Rubicam, thought it might be interesting to hire a pollster named Dr. Gallup from Northwestern University to see what could be gleaned about consumers from a little market research. The advertising industry’s version of cultural anthropology, or demographics, was born. Like the public-relations experts who study their target populations in order to manipulate them later, marketers began conducting polls, market surveys, and focus groups on the segments of the population they hoped to influence. And to draw clear, clean lines between demographic groups, researchers must almost always base distinctions on four factors: race, age, sex, and wages. 
Demographic research is reductionist by design. I once consulted to an FM radio station whose station manager wanted to know, “Who is our listener?” Asking such a question reduces an entire listenership down to one fictional person. It’s possible that no single individual will ever match the “customer profile” meant to apply to all customers, which is why so much targeted marketing often borders on classist, racist, and sexist pandering. Billboards for most menthol cigarettes, for example, picture African-Americans because, according to demographic research, black people prefer them to regular cigarettes. Microsoft chose Rolling Stones songs to launch Windows 95, a product targeted at wealthy baby boomers. “The Women’s Global Challenge” was an advertising-industry-created Olympics for women, with no purpose other than to market to active females. By the 1970s, the two strands of advertising theory - demographic research and brand image - were combined to develop campaigns that work on both levels. To this day, we know to associate Volvos with safety, Dr. Pepper with individuality, and Harley-Davidson with American heritage. Each of these brand images is crafted to appeal to the target consumer’s underlying psychological needs: Volvo ads are aimed at upper-middle-class white parents who fear for their children’s health and security, Dr. Pepper is directed to young nonconformists, and the Harley-Davidson image supports its riders’ self-perception as renegades. Today’s modern (or perhaps postmodern) brands don’t invent a corporate image on their own; they appropriate one from the media itself, such as MetLife did with Snoopy, Butterfinger did with Bart Simpson, or Kmart did by hiring Penny Marshall and Rosie O’Donnell. These mascots were selected because their perceived characteristics match the values of their target consumers - not the products themselves. In the language of today’s marketers, brand images do not reflect on products but on advertisers’ perceptions of their audiences’ psychology. This focus on audience composition and values has become the standard operating procedure in all of broadcasting. When Fox TV executives learned that their animated series “King of the Hill”, about a Texan propane distributor, was not faring well with certain demographics, for example, they took a targeted approach to their character’s rehabilitation. The Brandweek piece on Fox’s ethnic campaign uncomfortably dances around the issue. Hank Hill is the proverbial everyman, and Fox wants viewers to get comfortable with him; especially viewers in New York, where “King of the Hill”’s homespun humor hasn’t quite caught on with the young urbanites. So far this season, the show has pulled in a 10.1 rating/15 share in households nationally, while garnering a 7.9 rating/12 share in New York (Brandweek, 1997) As far as Fox was concerned, while regular people could identify with the network’s new “everyman” character, New Yorkers weren’t buying his middle-American patter. The television show’s ratings proved what TV executives had known all along: that New York City’s Jewish demographic doesn’t see itself as part of the rest of America. Fox’s strategy for “humanizing” the character to those irascible urbanites was to target the group’s ethnographic self-image. Fox put ads for the show on the panels of sidewalk coffee wagons throughout Manhattan, with the tagline “Have a bagel with Hank”. 
In an appeal to the target market’s well-developed (and well-researched) cynicism, Hank himself is shown saying, “May I suggest you have that with a schmear”. The disarmingly ethnic humor here is meant to underscore the absurdity of a Texas propane salesman using a Jewish insider’s word like “schmear.” In another Upper West Side billboard, Hank’s son appeals to the passing traffic: “Hey yo! Somebody toss me up a knish!” As far as the New York demographic is concerned, these jokes transform the characters from potentially threatening Southern rednecks into loveable hicks bending over backward to appeal to Jewish sensibilities, and doing so with a comic and, most important, nonthreatening inadequacy. Today, the most intensely targeted demographic is the baby - the future consumer. Before an average American child is twenty months old, he can recognize the McDonald’s logo and many other branded icons. Nearly everything a toddler encounters - from Band-Aids to underpants - features the trademarked characters of Disney or other marketing empires. Although this target market may not be in a position to exercise its preferences for many years, it pays for marketers to imprint their brands early. General Motors bought a two-page ad in Sports Illustrated for Kids for its Chevy Venture minivan. Their brand manager rationalized that the eight-to-fourteen-year-old demographic consists of “back-seat consumers” (Leonhardt, 1997). The real intention of target marketing to children and babies, however, goes deeper. The fresh neurons of young brains are valuable mental real estate to admen. By seeding their products and images early, the marketers can do more than just develop brand recognition; they can literally cultivate a demographic’s sensibilities as they are formed. A nine-year-old child who can recognize the Budweiser frogs and recite their slogan (Bud-weis-er) is more likely to start drinking beer than one who can remember only Tony the Tiger yelling, “They’re great!” (Currently, more children recognize the frogs than Tony.) This indicates a long-term coercive strategy. The abstraction of brand images from the products they represent, combined with an increasing assault on our demographically targeted psychological profiles, led to some justifiable consumer paranoia by the 1970s. Advertising was working on us in ways we couldn’t fully understand, and people began to look for an explanation. In 1973, Wilson Bryan Key, a communications researcher, wrote the first of four books about “subliminal advertising,” in which he accused advertisers of hiding sexual imagery in ice cubes and slipping psychoactive words like “sex” onto the airbrushed surfaces of fashion photographs. Having worked on many advertising campaigns from start to finish, in close proximity to everyone from copywriters and art directors to printers, I can comfortably put to rest any rumours that major advertising agencies are engaging in subliminal campaigns. How do images that could be interpreted as “sexual” show up in ice cubes or elbows? The final photographs chosen for ads are selected by committee out of hundreds that are actually shot. After hours or days of consideration, the group eventually feels drawn to one or two photos out of the batch. Not surprisingly, these photos tend to have more evocative compositions and details, but no penises, breasts, or skulls are ever superimposed onto the images. 
In fact, the man who claims to have developed subliminal persuasion, James Vicary, admitted to Advertising Age in 1984 that he had fabricated his evidence that the technique worked in order to drum up business for his failing research company. But this confession has not assuaged Key and others who relentlessly, perhaps obsessively, continue to pursue those they feel are planting secret visual messages in advertisements. To be fair to Key, advertisers have left themselves open to suspicion by relegating their work to the abstract world of the image and then targeting consumer psychology so deliberately. According to research by the Roper Organization in 1992, fifty-seven percent of American consumers still believe that subliminal advertising is practiced on a regular basis, and only one in twelve think it “almost never” happens. To protect themselves from the techniques they believe are being used against them, the advertising audience has adopted a stance of cynical suspicion. To combat our increasing awareness and suspicion of demographic targeting, marketers have developed a more camouflaged form of categorization based on psychological profiles instead of race and age. Jim Schroer, the executive director of new marketing strategy at Ford, explains his abandonment of broad-demographic targeting: ‘It’s smarter to think about emotions and attitudes, which all go under the term: psychographics - those things that can transcend demographic groups.’ (Schroer, 1997) Instead, he now appeals to what he calls “consumers’ images of themselves.” Unlike broad demographics, the psychographic is developed using more narrowly structured qualitative-analysis techniques, like focus groups, in-depth interviews, and even home surveillance. Marketing analysts observe the behaviors of volunteer subjects, ask questions, and try to draw causal links between feelings, self-image, and purchases. A company called Strategic Directions Group provides just such analysis of the human psyche. In their study of the car-buying habits of the forty-plus baby boomers and their elders, they sought to define the main psychological predilections that human beings in this age group have regarding car purchases. Although they began with a demographic subset of the overall population, their analysis led them to segment the group into psychographic types. For example, members of one psychographic segment, called the “Reliables,” think of driving as a way to get from point A to point B. The “Everyday People” campaign for Toyota is aimed at this group and features people depending on their reliable and efficient little Toyotas. A convertible Saab, on the other hand, appeals to the “Stylish Fun” category, who like trendy and fun-to-drive imports. One of the company’s commercials shows a woman at a boring party fantasizing herself into an oil painting, where she drives along the canvas in a sporty yellow Saab. Psychographic targeting is more effective than demographic targeting because it reaches for an individual customer more directly - like a fly fisherman who sets bait and jiggles his rod in a prescribed pattern for a particular kind of fish. It’s as if a marketing campaign has singled you out and recognizes your core values and aspirations, without having lumped you into a racial or economic stereotype. It amounts to a game of cat-and-mouse between advertisers and their target psychographic groups. The more effort we expend to escape categorization, the more ruthlessly the marketers pursue us. 
In some cases, in fact, our psychographic profiles are based more on the extent to which we try to avoid marketers than on our fundamental goals or values. The so-called “Generation X” adopted the anti-chic aesthetic of thrift-store grunge in an effort to find a style that could not be so easily identified and exploited. Grunge was so self-consciously lowbrow and nonaspirational that it seemed, at first, impervious to the hype and glamour normally applied swiftly to any emerging trend. But sure enough, grunge anthems found their way onto the soundtracks of television commercials, and Dodge Neons were hawked by kids in flannel shirts saying “Whatever.” The members of Generation X are putting up a good fight. Having already developed an awareness of how marketers attempt to target their hearts and wallets, they use their insight into programming to resist these attacks. Unlike the adult marketers pursuing them, young people have grown up immersed in the language of advertising and public relations. They speak it like natives. As a result, they are more than aware when a commercial or billboard is targeting them. In conscious defiance of demographic-based pandering, they adopt a stance of self-protective irony, distancing themselves from the emotional ploys of the advertisers. Lorraine Ketch, the director of planning in charge of Levi’s trendy Silvertab line, explained, “This audience hates marketing that’s in your face. It eyeballs it a mile away, chews it up and spits it out” (On Advertising, 1998). Chiat/Day, one of the world’s best-known and experimental advertising agencies, found the answer to the crisis was simply to break up the Gen-X demographic into separate “tribes” or subdemographics - and include subtle visual references to each one of them in the ads they produce for the brand. According to Levi’s director of consumer marketing, the campaign meant to communicate, “We really understand them, but we are not trying too hard” (On Advertising, 1998). Probably unintentionally, Ms. Ketch has revealed the new, even more highly abstract plane on which advertising is now being communicated. Instead of creating and marketing a brand image, advertisers are creating marketing campaigns about the advertising itself. Silvertab’s target market is supposed to feel good about being understood, but even better about understanding the way they are being marketed to. The “drama” invented by Leo Burnett and refined by David Ogilvy and others has become a play within a play. The scene itself has shifted. The dramatic action no longer occurs between the audience and the product, the brand, or the brand image, but between the audience and the brand marketers. As audiences gain even more control over the media in which these interactive stories unfold, advertising evolves ever closer to a theatre of the absurd. (Excerpted from Coercion: Why We Listen to What “They” Say.) Works Cited Ogilvy, David. Ogilvy on Advertising. New York: Vintage, 1983. Brandweek Staff. “Number Crunching, Hollywood Style.” Brandweek 6 Oct. 1997. Leonhardt, David, and Kathleen Kerwin. “Hey Kid, Buy This!” Business Week 30 June 1997. Schroer, Jim. Quoted in “Why We Kick Tires,” by Carol Morgan and Doron Levy. Brandweek 29 Sept. 1997. “On Advertising.” The New York Times 14 Aug. 1998.
APA, Harvard, Vancouver, ISO, and other styles
43

Gregson, Kimberly. "Bad Avatar!" M/C Journal 10, no. 5 (2007). http://dx.doi.org/10.5204/mcj.2708.

Full text
Abstract:
While exploring the virtual world Second Life one day, I received a group message across the in-world communication system – “there’s a griefer on the beach. Stay away from the beach till we catch him.” There was no need to explain; everyone receiving the message knew what a griefer was and had a general idea of the kinds of things that could be happening. We’d all seen griefers at work before – someone monopolising the chat channel so no one else can communicate, people being “caged” at random, or even weapons fire causing so much “overhead” that all activity in the area slows to a crawl. These kinds of attacks are not limited to virtual worlds. Most people have experienced griefing in their everyday lives, which might best be defined as having fun at someone else’s expense. More commonly seen examples of this in the real world include teasing, bullying, and harassment; playground bullies have long made other children’s free time miserable. More destructive griefing includes arson and theft. Griefing activities happen in all kinds of games and virtual worlds. Griefers who laugh at new users and “yell” (so that all players can hear) that they stink, have followed new users of Disney’s tween-popular ToonTown. Griefers pose as friendly, helpful players who offer to show new users a path through difficult parts of a game, but then who abandon the new user in a spot where he or she does not have the skills to proceed. In World of Warcraft, a popular massively multiplayer online role playing game (MMORPG) created by Blizzard with more than seven million registered, if not active, users, griefers engage in what is known as corpse camping; they sit by a corpse, killing it over and over every time the player tries to get back into the game. The griefer gets a small number of experience points; the player being killed gets aggravated and has to wait out the griefing to play the game again (Warner & Raiter). Griefing in World of Warcraft was featured in an award nominated episode of the television program South Park, in which one character killed every other player he met. This paper considers different types of griefing, both in online games and virtual worlds, and then looks at the actions other players, those being griefed, take against griefers. A variety of examples from Second Life are considered because of the open-structure of the world and its developing nature. Definitions and Types Griefing in online environments such as video games and virtual worlds has been defined as “purposefully engaging in activities to disrupt the gaming experience of other players” (Mulligan & Patrovsky 250). The “purposeful” part of the definition means that accidental bumping and pushing, behaviours often exhibited by new users, are not griefing (Warner & Raiter). Rossingol defines a griefer as, “a player of malign intentions. They will hurt, humiliate and dishevel the average gamer through bending and breaking the rules of online games. ...They want glory, gain or just to partake in a malignant joy at the misfortune of others.” Davis, who maintains a gaming blog, describes Second Life as being populated by “those who build things and those who like to tear them down,” with the latter being the griefers who may be drawn to the unstructured anything-goes nature of the virtual world (qtd. in Girard). Definitions of griefing differ based on context. For instance, griefing has been examined in a variety of multi-player online games. 
These games often feature missions where players have to kill other players (PvP), behaviour that in other contexts such as virtual worlds would be considered griefing. Putting a monster on the trail of a player considered rude or unskilled might be a way to teach a lesson, but also an example of griefing (Taylor). Foo and Koivisto define griefing in MMORPGs as “play styles that disrupt another player’s gaming experience, usually with specific intention. When the act is not specifically intended to disrupt and yet the actor is the sole beneficiary, it is greed play, a subtle form of grief play” (11). Greed play usually involves actions that disrupt the game play of others but without technically breaking any game rules. A different way of looking at griefing is that it is a sign that the player understands the game or virtual world deeply enough to take advantage of ambiguities in the rules by changing the game to something new (Koster). Many games have a follow option; griefers pick a victim, stand near them, get as naked as possible, and then just follow them around without talking or explaining their actions (Walker). Another example is the memorial service in World of Warcraft for a player who died in real life. The service was interrupted by an attack from another clan; everyone at the memorial service was killed. It is not clear cut who the griefers actually were in this case – the mourners who chose to have their peaceful service in an area marked for player combat or the attackers following the rules for that area and working to earn points and progress in the game. In the case of the mourners, they were changing the rules of the game to suit them, to create something unique – a shared space to mourn a common friend. But they were definitely not playing by the rules. The attackers, considered griefers by many both in and outside of the game, did nothing that broke any rules of the game, though perhaps they broke rules of common decency (“World”); what they did does not fit into the definition of griefing, as much as do the actions of the mourners (Kotaku). Reshaping the game can be done to embed a new, sometimes political, message into the game. A group named Velvet Strike formed to protest US military action. They went into Counter Strike to bring a “message of peace, love and happiness to online shooters by any means necessary” (King). They placed spray painted graphics containing anti-war messages into the game; when confronted with people from other teams the Velvet Strike members refused to shoot (King). The group website contains “recipes” for non-violent game play. One “recipe” involved the Velvet Strike member hiding at the beginning of a mission and not moving for the rest of the game. The other players would shoot each other and then be forced to spend the rest of the game looking for the last survivor in order to get credit for the win. Similar behaviour has been tried inside the game America’s Army. Beginning March, 2006, deLappe, an artist who opposes the U.S. government’s involvement in Iraq, engaged in griefing behaviour by filling (spamming) the in-game text channel with the names of the people killed in the war; no one else can communicate on that channel. Even his character name, dead-in-Iraq, is an anti-war protest (deLappe). “I do not participate in the proscribed mayhem. Rather, I stand in position and type until I am killed. After death, I hover over my dead avatar’s body and continue to type. 
Upon being re-incarnated in the next round, I continue the cycle” (deLappe n.p.). What about these games and virtual worlds might lead people to even consider griefing? For one thing, they seem anonymous, which can lead to irresponsible behaviour. Players use fake names. Characters on the screen do not seem real. Another reason may be that rules can be broken in videogames and virtual worlds with few consequences, and in fact the premise of the game often seems to encourage such rule breaking. The rules are not always clearly laid out. Each game or world has a Terms of Service agreement that set out basic acceptable behaviour. Second Life defines griefing in terms of the Terms of Service that all users agree to when opening accounts. Abuse is when someone consciously and with malicious intent violates those terms. On top of that limited set of guidelines, each landowner in a virtual world such as Second Life can also set rules for their own property, from dress code, to use of weapons, to allowable conversation topics. To better understand griefing, it is necessary to consider the motivations of the people involved. Early work on categorising player types was completed by Bartle, who studied users of virtual worlds, specifically MUDs, and identified four player types: killers, achievers, socialisers, and explorers. Killers and achievers seem most relevant in a discussion about griefing. Killers enjoy using other players to get ahead. They want to do things to other people (not for or with others), and they get the most pleasure if they can act without the consent of the other player. Knowing about a game or a virtual world gives no power unless that knowledge can be used to gain some advantage over others and to enhance your standing in the game. Achievers want power and dominance in a game so they can do things to the game and master it. Griefing could help them feel a sense of power if they got people to do their will to stop the griefing behavior. Yee studied the motivations of people who play MMORPGs. He found that people who engage in griefing actually scored high in being motivated to play by both achieving and competition (“Facets”). Griefers often want attention. They may want to show off their scripting skills in the hope of earning respect among other coders and possibly be hired to program for others. But many players are motivated by a desire to compete and to win; these categories do not seem to be adequate for understanding the different types of griefing (Yee, “Faces of Grief”). The research on griefing in games has also suggested ways to categorise griefers in virtual worlds. Suler divides griefers into two types (qtd. in Becker). The first is those who grief in order to make trouble for authority figures, including the people who create the worlds. A few of the more spectacular griefing incidents seem designed to cause trouble for Linden Lab, the creators of Second Life. Groups attacked the servers that run Second Life, known as the grid, in October of 2005; this became known as the “gray goo attack” (Second Life; Wallace). Servers were flooded with objects and Second Life had to be taken off line to be restored from backups. More organised groups, such as the W-hats, the SL Liberation Army, and Patriotic Nigas engage in more large scale and public griefing. Some groups hope to draw attention to the group’s goals. 
The SL Liberation Army wants Linden Lab to open up the governance of the virtual world so that users can vote on changes and policies being implemented and limit corporate movement into Second Life (MarketingVox). Patriotic Nigas, with about 35 active members, want to slow the entry of corporations into Second Life (Cabron, “Who are Second Life’s”). One often discussed griefer attack in Second Life included a flood of pink flying penises directed against land owner and the first person to have made a profit of more than one million United States dollars in a virtual world, Anshe Chung, during a well-publicised and attended interview in world with technology news outlet CNET (Walsh, “Second Life Millionaire” ). The second type proposed by Suler is the griefer who wants to hurt and victimise others (qtd. in Becker). Individual players often go naked into PG-rated areas to cause trouble. Weapons are used in areas where weapons are banned. Second Life publishes a police blotter, which lists examples of minor griefing and assigned punishment, including incidents of disturbing the peace and violating community standards for which warnings and short bans have been issued. These are the actions of individuals for the most part, as were the people who exploited security holes to enter the property uninvited during the grand opening of Endemol’s Big Brother island in Second Life; guests to the opening were firebombed and caged. One of the griefers explained her involvement: Well I’m from The Netherlands, and as you might know the tv concept of big brother was invented here, and it was in all the newspapers in Holland. So I thought It would be this huge event with lots of media. Then I kinda got the idea ‘hey I could ruin this and it might make the newspaper or tv. So that’s what set me off, lol. (qtd. in Sklar) Some groups do grief just to annoy. The Patriotic Nigas claim to have attacked the John Edwards headquarters inside SL wearing Bush ‘08 buttons (Cabron, “John Edwards Attackers”), but it was not a political attack. The group’s founder, Mudkips Acronym (the name of his avatar in SL) said, “I’m currently rooting for Obama, but that doesn’t mean we won’t raid him or anything. We’ll hit anyone if it’s funny, and if the guy I want to be president in 2008’s campaign provides the lulz, we’ll certainly not cross him off our list” (qtd. in Cabron, “John Edwards Attackers”). If they disrupt a high profile event or site, the attack will be covered by media that can amplify the thrill of the attack, enhance their reputation among other griefers, and add to their enjoyment of the griefing. Part of the definition of griefing is that the griefer enjoys causing other players pain and disrupting their game. One resident posted on the SL blog, “Griefers, for the most part, have no other agenda other than the thrill of sneaking one past and causing a big noise. Until a spokesperson comes forward with a manifesto, we can safely assume that this is the work of the “Jackass” generation, out to disrupt things to show that they can“ (Scarborough). Usually to have fun, griefers go after individuals, rather than the owners and administrators of the virtual world and so fit into Suler’s second type of griefing. These griefers enjoy seeing others get angry and frustrated. As one griefer said: Understanding the griefer mindset begins with this: We don’t take the game seriously at all. It continues with this: It’s fun because you react. Lastly: We do it because we’re jerks and like to laugh at you. 
I am the fly that kamikazes into your soup. I am the reason you can’t have nice things … . If I make you cry, you’ve made my day. (Drake) They have fun by making the other players mad. “Causing grief is the name of his game. His objective is simple: Make life hell for anyone unlucky enough to be playing with him. He’s a griefer. A griefer is a player bent on purposely frustrating others during a multiplayer game” (G4). “I’m a griefer. It’s what I do,” the griefer says. “And, man, people get so pissed off. It’s great” (G4). Taking Action against Griefers Understanding griefing from the griefer point of view leads us to examine the actions of those being griefed. Suler suggests several pairs of opposing actions that can be taken against griefers, based on his experience in an early social environment called Palace. Many of the steps still being used fit into these types. He first describes preventative versus remedial action. Preventative steps include design features to minimise griefing. The Second Life interface includes the ability to build 3D models and to create software; it also includes a menu for land owners to block those features at will, a design feature that helps prevent much griefing. Remedial actions are those taken by the administrators to deal with the effects of griefing; Linden Lab administrators can shut down whole islands to keep griefer activities from spreading to nearby islands. The second pair is interpersonal versus technical; interpersonal steps involve talking to the griefers to get them to stop ruining the game for others, while technical steps prevent griefers from re-entering the world. The elven community in Second Life strongly supports interpersonal steps; they have a category of members in their community known as guardians who receive special training in how to talk to people bent on destroying the peacefulness of the community or disturbing an event. The creators of Camp Darfur on Better World island also created a force of supporters to fend off griefer attacks after the island was destroyed twice in a week in 2006 (Kenzo). Linden Lab also makes use of technical methods; they cancel accounts so known griefers can not reenter. There were even reports that they had created a prison island where griefers whose antics were not bad enough to be totally banned would be sent via a one-way teleporter (Walsh, “Hidden Virtual World Prison”). Some users of Second Life favour technical steps; they believe that new users should be held a fixed amount of time on the Orientation island which would stop banned users from coming back into the world immediately. The third is to create tools for average users or super users (administrators); both involve software features, some of which are available to all users to help them make the game good for them while others are available only to people with administrator privileges. Average users who own land have a variety of tools available to limit griefing behaviour on their own property. In Second Life, the land owner is often blamed because he or she did not use the tools provided to landowners by Linden Lab; they can ban individual users, remove users from the land, mute their conversation, return items left on the property, and prevent people from building or running scripts. As one landowner said, “With the newbies coming in there, I’ve seen their properties just littered with crap because they don’t know protective measures you need to take as far as understanding land control and access rights” (qtd. in Girard). 
Super users, those who work for Linden Lab, can remove a player from the game for a various lengths of time based on their behaviour patterns. Responses to griefers can also be examined as either individual or joint actions. Individual actions include those that land owners can take against individual griefers. Individual users, regardless of account type, can file abuse reports against other individuals; Linden Lab investigates these reports and takes appropriate action. Quick and consistent reporting of all griefing, no matter how small, is advocated by most game companies and user groups as fairly successful. Strangely, some types of joint actions have been not so successful. Landowners have tried to form the Second Life Anti-Griefing Guild, but it folded because of lack of involvement. Groups providing security services have formed; many event organisers use this kind of service. (Hoffman). More successful efforts have included the creation of software, such as SLBanLink.com, Karma, and TrustNet that read lists of banned users into the banned list on all participating property. A last category of actions to be taken against griefers, and a category used by most residents of virtual worlds, is to leave them alone—to ignore them, to tolerate their actions. The thinking is that, as with many bullies in real life, griefers want attention; when deprived of that, they will move on to find other amusements. Yelling and screaming at griefers just reinforces their bad behaviour. Users simply teleport to other locations or log off. They warn others of the griefing behaviour using the various in-world communication tools so they too can stay away from the griefers. Most of the actions described above are not useful against griefers for whom a bad reputation is part of their credibility in the griefer community. The users of Second Life who staged the Gray Goo denial of service attack in October, 2005 fit into that category. They did nothing to hide the fact that they wanted to cause massive trouble; they named the self-replicating object that they created Grief Spawn and discussed ways to bring down the world on griefer forums (Wallace) Conclusion The most effective griefing usually involves an individual or small group who are only looking to have fun at someone else’s expense. It’s a small goal, and as long as there are any other users, it is easy to obtain the desired effect. In fact, as word spreads of the griefing and users feel compelled to change their behaviour to stave off future griefer attacks, the griefers have fun and achieve their goal. The key point here is that everyone has the same goal – have fun. Unfortunately, for one group – the griefers – achieving their goal precludes other users from reaching theirs. Political griefers are less successful in achieving their goals. Political creative play as griefing, like other kinds of griefing, is not particularly effective, which is another aspect of griefing as error. Other players react with frustration and violence to the actions of griefers such as deLappe and Velvet-Strike. If griefing activity makes people upset, they are less open to considering the political or economic motives of the griefers. Some complaints are relatively mild; “I’m all for creative protest and what not, but this is stupid. It’s not meaningful art or speaking out or anything of the type, its just annoying people who are never going to change their minds about how awesome they think war is” (Borkingchikapa). 
Others are more negative: “Somebody really needs to go find where that asshole lives and beat the shit out of him. Yeah, it’s a free country and he can legally pull this crap, but that same freedom extends to some patriot kicking the living shit out of him” (Reynolds). In this type of griefing no one’s goals for using the game are satisfied. The regular users can not have fun, but neither do they seem to be open to or accepting of the political griefer’s message. This pattern of success and failure may explain why there are so many examples of griefing to disrupt rather than the politically motivated kind. It may also suggest why efforts to curb griefing have been so ineffective in the past. Griefers who seek to disrupt for fun would see it as a personal triumph if others organised against them. Even if they found themselves banned from one area, they could quickly move somewhere else to have their fun since whom or where they harass does not really matter. Perhaps not all griefing is in error, rather, only those griefing activities motivated by any other goal than to have fun. People invest their time and energy in creating their characters and developing skills. The behaviour of people in these virtual environments has a definite bearing on the real world. And perhaps that explains why people in these virtual worlds react so strongly to the behaviour. So, remember, stay off the beach until they catch the griefers, and if you want to make up the game as you go along, be ready for the other players to point at you and say “Bad, Bad Avatar.” References Bartle, Richard. “Players Who Suit MUDs.” Journal of MUD Research 1.1 (June 1996). 10 Sep. 2007 http://www.mud.co.uk/richard/hcds.htm. Becker, David. Inflicting Pain on “Griefers.” 13 Dec. 2004. 10 Oct. 2007 http://www.news.com/Inflicting-pain-on-griefers/2100-1043_3-5488403.html. Borkingchikapa. Playing America’s Army. 30 May 2006. 10 Aug. 2007 http://www.metafilter.com/51938/playing-Americas-Army. Cabron, Lou. John Edwards Attackers Unmasked. 5 Mar. 2007. 29 Apr. 2007 http://www.10zenmonkeys.com/2007/03/05/john-edwards-virtual-attackers-unmasked/. Cabron, Lou. Who Are Second Life’s “Patriotic Nigas”? 8 Mar. 2007. 30 Apr. 2007 http://www.10zenmonkeys.com/2007/03/08/patriotic-nigras-interview-john-edwards-second-life/. DeLappe, Joseph. Joseph deLappe. 2006. 10 Aug. 2007 http://www.unr.edu/art/DELAPPE/DeLappe%20Main%20Page/DeLappe%20Online%20MAIN.html. Drake, Shannon. “Jerk on the Internet.” The Escapist Magazine 15 Nov. 2005: 31-32. 20 June 2007 http://www.escapistmagazine.com/issue/19/31. Foo, Chek Yang. Redefining Grief Play. 2004. 10 Oct. 2007 http://64.233.167.104/search?q=cache:1mBYzWVqAsIJ:www.itu.dk/op/papers/yang_foo.pdf+foo+koivisto&hl=en&ct=clnk&cd=7&gl=us&client=firefox-a. Foo, Chek Yang, and Elina Koivisto. Grief Player Motivations. 2004. 15 Aug. 2007 http://www.itu.dk/op/papers/yang_foo_koivisto.pdf. G4. Confessions of a Griefer. N.D. 21 June 2007 http://www.g4tv.com/xplay/features/42527/Confessions_of_a_Griefer.html. Girard, Nicole. “Griefer Madness: Terrorizing Virtual Worlds.” Linux Insider 19 Sep. 2007. 3 Oct. 2007 http://www.linuxinsider.com/story/59401.html. Hoffman, E. C. “Tip Sheet: When Griefers Attack.” Business Week. 2007. 21 June 2007 http://www.businessweek.com/playbook/07/0416_1.htm. Kenzo, In. “Comment: Has Plastic Duck Migrated Back to SL?” Second Life Herald Apr. 2006. 10 Oct. 2007 http://www.secondlifeherald.com/slh/2006/04/has_plastic_duc.html. King, Brad. “Make Love, Not War.” Wired June 2002. 10 Aug. 2007 http://www.wired.com/gaming/gamingreviews/news/2002/06/52894. Koster, Raph. A Theory of Fun for Game Design. Scottsdale, AZ: Paraglyph, 2005. Kotaku. WoW Funeral Party Gets Owned. 2006. 15 Aug. 2007 http://kotaku.com/gaming/wow/wow-funeral-party-gets-owned-167354.php. MarketingVox. Second Life Liberation Army Targets Brands. 7 Dec. 2006. 10 Aug. 2007 http://www.marketingvox.com/archives/2006/12/07/second-life-liberation-army-targets-brands/. Mulligan, Jessica, and Bridget Patrovsky. Developing Online Games: An Insider’s Guide. Indianapolis: New Riders, 2003. Reynolds, Ren. Terra Nova: dead-in-iraq. 7 May 2006. 15 Aug. 2007 http://terranova.blogs.com/terra_nova/2006/05/deadiniraq_.html. Rossingnol, Jim. “A Deadly Dollar.” The Escapist Magazine 15 Nov. 2005: 23-27. 20 June 2007 http://www.escapistmagazine.com/issue/19/23. Scarborough, Solivar. Mass Spam Issue Inworld Being Investigated. 13 Oct. 2006. 20 June 2007 http://blog.secondlife.com/2006/10/13/mass-spam-issue-inworld-being-investigated/. Sklar, Urizenus. “Big Brother Opening Hypervent Griefed for 4 Hours.” Second Life Herald 12 Dec. 2006. 10 Aug. 2007 http://www.secondlifeherald.com/slh/2006/12/big_brother_ope.html. Suler, John. The Bad Boys of Cyberspace. 1997. 10 Oct. 2007 http://www-usr.rider.edu/~suler/psycyber/badboys.html. Taylor, T.L. Play between Worlds: Exploring Online Game Culture. Cambridge, MA: MIT, 2006. Velvet Strike. Velvet-Strike. N.D. 10 Aug. 2007 http://www.opensorcery.net/velvet-strike/nonflame.html. Walker, John. “How to Be a Complete Bastard.” PC Gamer 13 Mar. 2007. 10 Aug. 2007 http://www.computerandvideogames.com/article.php?id=159883&site=pcg. Wallace, Mark. “The Day the Grid Disappeared.” Escapist Magazine 15 Nov. 2005: 11. 20 June 2007 http://www.escapistmagazine.com/issue/19/11. Walsh, Tony. Hidden Virtual-World Prison Revealed. 3 Jan. 2006. 10 Oct. 2007 http://www.secretlair.com/index.php?/clickableculture/entry/hidden_virtual_world_prison_revealed/. Walsh, Tony. Second Life Millionaire Interview Penis-Bombed. 20 Dec. 2006. 10 Oct. 2007 http://www.secretlair.com/index.php?/clickableculture/entry/second_life_millionaire_interview_penis_bombed/. Warner, Dorothy, and Mike Raiter. Social Context in Massively-Multiplayer Online Games. 2005. 20 Aug. 2007 http://www.i-r-i-e.net/inhalt/004/Warner-Raiter.pdf. “World of Warcraft: Funeral Ambush.” 2006. YouTube. 15 Aug. 2007 http://www.youtube.com/watch?v=31MVOE2ak5w. Yee, Nicholas. Facets: 5 Motivational Factors for Why People Play MMORPG’s. 2002. 10 Oct. 2007 http://www.nickyee.com/facets/home.html. Yee, Nicholas. Faces of Grief. 2005. June 2007 http://www.nickyee.com/daedalus/archives/000893.php?page=1.
APA, Harvard, Vancouver, ISO, and other styles
44

Pedersen, Isabel, and Kirsten Ellison. "Startling Starts: Smart Contact Lenses and Technogenesis." M/C Journal 18, no. 5 (2015). http://dx.doi.org/10.5204/mcj.1018.

Full text
Abstract:
On 17 January 2013, Wired chose the smart contact lens as one of “7 Massive Ideas That Could Change the World” describing a Google-led research project. Wired explains that the inventor, Dr. Babak Parviz, wants to build a microsystem on a contact lens: “Using radios no wider than a few human hairs, he thinks these lenses can augment reality and incidentally eliminate the need for displays on phones, PCs, and widescreen TVs”. Explained further in other sources, the technology entails an antenna, circuits embedded into a contact lens, GPS, and an LED to project images on the eye, creating a virtual display (Solve for X). Wi-Fi would stream content through a transparent screen over the eye. One patent describes a camera embedded in the lens (Etherington). Another mentions medical sensing, such as glucose monitoring of tears (Goldman). In other words, Google proposes an imagined future when we use contact lenses to search the Internet (and be searched by it), shop online, communicate with friends, work, navigate maps, swipe through Tinder, monitor our health, watch television, and, by that time, probably engage in a host of activities not yet invented. Often referred to as a bionic contact, the smart contact lens would signal a weighty shift in the way we work, socialize, and frame our online identities. However, speculative discussion over this radical shift in personal computing rarely, if ever, includes consideration of how the body, acting as a host to digital information, will manage to assimilate not only significant affordances, but also significant constraints and vulnerabilities. At this point, for most people, the smart contact lens is just an idea. Is a new medium of communication started when it is launched in an advertising campaign? When we Like it on Facebook? If we chat about it during a party amongst friends? Or, do a critical mass of people actually have to be using it to say it has started? One might say that Apple’s Macintosh computer started as a media platform when the world heard about the famous 1984 television advertisement aired during the American NFL Super Bowl of that year. Directed by Ridley Scott, the ad entails an athlete running down a passageway and hurling a hammer at a massive screen depicting cold war style rulers expounding state propaganda. The screen explodes, freeing those imprisoned from their concentration camp existence. The direct reference to Orwell’s 1984 serves as a metaphor for IBM in 1984. PC users were made analogous to political prisoners and IBM served to represent the totalitarian government. The Mac became something that, at the time, challenged IBM, and suggested an alternative use for the desktop computer that had previously been relegated for work rather than life. Not everyone bought a Mac, but the polemical ad fostered the idea that Mac was certainly the start of new expectations, civic identities, value-systems, and personal uses for computers. The smart contact lens is another startling start. News of it shocks us, initiates social media clicks and forwards, and instigates dialogue. But, it also indicates the start of a new media paradigm that is already undergoing popular adoption as it is announced in mainstream news and circulated algorithmically across media channels. Since 2008, news outlets like CNN, The New York Times, The Globe and Mail, Asian International News, United News of India, The Times of London and The Washington Post have carried it, feeding the buzz in circulation that Google intends. 
Attached to the wave of current popular interest generated around any technology claiming to be “wearable,” a smart contact lens also seems surreptitious. We would no longer hold smartphones, but hide all of that digital functionality beneath our eyelids. Its emergence reveals the way commercial models have dramatically changed. The smart contact lens is a futuristic invention imagined for us and about us, but also a sensationalized idea socializing us to a future that includes it. It is also a real device that Parviz (with Google) has been inventing, promoting, and patenting for commercial applications. All of these workings speak to a broader digital culture phenomenon. We argue that the smart contact lens discloses a process of nascent posthuman adaptation, launched in an era that celebrates wearable media as simultaneously astonishing and banal. More specifically, we adopt technology based on our adaptation to it within our personal, political, medial, social, and biological contexts, which also function in a state of flux. N. Katherine Hayles writes that “Contemporary technogenesis, like evolution in general, is not about progress ... rather, contemporary technogenesis is about adaptation, the fit between organisms and their environments, recognizing that both sides of the engagement (human and technologies) are undergoing coordinated transformations” (81). This article attends to the idea that in these early stages, symbolic acts of adaptation signal an emergent medium through rhetorical processes that society both draws from and contributes to. In terms of project scope, this article contributes a focused analysis to a much larger ongoing digital rhetoric project. For the larger project, we conducted a discourse analysis on a collection of international publications concerning Babak Parviz and the invention. We searched for and collected newspaper stories, news broadcasts, YouTube videos from various sources, academic journal publications, inventors’ conference presentations, and advertising, all published between January 2008 and May 2014, generating a corpus of more than 600 relevant artifacts. Shortly after this time, Dr. Parviz, a Professor at the University of Washington, left the secretive GoogleX lab and joined Amazon.com (Mac). For this article we focus specifically on the idea of beginnings or genesis and how digital spaces increasingly serve as the grounds for emergent digital cultural phenomena that are rarely recognized as starting points. We searched through the corpus to identify a few exemplary international mainstream news stories to foreground predominant tropes in support of the claim we make that smart contacts lenses are a startling idea. Content producers deliberately use astonishment as a persuasive device. We characterize the idea of a smart contact lens cast in rhetorical terms in order to reveal how its allure works as a process of adaptation. Rhetorician and philosopher, Kenneth Burke writes that “rhetorical language is inducement to action (or to attitude)” (42). A rhetorical approach is instrumental because it offers a model to explain how we deploy, often times, manipulative meaning as senders and receivers while negotiating highly complex constellations of resources and contexts. Burke’s rhetorical theory can show how messages influence and become influenced by powerful hierarchies in discourse that seem transparent or neutral, ones that seem to fade into the background of our consciousness. 
For this article, we also concentrate on rhetorical devices such as ethos and the inventor’s own appeals through different modes of communication. Ethos was originally proposed by Aristotle to identify speaker credibility as a persuasive tactic. Addressed by scholars of rhetoric for centuries, ethos has been reconfigured by many critical theorists (Burke; Baumlin, Ethos; Hyde). Baumlin and Baumlin suggest that “ethos describes an audience’s projection of authority and trustworthiness onto the speaker ... ethos suggests that the ethical appeal to be a radically psychological event situated in the mental processes of the audience – as belonging as much to the audience as to the actual character of a speaker” (Psychology 99). Discussed in the next section, our impression of Parviz and his position as inventor plays a dramatic role in the surfacing of the smart contact lens.

Digital rhetoric is an “emerging scholarly discipline concerned with the interpretation of computer-generated media as objects of study” (Losh 48). In an era when machine-learning algorithms become the messengers for our messages, which have become commodity items operating across globalized, capitalist networks, digital rhetoric provides a stable model for our approach. It leads us to demonstrate how this emergent medium and invention, the smart contact lens, is born amid new digital genres of speculative communication circulated in the forums we engage on a daily basis.

Smart Contact Lenses, Sensationalism, and Identity

One relevant site for exploring how an invention gains ethos is the writing or video penned or produced by the inventor. An article authored by Parviz in 2009 discusses his invention and the technical advancements that need to be made before the smart contact lens could work. He opens the article using a fictional and sensationalized analogy to encourage the adoption of his invention:

The human eye is a perceptual powerhouse. It can see millions of colors, adjust easily to shifting light conditions, and transmit information to the brain at a rate exceeding that of a high-speed Internet connection.

But why stop there?

In the Terminator movies, Arnold Schwarzenegger’s character sees the world with data superimposed on his visual field—virtual captions that enhance the cyborg’s scan of a scene. In stories by the science fiction author Vernor Vinge, characters rely on electronic contact lenses, rather than smartphones or brain implants, for seamless access to information that appears right before their eyes.

Identity building is made to correlate with smart contact lenses in a manner that frames them as exciting. Coming to terms with them often involves casting us as superhumans, wielding abilities that we do not currently possess. One reason for the embellishment is that we do not need digital displays on the eyes, so the motive to use them must always be geared to transcending our assumed present condition as humans and society members. Consequently, imagination is used to justify a shift in human identity along a future trajectory.

This passage also instantiates a transformation from humanist to posthumanist posturing (i.e., “the cyborg”) in order to encourage the adoption of smart contact lenses. It begins with the bold declarative statement, “The human eye is a perceptual powerhouse,” which is a comforting claim about our seemingly human superiority.
Indexing abstract humanist values, Parviz emphasizes skills we already possess, including seeing a plethora of colours, adjusting to light on the fly, and thinking fast, indeed faster than “a high-speed Internet connection”. However, the text goes on to summon the Terminator character and his optic feats from the franchise of films. Filmic cyborg characters fulfill the excitement that posthuman rhetoric often seems to demand, but there is more here than sensationalism. Parviz raises the issue of augmenting human vision using science fiction as his contextualizing vehicle because he lacks another way to imbricate the idea. Most interesting in this passage is the inventor’s query “But why stop there?”, which yokes the two claims, one biological (i.e., “The human eye is a perceptual powerhouse”) and one fictional (i.e., Terminator, Vernor Vinge characters). The query suggests: why stop with human superiority when we may as well progress to the next level and embrace a smart contact lens just as fictional cyborgs do? The non-threatening use of fiction makes the concept seem simultaneously exciting and banal, especially because the inventor follows with a clear description of the necessary scientific engineering in the rest of the article.

This rhetorical act signifies the voice of a technoelite, a heavily funded cohort responding to global capitalist imperatives, armed with a team of technologists who can access technological advancements and imbue comments with an authority that may extend into fields beyond their expertise, such as communication studies, sociology, psychology, or medicine. The result is a powerful ethos. The idea behind the smart contact lens maintains a degree of respectability long before a public is invited to use it.

Parviz exhumes much cultural baggage when he brings the Terminator character to life to pitch smart contact lenses. The Terminator series of films has established the “Arnold Schwarzenegger” character as a cultural mainstay. Each new film reinvented him, but ultimately promoted him within a convincing dystopian future across the whole series: The Terminator (Cameron), Terminator 2: Judgment Day (Cameron), Terminator 3: Rise of the Machines (Mostow), Terminator Salvation (McG), and Terminator Genisys (Taylor), which appeared in 2015 after Parviz’s article. Recently, several writers have addressed how cyborg characters figure significantly in our cultural psyche (Haraway; Bukatman; Leaver). Tama Leaver’s Artificial Culture explores the way popular, contemporary, cinematic, science fiction depictions of embodied Artificial Intelligence, such as the Terminator cyborgs, “can act as a matrix which, rather than separating or demarcating minds and bodies or humanity and the digital, reinforce the symbiotic connection between people, bodies, and technologies” (31). Pointing out the violent and ultimately technophobic motive of The Terminator films, Leaver reads across them to conclude nevertheless that science fiction “proves an extremely fertile context in which to address the significance of representations of Artificial Intelligence” (63).

Posthumanism and Technogenesis

One reason this invention enters the public’s consciousness is its announcement alongside a host of other technologies, which seem like parts of a whole. We argue that this constant grouping of technologies in the news is one process indicative of technogenesis.
For example, City A.M., London’s largest free commuter daily newspaper, reports on the future of business technology as a hodgepodge of what-ifs:

As Facebook turns ten, and with Bill Gates stepping down as Microsoft chairman, it feels like something is drawing to an end. But if so, it is only the end of the technological revolution’s beginning ... Try to look ahead ten years from now and the future is dark. Not because it is bleak, but because the sheer profusion of potential is blinding. Smartphones are set to outnumber PCs within months. After just a few more years, there are likely to be 3bn in use across the planet. In ten years, who knows – wearables? smart contact lenses? implants? And that’s just the start. The Internet of Things is projected to be a $300bn (£183bn) industry by 2020. (Sidwell)

This reporting is a common means to frame the commodification of technology in globalized business news that seeks circulation as much as it does readership. But as a text, it also posits how individuals frame the future and their participation with it (Pedersen). Smart contacts appear to move along this exciting, unstoppable trajectory where the “potential is blinding”. The motive is to excite and scare. However, simultaneously, the effect is predictable. We are quite accustomed to this march of innovations that appears every day in the morning paper. We are asked to adapt rather than question; consequently, we never separate the parts from the whole (e.g., “wearables? smart contact lenses? implants”) in order to look at them critically.

In coming to terms with Cary Wolfe’s definition of posthumanism, Greg Pollock writes that posthumanism is the questioning that goes on “when we can no longer rely on ‘the human’ as an autonomous, rational being who provides an Archimedean point for knowing about the world (in contrast to ‘humanism,’ which uses such a figure to ground further claims)” (208). With similar intent, N. Katherine Hayles, formulating the term technogenesis, suggests that we are not really progressing to another level of autonomous human existence when we adopt media; we are, in effect, adapting to media, and media are also in a process of adapting to us. She writes:

As digital media, including networked and programmable desktop stations, mobile devices, and other computational media embedded in the environment, become more pervasive, they push us in the direction of faster communication, more intense and varied information streams, more integration of humans and intelligent machines, and more interactions of language with code. These environmental changes have significant neurological consequences, many of which are now becoming evident in young people and to a lesser degree in almost everyone who interacts with digital media on a regular basis. (11)

Following Hayles, three actions or traits characterize adaptation in a manner germane to the technogenesis of media like smart contact lenses. The first is “media embedded in the environment”. The trait of embedding technology in the form of sensors and chips into external spaces evokes the foundations of the Internet of Things (IoT). Extensive data-gathering sensors, wireless technologies, and mobile and wearable components integrated with the Internet all contribute to the IoT. Emerging from cloud computing infrastructures and data models, the IoT, in its most extreme, involves a scenario whereby people, places, animals, and objects are given unique “embedded” identifiers so that they can embark on constant data transfer over a network.
In a sense, the lenses are adapted artifacts responding to a world that expects ubiquitous networked access for both humans and machines. Smart contact lenses will essentially be attached to the user, who must adapt to these dynamic and heavily mediated contexts.

Following closely on the first, the second point Hayles makes is “integration of humans and intelligent machines”. The camera embedded in the smart contact lens, really an adapted smartphone camera, turns the eye itself into an image capture device. By incorporating them under the eyelids, smart contact lenses signify integration in complex ways. Human-machine amalgamation follows biological, cognitive, and social contexts.

Third, Hayles points to “more interactions of language with code.” We assert that with smart contact lenses, code will eventually govern interaction between countless agents in accordance with other smart devices, such as:

(1) exchanges of code between people and external nonhuman networks of actors through machine algorithms and massive amalgamations of big data distributed on the Internet;

(2) exchanges of code amongst people, human social actors in direct communication with each other over social media; and

(3) exchanges of coding and decoding between people and their own biological processes (e.g., monitoring breathing, consuming nutrients, translating brainwaves) and phenomenological (but no less material) practices (e.g., remembering, grieving, or celebrating).

The allure of the smart contact lens is the quietly pressing proposition that communication models such as these will be radically transformed, because they will have to be adapted for use with the human eye as the method of input and output of information.

Focusing on genetic engineering, Eugene Thacker fittingly defines biomedia as “entail[ing] the informatic recontextualization of biological components and processes, for ends that may be medical or nonmedical (economic, technical) and with effects that are as much cultural, social, and political as they are scientific” (123). He specifies, “biomedia are not computers that simply work on or manipulate biological compounds. Rather, the aim is to provide the right conditions, such that biological life is able to demonstrate or express itself in a particular way” (123). Smart contact lenses sit on the cusp of emergence as a biomedia device that will enable us to decode bodily processes in significant new ways. The bold, technical discourse that announces it, however, has not yet begun to attend to the seemingly dramatic “cultural, social, and political” effects percolating under the surface. Through technogenesis, media acclimatize rapidly to change without establishing a logic of consequences or a design plan for emergence.

Following from this, we should mention the risks this invention poses, such as the intrusion of surveillance algorithms deployed by corporations, governments, and other hegemonic entities. If smart contact lenses are biomedia devices inspiring us to decode bodily processes and communicate that data for analysis, for ourselves and for others in our trust (e.g., doctors, family, friends), we also need to be wary of them. David Lyon warns:

Surveillance has spilled out of its old nation-state containers to become a feature of everyday life, at work, at home, at play, on the move. So far from the single all-seeing eye of Big Brother, myriad agencies now trace and track mundane activities for a plethora of purposes.
Abstract data, now including video, biometric, and genetic as well as computerized administrative files, are manipulated to produce profiles and risk categories in a liquid, networked system. The point is to plan, predict, and prevent by classifying and assessing those profiles and risks. (13)

In simple terms, the smart contact lens might disclose the most intimate information we possess and leave us vulnerable to profiling, tracking, and theft. Irma van der Ploeg anticipated this predicament when she wrote: “The capacity of certain technologies to change the boundary, not just between what is public and private information but, on top of that, between what is inside and outside the human body, appears to leave our normative concepts wanting” (71). The smart contact lens, with its implied motive to encode and disclose internal bodily information, needs consideration on many levels.

Conclusion

The smart contact lens has made a digital beginning. We accept it through the mass consumption of the idea, which acts as a rhetorical motivator for media adoption, taking place long before the device materializes in the marketplace. This occurrence may also be a sign of our “posthuman predicament” (Braidotti). We have argued that the smart contact lens concept reveals our posthuman adaptation to media rather than our reasoned acceptance of, or agreement with, it as a logical proposition. By the time we actually squabble over the price, express fears for our privacy, and buy them, smart contact lenses will long be part of our everyday culture.

References

Baumlin, James S., and Tita F. Baumlin. “On the Psychology of the Pisteis: Mapping the Terrains of Mind and Rhetoric.” Ethos: New Essays in Rhetorical and Critical Theory. Eds. James S. Baumlin and Tita F. Baumlin. Dallas: Southern Methodist University Press, 1994. 91-112.

Baumlin, James S., and Tita F. Baumlin, eds. Ethos: New Essays in Rhetorical and Critical Theory. Dallas: Southern Methodist University Press, 1994.

Bilton, Nick. “A Rose-Colored View May Come Standard.” The New York Times, 4 Apr. 2012.

Braidotti, Rosi. The Posthuman. Cambridge: Polity, 2013.

Bukatman, Scott. Terminal Identity: The Virtual Subject in Postmodern Science Fiction. Durham: Duke University Press, 1993.

Burke, Kenneth. A Rhetoric of Motives. Berkeley: University of California Press, 1950.

Cameron, James, dir. The Terminator. Orion Pictures, 1984. DVD.

Cameron, James, dir. Terminator 2: Judgment Day. Artisan Home Entertainment, 2003. DVD.

Etherington, Darrell. “Google Patents Tiny Cameras Embedded in Contact Lenses.” TechCrunch, 14 Apr. 2014.

Goldman, David. “Google to Make Smart Contact Lenses.” CNN Money, 17 Jan. 2014.

Haraway, Donna. Simians, Cyborgs and Women: The Reinvention of Nature. London: Free Association Books, 1991.

Hayles, N. Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: University of Chicago Press, 2012.

Hyde, Michael. The Ethos of Rhetoric. Columbia: University of South Carolina Press, 2004.

Leaver, Tama. Artificial Culture: Identity, Technology, and Bodies. New York: Routledge, 2012.

Losh, Elizabeth. Virtualpolitik: An Electronic History of Government Media-Making in a Time of War, Scandal, Disaster, Miscommunication, and Mistakes. Boston: MIT Press, 2009.

Lyon, David, ed. Surveillance as Social Sorting: Privacy, Risk and Digital Discrimination. New York: Routledge, 2003.

Mac, Ryan. “Amazon Lures Google Glass Creator Following Phone Launch.” Forbes.com, 14 July 2014.

McG, dir. Terminator Salvation. Warner Brothers, 2009. DVD.
Mostow, Jonathan, dir. Terminator 3: Rise of the Machines. Warner Brothers, 2003. DVD.

Parviz, Babak A. “Augmented Reality in a Contact Lens.” IEEE Spectrum, 1 Sep. 2009.

Pedersen, Isabel. Ready to Wear: A Rhetoric of Wearable Computers and Reality-Shifting Media. Anderson, South Carolina: Parlor Press, 2013.

Pollock, Greg. “What Is Posthumanism by Cary Wolfe (2009).” Rev. of What Is Posthumanism?, by Cary Wolfe. Journal for Critical Animal Studies 9.1/2 (2011): 235-241.

Sidwell, Marc. “The Long View: Bill Gates Is Gone and the Dot-com Era Is Over: It’s Only the End of the Beginning.” City A.M., 7 Feb. 2014.

“Solve for X: Babak Parviz on Building Microsystems on the Eye.” YouTube, 7 Feb. 2012.

Taylor, Alan, dir. Terminator Genisys. Paramount Pictures, 2015. DVD.

Thacker, Eugene. “Biomedia.” Critical Terms for Media Studies. Eds. W.J.T. Mitchell and Mark Hansen. Chicago: University of Chicago Press, 2010. 117-130.

Van der Ploeg, Irma. “Biometrics and the Body as Information.” Surveillance as Social Sorting: Privacy, Risk and Digital Discrimination. Ed. David Lyon. New York: Routledge, 2003. 57-73.

Wired Staff. “7 Massive Ideas That Could Change the World.” Wired.com, 17 Jan. 2013.
APA, Harvard, Vancouver, ISO, and other styles
