Journal articles on the topic 'Web services. Computer network architectures. Electronic data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 17 journal articles for your research on the topic 'Web services. Computer network architectures. Electronic data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Budiyanto, Setiyo, Freddy Artadima Silaban, Lukman Medriavin Silalahi, Selamet Kurniawan, Fajar Rahayu I. M., Ucuk Darusalam, and Septi Andryana. "Design and monitoring body temperature and heart rate in humans based on WSN using star topology." Indonesian Journal of Electrical Engineering and Computer Science 22, no. 1 (April 1, 2021): 326. http://dx.doi.org/10.11591/ijeecs.v22.i1.pp326-334.

Abstract:
Electronic health (e-health) uses information and communication technology, including electronics, telecommunications, computers, and informatics, to process various types of medical information and to carry out clinical services (diagnosis or therapy). Health is the most important asset in human life; maintaining it is therefore a top priority that demands serious attention. Heart rate and body temperature are vital signs that hospitals routinely check for clinical assessment and that are useful for strengthening the diagnosis of a disease. This research monitors heart rate and body temperature with a wireless sensor network (WSN) that uses NodeMCU 1.0 as the controller module and wireless links for communication between nodes; the wireless network used in this research is Wi-Fi. Data are acquired with a DS18B20 temperature sensor and a heart rate (pulse) sensor and displayed on the ThingSpeak web platform and on smartphones. From the test results, the success rate of the system in detecting heart rate is 97.17%, while in detecting body temperature it is 99.28%. For data transmission, the system can send data smoothly at a maximum distance of 15 meters with a barrier.
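As a rough illustration of the data path the abstract describes (this is not the authors' firmware; the write API key is a placeholder and the two read functions stand in for the DS18B20 and pulse-sensor drivers), a Python client can push readings to ThingSpeak's public HTTP update API:

    import random, time
    import requests

    WRITE_API_KEY = "YOUR_WRITE_API_KEY"   # placeholder per-channel key

    def read_temperature_c():              # stand-in for the DS18B20 driver
        return round(random.uniform(36.0, 37.5), 2)

    def read_heart_rate_bpm():             # stand-in for the pulse-sensor driver
        return random.randint(60, 100)

    while True:
        requests.get("https://api.thingspeak.com/update",
                     params={"api_key": WRITE_API_KEY,
                             "field1": read_temperature_c(),
                             "field2": read_heart_rate_bpm()},
                     timeout=10)
        time.sleep(20)   # ThingSpeak free accounts throttle updates to ~15 s apart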
2

Sun, Qitong, Jun Han, and Dianfu Ma. "A Framework for Service Semantic Description Based on Knowledge Graph." Electronics 10, no. 9 (April 24, 2021): 1017. http://dx.doi.org/10.3390/electronics10091017.

Abstract:
Constructing a large-scale service knowledge graph is necessary. We propose a method, namely semantic information extension, for service knowledge graphs. Starting from the information of services described in the Web Services Description Language (WSDL), we design the ontology layer of the web service knowledge graph and construct the service graph; using the WSDL document data set, the generated service knowledge graph contains 3738 service entities. In particular, our method shows its full effect in service discovery. To evaluate our approach, we conducted two sets of experiments: exploring the relationships between services and classifying services by their service descriptions. We constructed two experimental data sets, then designed and trained two different deep neural networks for the two tasks to extract the semantics of the natural language used in the service discovery task. In the prediction task of exploring the relationships between services, the prediction accuracy reached 95.1%, and in the service classification experiment, the TOP5 accuracy reached 60.8%. Our experience shows that the service knowledge graph has advantages over traditional file storage when managing additional semantic information, and that the new service representation method is helpful for service discovery and composition tasks.
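A miniature of the idea rather than the paper's pipeline (service names, relations and the namespace are invented): rdflib can hold a toy service graph and answer the kind of SPARQL lookup a service-discovery task would issue.

    from rdflib import Graph, Literal, Namespace, RDF

    SVC = Namespace("http://example.org/service#")
    g = Graph()

    # Two toy service entities and an invented relation between them.
    g.add((SVC.WeatherService, RDF.type, SVC.Service))
    g.add((SVC.TravelPlanner, RDF.type, SVC.Service))
    g.add((SVC.TravelPlanner, SVC.invokes, SVC.WeatherService))
    g.add((SVC.WeatherService, SVC.description, Literal("returns a weather forecast")))

    # Which services does TravelPlanner depend on?
    query = """
        SELECT ?dep WHERE {
            <http://example.org/service#TravelPlanner>
            <http://example.org/service#invokes> ?dep .
        }"""
    for row in g.query(query):
        print(row.dep)   # -> http://example.org/service#WeatherService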
3

Xu, Guang Yin, Yan Na Zhang, and Jian Hua Qu. "The Design of Information Platform Structure Used in Expressway Management Based on SOA Frame." Advanced Materials Research 403-408 (November 2011): 2997–3003. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.2997.

Abstract:
On the basis of a study of the expressway management business flow and of the SOA architecture concept, this paper designs the overall framework for a systematic expressway management information platform, which includes a three-level network, graded processing, comprehensive monitoring, comprehensive information application management, a web portal and a security system, and then designs the overall technical frame of this platform. Road administration is the core part of highway administration and the concentrated reflection of external highway administrative management. In addition to the approval of highway property, construction, maintenance and management, its main contents also involve highway maintenance, highway entrance fee management and other road upkeep. It covers a wide range, and there is much information of different types to be processed and used. Although domestic and foreign road administration departments have some information management systems in use, some being special highway road administration information systems [1,2] and some being highway comprehensive information management systems [3], integrated comprehensive highway information management has received little research attention, and a professional, comprehensive, open road administration information platform has not been established yet. Therefore, the authors designed the structure of an advanced road administration information platform. This platform uses a service-oriented architecture (SOA) and integrates advanced information network technology, data communications monitoring and transmission technology, electronic control technology and computer processing techniques with 3S, PDA and other advanced technologies, so that they are effectively integrated and used in highway information management. The construction of the platform will realize effective and scientific management of the highway business, ensure that highways are unblocked, and maintain the road property and rights of highways, thus promoting the informatization of industry management.
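To make the SOA idea concrete, here is a minimal sketch of one road-administration capability exposed as a web service that a portal or monitoring subsystem could consume; the endpoint, section IDs and fields are invented, not the platform's actual interface.

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Toy stand-in for the platform's road-asset registry.
    ROAD_SECTIONS = {"G30-K120": {"status": "open", "toll_station": "East Gate"}}

    @app.route("/roadadmin/sections/<section_id>", methods=["GET"])
    def get_section(section_id):
        """One SOA-style service: query the state of a road section."""
        section = ROAD_SECTIONS.get(section_id)
        return (jsonify(section), 200) if section else (jsonify(error="unknown section"), 404)

    if __name__ == "__main__":
        app.run(port=8080)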
4

Alaa, Rana, Mariam Gawish, and Manuel Fernández-Veiga. "Improving Recommendations for Online Retail Markets Based on Ontology Evolution." Electronics 10, no. 14 (July 11, 2021): 1650. http://dx.doi.org/10.3390/electronics10141650.

Abstract:
The semantic web is considered an extension of the present web. In the semantic web, information is given well-defined meaning, which helps people worldwide cooperate and exchange knowledge. The semantic web plays a significant role in describing contents and services in a machine-readable form. It has been developed based on ontologies, which are deemed the backbone of the semantic web. Ontologies are a key technique with which semantics are annotated, and they provide a common comprehensible foundation for resources on the semantic web. The use of semantics and artificial intelligence leads to what is known as the “Smarter Web”, where it is easy to retrieve what customers want to see on e-commerce platforms, which helps users save time and enhances their search for the products they need. The semantic web, also referred to as Web 3.0, helps enhance systems’ performance. Previous personalized recommendation methods based on ontologies identify users’ preferences by means of static snapshots of purchase data. However, as user preferences evolve with time, one-shot ontology construction is too constrained to capture diverse individual opinions and the evolution of users’ preferences over time. This paper presents a novel recommendation system architecture based on ontology evolution, together with the proposed subsystem architecture for ontology evolution. Furthermore, the paper proposes an ontology building methodology based on a semi-automatic technique, as well as the development of an online retail ontology. Additionally, a recommendation method based on ontology reasoning is proposed. Based on the proposed method, e-retailers can develop a more convenient product recommendation system to support consumers’ purchase decisions.
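The flavour of ontology-based recommendation can be shown with a deliberately tiny, invented retail hierarchy: a product is recommended when its class is, or specialises, a class the user has shown interest in, which is the kind of subsumption inference a reasoner performs over a full ontology.

    # Toy class hierarchy: child class -> parent class (invented retail ontology).
    SUBCLASS_OF = {"RunningShoe": "Shoe", "Shoe": "Apparel", "Laptop": "Electronics"}

    def ancestors(cls):
        """Yield all superclasses of cls: a stand-in for reasoner subsumption."""
        while cls in SUBCLASS_OF:
            cls = SUBCLASS_OF[cls]
            yield cls

    CATALOG = {"Trail Runner X": "RunningShoe", "UltraBook 13": "Laptop"}

    def recommend(user_interest):
        """Recommend products whose class is, or specialises, the user's interest."""
        return [name for name, cls in CATALOG.items()
                if cls == user_interest or user_interest in ancestors(cls)]

    print(recommend("Apparel"))   # -> ['Trail Runner X']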
5

Dankwa, Stephen, and Lu Yang. "Securing IoT Devices: A Robust and Efficient Deep Learning with a Mixed Batch Adversarial Generation Process for CAPTCHA Security Verification." Electronics 10, no. 15 (July 27, 2021): 1798. http://dx.doi.org/10.3390/electronics10151798.

Abstract:
The Internet of Things environment (e.g., smartphones, smart televisions, and smart watches) makes the end-user experience seamless by connecting lives to web services via the internet. Integrating Internet of Things devices poses ethical risks related to data security, privacy, reliability and management, data mining, and knowledge exchange. An adversarial machine learning attack is a good practice to adopt to strengthen the security of text-based CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart), so that they withstand malicious attacks from computer hackers and protect Internet of Things devices and the end user’s privacy. The goal of this study is to perform security vulnerability verification on adversarial text-based CAPTCHAs, based on attacker–defender scenarios. Therefore, this study proposed a computation-efficient deep learning model with a mixed batch adversarial generation process, which attempts to break the transferability attack and to mitigate the problem of catastrophic forgetting in the context of adversarial attack defense. After performing K-fold cross-validation, experimental results showed that the proposed defense model achieved mean accuracies in the range of 82–84% across three gradient-based adversarial attack datasets.
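A minimal sketch of the two ingredients named above, assuming a PyTorch image classifier (not the paper's actual architecture): FGSM-style gradient-based adversarial generation, and one training step on a mixed batch of clean and adversarial inputs.

    import torch
    import torch.nn as nn

    def fgsm(model, x, y, eps=0.03):
        """Generate FGSM adversarial examples from a clean batch."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        # Perturb along the sign of the input gradient, keep pixels valid.
        return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

    def mixed_batch_step(model, optimizer, x, y, eps=0.03):
        """One defense training step on a batch mixing clean and FGSM inputs."""
        half = x.size(0) // 2
        x_adv = fgsm(model, x[half:], y[half:], eps)
        x_mix = torch.cat([x[:half], x_adv], dim=0)   # labels stay aligned with y
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(x_mix), y)
        loss.backward()
        optimizer.step()
        return loss.item()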
6

De Fazio, Roberto, Massimo De Vittorio, and Paolo Visconti. "Innovative IoT Solutions and Wearable Sensing Systems for Monitoring Human Biophysical Parameters: A Review." Electronics 10, no. 14 (July 12, 2021): 1660. http://dx.doi.org/10.3390/electronics10141660.

Abstract:
Digital and information technologies are heavily pervading several aspects of human activity, improving our quality of life. Health systems are undergoing a real technological revolution, radically changing how medical services are provided, thanks to the wide employment of Internet of Things (IoT) platforms supporting advanced monitoring services and intelligent inference systems. This paper first reports a comprehensive overview of innovative sensing systems for monitoring biophysical and psychophysical parameters, all suitable for integration with wearable or portable accessories. Wearable devices are a cornerstone on which IoT-based healthcare platforms are built, providing capillary, real-time monitoring of patients’ conditions. In addition, a survey of modern architectures and of the services supported by IoT platforms for health monitoring is presented, providing useful insights for developing future healthcare systems. All the considered architectures employ wearable devices to gather patient parameters and share them with a cloud platform where they are processed to provide real-time feedback. The reported discussion highlights the structural differences between the discussed frameworks from the point of view of network configuration, data management strategy, feedback modality, etc.
7

Koloveas, Paris, Thanasis Chantzios, Sofia Alevizopoulou, Spiros Skiadopoulos, and Christos Tryfonopoulos. "inTIME: A Machine Learning-Based Framework for Gathering and Leveraging Web Data to Cyber-Threat Intelligence." Electronics 10, no. 7 (March 30, 2021): 818. http://dx.doi.org/10.3390/electronics10070818.

Abstract:
In today’s world, technology has become deep-rooted and more accessible than ever over a plethora of different devices and platforms, ranging from company servers and commodity PCs to mobile phones and wearables, interconnecting a wide range of stakeholders such as households, organizations and critical infrastructures. The sheer volume and variety of the different operating systems, the device particularities, the various usage domains and the accessibility-ready nature of the platforms create a vast and complex threat landscape that is difficult to contain. Staying on top of these evolving cyber-threats has become an increasingly difficult task that presently relies heavily on collecting and utilising cyber-threat intelligence before an attack (or at least shortly after, to minimize the damage) and entails the collection, analysis, leveraging and sharing of huge volumes of data. In this work, we put forward inTIME, a machine learning-based integrated framework that provides a holistic view of the cyber-threat intelligence process and allows security analysts to easily identify, collect, analyse, extract, integrate, and share cyber-threat intelligence from a wide variety of online sources including clear/deep/dark web sites, forums and marketplaces, popular social networks, trusted structured sources (e.g., known security databases), or other datastore types (e.g., pastebins). inTIME is a zero-administration, open-source, integrated framework that enables security analysts and security stakeholders to (i) easily deploy a wide variety of data acquisition services (such as focused web crawlers, site scrapers, domain downloaders, social media monitors), (ii) automatically rank the collected content according to its potential to contain useful intelligence, (iii) identify and extract cyber-threat intelligence and security artifacts via automated natural language understanding processes, (iv) turn the identified intelligence into actionable items via semi-automatic entity disambiguation, linkage and correlation, and (v) manage, share or collaborate on the stored intelligence via open standards and intuitive tools. To the best of our knowledge, this is the first solution in the literature to provide an end-to-end cyber-threat intelligence management platform able to support the complete threat lifecycle via an integrated, simple-to-use, yet extensible framework.
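Step (ii), ranking collected content by its potential intelligence value, can be sketched with off-the-shelf tools (the seed vocabulary and documents are invented; inTIME's actual ranking model is not described here):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Seed vocabulary of threat-intelligence phrasing (assumed, for illustration).
    seed = ["zero-day exploit CVE remote code execution",
            "ransomware payload command and control botnet"]
    docs = ["New CVE advisory allows remote code execution on web servers",
            "Weekend football results and league standings"]

    vec = TfidfVectorizer().fit(seed + docs)
    # Score each crawled document by its closest similarity to any seed phrase.
    scores = cosine_similarity(vec.transform(docs), vec.transform(seed)).max(axis=1)
    for doc, s in sorted(zip(docs, scores), key=lambda p: -p[1]):
        print(f"{s:.2f}  {doc}")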
8

Aghenta, Lawrence Oriaghe, and Mohammad Tariq Iqbal. "Low-Cost, Open Source IoT-Based SCADA System Design Using Thinger.IO and ESP32 Thing." Electronics 8, no. 8 (July 24, 2019): 822. http://dx.doi.org/10.3390/electronics8080822.

Abstract:
Supervisory Control and Data Acquisition (SCADA) is a technology for monitoring and controlling distributed processes. SCADA provides real-time data exchange between a control/monitoring centre and field devices connected to the distributed processes. A SCADA system performs these functions using its four basic elements: Field Instrumentation Devices (FIDs), such as sensors and actuators, which are connected to the distributed process plants being managed; Remote Terminal Units (RTUs), such as single-board computers, for receiving, processing and sending the remote data from the field instrumentation devices; Master Terminal Units (MTUs) for handling data processing and human-machine interactions; and lastly SCADA communication channels for connecting the RTUs to the MTUs and passing the acquired data. Generally, there are two classes of SCADA hardware and software: proprietary (commercial) and open source. In this paper, we present the design and implementation of a low-cost, open source SCADA system using the Thinger.IO local server IoT platform as the MTU and the ESP32 Thing micro-controller as the RTU. SCADA architectures have evolved over the years from monolithic (stand-alone) through distributed and networked architectures to the latest Internet of Things (IoT) architecture. The SCADA system proposed in this work is based on the Internet of Things SCADA architecture, which incorporates web services into conventional (traditional) SCADA for more robust supervisory control and monitoring. It comprises analog current and voltage sensors, the low-power ESP32 Thing micro-controller, a Raspberry Pi single-board computer, and a local Wi-Fi router. In its implementation, the current and voltage sensors acquire the desired data from the process plant; the ESP32 micro-controller receives, processes and sends the acquired sensor data via a Wi-Fi network to the Thinger.IO local server IoT platform for data storage, real-time monitoring and remote control. The Thinger.IO server is locally hosted on the Raspberry Pi, while the Wi-Fi network that forms the SCADA communication channel is created by the Wi-Fi router. In order to test the proposed SCADA system, the designed hardware was set up to remotely monitor the photovoltaic (PV) voltage, current and power, as well as the storage battery voltage, of a 260 W, 12 V solar PV system. The paper presents some of the Human Machine Interfaces (HMIs) created on the Thinger.IO server, where an operator can remotely monitor the data in the cloud, as well as initiate supervisory control activities if the acquired data are not in the expected range, using both a computer connected to the network and the Thinger.IO mobile apps.
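The RTU-to-server flow can be sketched generically; the snippet below uses MQTT as a stand-in for the paper's ESP32-to-Thinger.IO link, with an assumed local broker address, topic name and simulated ADC readings.

    import json, random, time
    import paho.mqtt.client as mqtt

    # paho-mqtt 1.x style; version 2.x additionally requires a CallbackAPIVersion argument.
    client = mqtt.Client()
    client.connect("raspberrypi.local", 1883)   # assumed broker beside the local server

    while True:
        reading = {
            "pv_voltage": round(random.uniform(11.0, 14.0), 2),  # stand-in for ADC reads
            "pv_current": round(random.uniform(0.0, 8.0), 2),
        }
        reading["pv_power"] = round(reading["pv_voltage"] * reading["pv_current"], 2)
        client.publish("scada/pv", json.dumps(reading))          # assumed topic name
        time.sleep(5)                                            # sampling interval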
9

Annapragada, Akshaya V., Marcella M. Donaruma-Kwoh, Ananth V. Annapragada, and Zbigniew A. Starosolski. "A natural language processing and deep learning approach to identify child abuse from pediatric electronic medical records." PLOS ONE 16, no. 2 (February 26, 2021): e0247404. http://dx.doi.org/10.1371/journal.pone.0247404.

Abstract:
Child physical abuse is a leading cause of traumatic injury and death in children. In 2017, child abuse was responsible for 1688 fatalities in the United States, out of 3.5 million children referred to Child Protective Services and 674,000 substantiated victims. While large referral hospitals maintain teams trained in Child Abuse Pediatrics, smaller community hospitals often do not have such dedicated resources to evaluate patients for potential abuse. Moreover, identification of abuse has a low margin of error, as false positive identifications lead to unwarranted separations, while false negatives allow dangerous situations to continue. This context makes the consistent detection of and response to abuse difficult, particularly given subtle signs in young, non-verbal patients. Here, we describe the development of artificial intelligence algorithms that use unstructured free-text in the electronic medical record—including notes from physicians, nurses, and social workers—to identify children who are suspected victims of physical abuse. Importantly, only the notes from the time of first encounter (e.g., birth, routine visit, sickness) to the last record before child protection team involvement were used. This allowed us to develop an algorithm using only information available prior to referral to the specialized child protection team. The study was performed in a multi-center referral pediatric hospital on patients screened for abuse within five different locations between 2015 and 2019. Of 1123 patients, 867 records were available after data cleaning and processing, and 55% were abuse-positive as determined by a multi-disciplinary team of clinical professionals. These electronic medical records were encoded with three natural language processing (NLP) algorithms—Bag of Words (BOW), Word Embeddings (WE), and Rules-Based (RB)—and used to train multiple neural network architectures. The BOW and WE encodings utilize the full free-text, while RB selects crucial phrases as identified by physicians. The best architecture was selected by average classification accuracy for the best performing model from each train-test split of a cross-validation experiment. Natural language processing coupled with neural networks detected cases of likely child abuse using only information available to clinicians prior to child protection team referral, with average accuracy of 0.90±0.02 and average area under the receiver operating characteristic curve (ROC-AUC) of 0.93±0.02 for the best performing Bag of Words models. The best performing rules-based models achieved average accuracy of 0.77±0.04 and average ROC-AUC of 0.81±0.05, while a Word Embeddings strategy was severely limited by lack of representative embeddings. Importantly, the best performing model had a false positive rate of 8%, compared to rates of 20% or higher in previously reported studies. This artificial intelligence approach can help screen patients for whom an abuse concern exists and streamline the identification of patients who may benefit from referral to a child protection team. Furthermore, this approach could be applied to develop computer-aided-diagnosis platforms for the challenging and often intractable problem of reliably identifying pediatric patients suffering from physical abuse.
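A sketch of the Bag-of-Words-plus-neural-network pipeline in miniature, using scikit-learn and invented stand-in notes (the study's clinical text and model details are not reproduced here):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    # Toy stand-in notes and labels (1 = abuse concern, 0 = no concern).
    notes = ["bruising in multiple stages of healing, story inconsistent with injury",
             "routine well-child visit, immunizations up to date"]
    labels = [1, 0]

    # Bag-of-Words encoding feeding a small feed-forward neural network.
    model = make_pipeline(CountVectorizer(),
                          MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))
    model.fit(notes, labels)
    print(model.predict(["unexplained fracture, caregiver reports fall from couch"]))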
10

Le-Tuan, Anh, Conor Hayes, Manfred Hauswirth, and Danh Le-Phuoc. "Pushing the Scalability of RDF Engines on IoT Edge Devices." Sensors 20, no. 10 (May 14, 2020): 2788. http://dx.doi.org/10.3390/s20102788.

Abstract:
Semantic interoperability for the Internet of Things (IoT) is enabled by standards and technologies from the Semantic Web. As recent research suggests a move towards decentralised IoT architectures, we have investigated the scalability and robustness of RDF (Resource Description Framework) engines that can be embedded throughout the architecture, in particular at edge nodes. RDF processing at the edge facilitates the deployment of semantic integration gateways closer to low-level devices. Our focus is on how to enable scalable and robust RDF engines that can operate on lightweight devices. In this paper, we first carried out an empirical study of the scalability and behaviour of solutions for RDF data management on standard computing hardware that have been ported to run on lightweight devices at the network edge. The findings of our study show that these RDF store solutions have several shortcomings on commodity ARM (Advanced RISC Machine) boards that are representative of IoT edge node hardware. Consequently, this inspired us to introduce a lightweight RDF engine, called RDF4Led, comprising RDF storage and a SPARQL processor for lightweight edge devices. RDF4Led follows the RISC-style (Reduced Instruction Set Computer) design philosophy. The design comprises a flash-aware storage structure, an indexing scheme, an alternative buffer management technique and a low-memory-footprint join algorithm, which together demonstrate improved scalability and robustness over competing solutions. With a significantly smaller memory footprint, we show that RDF4Led can handle 2 to 5 times more data than popular RDF engines such as Jena TDB (Tuple Database) and RDF4J, while consuming the same amount of memory. In particular, RDF4Led requires only 10–30% of the memory of its competitors to operate on datasets of up to 50 million triples. On memory-constrained ARM boards, it can perform faster updates and can scale better than Jena TDB and Virtuoso. Furthermore, we demonstrate considerably faster query operations than Jena TDB and RDF4J.
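A toy rendition of the kind of workload benchmarked above, with rdflib standing in for the embedded engines and an invented sensor dataset: bulk-load triples, then time a SPARQL lookup.

    import time
    from rdflib import Graph, Namespace, RDF

    EX = Namespace("http://example.org/sensor#")
    g = Graph()
    for i in range(10_000):   # tiny stand-in for the paper's 50M-triple datasets
        g.add((EX[f"obs{i}"], RDF.type, EX.Observation))
        g.add((EX[f"obs{i}"], EX.madeBy, EX[f"node{i % 10}"]))

    t0 = time.perf_counter()
    rows = list(g.query(
        "SELECT (COUNT(?o) AS ?n) WHERE { "
        "?o a <http://example.org/sensor#Observation> ; "
        "<http://example.org/sensor#madeBy> <http://example.org/sensor#node0> . }"))
    print(rows[0].n, f"query took {time.perf_counter() - t0:.3f}s")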
11

Nayyar, Anand, Pijush Kanti Dutta Pramanik, and Rajni Mohana. "Introduction to the Special Issue on Evolving IoT and Cyber-Physical Systems: Advancements, Applications, and Solutions." Scalable Computing: Practice and Experience 21, no. 3 (August 1, 2020): 347–48. http://dx.doi.org/10.12694/scpe.v21i3.1568.

Abstract:
Internet of Things (IoT) is regarded as a next-generation wave of Information Technology (IT) after the widespread emergence of the Internet and mobile communication technologies. IoT supports information exchange and networked interaction of appliances, vehicles and other objects, making sensing and actuation possible in a low-cost and smart manner. On the other hand, cyber-physical systems (CPS) are described as engineered systems built upon the tight integration of cyber entities (e.g., computation, communication, and control) and physical things (natural and man-made systems governed by the laws of physics). The IoT and CPS are not isolated technologies. Rather, it can be said that IoT is the base or enabling technology for CPS, and CPS is considered the grown-up development of IoT, completing the IoT notion and vision. Both are merged into a closed loop, providing mechanisms for conceptualizing and realizing all aspects of the networked composed systems that are monitored and controlled by computing algorithms and are tightly coupled among users and the Internet. That is, the hardware and the software entities are intertwined, and they typically function on different time and location-based scales. In fact, the linking between the cyber and the physical world is enabled by IoT (through sensors and actuators). CPS, which includes traditional embedded and control systems, is expected to be transformed by the evolving and innovative methodologies and engineering of IoT. Several application areas of IoT and CPS are smart building, smart transport, automated vehicles, smart cities, smart grid, smart manufacturing, smart agriculture, smart healthcare, smart supply chain and logistics, etc. Though CPS and IoT have significant overlaps, they differ in terms of engineering aspects. Engineering IoT systems revolves around uniquely identifiable, internet-connected devices and embedded systems, whereas engineering CPS requires a strong emphasis on the relationship between computation aspects (complex software) and the physical entities (hardware). Engineering CPS is challenging because there is no defined and fixed boundary and relationship between the cyber and physical worlds. In CPS, diverse constituent parts are composed and collaborate together to create unified systems with global behaviour. These systems need to be assured in terms of dependability, safety, security, efficiency, and adherence to real-time constraints. Hence, designing CPS requires knowledge of multidisciplinary areas such as sensing technologies, distributed systems, pervasive and ubiquitous computing, real-time computing, computer networking, control theory, signal processing, embedded systems, etc. CPS, along with the continuously evolving IoT, has posed several challenges. For example, the enormous amount of data collected from physical things makes Big Data management and analytics difficult, covering data normalization, data aggregation, data mining, pattern extraction and information visualization. Similarly, future IoT and CPS need standardized abstraction and architecture that will allow modular design and engineering of IoT and CPS in global and synergetic applications. Another challenging concern of IoT and CPS is the security and reliability of the components and systems.
Although IoT and CPS have attracted the attention of the research communities and several ideas and solutions have been proposed, there are still huge possibilities for innovative propositions to make the IoT and CPS vision successful. The major challenges and research scopes include system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, designing efficient algorithms and protocols, availability and reliability, security and privacy, energy-efficiency and sustainability, etc. It is our great privilege to present Volume 21, Issue 3 of Scalable Computing: Practice and Experience. We received 30 research papers, out of which 14 papers were selected for publication. The objective of this special issue is to explore and report recent advances and disseminate state-of-the-art research related to IoT, CPS and the enabling and associated technologies. The special issue will present new dimensions of research to researchers and industry professionals with regard to IoT and CPS. Vivek Kumar Prasad and Madhuri D Bhavsar, in the paper titled "Monitoring and Prediction of SLA for IoT based Cloud", described mechanisms for monitoring, using the concept of reinforcement learning, and for prediction of cloud resources, which form critical parts of cloud expertise in support of controlling and evolution of the IT resources, implemented using LSTM. The proper utilization of the resources will generate revenues for the provider and also increase the trust factor of the provider of cloud services. For experimental analysis, four parameters have been used, i.e., CPU utilization, disk read throughput, disk write throughput and memory utilization. Kasture et al., in the paper titled "Comparative Study of Speaker Recognition Techniques in IoT Devices for Text Independent Negative Recognition", compared the performance of features used in state-of-the-art speaker recognition models and analysed variants of Mel frequency cepstrum coefficients (MFCC), predominantly used in feature extraction, which can be further incorporated and used in various smart devices. Mahesh Kumar Singh and Om Prakash Rishi, in the paper titled "Event Driven Recommendation System for E-Commerce using Knowledge based Collaborative Filtering Technique", proposed a novel system that uses a knowledge base generated from a knowledge graph to identify the domain knowledge of users, items, and the relationships among these; a knowledge graph is a labelled multidimensional directed graph that represents the relationships among the users and the items. The proposed approach uses nearly 100 percent of users' participation in the form of activities during navigation of the web site. Thus, the system captures the users' interests, which is beneficial for both seller and buyer. The proposed system is compared with baseline methods in the area of recommendation systems using three parameters, precision, recall and NDCG, through online and offline evaluation studies with user data, and it is observed that the proposed system performs better than the other baseline systems. Benbrahim et al.,
in the paper titled "Deep Convolutional Neural Network with TensorFlow and Keras to Classify Skin Cancer", proposed a novel classification model to classify skin tumours in images using deep learning methodology. The proposed system was tested on the HAM10000 dataset comprising 10,015 dermatoscopic images, and the results show that it achieves an accuracy of 94.06% on the validation set and 93.93% on the test set. Devi B et al., in the paper titled "Deadlock Free Resource Management Technique for IoT-Based Post Disaster Recovery Systems", proposed a new class of techniques that do not perform stringent testing before allocating the resources but still ensure that the system is deadlock-free and the overhead is minimal. The proposed technique suggests reserving a portion of the resources to ensure no deadlock would occur. The correctness of the technique is proved in the form of theorems. The average turnaround time is approximately 18% lower for the proposed technique than for Banker's algorithm, with an optimal overhead of O(m). Deep et al., in the paper titled "Access Management of User and Cyber-Physical Device in DBAAS According to Indian IT Laws Using Blockchain", proposed a novel blockchain solution to track the activities of employees managing the cloud. Employee authentication and authorization are managed through the blockchain server, and user-authentication-related data is stored in the blockchain. The proposed work assists cloud companies in having better control over their employees' activities, thus helping to prevent insider attacks on users and cyber-physical devices. Sumit Kumar and Jaspreet Singh, in the paper titled "Internet of Vehicles (IoV) over VANETS: Smart and Secure Communication using IoT", presented a detailed description of the Internet of Vehicles (IoV) with current applications, architectures, communication technologies, routing protocols and open issues. The researchers also elaborated on research challenges and the trade-off between security and privacy in the area of IoV. Deore et al., in the paper titled "A New Approach for Navigation and Traffic Signs Indication Using Map Integrated Augmented Reality for Self-Driving Cars", proposed a new approach to supplement the perception technology used in self-driving cars. The proposed approach uses augmented reality to create and augment artificial objects for navigational signs and traffic signals based on the vehicle's location. This approach helps navigate the vehicle even if the road infrastructure does not have good sign indications and markings. The approach was tested locally by creating a local navigation system and a smartphone-based augmented reality app, and it performed better than the conventional method, as the objects were clearer in the frame, which made it easier for the object detector to detect them. Bhardwaj et al., in the paper titled "A Framework to Systematically Analyse the Trustworthiness of Nodes for Securing IoV Interactions", surveyed the literature on IoV and trust and proposed a hybrid trust model that separates malicious and trusted nodes to secure the interactions of vehicles in IoV. To test the model, simulations were conducted with varied threshold values, and the results show that the PDR of a trusted node is 0.63, higher than the PDR of a malicious node at 0.15. On the basis of PDR, the number of available hops and trust dynamics, the malicious nodes are identified and discarded.
Saniya Zahoor and Roohie Naaz Mir, in the paper titled "A Parallelization Based Data Management Framework for Pervasive IoT Applications", highlighted recent studies and related information on data management for pervasive IoT applications with limited resources. The paper also proposes a parallelization-based data management framework for resource-constrained pervasive applications of IoT. The proposed framework is compared with the sequential approach through simulations and empirical data analysis, and the results show an improvement in the energy, processing, and storage requirements for processing data on the IoT device in the proposed framework as compared to the sequential approach. Patel et al., in the paper titled "Performance Analysis of Video ON-Demand and Live Video Streaming Using Cloud Based Services", presented a review of video analysis over live video streaming (LVS) and video-on-demand (VoD) applications. The researchers compared different messaging brokers, which help deliver each frame in a distributed pipeline, to analyse the impact of two message brokers on video analysis for achieving LVS and VoD using AWS Elemental services. In addition, the researchers analysed the Kafka configuration parameters for reliability in full-service mode. Saniya Zahoor and Roohie Naaz Mir, in the paper titled "Design and Modeling of Resource-Constrained IoT Based Body Area Networks", presented the design and modeling of a resource-constrained BAN system and discussed various scenarios of BAN in the context of resource constraints. The researchers also proposed an Advanced Edge Clustering (AEC) approach to manage resources such as the energy, storage, and processing of BAN devices while performing real-time data capture of critical health parameters and detection of abnormal patterns. The AEC approach is compared with the Stable Election Protocol (SEP) through simulations and empirical data analysis, and the results show an improvement in the energy, processing time and storage requirements for processing data on BAN devices in AEC as compared to SEP. Neelam Saleem Khan and Mohammad Ahsan Chishti, in the paper titled "Security Challenges in Fog and IoT, Blockchain Technology and Cell Tree Solutions: A Review", outlined major authentication issues in IoT, mapped their existing solutions and tabulated Fog and IoT security loopholes. Furthermore, this paper presents blockchain, a decentralized distributed technology, as one of the solutions for authentication issues in IoT. In addition, the researchers discussed the strengths of blockchain technology, the work done in this field and its adoption in the COVID-19 fight, and tabulated various challenges in blockchain technology. The researchers also proposed the Cell Tree architecture as another solution to address some of the security issues in IoT, outlined its advantages over blockchain technology and suggested some future courses of action to stimulate attempts in this area. Bhadwal et al., in the paper titled "A Machine Translation System from Hindi to Sanskrit Language Using Rule Based Approach", proposed a rule-based machine translation system to bridge the language barrier between Hindi and Sanskrit by converting any text in Hindi to Sanskrit. The results are produced in the form of two confusion matrices, wherein a total of 50 random sentences and 100 tokens (Hindi words or phrases) were taken for system evaluation.
The semantic evaluation of the 100 tokens produces an accuracy of 94%, while the pragmatic analysis of the 50 sentences produces an accuracy of around 86%. Hence, the proposed system can be used to understand the whole translation process and can further be employed as a tool for learning as well as teaching. Further, this application can be embedded in local-communication-based assisting Internet of Things (IoT) devices like Alexa or Google Assistant. Anshu Kumar Dwivedi and A.K. Sharma, in the paper titled "NEEF: A Novel Energy Efficient Fuzzy Logic Based Clustering Protocol for Wireless Sensor Network", proposed a deterministic, novel, energy-efficient fuzzy-logic-based clustering protocol (NEEF) that considers primary and secondary factors in the fuzzy logic system while selecting cluster heads. After the selection of cluster heads, non-cluster-head nodes use fuzzy logic for prudent selection of their cluster head for cluster formation. NEEF is simulated and compared with two recent state-of-the-art protocols, namely SCHFTL and DFCR, under two scenarios. Simulation results reveal better performance through load balancing and improvements in terms of stability period, packets forwarded to the base station, average energy and extended lifetime.
12

Sizov, V. A., D. M. Malinichev, and V. V. Mochalov. "Improvement of the Regulatory Framework of Information Security for Terminal Access Devices of the State Information System." Open Education 24, no. 2 (April 22, 2020): 73–79. http://dx.doi.org/10.21686/1818-4243-2020-2-73-79.

Abstract:
The aim of the study is to increase the effectiveness of information security management for state information systems (SIS) with terminal access devices by improving regulatory legal acts, which should be logically interconnected, should not contradict each other, and should use a single professional thesaurus that allows information security processes to be understood and described. Currently, state information systems with terminal access devices are used to ensure the realization of the legitimate interests of citizens in information interaction with public authorities [1]. One type of such systems is public systems [2]. They are designed to provide electronic services to citizens, such as paying taxes, obtaining certificates, filing applications and other information. The processed personal data may belong to special, biometric, publicly available and other categories [3]. Various categories of personal data, concentrated in a large volume about a large number of citizens, can lead to significant damage as a result of their leakage, which means that this creates information risks. There are several basic types of architectures of state information systems: systems based on the “thin client”; peer-to-peer network systems; file server systems; data processing centers; systems with remote user access; the use of different types of operating systems (heterogeneity of the environment); the use of applications independent of operating systems; and the use of dedicated communication channels [4]. Such diversity and heterogeneity of state information systems, on the one hand, and the need for high-quality state regulation of information security in these systems, on the other, require the study and development of legal acts that take into account, first of all, the features of systems with the typical modern “thin client” architecture. Materials and research methods. The protection of a state information system is regulated by a large number of legal acts that are constantly being improved with changes and additions to their content. At the substantive level, it includes many stages, such as the formation of SIS requirements, the development of a security system, its implementation, and certification. The protected information is processed in order to enforce the law and ensure the functioning of the authorities. The need to protect confidential information is determined by the legislation of the Russian Federation [5, 6]. Therefore, to assess the quality of the regulatory framework of information security for terminal access devices of state information systems, the main regulatory legal acts are analysed, and on that basis proposals are developed, by analogy, to improve existing regulatory documents in the field of information security. Results. The paper has developed proposals for improving the regulatory framework of information security for terminal access devices of the state information system: for uniformity and unification, terms with corresponding definitions are justified for their establishment in the documents of the Federal Service for Technical and Export Control (FSTEC) or Rosstandart; and rules are proposed for the formation of requirements for terminals, which should be equivalent to the requirements for computer equipment in the “Concept for the protection of computer equipment and automated systems from unauthorized access to information”. Conclusion.
General recommendations on information protection in state information systems using the “thin client” architecture are proposed, specific threats that are absent from the FSTEC threat bank are justified, and directions for further information security work for the class of state information systems under consideration are identified. Due to the large number of stakeholders involved in the coordination and development of unified solutions, a more specific consideration of the problems and issues raised is possible only with the participation of representatives of authorized federal executive bodies and of business representatives.
13

Antony, Joby, Basanta Mahato, Sachin Sharma, and Gaurav Chitranshi. "Distributed data acquisition and control system based on low cost Embedded Web servers." International Journal of Instrumentation Control and Automation, July 2011, 78–81. http://dx.doi.org/10.47893/ijica.2011.1015.

Abstract:
In the present IT age, we need fully automated industrial systems. The design of a Data Acquisition System (DAS) and its control is a challenging part of any measurement, automation and control application. Advancement in technology is well reflected in, and supported by, changes in measurement and control instrumentation. The move from parallel bus architectures to high-speed serial has become prevalent, and among serial buses Ethernet is the preferred switched option, being both forward-looking and backward-compatible. Great strides have been made in promoting Ethernet for industrial networks and factory automation. Web-based distributed measurement and control is slowly replacing parallel architectures due to its non-crate architecture, which reduces the complexities of cooling, maintenance, etc. for slow-speed field processing. A new kind of expandable, distributed, large-I/O data acquisition system based on low-cost microcontroller-based electronic web server boards [1] has been investigated and developed in this paper; its hardware boards use an 8-bit RISC processor with an Ethernet controller, and its software platform uses AVR-GCC for the firmware and Python for the OS-independent man-machine interface. This system can measure all kinds of electrical and thermal parameters, such as voltage, current, thermocouple and RTD readings, and so on. The measured data can be displayed on web pages at different geographical locations and, at the same time, transmitted through an RJ-45 Ethernet network to a remote DAS or DCS monitoring system using the HTTP protocol. A central embedded single-board computer (SBC) can act as a central CPU to communicate between the web servers automatically.
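Since the boards speak plain HTTP and the man-machine interface is written in Python, a monitoring client could poll each embedded web server along these lines (the board addresses and the /readings path are invented for illustration):

    import requests

    # Hypothetical addresses of the embedded web server boards on the plant network.
    BOARDS = ["http://192.168.1.101", "http://192.168.1.102"]

    def poll(board_url):
        """Fetch one board's current readings over HTTP (path is an assumption)."""
        resp = requests.get(f"{board_url}/readings", timeout=2)
        resp.raise_for_status()
        return resp.text

    for board in BOARDS:
        try:
            print(board, poll(board))
        except requests.RequestException as err:
            print(board, "unreachable:", err)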
14

Naqvi, Naureen, Sabih Ur Rehman, and Zahidul Islam. "A Hyperconnected Smart City Framework." Australasian Journal of Information Systems 24 (August 25, 2020). http://dx.doi.org/10.3127/ajis.v24i0.2531.

Abstract:
Recent technological advancements have given rise to the concept of hyper-connected smart cities being adopted around the world. These cities aspire to achieve better outcomes for citizens by improving the quality of service delivery, information sharing, and creating a sustainable environment. A smart city comprises a network of interconnected devices, also known as the IoT (Internet of Things), which captures data and transmits it to a platform for analysis. This data covers a variety of information produced in large volumes, also known as Big Data. From data capture to processing and storage, there are several stages where a breach in security and privacy could result in catastrophic impacts. Presently there is a gap in the centralization of the knowledge needed to implement smart city services with a secure architecture. To bridge this gap, we present a framework that highlights challenges within smart city applications and synthesizes the techniques feasible to solve them. Additionally, we analyze the impact of a potential breach on smart city applications and the state-of-the-art architectures available. Furthermore, we identify the stakeholders who may have an interest in learning about the relationships between the significant aspects of a smart city. We demonstrate these relationships through force-directed network diagrams, which will help raise awareness amongst stakeholders for planning the development of a smart city. To complement our framework, we designed web-based interactive resources that are available from http://ausdigitech.com/smartcity/.
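As a small illustration of such force-directed diagrams (the aspects and links below are invented examples, not the framework's data), networkx can lay out and draw a relationship graph:

    import matplotlib.pyplot as plt
    import networkx as nx

    # Invented smart-city aspects and relationships for illustration.
    g = nx.Graph([("IoT sensors", "Big Data"), ("Big Data", "Privacy"),
                  ("Privacy", "Security"), ("IoT sensors", "Security"),
                  ("Big Data", "Service delivery")])

    pos = nx.spring_layout(g, seed=42)          # force-directed placement
    nx.draw(g, pos, with_labels=True, node_color="lightblue", node_size=1800)
    plt.show()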
15

Sulova, Snezhana, and Boris Bankov. "Approach for social media content-based analysis for vacation resorts." Journal of Communications Software and Systems 15, no. 3 (September 18, 2019). http://dx.doi.org/10.24138/jcomss.v15i3.712.

Abstract:
The impact of social networks on our lives keeps increasing because they provide content, generated and controlled by users, that is constantly evolving. They aid us in spreading news, statements, ideas and comments very quickly. Social platforms are currently one of the richest sources of customer feedback on a variety of topics. Among the frequently discussed topics are resort and holiday villages and the tourist services offered there. Customer comments are valuable to both travel planners and tour operators. The accumulation of opinions in the web space is a prerequisite for applying appropriate tools for their computer processing and for extracting useful knowledge from them. When working with unstructured data such as social media messages, there is no universal text processing algorithm, because each social network and its resources have their own characteristics. In this article, we propose a new approach for the automated analysis of a static set of historical user messages about holiday and vacation resorts published on Twitter. The approach is based on natural language processing techniques and the application of machine learning methods. The experiments are conducted using the software product RapidMiner.
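A toy stand-in for the message-analysis step (the tweets and labels are invented, and the paper itself works in RapidMiner rather than Python): vectorise short resort-related messages and train a simple sentiment classifier.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    tweets = ["loved the beach resort, staff were amazing",
              "overpriced hotel, dirty rooms, never again",
              "great spa and pool at the vacation village",
              "worst holiday ever, awful food"]
    labels = ["positive", "negative", "positive", "negative"]

    # TF-IDF features feeding a Naive Bayes classifier.
    model = make_pipeline(TfidfVectorizer(), MultinomialNB())
    model.fit(tweets, labels)
    print(model.predict(["the resort pool was fantastic"]))   # e.g. ['positive']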
16

Ruch, Adam, and Steve Collins. "Zoning Laws: Facebook and Google+." M/C Journal 14, no. 5 (October 18, 2011). http://dx.doi.org/10.5204/mcj.411.

Abstract:
As the single most successful social-networking Website to date, Facebook has caused a shift in both the practice and perception of online socialisation, and in its relationship to the offline world. While not the first online social networking service, Facebook’s user base dwarfs its nearest competitors. Mark Zuckerberg’s creation boasts more than 750 million users (Facebook). The currently ailing MySpace claimed a ceiling of 100 million users in 2006 (Cashmore). Further, the accuracy of this number has been contested due to a high proportion of fake or inactive accounts. Facebook, by contrast, claims 50% of its user base logs in at least once a day (Facebook). The popular and mainstream uptake of Facebook has shifted social use of the Internet from various and fragmented niche groups towards a common hub or portal around which much everyday Internet use is centred. The implications are many, but this paper will focus on the progress of what Mimi Marinucci terms the “Facebook effect” (70) and the evolution of lists as a filtering mechanism representing one’s social zones within Facebook. This is in part inspired by the launch of Google’s new social networking service Google+, which includes “circles” as a fundamental design feature for sorting contacts. Circles are an acknowledgement of the shortcomings of the single, unified friends list that defines the Facebook experience. These lists and circles are both manifestations of the same essential concept: our social lives are, in fact, divided into various zones not defined by an online/offline dichotomy, by fantasy role-play, deviant sexual practices, or other marginal or minority interests. What the lists and circles demonstrate is that even very common, mainstream people occupy different roles in everyday life, and that to be effective social tools, social networking sites must grant users control over their various identities and over who knows what about them. Even so, the very nature of computer-based social tools leads to problematic definitions of identities and relationships using discrete terms, in contrast to more fluid, performative constructions of an individual and their relations to others.

Building the Monolith

In 1995, Sherry Turkle wrote that “the Internet has become a significant social laboratory for experimenting with the constructions and reconstructions of self that characterize postmodern life” (180). Turkle describes the various deliberate acts of persona creation possible online in contrast to earlier constraints placed upon the “cycling through different identities” (179). In the past, Turkle argues, “lifelong involvement with families and communities kept such cycling through under fairly stringent control” (180). In effect, Turkle was documenting the proliferation of identity games early adopters of Internet technologies played through various means. Much of what Turkle focused on were MUDs (Multi-User Dungeons) and MOOs (MUD Object Oriented), explicit play-spaces that encouraged identity-play of various kinds. Her contemporary Howard Rheingold focused on what may be described as the more “true to life” communities of the WELL (Whole Earth ‘Lectronic Link) (1–38). In particular, Rheingold explored a community established around the shared experience of parenting, especially of young children. While that community was not explicitly built on the notion of role-play, the parental identity was an important quality of community members.
Unlike contemporary social media networks, these early communities were built on discrete platforms. MUDs, MOOs, Bulletin Board Systems, UseNet Groups and other early Internet communication platforms were generally hosted independently of one another, and in some cases even had to be dialled into via modem separately (such as the WELL). The Internet was a truly disparate entity in 1995. The discreteness of each community supported the cordoning off of individual roles or identities between them. Thus, an individual could quite easily be “Pete,” a member of the parental WELL group, and “Gorak the Destroyer,” a role-player on a fantasy MUD, without the two roles ever being associated with each other. As Turkle points out, even within each MUD ample opportunity existed to play multiple characters (183–192). With only a screen name and associated description to identify an individual within the MUD environment, nothing technical existed to connect one player’s multiple identities, even within the same community. As the Internet has matured, however, the tendency has been shifting towards monolithic hubs, a notion of collecting all of “the Internet” together. From a purely technical and operational perspective, this has led to the emergence of the ISP (Internet service provider). Users can make a connection to one point and then be connected to everything “on the Net,” instead of individually dialling into servers and services one at a time as was the case in the early 1980s with companies such as Prodigy, the Source, CompuServe, and America On-Line (AOL). The early information service providers were largely walled gardens. A CompuServe user could only access information on the CompuServe network. Eventually the Internet became the network of choice and services migrated to it. Standards such as HTTP for Web page delivery and SMTP for email became established and dominate the Internet today. Technically, this has made the Internet much easier to use. The services that have developed on this more rationalised and unified platform have also tended toward monolithic, centralised architectures, despite the Internet’s apparent fundamental lack of a hierarchy. As the Internet replaced the closed networks, the wider Web of HTTP pages, forums, mailing lists and other forms of Internet communication and community thrived. Perhaps they required slightly more technological savvy than the carefully designed experience of walled-garden ISPs such as AOL, but these fora and IRC (Internet Relay Chat) rooms still provided discrete environments within which to role-play. An individual could hold dozens of login names to as many different communities. These various niches could be simply hobby sites and forums where a user would deploy their identity as model train enthusiast, musician, or pet owner. They could also be explicitly about role-play, continuing the tradition of MUDs and MOOs into the new millennium. Pseudo- and polynymity were still very much part of the Internet experience. Even into the early parts of the so-called Web 2.0 explosion of more interactive Websites, which allowed for easier dialog between site owner and viewer, a given identity would be very much tied to a single site, blog or even individual comments. There was no “single sign on” to link my thread from a music forum to the comments I made on a videogame blog to my aquarium photos at an image gallery site. Today, Facebook and Google, among others, seek to change all that.
The Facebook Effect

Working from a psychological background, Turkle explored the multiplicity of online identities as a valuable learning, even therapeutic, experience. She assessed the experiences of individuals who were coming to terms with aspects of their own personalities, from simple shyness to exploring their sexuality. In “You Can’t Front on Facebook,” Mimi Marinucci summarizes an analysis of online behaviour by another psychologist, John Suler (67–70). Suler observed an “online disinhibition effect” characterised by users’ tendency to express themselves more openly online than offline (321). Awareness of this effect was drawn (no pun intended) into popular culture by cartoonist Mike Krahulik’s protagonist John Gabriel. Although Krahulik’s summation is straight to the point, Suler offers a more considered explanation. There are six general reasons for the online disinhibition effect: being anonymous, being invisible, the communications being out of sync, the strange sensation that a virtual interlocutor is all in the mind of the user, the general sense that the online world simply is not real, and the minimisation of status and authority (321–325). Of the six, the notion of anonymity is most problematic, as briefly explored above in the case of AOL. The role of pseudonymity has been explored in more detail in Ruch, and will be considered with regard to Facebook and Google+ below. The Facebook effect, Marinucci argues, mitigates all six of these issues. Though Marinucci explains the mitigation of each factor individually, her final conclusion is the most compelling reason: “Facebook often facilitates what is best described as an integration of identities, and this integration of identities in turn functions as something of an inhibiting factor” (73). Ruch identifies this phenomenon as the “aggregation of identities” (219). Similarly, Brady Robards observes that “social network sites such as MySpace and Facebook collapse the entire array of social relationships into just one category, that of ‘Friend’” (20). Unlike earlier community sites, Ruch notes, “Facebook rejects both the mythical anonymity of the Internet, but also the actual pseudo- or polynonymous potential of the technologies” (219). Essentially, Facebook works to bring the offline social world online, along with all the conventional baggage that accompanies the individual’s real-world social life. Facebook, and now Google+, present a hard, dichotomous approach to online identity: anonymous and authentic. Their socially networked individual is the “real” one, using a person’s given name, and bringing all (or as many as the sites can capture) of their contacts from the offline world into the online one, regardless of context. The Facebook experience is one of “friending” everyone one has any social contact with into one homogeneous group. Not only is Facebook avoiding the multiple online identities that interested Turkle, but it is disregarding any multiplicity of identity anywhere, including any online/offline split. David Kirkpatrick reports that Mark Zuckerberg’s rejection of this construction of identity is explained by his belief that “You have one identity … having two identities for yourself is an example of a lack of integrity” (199). Arguably, Zuckerberg’s calls for accountability through identity continue a perennial concern about anonymity online, fuelled by “on the Internet no one knows you’re a dog” style moral panics.
Over two decades ago, Lindsy Van Gelder recounted the now infamous case of "Joan and Alex" (533) and Julian Dibbell recounted "a rape in cyberspace" (11). More recent anxieties concern the hacking escapades of Anonymous and LulzSec. Zuckerberg's approach has been criticised by Christopher Poole, the founder of 4Chan—a bastion of Internet anonymity. During his keynote presentation at South by SouthWest 2011, Poole argued that Zuckerberg "equates anonymity with a lack of authenticity, almost a cowardice." Yet in spite of these objections, Facebook has mainstream appeal. From a social constructivist perspective, this approach to identity would be satisfying the (perceived?) need for a mainstream, context-free, general social space online to cater for the hundreds of millions of people who now use the Internet. There is no specific, pre-defined reason to join Facebook in the way there is a particular reason to join a heavy metal music message board. Facebook is catering to the need to bring "real" social life online generally, with "real" in this case meaning "offline and pre-existing." Very real risks of missing "real life" social events (engagements, new babies, party invitations, etc.) that were shared primarily via Facebook became salient to large groups of individuals not consciously concerned with some particular facet of identity performance. The commercial imperatives towards a monolithic Internet and identity are obvious. Given that both Facebook and Google+ are in the business of facilitating the sale of advertising, their core business value is the demographic information they can sell to various companies for targeted advertising. Knowing a user's individual identity and tastes is extremely important to those in the business of selling consumers what they currently want as well as predicting their future desires. The problem with this is the dawning realisation that, even for the average person, role-playing is part of everyday life. We simply aren't the same person in all contexts. None of the roles we play need to be particularly scandalous for this to be true, but we have different comfort zones with people that are fuelled by context. Suler proposes, and Marinucci confirms, that inhibition may be just as much part of our authentic self as the uninhibited expression experienced in more anonymous circumstances. Further, different contexts will inform what we inhibit and what we express. It is not as though there is a simple binary between two different groups and two different personal characteristics to oscillate between. The inhibited persona one occupies at one's grandmother's home is different from the inhibited self one plays at a job interview or in a heated discussion with faculty members at a university. One is politeness, the second professionalism, the third scholarly—yet they all restrain the individual in different ways.

The Importance of Control over Circles

Google+ is Google's latest foray into the social networking arena. Its previous ventures, Orkut and Google Buzz, did not fare well; both were variously marred by legal issues concerning privacy, security, spam and hate groups. Buzz in particular fell afoul of associating Google accounts with users' real life identities and, as noted earlier, all the baggage that comes with them. "One user blogged about how Buzz automatically added her abusive ex-boyfriend as a follower and exposed her communications with a current partner to him.
Other bloggers commented that repressive governments in countries such as China or Iran could use Buzz to expose dissidents" (Novak). Google+ takes a different approach to its predecessors and its main rival, Facebook. Facebook allows for the organisation of "friends" into lists, and individuals can span more than one list. This is an exercise analogous to what Erving Goffman refers to as "audience segregation" (139). According to the site's own statistics the average Facebook user has 130 friends, so organising one's friends according to real life social contexts would be time-consuming. Yet without such organisation, Facebook overlooks the social structures and concomitant behaviours inherent in everyday life. Even broad groups offer little assistance. For example, an academic's "Work People" list may include the Head of Department as well as numerous other lecturers with whom a workspace is shared. There are things one might share with immediate colleagues that should not be shared with the Head of Department. As Goffman states, "when audience segregation fails and an outsider happens upon a performance that was not meant for him, difficult problems in impression management arise" (139). By homogenising "friends" and social contexts, users are either inhibited or run the risk of future awkward encounters. Google+ utilises "circles" as its method for organising contacts. The graphical user interface is intuitive, facilitated by an easy drag-and-drop function. "Circles" already exists in the vocabulary used to describe our social structures; "list," by contrast, reduces the subject matter to simple data. The utility of Facebook's friend lists is hindered by usability issues—an unintuitive and convoluted process that was added to Facebook well after its launch, perhaps a reaction to privacy concerns rather than a genuine attempt to emulate social organisation. For a cogent breakdown of these technical and design problems see Augusto Sellhorn. Organising friends into lists is a function offered by Facebook, but Google+ takes a different approach: organising friends into circles is a central feature, and the whole experience is centred around attempting to mirror the social relations of real life. Google's promotional video explains the centrality of emulating "real life relationships" (Google). Effectively, Facebook and Google+ have adopted two different systemic approaches to dealing with the same issue. Facebook places the burden of organising a homogeneous mass of "friends" into lists on the user, as an afterthought of connecting with another user. In contrast, Google+ builds organisation into the act of connecting.
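To make the comparison concrete, the following minimal sketch (in Python, with entirely hypothetical names and structures; neither service has published its data model in this form) treats circle-based sharing as a simple set operation: a post's audience is the union of the circles it is shared with, so a contact outside those circles never sees it.

from dataclasses import dataclass, field

@dataclass
class Profile:
    owner: str
    # Maps a circle name to the set of contacts placed in that circle.
    circles: dict = field(default_factory=dict)

    def add_to_circle(self, circle, contact):
        self.circles.setdefault(circle, set()).add(contact)

    def audience(self, *circle_names):
        # The union of the selected circles is the post's entire audience.
        return set().union(*(self.circles.get(c, set()) for c in circle_names))

profile = Profile("academic")
profile.add_to_circle("work_people", "head_of_department")
profile.add_to_circle("work_people", "lecturer_a")
profile.add_to_circle("confidants", "lecturer_a")

# A complaint about the department is shared with confidants only; the
# Head of Department is not in that audience, so Goffman's "outsider
# happening upon a performance" is prevented by design.
assert "head_of_department" not in profile.audience("confidants")
assert "lecturer_a" in profile.audience("confidants")

A Facebook-style list can express the same structure; the salient difference is where the burden of assignment falls, at the moment of connecting or retrofitted as an afterthought.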
Whilst Google+'s approach is more intuitive and designed to facilitate social networking that more accurately reflects how real life social relationships are structured, it suffers from forcing a direct correlation between an account and the account holder. That is, use of Google+ mandates bringing the offline online. Google+ operates a real names policy, and on the weekend of 23 July 2011 it suspended a number of accounts for violation of Google's Community Standards. A suspension notice posted by Violet Blue reads: "After reviewing your profile, we determined the name you provided violates our Community Standards." Open source technologist Kirrily Robert polled 119 Google+ users about their experiences with the real names policy. The results posted to her blog reveal that users desire pseudonymity, many for reasons of privacy and/or safety rather than the lack of integrity alleged by Zuckerberg. boyd argues that Google's real names policy is an abuse of power and poses a danger to those users employing "nicks" for reasons including being a government employee or the victim of stalking, rape or domestic abuse. A comprehensive list of those at risk has been posted to the Geek Feminism Wiki (ironically, the Wiki utilises "Connect," Facebook's attempt at a single sign on solution for the Web that connects users' movements with their Facebook profile). Facebook has a culture of real names stemming from its early adopters, drawn from trusted communities, and this culture became a norm for that service (boyd). But as boyd also points out, "[r]eal names are by no means universal on Facebook." Google+ demands real names, a demand justified by the rhetoric of designing a social networking system that is more like real life. "Real," in this case, is represented by one's given name—irrespective of the authenticity of one's pseudonym or the complications and dangers of using one's given name.

Conclusion

There is a multiplicity of issues concerning social networks and identities, privacy and safety. This paper has outlined the challenges involved in moving real life to the online environment and the contests in trying to designate zones of social context. Where some earlier research into the social Internet has had a positive (even utopian) feel, the contemporary Internet is increasingly influenced by powerful and competing corporations. As a result, the experience of the Internet is not necessarily as flexible as Turkle or Rheingold might have envisioned. Rather than conducting identity experimentation or exercising multiple personae, we are increasingly obligated to perform identity as it is defined by monolithic service providers such as Facebook and Google+. This is not purely an indictment of Facebook or Google's corporate drive, though they are obviously implicated, but has as much to do with the new social practice of "being online." So, while there are myriad benefits to participating in this new social context, as Poole noted, the "cost of failure is really high when you're contributing as yourself." Areas for further exploration include the implications of Facebook positioning itself as a general-purpose user authentication tool whereby users can log into a wide array of Websites using their Facebook credentials. If Google were to take a similar action the implications would be even more convoluted, given the range of other services Google offers, from Gmail to the Google Checkout payment service. While the monolithic centralisation of these services will have obvious benefits, there will be many more subtle problems which must be addressed.

References

Blue, Violet. "Google Plus Deleting Accounts en Masse: No Clear Answers." zdnet.com (2011). 10 Aug. 2011 ‹http://www.zdnet.com/blog/violetblue/google-plus-deleting-accounts-en-masse-no-clear-answers/56›.
boyd, danah. "Real Names Policies Are an Abuse of Power." zephoria.org (2011). 10 Aug. 2011 ‹http://www.zephoria.org/thoughts/archives/2011/08/04/real-names.html›.
Cashmore, Pete. "MySpace Hits 100 Million Accounts." mashable.com (2006). 10 Aug. 2011 ‹http://mashable.com/2006/08/09/myspace-hits-100-million-accounts›.
Dibbell, Julian. My Tiny Life: Crime and Passion in a Virtual World. New York: Henry Holt & Company, 1998.
Facebook. "Fact Sheet." Facebook (2011). 10 Aug. 2011 ‹http://www.facebook.com/press/info.php?statistic›.
Geek Feminism Wiki. "Who Is Harmed by a 'Real Names' Policy?" 2011. 10 Aug. 2011 ‹http://geekfeminism.wikia.com/wiki/Who_is_harmed_by_a_%22Real_Names%22_policy›.
Goffman, Erving. The Presentation of Self in Everyday Life. London: Penguin, 1959.
Google. "The Google+ Project: Explore Circles." YouTube.com (2011). 10 Aug. 2011 ‹http://www.youtube.com/watch?v=ocPeAdpe_A8›.
Kirkpatrick, David. The Facebook Effect. New York: Simon & Schuster, 2010.
Marinucci, Mimi. "You Can't Front on Facebook." Facebook and Philosophy. Ed. Dylan Wittkower. Chicago & La Salle, Illinois: Open Court, 2010. 65–74.
Novak, Peter. "Privacy Commissioner Reviewing Google Buzz." CBC News: Technology and Science (2010). 10 Aug. 2011 ‹http://www.cbc.ca/news/technology/story/2010/02/16/google-buzz-privacy.html›.
Poole, Christopher. Keynote presentation. South by SouthWest. Austin, Texas, 2011.
Rheingold, Howard. The Virtual Community: Homesteading on the Electronic Frontier. New York: Harper Perennial, 1993.
Robards, Brady. "Negotiating Identity and Integrity on Social Network Sites for Educators." International Journal for Educational Integrity 6.2 (2010): 19–23.
Robert, Kirrily. "Preliminary Results of My Survey of Suspended Google Accounts." 2011. 10 Aug. 2011 ‹http://infotrope.net/2011/07/25/preliminary-results-of-my-survey-of-suspended-google-accounts/›.
Ruch, Adam. "The Decline of Pseudonymity." Posthumanity. Eds. Adam Ruch and Ewan Kirkland. Oxford: Inter-Disciplinary.net Press, 2010. 211–220.
Sellhorn, Augusto. "Facebook Friend Lists Suck When Compared to Google+ Circles." sellmic.com (2011). 10 Aug. 2011 ‹http://sellmic.com/blog/2011/07/01/facebook-friend-lists-suck-when-compared-to-googleplus-circles›.
Suler, John. "The Online Disinhibition Effect." CyberPsychology and Behavior 7 (2004): 321–326.
Turkle, Sherry. Life on the Screen: Identity in the Age of the Internet. New York: Simon & Schuster, 1995.
Van Gelder, Lindsy. "The Strange Case of the Electronic Lover." Computerization and Controversy: Value Conflicts and Social Choices. Ed. Rob Kling. New York: Academic Press, 1996. 533–46.
APA, Harvard, Vancouver, ISO, and other styles
17

Moore, Christopher Luke. "Digital Games Distribution: The Presence of the Past and the Future of Obsolescence." M/C Journal 12, no. 3 (July 15, 2009). http://dx.doi.org/10.5204/mcj.166.

Full text
Abstract:
A common criticism of the rhythm video games genre, including series like Guitar Hero and Rock Band, is that playing musical simulation games is a waste of time when you could be playing an actual guitar and learning a real skill. A more serious criticism of games cultures draws attention to the degree of e-waste they produce. E-waste, or electronic waste, includes mobile phones, computers, televisions and other electronic devices containing toxic chemicals and metals whose landfill, recycling and salvaging all produce distinct environmental and social problems. The e-waste produced by games like Guitar Hero is obvious in the regular flow of merchandise transforming computer and video games stores into simulation music stores, filled with replica guitars, drum kits, microphones and other products whose half-lives are short and whose obsolescence is anticipated in the annual cycles of consumption and disposal. This paper explores the connection between e-waste and obsolescence in the games industry, and argues for the further consideration of consumers as part of the solution to the problem of e-waste. It uses a case study of the PC digital distribution software platform, Steam, to suggest that the digital distribution of games may offer an alternative model to market-driven software and hardware obsolescence and, more generally, that such software platforms might be a place to support cultures of consumption that delay rather than promote hardware obsolescence and its inevitability as e-waste. The question is whether there exists a potential for digital distribution to be a means of not only eliminating the need to physically transport commodities (its current 'green' benefit), but also of supporting consumer practices that further reduce e-waste. The games industry relies on a rapid production and innovation cycle, one that actively enforces hardware obsolescence. Current video game consoles, including the PlayStation 3, the Xbox 360 and the Nintendo Wii, are the seventh generation of home gaming consoles to appear within forty years, and each generation is accompanied by an immense international transportation of games hardware, software (in various storage formats) and peripherals. Obsolescence also occurs at the software or content level and is significant because the games industry, as a creative industry, is dependent on the extensive management of multiple intellectual properties. The computing and video games software industry operates in close partnership with the hardware industry, and as such, software obsolescence directly contributes to hardware obsolescence. The obsolescence of content, and the redundancy of the methods of policing its scarcity in the marketplace, has been accelerated and altered by the processes of disintermediation, with a range of outcomes (Flew). The music industry is perhaps the most advanced in terms of disintermediation, with digital distribution at the centre of the conflict between legitimate and unauthorised access to intellectual property. This points to one issue with the hypothesis that digital distribution can lead to a reduction in hardware obsolescence: the marketplace leader and key online distributor of music, Apple, is also a major producer of new media technologies and devices that are the paragon of stylistic obsolescence. Stylistic obsolescence, in which fashion changes products across seasons of consumption, has long been observed as the dominant form of scaled industrial innovation (Slade).
Stylistic obsolescence is differentiated from mechanical or technological obsolescence, the deliberate supersession of products by more advanced designs, better production techniques and other minor innovations. The line between stylistic and technological obsolescence is not always clear, especially as reduced durability has become a powerful market strategy (Fitzpatrick). This occurs where the design of technologies is subsumed within the discourses of manufacturing, consumption and the logic of planned obsolescence, in which the product or parts are intended to fail, degrade or underperform over time. It is especially the case with signature new media technologies such as laptop computers, mobile phones and portable games devices. Gamers are as guilty as other consumer groups in contributing to e-waste as participants in the industry's cycles of planned obsolescence, but some of them complicate discussions over the future of obsolescence and e-waste. Many gamers actively work to forestall the obsolescence of their games: they invest time in the play of older games ("retrogaming"); they donate labour and creative energy to the production of user-generated content as a means of sustaining involvement in gaming communities; and they produce entirely new game experiences for other users, based on existing software and hardware modifications known as 'mods'. With Guitar Hero and other 'rhythm' games it would be easy to argue that the hardware components of this genre have only one future: as waste. Alternatively, we could consider the actual lifespan of these objects (including their impact as e-waste) and the roles they play in the performances and practices of communities of gamers. For example, the Elmo Guitar Hero controller mod, the Tesla coil Guitar Hero controller interface, the Rock Band Speak n' Spellbinder mashup, the multiple and almost sacrilegious Fender Guitar Hero mods, the Guitar Hero Portable Turntable Mod and MAKE magazine's Trumpet Hero all indicate a significant diversity of user innovation, community formation and individual investment in the post-retail life of computer and video game hardware. Obsolescence is not just a problem for the games industry but for the computing and electronics industries more broadly, as direct contributors to the social and environmental cost of electrical waste and obsolete electrical equipment. Planned obsolescence has long been the experience of gamers and computer users, as the basis of a utopian mythology of upgrades (Dovey and Kennedy). For PC users the upgrade pathway is traversed by the consumption of further hardware and software post initial purchase, in a cycle of endless consumption, acquisition and waste (as older parts are replaced and eventually discarded). The accumulation and disposal of these cultural artefacts do not devalue or accrue in space or time at the same rate (Straw), and many users will persist for years, gradually upgrading, delaying obsolescence and even perpetuating the circulation of older cultural commodities. Flea markets and secondhand fairs are popular sites for the purchase of new, recent, old, and recycled computer hardware and peripherals. Such practices and parallel markets support the strategies of 'making do' described by De Certeau, but they also continue the cycle of upgrade and obsolescence, and they are still consumed as part of the promise of the 'new' and the desire for a purchase that will finally 'fix' the user's computer in a state of completion (29).
The planned obsolescence of new media technologies is common, but its success is mixed; for example, support for Microsoft's operating system Windows XP was officially withdrawn in April 2009 (Robinson), but due to the popularity of low-cost PC 'netbooks' outfitted with an optimised XP operating system, and a less than enthusiastic response to the 'next generation' Windows Vista, XP continues to be popular.

Digital Distribution: A Solution?

Gamers may be able to reduce the accumulation of e-waste by supporting the disintermediation of the games retail sector by means of online distribution. Disintermediation is the establishment of a direct relationship between the creators of content and their consumers through products and services offered by content producers (Flew 201). The move to digital distribution has already begun to reduce the need to physically handle commodities, but this currently signals only further support of planned, stylistic and technological obsolescence, increasing the rate at which the commodities for recording, storing, distributing and exhibiting digital content become e-waste. Digital distribution is sometimes overlooked as a potential means for promoting communities of user practice dedicated to e-waste reduction; at the same time it is actively employed to reduce the potential for the unregulated appropriation of content and to restrict post-purchase sales through Digital Rights Management (DRM) technologies. Distributors like Amazon.com continue to pursue commercial opportunities in linking the user to digital distribution of content via exclusive hardware and software technologies. The Amazon e-book reader, the Kindle, operates via a proprietary mobile network using a commercially run version of the wireless 3G protocols. The e-book reader is heavily encrypted with Digital Rights Management (DRM) technologies and exclusive digital book formats designed to enforce current copyright restrictions and eliminate second-hand sales, lending, and further post-purchase distribution. The success of this mode of distribution is connected to Amazon's ability to tap both the mainstream market and the consumer demand for the less-than-popular: those books, movies, music and television series that may not have been 'hits' at the time of release. The desire to revisit forgotten niches, such as B-sides, comics, books, and older video games, suggests Chris Anderson, is linked with so-called "long tail" economics. Recently Webb has queried the economic impact of the Long Tail as a business strategy, but does not deny the underlying dynamics, which suggest that content does not obsolesce in any straightforward way. Niche markets for older content are nourished by participatory cultures and Web 2.0 style online services. A good example of the Long Tail phenomenon is the recent case of the 1971 book A Lion Called Christian, by Anthony Burke and John Rendall, republished after the authors' film of a visit to a resettled Christian in Africa was popularised on YouTube in 2008. Anderson's Long Tail theory suggests that over time a large number of items, each with unique rather than mass histories, will be subsumed as part of a larger community of consumers, including fans, collectors and everyday users with a long-term interest in their use and preservation.
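Anderson's claim is essentially quantitative, so a toy calculation may help. The sketch below (Python; the Zipf-like demand curve, exponent and catalogue sizes are invented for illustration and are not drawn from Anderson or Webb) shows how, under such a curve, the many niche titles taken together can outsell the few hits, which is what makes an effectively unlimited digital shelf commercially interesting.

# Zipf-like demand: sales fall off as a power of popularity rank.
def demand(rank, alpha=1.0):
    return 1.0 / rank ** alpha

hits = sum(demand(r) for r in range(1, 101))         # the top 100 titles
tail = sum(demand(r) for r in range(101, 100_001))   # the other 99,900

# With alpha = 1 the tail collectively outsells the hits (roughly 57%
# of total demand here), despite each niche title selling very little.
print(f"hits: {hits:.1f}  tail: {tail:.1f}  tail share: {tail / (hits + tail):.0%}")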
If digital distribution platforms are to reduce e-waste, this can perhaps be fostered by ensuring not only that digital consumers are able to make morally and ethically aware consumer decisions, but also that they enjoy traditional consumer freedoms, such as the right to sell on and change or modify their property. For it is not only the fixation on the 'next generation' that contributes to obsolescence, but also technologies like DRM systems that discourage second-hand sales and restrict modification. The legislative upgrades, patches and amendments to copyright law that have attempted to maintain the law's effectiveness in competing with peer-to-peer networks have supported DRM and other intellectual property enforcement technologies, despite the difficulties that owners of intellectual property have encountered with the effectiveness of DRM systems (Moore, "Creative"). The games industry continues to experiment with DRM; however, this industry also stands out as one of the few to have significantly incorporated the user within its official modes of production (Moore, "Commonising"). Is the games industry capable of supporting (or willing to support) a digital delivery system that attempts to minimise or even reverse software and hardware obsolescence? We can try to answer this question by looking in detail at the biggest digital distributor of PC games, Steam.

Steam

[Figure 1: The Steam application user interface, retail section.]

Steam is a digital distribution system designed for the Microsoft Windows operating system and operated by the American video game development company and publisher Valve Corporation. Steam combines online games retail, DRM technologies and Internet-based distribution services with social networking and multiplayer features (in-game voice and text chat, user profiles, etc.) and direct support for major games publishers, independent producers, and communities of user-contributors (modders). Steam, like the iTunes games store, Xbox Live and other digital distributors, provides consumers with direct digital downloads of new, recent and classic titles that can be accessed remotely by the user from any (Internet-equipped) location. Steam was first packaged with the physical distribution of Half-Life 2 in 2004, and the platform's eventual popularity is tied to the success of that game franchise. Steam was not an optional component of the game's installation, and many gamers protested in various online forums, while the platform was treated with suspicion by the global PC games press. It did not help that Steam was at launch everything that gamers take objection to: a persistent and initially 'buggy' piece of software that sits in the PC's operating system and occupies limited memory resources at the cost of hardware performance. Regular updates to the Steam software platform introduced social network features just as mainstream sites like MySpace and Facebook were emerging, and its popularity has undergone rapid subsequent growth. Steam now eclipses competitors with more than 20 million user accounts (Leahy), and Valve Corporation makes it publicly known that Steam collects large amounts of data about its users. This information is available via the public player profile in the community section of the Steam application. It includes the average number of hours the user plays per week, and can even indicate the difficulty the user has in navigating game obstacles.
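As a rough illustration of what such aggregate, opt-in statistics involve (including the hardware survey discussed next), here is a minimal sketch in Python; the record layout is entirely hypothetical and is not Valve's format. Individual client reports are reduced to population-level figures of the kind Steam publishes: shares of users per hardware configuration and average play time.

from collections import Counter

# Each record stands in for one opted-in client's report (invented data).
reports = [
    {"user": "a", "cpus": 2, "weekly_hours": 12.5},
    {"user": "b", "cpus": 1, "weekly_hours": 3.0},
    {"user": "c", "cpus": 2, "weekly_hours": 20.0},
    {"user": "d", "cpus": 1, "weekly_hours": 7.5},
]

cpu_share = Counter(r["cpus"] for r in reports)
avg_hours = sum(r["weekly_hours"] for r in reports) / len(reports)

for cpus, n in sorted(cpu_share.items()):
    print(f"{cpus} CPU(s): {n / len(reports):.0%} of surveyed users")
print(f"average weekly hours: {avg_hours:.1f}")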
Valve reports on the number of users on Steam every two hours via its website, with a population on average between one and two million simultaneous users (Valve, "Steam"). We know these users' hardware profiles because Valve Corporation makes the results of its surveillance public knowledge via the Steam Hardware Survey. Valve's hardware survey itself conceptualises obsolescence in two ways. First, it uses the results to define the 'cutting edge' of PC technologies, publishing the standards of its own high-end production hardware on the company's blog. Second, the effect of the Survey is to subsequently define obsolescent hardware: for example, in the Survey results for April 2009, we can see that a slight majority of users maintain computers with two central processing units, while a significant proportion (almost one third) of users still maintain much older PCs with a single CPU. Both effects of the Survey appear to be well understood by Valve: "The Steam Hardware Survey automatically collects information about the community's computer hardware configurations and presents an aggregate picture of the stats on our web site. The survey helps us make better engineering and gameplay decisions, because it makes sure we're targeting machines our customers actually use, rather than measuring only against the hardware we've got in the office. We often get asked about the configuration of the machines we build around the office to do both game and Steam development. We also tend to turn over machines in the office pretty rapidly, at roughly every 18 months." (Valve, "Team Fortress") Valve's support of older hardware might counter perceptions that older PCs have no use and begins to reverse decades of opinion regarding planned and stylistic obsolescence in the PC hardware and software industries. Equally significant to the extension of the lives of older PCs is Steam's support for mods and its promotion of user-generated content. By providing software for mod creation and distribution, Steam maximises what Postigo calls the development potential of fan-programmers. One of the 'payoffs' in the information/access exchange for the user with Steam is the degree to which Valve's End-User Licence Agreement (EULA) permits individuals and communities of 'modders' to appropriate its proprietary game content for use in the creation of new games and games materials for redistribution via Steam. These mods extend the play of the older games by requiring their purchase via Steam in order for the individual user to participate in the modded experience. If Steam is able to encourage this kind of appropriation and community support for older content, then the potential exists for it to support cultures of consumption and practices of use that collaboratively maintain, extend, and prolong the life and use of games. Further, Steam incorporates the insights of "long tail" economics in a purely digital distribution model, in which the obsolescence of 'non-hit' game titles can be dramatically overturned. Published in November 2007, Unreal Tournament 3 (UT3) by Epic Games was unappreciated in a market saturated with games in the first-person shooter genre. Epic republished UT3 on Steam 18 months later, making the game available to play for free for one weekend, followed by discounted access to new content.
The 2000 per cent increase in players over the game's 'free' trial weekend has translated into enough sales of the game for Epic to no longer consider the release a commercial failure: "It's an incredible precedent to set: making a game a success almost 18 months after a poor launch. It's something that could only have happened now, and with a system like Steam... Something that silently updates a purchase with patches and extra content automatically, so you don't have to make the decision to seek out some exciting new feature: it's just there anyway. Something that, if you don't already own it, advertises that game to you at an agreeably reduced price whenever it loads. Something that enjoys a vast community who are in turn plugged into a sea of smaller relevant communities. It's incredibly sinister. It's also incredibly exciting..." (Meer) Clearly concerns exist about Steam's user privacy policy, but this also invites us to think about the economic relationship between gamers and games companies as it is reconfigured through the private contractual relationship established by the EULA which accompanies the digital distribution model. The games industry has established contractual and licensing arrangements with its consumer base in order to support and reincorporate emerging trends in user-generated cultures and other cultural formations within its official modes of production (Moore, "Commonising"). When we consider that Valve gets to tax sales of its virtual goods and can further sell the information farmed from its users to hardware manufacturers, it is reasonable to consider the relationship between the corporation and its gamers as exploitative. Gabe Newell, the Valve co-founder and managing director, conversely believes that people are willing to give up personal information if they feel it is being used to get better services (Leahy). If that sentiment is correct, then consumers may be willing to further trade for services that can reduce obsolescence and begin to address the problems of e-waste from the ground up.

Conclusion

Clearly, there is a potential for digital distribution to be a means of not only eliminating the need to physically transport commodities but also supporting consumer practices that further reduce e-waste. For an industry where only a small proportion of the games made break even, the successful relaunch of older games content indicates Steam's capacity to ameliorate software obsolescence. Digital distribution extends the use of commercially released games by providing disintermediated access to older and user-generated content. For Valve, this occurs within a network of exchange, as access to user-generated content, social networking services, and support for the organisation and coordination of communities of gamers is traded for user information and repeat business. Evidence for whether this will actively translate to an equivalent decrease in the obsolescence of game hardware might be observed with indicators like the Steam Hardware Survey in the future. The degree of potential offered by digital distribution is disrupted by a range of technical, commercial and legal hurdles, chief among which is the deployment of DRM as part of a range of techniques designed to limit consumer behaviour post-purchase.
While intervention in the form of legislation and radical change to the insidious nature of electronics production is crucial in order to achieve long-term reduction in e-waste, the user is currently considered only in terms of 'ethical' consumption and ultimately divested of responsibility through participation in corporate, state and civil recycling and e-waste management operations. The message is either 'careful what you purchase' or 'careful how you throw it away' and, like DRM, it ignores the connections between product, producer and user, and the consumer support for environmentally, ethically and socially positive production, distribution, disposal and recycling. This article has adopted a different strategy, one that sees digital distribution platforms like Steam as capable, if not currently active, in supporting community practices that should be seriously considered in conjunction with a range of approaches to the challenge of obsolescence and e-waste.

References

Anderson, Chris. "The Long Tail." Wired Magazine 12.10 (2004). 20 Apr. 2009 ‹http://www.wired.com/wired/archive/12.10/tail.html›.
De Certeau, Michel. The Practice of Everyday Life. Berkeley: U of California P, 1984.
Dovey, Jon, and Helen Kennedy. Game Cultures: Computer Games as New Media. London: Open University Press, 2006.
Fitzpatrick, Kathleen. The Anxiety of Obsolescence. Nashville: Vanderbilt UP, 2008.
Flew, Terry. New Media: An Introduction. South Melbourne: Oxford UP, 2008.
Leahy, Brian. "Live Blog: DICE 2009 Keynote - Gabe Newell, Valve Software." The Feed. G4TV 18 Feb. 2009. 16 Apr. 2009 ‹http://g4tv.com/thefeed/blog/post/693342/Live-Blog-DICE-2009-Keynote-–-Gabe-Newell-Valve-Software.html›.
Meer, Alec. "Unreal Tournament 3 and the New Lazarus Effect." Rock, Paper, Shotgun 16 Mar. 2009. 24 Apr. 2009 ‹http://www.rockpapershotgun.com/2009/03/16/unreal-tournament-3-and-the-new-lazarus-effect/›.
Moore, Christopher. "Commonising the Enclosure: Online Games and Reforming Intellectual Property Regimes." Australian Journal of Emerging Technologies and Society 3.2 (2005). 12 Apr. 2009 ‹http://www.swin.edu.au/sbs/ajets/journal/issue5-V3N2/abstract_moore.htm›.
Moore, Christopher. "Creative Choices: Changes to Australian Copyright Law and the Future of the Public Domain." Media International Australia 114 (Feb. 2005): 71–83.
Postigo, Hector. "Of Mods and Modders: Chasing Down the Value of Fan-Based Digital Game Modification." Games and Culture 2 (2007): 300–13.
Robinson, Daniel. "Windows XP Support Runs Out Next Week." PC Business Authority 8 Apr. 2009. 16 Apr. 2009 ‹http://www.pcauthority.com.au/News/142013,windows-xp-support-runs-out-next-week.aspx›.
Slade, Giles. Made to Break: Technology and Obsolescence in America. Cambridge: Harvard UP, 2006.
Straw, Will. "Exhausted Commodities: The Material Culture of Music." Canadian Journal of Communication 25.1 (2000): 175.
Valve. "Steam and Game Stats." 26 Apr. 2009 ‹http://store.steampowered.com/stats/›.
Valve. "Team Fortress 2: The Scout Update." Steam Marketing Message 20 Feb. 2009. 12 Apr. 2009 ‹http://storefront.steampowered.com/Steam/Marketing/message/2269/›.
Webb, Richard. "Online Shopping and the Harry Potter Effect." New Scientist 2687 (2008): 52–55. 16 Apr. 2009 ‹http://www.newscientist.com/article/mg20026873.300-online-shopping-and-the-harry-potter-effect.html?page=2›.

With thanks to Dr Nicola Evans and Dr Frances Steel for their feedback and comments on drafts of this paper.
APA, Harvard, Vancouver, ISO, and other styles