Journal articles on the topic 'Technology resources survey and applications act (Proposed)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Technology resources survey and applications act (Proposed).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Zhou, Xujuan, Raj Gururajan, Yuefeng Li, Revathi Venkataraman, Xiaohui Tao, Ghazal Bargshady, Prabal D. Barua, and Srinivas Kondalsamy-Chennakesavan. "A survey on text classification and its applications." Web Intelligence 18, no. 3 (September 30, 2020): 205–16. http://dx.doi.org/10.3233/web-200442.

Abstract:
Text classification (a.k.a. text categorisation) is an effective and efficient technology for information organisation and management. As the explosion of information resources on the Web and corporate intranets continues, text classification has become more and more important and has attracted wide attention from many different research fields. In the literature, many feature selection methods and classification algorithms have been proposed, and the technology has important applications in the real world. However, the dramatic increase in the availability of massive text data from various sources is creating a number of issues and challenges for text classification, such as scalability. The purpose of this report is to give an overview of existing text classification technologies for building more reliable text classification applications and to propose a research direction for addressing the challenging problems in text mining.
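The pipeline this abstract sketches (feature extraction, feature selection, then a classifier) can be illustrated with a minimal example; the toy corpus, labels, and scikit-learn choices below are ours, not the survey's:

```python
# Minimal text-classification sketch (illustrative, not from the survey):
# TF-IDF feature extraction -> chi-squared feature selection -> classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

docs = ["stock prices rallied today", "the team won the final match",
        "central bank raises interest rates", "striker scores twice in derby"]
labels = ["finance", "sports", "finance", "sports"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer()),          # text -> sparse feature vectors
    ("select", SelectKBest(chi2, k=10)),   # keep the 10 most informative terms
    ("model", LogisticRegression()),       # linear classifier on top
])
clf.fit(docs, labels)
print(clf.predict(["the team scores in the final"]))  # should lean towards 'sports'
```

Much of the surveyed literature differs mainly in which selector and classifier fill these two slots.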
2

Ward, Frank A. "Cost–benefit and water resources policy: a survey." Water Policy 14, no. 2 (September 27, 2011): 250–80. http://dx.doi.org/10.2166/wp.2011.021.

Abstract:
This paper reviews recent developments in cost–benefit analysis for water policy researchers who wish to understand the applications of economic principles to inform emerging water policy debates. The cost–benefit framework can provide a comparison of total economic gains and losses resulting from a proposed water policy. Cost–benefit analysis can provide decision-makers with a comparison of the impacts of two or more water policy options using methods that are grounded in time-tested economic principles. Economic efficiency, measured as the difference between added benefits and added costs, can inform water managers and the public of the economic impacts of water programs to address peace, development, health, the environment, climate and poverty. Faced with limited resources, cost–benefit analysis can inform policy choices by summarizing the trade-offs involved in designing, applying, or reviewing a wide range of water programs. The data required to conduct a cost–benefit analysis are often poor, but the steps needed to carry out that analysis require posing the right questions.
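As a hedged illustration of the comparison the abstract describes (total added benefits versus added costs), the sketch below computes the net present value of two hypothetical policy options; every figure is invented:

```python
# Toy cost-benefit comparison of two hypothetical water policies.
# All cash flows and the discount rate are invented for illustration.

def npv(flows, rate=0.05):
    """Net present value of yearly net flows (added benefits minus added costs)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

policy_a = [-100, 30, 30, 30, 30, 30]  # large upfront cost, higher returns
policy_b = [-40, 15, 15, 15, 15, 15]   # cheaper option, smaller returns

for name, flows in (("A", policy_a), ("B", policy_b)):
    print(f"Policy {name}: NPV = {npv(flows):.1f}")  # pick the larger NPV
```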
3

Masmoudi, Ahlem, Kais Mnif, and Faouzi Zarai. "A Survey on Radio Resource Allocation for V2X Communication." Wireless Communications and Mobile Computing 2019 (October 24, 2019): 1–12. http://dx.doi.org/10.1155/2019/2430656.

Abstract:
Thanks to the deployment of new techniques to support high data rates, high reliability, and QoS provision, Long-Term Evolution (LTE) can be applied to diverse applications. Vehicle-to-everything (V2X) is one of the evolving applications of LTE technology to improve traffic safety, minimize congestion, and ensure comfortable driving, which imposes stringent reliability and latency requirements. As noted by the 3rd Generation Partnership Project (3GPP), LTE-based Device-to-Device (D2D) communication is an enabler for V2X services to meet these requirements. Therefore, radio resource management (RRM) is important to efficiently allocate resources to V2X communications. In this paper, we present V2X communications, their requirements and services, the V2X-based LTE-D2D communication modes, and the existing resource allocation algorithms for V2X communications. Moreover, we classify the existing resource allocation algorithms proposed in the literature and compare them according to selected criteria.
4

Guiguer, N., and T. Franz. "Development and Applications of a Wellhead Protection Area Delineation Computer Program." Water Science and Technology 24, no. 11 (December 1, 1991): 51–62. http://dx.doi.org/10.2166/wst.1991.0336.

Abstract:
In the last few years, groundwater management has concentrated on the protection of groundwater quality. An increasing number of countries have adopted policies to protect vital groundwater resources from deterioration by regulating human interaction with the subsurface, the use of potential contaminants, land use restrictions, and waste transport and storage. One of the more common regulatory approaches to the protection of groundwater focuses on public water supplies to reduce the potential of human exposure to hazardous contaminants. Under the framework of the Safe Drinking Water Act as amended by the U.S. Congress in 1986, the U.S. EPA (1987) issued guidelines for the delineation of wellhead protection areas, recommending the use of analytical and numerical models for the identification of such areas. In this study, the theoretical background for the development of one such numerical model is presented. Two real-world applications are discussed: in the first case history, the model is applied to a Superfund Site in Puerto Rico as a tool for assessing the effectiveness of a proposed pump-and-treat scheme for aquifer remediation. Based on simulation results for the evolution of the existing contaminant plume, it was verified that such a scheme would not work with the proposed purging wells. The second case history is the delineation of a wellhead protection area in the Town of Littleton, Massachusetts, and the subsequent design of a monitoring well network.
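One simple analytical delineation approach consistent with the guidelines mentioned above is the calculated fixed radius method, in which the aquifer pore volume of a cylinder around the well is set equal to the volume pumped over a chosen time of travel. A minimal sketch with invented parameter values:

```python
# Calculated fixed radius sketch: pore volume of a cylinder around the well
# (pi * r^2 * n * H) equals the volume pumped over the travel time (Q * t).
# All parameter values are invented for illustration.
import math

Q = 500.0       # pumping rate, m^3/day
t = 5 * 365.0   # time-of-travel criterion, days (here: 5 years)
n = 0.25        # effective porosity, dimensionless
H = 30.0        # screened aquifer thickness, m

r = math.sqrt(Q * t / (math.pi * n * H))
print(f"protection-area radius ~ {r:.0f} m")
```

Numerical models such as the one the paper develops refine this idealized cylinder by accounting for regional flow, anisotropy, and well interference.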
5

Jones, James, Daniel Gottlieb, Joshua C. Mandel, Vladimir Ignatov, Alyssa Ellis, Wayne Kubick, and Kenneth D. Mandl. "A landscape survey of planned SMART/HL7 bulk FHIR data access API implementations and tools." Journal of the American Medical Informatics Association 28, no. 6 (March 1, 2021): 1284–87. http://dx.doi.org/10.1093/jamia/ocab028.

Abstract:
The Office of the National Coordinator for Health Information Technology final rule implementing the interoperability and information blocking provisions of the 21st Century Cures Act requires support for two SMART (Substitutable Medical Applications, Reusable Technologies) application programming interfaces (APIs) and instantiates Health Level Seven International (HL7) Fast Healthcare Interoperability Resources (FHIR) as a lingua franca for health data. We sought to assess the current state and near-term plans for SMART/HL7 Bulk FHIR Access API implementation across organizations including electronic health record vendors, cloud vendors, public health contractors, research institutions, payors, FHIR tooling developers, and other purveyors of health information technology platforms. We learned that many organizations not required through regulation to use standardized bulk data are rapidly implementing the API for a wide array of use cases. This may portend an unprecedented level of standardized population-level health data exchange that will support an apps and analytics ecosystem. Feedback from early adopters on the API’s limitations and unsolved problems in the space of population health is highlighted.
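For orientation, a minimal sketch of the Bulk FHIR Access flow (asynchronous kick-off followed by status polling) might look like the following; the base URL is hypothetical and SMART Backend Services authentication is omitted for brevity:

```python
# Sketch of a bulk FHIR export: asynchronous kick-off, then status polling.
# The base URL is hypothetical; real servers also require SMART
# Backend Services authorization (omitted here).
import time
import requests

BASE = "https://fhir.example.org"  # hypothetical FHIR server

resp = requests.get(
    f"{BASE}/Patient/$export",
    headers={"Accept": "application/fhir+json", "Prefer": "respond-async"},
)
status_url = resp.headers["Content-Location"]  # where to poll for progress

while True:
    poll = requests.get(status_url)
    if poll.status_code == 200:  # export complete: manifest lists NDJSON files
        for item in poll.json().get("output", []):
            print(item["type"], item["url"])
        break
    time.sleep(int(poll.headers.get("Retry-After", "5")))
```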
6

Hussain, Altaf, Faisal Azam, Muhammad Sharif, Mussarat Yasmin, and Sajjad Mohsin. "A Survey on ANN Based Task Scheduling Strategies in Heterogeneous Distributed Computing Systems." Nepal Journal of Science and Technology 16, no. 1 (January 18, 2016): 69–78. http://dx.doi.org/10.3126/njst.v16i1.14359.

Abstract:
Heterogeneous Distributed Computing Systems (HeDCS) efficiently utilize the heterogeneity of diverse computational resources interlinked through high-speed networks to execute groups of computation-intensive applications. Directed acyclic graphs (DAGs) are usually used to represent these parallel applications with varied computational requirements and constraints. The optimal scheduling of a given set of precedence-constrained tasks to available resources is a core concern in HeDCS and is known to be an NP-complete problem. The non-deterministic nature of application programs and the heterogeneous environment are the main challenges in designing, implementing and analyzing task scheduling techniques. A myriad of heuristic and meta-heuristic approaches have been proposed in the literature to solve this complex problem. The basic purpose of this study is to cover ANN-based task scheduling strategies from the perspective of distributed computing environments. Existing scheduling heuristics are further classified in a new state-of-the-art classification, including a description of the parameters frequently used in these scheduling strategies. The flexible and powerful nature of ANNs in identifying data patterns, handling time and other constraints, and their learning capabilities make them a promising candidate among other heuristics.
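A toy earliest-finish-time list scheduler illustrates the class of DAG-scheduling heuristics the survey covers; the task graph and cost table are invented, and communication costs are ignored:

```python
# Toy earliest-finish-time list scheduling of a DAG on two heterogeneous
# processors; graph and cost table are invented, communication costs ignored.
cost = {"A": [3, 5], "B": [4, 2], "C": [2, 3], "D": [5, 4]}  # task -> per-CPU cost
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}    # DAG edges

finish = {}          # task -> finish time
proc_free = [0, 0]   # earliest free time of each processor

for task in ["A", "B", "C", "D"]:  # a valid topological order
    ready = max((finish[d] for d in deps[task]), default=0)
    best = min(range(2),
               key=lambda p: max(ready, proc_free[p]) + cost[task][p])
    start = max(ready, proc_free[best])
    finish[task] = start + cost[task][best]
    proc_free[best] = finish[task]
    print(f"{task} -> P{best}, finishes at t={finish[task]}")
```

ANN-based strategies replace the fixed greedy rule with a learned mapping from task and resource features to placement decisions.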
7

Oseni, Umar A., Abideen Adeyemi Adewale, and Sodiq O. Omoola. "The feasibility of online dispute resolution in the Islamic banking industry in Malaysia." International Journal of Law and Management 60, no. 1 (February 12, 2018): 34–54. http://dx.doi.org/10.1108/ijlma-06-2016-0057.

Abstract:
Purpose – The paper aims to examine the perceptions of three major stakeholders – bankers, lawyers and customers – in the Islamic banking industry in Malaysia to assess their behavioural intention to use the proposed online dispute resolution (ODR) mechanism. Design/methodology/approach – The study modifies the unified theory of acceptance and use of technology (UTAUT) within the context of ODR and its feasibility in the Malaysian Islamic banking industry. The model was extended to include trust in technology and trust in bank, which might have significant influences on the intentions of major stakeholders to use ODR for banking-related disputes. Actual use of the ODR was not included in the model as specified in the original UTAUT. Based on an internet survey, responses were obtained from about 109 respondents. The data obtained were subjected to multivariate statistical analyses. Findings – Results obtained indicate that trust in technology and effort expectancy are the most influential determinants of the behavioural intention to use ODR among stakeholders in the Islamic banking industry in Malaysia. However, performance expectancy and social influence did not produce significant effects on behavioural intention. Research limitations/implications – Applying ODR in the banking industry in Malaysia will contribute to sustainable banking businesses in major Islamic finance jurisdictions. Being the most advanced region in the global Islamic banking business, Asia sets the pace through theoretical and empirical studies exploring innovative ideas such as ODR to promote sustainable business that not only ensures proper customer relationship management but also promotes consumer protection. Practical implications – Results obtained suggest that the increasing use of internet banking will make ODR the preferable mechanism for dispute resolution in small-scale disputes in retail banking. This will also require some form of predictability, enforceability and Shari‘ah compliance in the process of dispute resolution for the major stakeholders to have full confidence in the ODR mechanism. The recently introduced Financial Ombudsman Scheme in the Islamic Financial Services Act 2013 of Malaysia is expected to serve as a good legal basis for the ODR mechanism. Originality/value – This appears to be one of the earliest attempts to examine the application of ODR in resolving Islamic banking disputes with a detailed analysis of its legal basis and implications.
8

Motiei, Malihe, Nor Hidayati Zakaria, Davide Aloini, and Mohammad Akbarpour Sekeh. "Developing Instruments for Enterprise Resources Planning (ERP) Post-Implementation Failure Model." International Journal of Enterprise Information Systems 11, no. 3 (July 2015): 68–83. http://dx.doi.org/10.4018/ijeis.2015070105.

Abstract:
In recent years, Enterprise Resource Planning (ERP) implementation projects in many organizations have been confronted with failure. Researchers have focused on implementing ERP projects successfully by proposing success models. However, alongside these efforts to realize ERP benefits, an ERP failure measurement model is also required. Therefore, the aim of this study is to develop the instruments for an ERP post-implementation failure measurement model. To achieve this outcome, the study first evaluates the suitability of the Technology-Organization-Environment framework for the proposed conceptual model. The constructs used for this model included two formative and six reflective constructs. A questionnaire was developed to test the validity and reliability of the instrument items. A survey was conducted among Iranian industries to collect data, and the data were analyzed with SmartPLS software. The results indicated that all instrument items, comprising 37 critical risk factors (CRFs), were acceptable measures for the ERP post-implementation failure model.
9

Zabed Ahmed, S. M. "The use of IT-based information services." Program 48, no. 2 (April 1, 2014): 167–84. http://dx.doi.org/10.1108/prog-08-2012-0048.

Abstract:
Purpose – The aim of this paper is to investigate the current status of public universities in Bangladesh in terms of library resources and services, IT infrastructure and training requirements for the establishment of a centralized, networked electronic library for the universities in the country. Design/methodology/approach – A survey was conducted in March-April 2012 to ascertain the level of library automation practices, access to online resources and IT facilities utilized by the public universities in Bangladesh. The survey questionnaire was distributed through post and emails directed to the university librarians. The librarians were also asked to identify the type of IT-related training they had received and the type of training they require. Findings – The survey results indicate that there are insufficiencies in library resources, automation practices, access to online resources and IT facilities in the universities. Although the use of computer and network technologies in older universities is reasonably high, newer universities are lagging far behind in the latest technology applications. The results also suggest significant training needs among the librarians across all areas of electronic information processing. Originality/value – This is the first time an attempt has been made to assess the readiness of the public universities in Bangladesh for implementing IT-based information services. The paper also proposes a framework for implementing an integrated electronic library for the universities in the country to offer them better access to a wide range of online resources and services.
10

Garrido-Muñoz, Ismael, Arturo Montejo-Ráez, Fernando Martínez-Santiago, and L. Alfonso Ureña-López. "A Survey on Bias in Deep NLP." Applied Sciences 11, no. 7 (April 2, 2021): 3184. http://dx.doi.org/10.3390/app11073184.

Abstract:
Deep neural networks are hegemonic approaches to many machine learning areas, including natural language processing (NLP). Thanks to the availability of large corpora collections and the capability of deep architectures to shape internal language mechanisms in self-supervised learning processes (also known as “pre-training”), versatile and high-performing models are released continuously for every new network design. These networks, somehow, learn a probability distribution of words and relations across the training collection used, inheriting the potential flaws, inconsistencies and biases contained in such a collection. As pre-trained models have been found to be very useful approaches for transfer learning, dealing with bias has become a relevant issue in this new scenario. We introduce bias in a formal way and explore how it has been treated in several networks, in terms of detection and correction. In addition, available resources are identified and a strategy to deal with bias in deep NLP is proposed.
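A common detection technique in this literature probes a pre-trained masked language model with templated prompts and compares its predictions across demographic terms. A minimal sketch, assuming the Hugging Face transformers library and an illustrative model choice:

```python
# Masked-LM bias probe sketch; the model and prompt template are
# illustrative choices, not the paper's exact setup.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for subject in ("man", "woman"):
    preds = fill(f"The {subject} worked as a [MASK].", top_k=3)
    # Diverging top completions across subjects hint at learned stereotypes.
    print(subject, [(p["token_str"], round(p["score"], 3)) for p in preds])
```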
11

Khan, Sania. "Exploring the firm’s influential determinants pertinent to workplace innovation." Problems and Perspectives in Management 19, no. 1 (March 9, 2021): 272–80. http://dx.doi.org/10.21511/ppm.19(1).2021.23.

Abstract:
Significant changes in organizations with good human resources (HR) practices can transform the workplace to a great extent. Although there is a fair amount of research on workplace innovation, most firms even now act as barriers to personnel growth and workplace innovation. This study set out to explore, from a holistic perspective, the various influential factors of firms that affect workplace innovation, adopting the principal component analysis (PCA) method to reduce the dimensionalities and better emphasize firms’ development. The data were collected using a survey questionnaire from one hundred and ninety-five (195) respondents from different Indian organizations. In total, forty-six sub-factors were identified and grouped into nine significant organizational factors influencing workplace improvement, viz., organization culture and environment, innovation process, resources, organization structure, corporate strategy, employee, knowledge management, technology and management, and leadership. The study suggests that any firm must emphasize these core determinants in the workplace to motivate employees towards innovation and keep the organization competitive in its industry. The study invites firm policymakers, HR managers, and top management to formulate the best organizational strategies to encourage an innovative culture in firms. Acknowledgment: The author(s) of this study acknowledge all the respondents who contributed their quality opinions and made this study possible and helpful in contributing to the industry.
12

Alsetoohy, Omar, Baker Ayoun, Saleh Arous, Farida Megahed, and Gihan Nabil. "Intelligent agent technology: what affects its adoption in hotel food supply chain management?" Journal of Hospitality and Tourism Technology 10, no. 3 (September 17, 2019): 286–310. http://dx.doi.org/10.1108/jhtt-01-2018-0005.

Abstract:
Purpose – The study adopted a conceptualized technological, organizational and environmental (TOE) model to empirically investigate the factors affecting hotel managers’ attitudes toward intelligent agent technology (IAT) adoption in hotel food supply chain management (HFSCM) and their intentions for future adoption. Design/methodology/approach – An in-person survey was carried out in luxury hotels in Florida. Findings – The findings indicated that merely 5.7 per cent of hotels are fully implementing IAT. Perceived benefits, reliability, quality of human resources, information intensity and market capabilities had a statistically significant positive impact on hotel managers’ attitudes. However, complexity and cost had a negative influence on hotel managers’ attitudes toward IAT adoption in the HFSCM. Managers’ attitude further positively influences their intention to adopt. Practical implications – The validated model helps guide hotel decision makers who are considering IAT adoption in the HFSCM. Hotels that are seeking sources of competitive advantage would do well to consider the TOE factors in IAT adoption prior to making a decision. Originality/value – This is the first study that has examined IAT adoption in the hotel industry from a theoretical and empirical perspective. The validated model proposed for the adoption of IAT in the HFSCM enriches the TOE model and the diffusion of innovations theory.
13

Rezaie, Maryam, Hamid Eslami Nosratabadi, and Hamed Fazlollahtabar. "Applying KANO Model for Users’ Satisfaction Assessment in E-Learning Systems." International Journal of Information and Communication Technology Education 8, no. 3 (July 2012): 1–12. http://dx.doi.org/10.4018/jicte.2012070101.

Abstract:
Many projects fail due to a lack of product development to meet customer needs, leading to a waste of organizational resources and the non-systematic creation of products. Understanding user behavior and managing it effectively are key elements in the competitive knowledge-based economy. One of the outlets for the knowledge-based economy is e-learning, which facilitates education using information technology (IT) infrastructure and plays an important role in today’s virtual world, breaking distance and time obstacles. The purpose of this study is to probe the e-learning users’ satisfaction attributes that have noticeable impacts on enhancing the instruction paradigm. Therefore, using the two concepts of asynchronous learning and the KANO model, the authors conduct a survey on user satisfaction in e-learning educational centers in Iran via interviews. Six satisfaction factors are considered: pedagogical regulation, user characteristics, user interface, ICT infrastructures, group interactivity, and content. A questionnaire is proposed based on the KANO concept and samples are collected. The statistical analyses of the questionnaires are carried out using the Statistical Package for the Social Sciences (SPSS). The results show that group interaction and user interface have high satisfaction levels, while content and infrastructures are the effective factors of dissatisfaction.
14

Dorea, C. C. "Use of Moringa spp. seeds for coagulation: a review of a sustainable option." Water Supply 6, no. 1 (January 1, 2006): 219–27. http://dx.doi.org/10.2166/ws.2006.027.

Abstract:
Regrettably, it is still common to find places without access to safe drinking water due to a lack of resources or appropriate technologies to support adequate solutions. Remedial efforts will need to focus on appropriate solutions for such locations. Coagulation with Moringa spp. seeds has been proposed as a sustainable option for water treatment for low-income locations due to the many other uses of the Moringa tree. Yet, its application is not limited to particle separation. A survey of the databases of WOK (Web of Knowledge) and CSA (Cambridge Scientific Abstracts) identified studies utilising Moringa spp. seeds as a coagulant. Although the majority of the studies were of laboratory or household applications, there were also reported trials in pilot- and full-scale treatment trains. Moringa spp. seed extracts were assessed as primary coagulants, co-coagulants (with aluminium sulfate) or secondary coagulant aids (flocculants). Turbidity reduction efficiencies vary according to the source water characteristics, coagulant preparation technique, and seed type. Treatment benefits of coagulation with Moringa spp. seeds are objectively assessed and have been contrasted with practical considerations in view of its sustainable application.
15

Yuan, Chien Wen, Benjamin V. Hanrahan, and John M. Carroll. "Assessing timebanking use and coordination: implications for service exchange tools." Information Technology & People 32, no. 2 (April 1, 2019): 344–63. http://dx.doi.org/10.1108/itp-09-2017-0311.

Abstract:
Purpose – Timebanking is a generalized, voluntary service exchange that promotes use of otherwise idle resources in a community and facilitates community building. Participants offer and request services through the mediation of the timebank software. In timebanking, giving help and accepting help are both contributions; contributions are recognized and quantified through exchange of time-based currency. The purpose of this paper is to explore how users perceive timebank offers and requests differently and how they influence actual use. Design/methodology/approach – This survey study, conducted in over 120 timebanks across the USA, examines users’ timebanking participation, adapting dimensions of the Technology Acceptance Model (TAM). Findings – The authors found that perceived ease of use in timebanking platforms was positively associated with positive attitudes toward both requests and offers, whereas perceived usefulness was negatively associated with positive attitudes toward requests and offers. The authors also found that having positive attitudes toward requests was important to elicit behavioral intention to make a request, but that positive attitudes toward offers did not affect behavioral intentions to make offers. Practical implications – The authors discussed these results and proposed design suggestions for future service exchange tools to address the issues the authors raised. Originality/value – The study is among the first few studies that examine timebanking participation using large-scale survey data. The authors evaluate sociotechnical factors of timebanking participation through adapting dimensions of TAM.
16

Feng, Cailing, Xiaoyu Huang, and Lihua Zhang. "A multilevel study of transformational leadership, dual organizational change and innovative behavior in groups." Journal of Organizational Change Management 29, no. 6 (October 3, 2016): 855–77. http://dx.doi.org/10.1108/jocm-01-2016-0005.

Abstract:
Purpose – Based on dual organizational theory, the purpose of this paper is to examine the relationship between transformational leadership and innovative behavior in groups. The authors proposed that group innovative behavior was influenced by transformational leadership as a group-level construct which was moderated by dual organizational change that represents organization-level resources. Furthermore, the authors identified two organizational change-related situational variables, radical change and incremental change, and examined their effects on group innovative behavior. Design/methodology/approach – The authors collected data from full-time employees working in groups in 43 companies, located in five cities in China including Beijing, Yantai, Chengdu, Xi’an, and Chengde. These enterprises were from a wide range of industries, including manufacturing, financing, information technology, and geological exploration. The authors chose a middle- or senior-level manager from each company to act as chief survey respondent; these managers were asked to contact managers and employees from a list they had provided and invite them to participate in a web-based survey (via an e-mailed link) or a paper-and-pencil survey. A total of 192 managers and 756 direct subordinates from 112 groups completed the survey. Findings – Results found that transformational leadership was positively related to group innovative behavior, and this relationship was moderated by radical change, but not incremental change; radical change and incremental change were also positively related to group innovative behavior. Research limitations/implications – This study adopts a cross-sectional study design, which is insufficient for deriving causal inferences. Future research may adopt a longitudinal study design to investigate causal impacts. Besides, some unmeasured variables could be related to transformational leadership and innovative behavior. Practical implications – The paper includes implications for adopting an appropriate leadership style to motivate innovative behavior, promoting dual organizational change to boost innovative behavior, and generating greater innovative behavior for transformational leaders in times of radical change. Originality/value – This cross-level study contributes to the relationship between transformational leadership and group innovative behavior in the context of dual organizational change.
17

Spalek, Seweryn. "Does investment in project management pay off?" Industrial Management & Data Systems 114, no. 5 (June 3, 2014): 832–56. http://dx.doi.org/10.1108/imds-10-2013-0447.

Abstract:
Purpose – There is a significant knowledge gap in the common understanding regarding the value that investment leading to an increase in project management maturity brings to the organisation. The purpose of this paper is to narrow this gap by investigating the relationship between an increase in the project management maturity level and the project's performance. Additionally, it advocates the investment roadmap approach. Design/methodology/approach – This study is part of a worldwide research initiative into maturity in project management covering 447 global companies. For this purpose, survey data from experts from 194 select companies was analysed. Findings – The cost of forthcoming projects depends on the level of maturity of project management and type of industry. Research limitations/implications – The study is limited to three different industries (machinery, construction and information technology) and by the method of assessing their future project costs. New research directions are suggested. Practical implications – The results of the study should help companies in allocating limited resources appropriately using the proposed roadmap. Social implications – An increase in project management maturity can be achieved through different investment methods. This will benefit society as well. Originality/value – The paper focuses on global companies dealing in machinery. The area has not been explored sufficiently from the project management perspective. It discusses the relationship between an increase in maturity and future project costs in three industries: machinery, construction and information technology. The paper suggests practical guidelines for project management and sequences in proper investments when resources are limited.
18

Donati, Massimiliano, Alessio Celli, Alessio Ruiu, Sergio Saponara, and Luca Fanucci. "A Telemedicine Service System Exploiting BT/BLE Wireless Sensors for Remote Management of Chronic Patients." Technologies 7, no. 1 (January 18, 2019): 13. http://dx.doi.org/10.3390/technologies7010013.

Abstract:
The management of the increasing number of patients affected by cardiovascular, pulmonary, and metabolic chronic diseases represents a major challenge for the National Health System (NHS) in any developed country. Chronic diseases are indeed the main cause of hospitalization, especially for elderly people, leading to sustainability problems due to the huge amount of resources required. In the last years, the adoption of the chronic care model (CCM) as assistive model improved the management of these patients and reduced the related healthcare costs. The diffusion of wireless sensors, portable devices and connectivity enables to implement new information and communication technology (ICT)-based innovative applications to further improve the outcomes of the CCM. This paper presents a telemedicine platform for data acquisition, distribution, processing, presentation, and storage, aimed to remotely monitor the clinical status of chronic patients. The proposed solution is based on monitoring kits, with wireless Bluetooth (BT)/ Bluetooth low energy (BLE) sensors and a gateway (i.e., smartphone or tablet) connected to a web-based cloud application that collects and makes available the clinical information to the medical staff. The platform allows clinicians and practitioners to monitor at distance their patients, according to personalized treatment plans, and to act promptly in case of aggravations, reducing hospitalizations and improving patients’ quality of life.
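A monitoring kit of the kind described subscribes to notifications from BT/BLE sensors. A minimal sketch using the bleak Python library; the device address is hypothetical, and the UUID is the standard Bluetooth Heart Rate Measurement characteristic:

```python
# BLE acquisition sketch with the bleak library; the address is hypothetical,
# the UUID is the standard Heart Rate Measurement characteristic.
import asyncio
from bleak import BleakClient

ADDRESS = "AA:BB:CC:DD:EE:FF"                     # hypothetical sensor address
HR_MEAS = "00002a37-0000-1000-8000-00805f9b34fb"  # Heart Rate Measurement

def on_hr(_, data: bytearray):
    bpm = data[1]  # assumes the 8-bit heart-rate format (flags bit 0 clear)
    print("heart rate:", bpm, "bpm")

async def main():
    async with BleakClient(ADDRESS) as client:
        await client.start_notify(HR_MEAS, on_hr)  # subscribe to notifications
        await asyncio.sleep(30)                    # stream for 30 seconds
        await client.stop_notify(HR_MEAS)

asyncio.run(main())
```

In the platform described, such readings would then be forwarded by the gateway to the web-based cloud application.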
19

Saponara, Sergio, and Luca Fanucci. "Homogeneous and Heterogeneous MPSoC Architectures with Network-On-Chip Connectivity for Low-Power and Real-Time Multimedia Signal Processing." VLSI Design 2012 (August 14, 2012): 1–17. http://dx.doi.org/10.1155/2012/450302.

Abstract:
Two multiprocessor system-on-chip (MPSoC) architectures are proposed and compared in the paper with reference to audio and video processing applications. One architecture exploits a homogeneous topology; it consists of 8 identical tiles, each made of a 32-bit RISC core enhanced by a 64-bit DSP coprocessor with local memory. The other MPSoC architecture exploits a heterogeneous-tile topology with on-chip distributed memory resources; the tiles act as application specific processors supporting a different class of algorithms. In both architectures, the multiple tiles are interconnected by a network-on-chip (NoC) infrastructure, through network interfaces and routers, which allows parallel operations of the multiple tiles. The functional performances and the implementation complexity of the NoC-based MPSoC architectures are assessed by synthesis results in submicron CMOS technology. Among the large set of supported algorithms, two case studies are considered: the real-time implementation of an H.264/MPEG AVC video codec and of a low-distortion digital audio amplifier. The heterogeneous architecture ensures a higher power efficiency and a smaller area occupation and is more suited for low-power multimedia processing, such as in mobile devices. The homogeneous scheme allows for a higher flexibility and easier system scalability and is more suited for general-purpose DSP tasks in power-supplied devices.
20

Giansanti, Daniele, and Giulia Veltro. "The Digital Divide in the Era of COVID-19: An Investigation into an Important Obstacle to the Access to the mHealth by the Citizen." Healthcare 9, no. 4 (March 26, 2021): 371. http://dx.doi.org/10.3390/healthcare9040371.

Abstract:
In general, during the COVID-19 pandemic there has been a growth in the use of digital technological solutions in many sectors, from consumption to Digital Health and, in particular, mobile health (mHealth), where an important role has been played by mobile technology (mTech). However, this has not always happened in a uniform way. In fact, in many cases, citizens found themselves unable to take advantage of these opportunities due to the phenomenon of the Digital Divide (DD). The DD depends on multifaceted aspects, ranging from a lack of access to instrumental and network resources to cultural and social barriers and possible forms of communication disability. In this study, we set ourselves the articulated goal of developing a probing methodology that addresses the problems connected to the DD in a broad sense, capable of minimizing the bias of a purely electronic submission, and of evaluating its effectiveness and outcome. At the moment, we have submitted the survey both electronically (with an embedded solution to spread it among families and acquaintances) and by landline telephone. The results highlighted three polarities: (a) the coherence of the two methods; (b) the outcome of the entire submission in relation to key issues (e.g., familiarity with contact tracing Apps, medical Apps, social Apps, messaging Apps, Digital Health, non-medical Apps); (c) a Digital Divide strongly dependent on age, which, particularly for the elderly, is mainly evident in the use of mTech in general and mHealth applications in particular. Future developments of the study foresee, after adequate data mining, an in-depth study of all the aspects proposed in the survey, from those relating to access to resources to training, disability and other cultural factors.
21

Pantho, Md Jubaer Hossain, Pankaj Bhowmik, and Christophe Bobda. "Towards an Efficient CNN Inference Architecture Enabling In-Sensor Processing." Sensors 21, no. 6 (March 10, 2021): 1955. http://dx.doi.org/10.3390/s21061955.

Abstract:
The astounding development of optical sensing and imaging technology, coupled with impressive improvements in machine learning algorithms, has increased our ability to understand and extract information from scenic events. In most cases, convolutional neural networks (CNNs) are widely adopted to infer knowledge due to their surprising success in automation, surveillance, and many other application domains. However, the overwhelming computation demand of convolution operations has somewhat limited their use in remote sensing edge devices. In these platforms, real-time processing remains a challenging task due to the tight constraints on resources and power. Here, the transfer and processing of non-relevant image pixels act as a bottleneck on the entire system. It is possible to overcome this bottleneck by exploiting the high bandwidth available at the sensor interface and designing a CNN inference architecture near the sensor. This paper presents an attention-based pixel processing architecture to facilitate CNN inference near the image sensor. We propose an efficient computation method to reduce dynamic power by decreasing the overall computation of the convolution operations. The proposed method reduces redundancies through a hierarchical optimization approach. It minimizes power consumption for convolution operations by exploiting the spatio-temporal redundancies found in the incoming feature maps and performs computations only on selected regions based on their relevance score. The proposed design addresses problems related to the mapping of computations onto an array of processing elements (PEs) and introduces a suitable network structure for communication. The PEs are highly optimized to provide low latency and power for CNN applications. While designing the model, we exploit concepts from biological vision systems to reduce computation and energy. We prototype the model on a Virtex UltraScale+ FPGA and implement it as an Application Specific Integrated Circuit (ASIC) using the TSMC 90 nm technology library. The results suggest that the proposed architecture significantly reduces dynamic power consumption and achieves a high speed-up, surpassing existing embedded processors’ computational capabilities.
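The relevance-gated computation described above can be illustrated with a toy software sketch: convolve only those image blocks whose frame-to-frame change exceeds a threshold, skipping static blocks. Block size, threshold, and data are invented:

```python
# Relevance-gated convolution sketch: process only blocks that changed
# between frames. Sizes, threshold, and data are invented.
import numpy as np
from scipy.signal import convolve2d

H = W = 64
B = 16                                   # block size
prev = np.random.rand(H, W).astype(np.float32)
curr = prev.copy()
curr[16:32, 16:32] += 0.5                # only one block actually changes
kernel = np.ones((3, 3), np.float32) / 9.0

out, processed = np.zeros_like(curr), 0
for i in range(0, H, B):
    for j in range(0, W, B):
        relevance = np.abs(curr[i:i+B, j:j+B] - prev[i:i+B, j:j+B]).mean()
        if relevance > 0.1:              # skip static (irrelevant) blocks
            out[i:i+B, j:j+B] = convolve2d(curr[i:i+B, j:j+B], kernel, mode="same")
            processed += 1
print(f"processed {processed} of {(H // B) ** 2} blocks")
```

The paper's contribution is realizing this kind of gating in hardware at the sensor interface, before pixels ever reach a downstream processor.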
22

Srinivasan, M., A. Andral, M. Dejus, F. Hossain, C. Peterson, E. Beighley, T. Pavelsky, et al. "Engaging the Applications Community of the future Surface Water and Ocean Topography (SWOT) Mission." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-7/W3 (April 30, 2015): 1497–504. http://dx.doi.org/10.5194/isprsarchives-xl-7-w3-1497-2015.

Abstract:
NASA and the French space agency, CNES, with contributions from the Canadian Space Agency (CSA) and the United Kingdom Space Agency (UKSA), are developing new wide-swath altimetry technology that will cover most of the world’s ocean and surface freshwater bodies. The proposed Surface Water and Ocean Topography (SWOT) mission will have the capability to make observations of surface water (lakes, rivers, wetlands) heights and measurements of ocean surface topography with unprecedented spatial coverage, temporal sampling, and spatial resolution compared to existing technologies. These data will be useful for monitoring the hydrologic cycle, flooding, and characterizing human impacts on a changing environment. The applied science community is a key element in the success of the SWOT mission, demonstrating the high value of the science and data products in addressing societal issues and needs. The SWOT applications framework includes a working group made up of applications specialists, SWOT science team members, academics and SWOT Project members to promote applications research and engage a broad community of potential SWOT data users. A defined plan and a guide describing a program to engage early adopters in using proxies for SWOT data, including sophisticated ocean and hydrology simulators, an airborne analogue for SWOT (AirSWOT), and existing satellite datasets, are cornerstones of the program. A user survey is in development and the first user workshop was held in 2015, with annual workshops planned. The anticipated science and engineering advances that SWOT will provide can be transformed into valuable services to decision makers and civic organizations focused on addressing global disaster risk reduction initiatives and potential science-based mitigation activities for the water resources challenges of the future. With the surface water measurements anticipated from SWOT, a broad range of applications can inform inland and coastal managers and marine operators of terrestrial and oceanic phenomena relevant to their work.
23

Yunis, Manal, Abdul-Nasser El-Kassar, and Abbas Tarhini. "Impact of ICT-based innovations on organizational performance." Journal of Enterprise Information Management 30, no. 1 (February 13, 2017): 122–41. http://dx.doi.org/10.1108/jeim-01-2016-0040.

Abstract:
Purpose – Research has shown that information and communication technologies (ICT) are crucial for economic growth. The purpose of this paper is to develop a framework that depicts and examines the nature of the relationship between ICT use and organizational performance in the Lebanese market, taking into consideration the impact that innovation and corporate entrepreneurship may have on this relationship. Design/methodology/approach – To investigate the proposed model, a survey was conducted targeting employees and managers who adopted ICT applications in SMEs located in Lebanon. Findings – The results indicate that ICT and innovation are strategic resources. However, their contribution to sustainable competitive advantage vitally depends on the implicitness and entrepreneurial behaviors of those involved. It is through this capability that ICT and ICT-based innovations could make a difference in an organization’s performance – both present and future. Research limitations/implications – First, the respondents were selected using the convenience sampling technique. Second, the data were collected through self-report questionnaires. Finally, the use of perceptual data related to performance may have a bias effect on the study results. Practical implications – At the practical level, the study results have repercussions for managers, technology suppliers, and innovation adopters, as this may contribute to a better understanding of the factors that could influence the adoption, management, and use of ICT resources for enhancing the competitiveness of the firm. Originality/value – The results of this study have implications for ICT adoption in Lebanese SMEs. More importantly, they suggest a framework which depicts the relationship between ICT and the organization’s innovation level on one hand, and a company’s performance on the other, taking entrepreneurship as a mediator in this relationship.
24

El-Sayed, Hesham, Sharmi Sankar, Heng Yu, and Gokulnath Thandavarayan. "Benchmarking of Recommendation Trust Computation for Trust/Trustworthiness Estimation in HDNs." International Journal of Computers Communications & Control 12, no. 5 (September 10, 2017): 612. http://dx.doi.org/10.15837/ijccc.2017.5.2895.

Abstract:
In recent years, Heterogeneous Distributed Networks (HDNs) have become a predominant technology implemented to enable various applications in different fields such as transportation, medicine, war zones, etc. Due to their arbitrary self-organizing nature and temporary topologies in the spatial-temporal region, distributed systems are vulnerable to a number of security issues and demand strong security countermeasures. Unlike other static networks, the unique characteristics of HDNs demand cutting-edge security policies. Numerous cryptographic techniques have been proposed by different researchers to address the security issues in HDNs. These techniques utilize too many resources, resulting in higher network overheads. Classified as a lightweight security scheme, the Trust Management System (TMS) tends to be one of the most promising technologies, featuring efficiency in terms of availability, scalability and simplicity. It advocates both node-level validation and data-level verification, enhancing trust between the attributes. Further, it thwarts a wide range of security attacks by incorporating various statistical techniques and integrated security services. In this paper, we present a literature survey on different TMSs that highlights reliable techniques adapted across entire HDNs. We then comprehensively study the existing distributed trust computations and benchmark them according to their effectiveness. Further, performance analysis is applied to the existing computation techniques, and the benchmarked outcome delivered by Recommendation Trust Computations (RTC) is discussed. A Receiver Operating Characteristics (ROC) curve illustrates better accuracy for Recommendation Trust Computations (RTC) in comparison with Direct Trust Computations (DTC) and Hybrid Trust Computations (HTC). Finally, we propose future directions for research and highlight reliable techniques for building an efficient TMS in HDNs.
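A minimal sketch of ROC-based benchmarking in the spirit of the paper's RTC/DTC comparison; the trust scores and ground-truth labels below are synthetic, with the better scheme simply constructed to track the labels more closely:

```python
# ROC benchmarking sketch with synthetic trust scores; "RTC" is constructed
# to track the ground truth more closely than "DTC".
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)            # 1 = trustworthy node

rtc = y_true * 0.6 + rng.normal(0.2, 0.2, 200)   # stronger signal
dtc = y_true * 0.3 + rng.normal(0.35, 0.3, 200)  # weaker signal

for name, scores in (("RTC", rtc), ("DTC", dtc)):
    print(f"{name}: AUC = {roc_auc_score(y_true, scores):.3f}")
```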
25

Hariastuti, Ni Luh Putu, Pratikto Pratikto, Purnomo Budi Santoso, and Ishardita Pambudi Tama. "Analyzing the drivers of sustainable value creation, partnership strategies, and their impact on business competitive advantages of small & medium enterprises: a PLS-model." Eastern-European Journal of Enterprise Technologies 2, no. 13 (110) (April 30, 2021): 55–66. http://dx.doi.org/10.15587/1729-4061.2021.228864.

Abstract:
Sustainable manufacturing is a critical phenomenon in the process of creating sustainable value. It is a way to increase innovation and resource quality. On the other hand, the partnership strategy is an important factor in efforts to improve company performance. The involvement of the partnership strategy is one of the factors that strengthen the achievement of sustainable values. Furthermore, this affects the sustainability of a manufacturing company's competitiveness, including Small and Medium Enterprises (SMEs). In this study, we focus on creating sustainable value and the role of partnership strategies in improving the business performance of SMEs engaged in the metal manufacturing industry. The Partial Least Squares (PLS) approach to Structural Equation Modeling (SEM) is used to evaluate relationships and effects based on survey data from small and medium industries. The results show that the creation of sustainable value, including product, process, production, equipment, organization, and human values, has a significant impact (β=0.522; p<0.001) on increasing the competitiveness of small and medium enterprises. The effect of sustainable value creation on sustainable competitiveness is fully moderated by the partnership strategy (β=0.179; p=0.03), especially in technology & equipment and human resources. Apart from being a moderating variable, the partnership strategy has also been shown to significantly act as a partial mediating variable (β=0.135; p<0.05) for sustainable value creation in enhancing competitiveness. The partnership strategy's simultaneous involvement proves that it plays an important role in value creation to increase the competitiveness of sustainable manufacturing SMEs.
26

Habeeb, Nada Jasim, and Shireen Talib Weli. "Combination of GIS with Different Technologies for Water Quality: An Overview." HighTech and Innovation Journal 2, no. 3 (September 1, 2021): 262–72. http://dx.doi.org/10.28991/hij-2021-02-03-10.

Abstract:
Water is one of the most important requirements of daily life and covers the largest part of the Earth. Economic, industrial and social development in most countries has led to increased pollution of water resources. It is, therefore, necessary to monitor water quality continuously to prevent a future catastrophe that would adversely affect the quality and quantity of water wealth. Geographic Information Systems (GIS) are used in various fields to monitor and analyze data collected from different geographical locations. The integration of GIS with other technologies has become an indispensable tool. It gives us direct control over solution expansion, cost reduction and powerful analysis of complex cases, as well as increased accuracy and efficiency of geospatial data. In recent years, many combinations of GIS with different technologies, such as remote sensing, wireless sensor networks, and Internet of Things approaches, have been proposed due to the rapid progress of technology development in many applications. However, in the last several years, no survey paper has been published about water quality using the integration of GIS and other technologies. Therefore, this paper investigates the status of ongoing research in the field of GIS and its integration with other technologies (remote sensing, Internet of Things, Web, etc.) for water quality management and monitoring to maintain water resources in a proper way. In conclusion, the integration of GIS with these technologies is a powerful platform for analyzing and processing big data and mapping geography remotely in less time, at less cost, at high speed and with more accurate detail, in real time, compared to traditional geographic information systems. This paper also highlights future research trends concerning the cooperation of GIS with other technologies for matters related to water quality.
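As one concrete example of the remote-sensing side of such integrations, a water index such as NDWI (McFeeters, 1996), defined as (Green − NIR)/(Green + NIR), can be computed per pixel to map surface water. The bands below are synthetic arrays:

```python
# NDWI sketch (McFeeters 1996): (Green - NIR) / (Green + NIR) per pixel.
# The band arrays are synthetic stand-ins for satellite imagery.
import numpy as np

green = np.random.rand(100, 100).astype(np.float32)
nir = np.random.rand(100, 100).astype(np.float32)

ndwi = (green - nir) / np.clip(green + nir, 1e-6, None)
water_mask = ndwi > 0.0  # positive NDWI is a common first-pass water cue
print(f"water pixels: {water_mask.mean():.1%}")
```

In a GIS workflow, such a mask would then be georeferenced and overlaid with sensor or IoT measurements for water quality analysis.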
27

Hu, Qian. "A user-centred collaborative framework for integrated information services in China." Electronic Library 33, no. 6 (November 2, 2015): 990–1001. http://dx.doi.org/10.1108/el-04-2014-0060.

Abstract:
Purpose – The paper aims to propose a user-centred collaborative framework for providing integrated information services (IIS) to corporate users in China. The framework is conceptualized based on a literature review of IIS models and a case study. The authors provide suggestions with regard to the implementation of effective and efficient information services for corporate users based on the proposed framework. Design/methodology/approach – This paper reviews the efforts of investigating appropriate models for integrated information services (IIS) and proposes a user-centred collaborative framework for providing IIS for corporate users. It is organized as follows: first, an overview of the current status of information resource services in China through a review of the related literature. Then, a case study of IIS in Hubei Province is analysed. Next, a user-centred collaborative IIS framework is presented that aims to address the needs of corporate users. The paper concludes with a summary and suggestions for future study to build effective and efficient IIS systems. Findings – Through an exploratory survey conducted in 2009, it was discovered that, in general, corporate users need all kinds of information, not only scientific publications but also business and market information. Their main channel for obtaining needed information was the Internet. Books and domain-specific databases were also used by most of the participants. The major challenges for corporate users in obtaining needed information included the high cost of purchasing or leasing desired information resources, the low quality of information on the Internet, limited information workers or skills, and the quality of high-level information services. Research limitations/implications – The survey served as a tool to gather primitive information on user needs. It was an incomplete, unsystematic exploration. However, the authors could still gain some insights into users’ information needs and directions for future IIS. The results showed that the Hubei Science and Technology Information Sharing Service, which was an implementation of the agency collaboration-based IIS model, satisfied the needs of less than 30 per cent of the participants. It has much room for improvement. Practical implications – This paper proposes a user-centred collaborative integrated information services (UCIIS) framework. The UCIIS framework takes the idea of the user-centred integrated information service (IIS) model that the construction of IIS should start from understanding the users of the services, but it also takes important characteristics from the agency collaboration-based IIS model. Originality/value – The discussion in this paper is basically on the macro level, leaving a lot of interesting future work to design, develop and evaluate IIS systems based on the proposed framework. Specifically, interest lies in developing user models through systematic and comprehensive investigation of corporate information users’ needs, and in examining current library and information science curricula to produce qualified information professionals who can carry out user experience studies and high-level knowledge discovery tasks using various advanced computational technologies.
28

Kyfyak, Vasil, and Oleksandr Kyfyak. "Digitalization of processes of tourist destinations development in Western Ukrainian border regions." Herald of Ternopil National Economic University, no. 2(96) (July 10, 2020): 162–73. http://dx.doi.org/10.35774/visnyk2020.02.162.

Abstract:
Introduction. The processes of formation and development of tourist destinations in the border regions of Western Ukraine testify to the growing influence on tourism development of Internet resources, mobile platforms and applications, and the use of various software and other digital products. The introduction of information and communication technologies and the formation of a system of relations between the tourist and the tourist destination remain urgent issues. In view of this, the article is devoted to the study of the use of digital technologies in the development of tourist destinations. Methods. The methodological basis of the study is general scientific and economic-statistical methods: analysis and synthesis, to explore the benefits of implementing digital technologies in tourism; peer review, based on a set of individual expert opinions, which allowed an objective assessment of the need to introduce digital products in the development of tourist destinations; induction and deduction, to determine the directions of development of tourist destinations; a survey, to identify the sources of information that prompted a tourist trip; etc. Results. On the basis of an analysis of the activity of tourist centers in the western Ukrainian border regions, the advantages of digitization in the development of tourist destinations were determined and further possibilities of digital tourism were revealed. Through expert evaluation, digital products were detailed and the need for their introduction into the development of tourist destinations was confirmed. A survey of respondents was conducted in the information tourist centers of some Western Ukrainian cities, which helped to identify the main sources of information that influenced the desire to make a tourist trip to a tourist destination. The international experience of using digital technologies in the functioning of local tourist destinations in Suceava County (Romania) is considered, which allowed establishing modern approaches to tourism development and the introduction of new concepts, such as destination information systems. In order to fully meet the needs of modern tourists and make efficient use of tourist resources, it is proposed to create «smart» territories where, through digital technology and the use of innovative devices, it is possible not only to make full use of tourism potential and create new opportunities for its growth, but also to make tourists' stay at their destination more comfortable and secure. Prospects. The prospect of further research involves the development and implementation of a set of stimulating measures to intensify the processes of digitization of tourist destinations and a search for tools to support the introduction of digital technologies in the tourism sector.
APA, Harvard, Vancouver, ISO, and other styles
29

Kocher, A., M. Simon, C. Chizzolini, O. Distler, A. A. Dwyer, P. Villiger, U. Walker, and D. Nicca. "SAT0652-HPR CHRONIC DISEASE MANAGEMENT AND HEALTH TECHNOLOGY READINESS OF PATIENTS WITH SYSTEMIC SCLEROSIS IN SWITZERLAND – A CROSS-SECTIONAL STUDY." Annals of the Rheumatic Diseases 79, Suppl 1 (June 2020): 1285.1–1285. http://dx.doi.org/10.1136/annrheumdis-2020-eular.3232.

Full text
Abstract:
Background: People living with systemic sclerosis (SSc) often lack access to coordinated, specialized care and self-management support from qualified healthcare professionals. Such gaps lead to significant unmet health needs and an inability to obtain preventive services. The Chronic Care Model (CCM) has been used to guide disease management across a wide range of chronic conditions. The CCM often uses e-health technologies to address self-management problems, connect patients with clinicians and reduce patient travel requirements. Objectives: To evaluate current SSc care practice patterns and elicit patient health technology readiness in order to define relevant aspects and resources needed to improve SSc chronic disease management. Methods: We employed a cross-sectional survey using the 20-item Patient Assessment of Chronic Illness Care (PACIC) instrument to assess how aspects of SSc care align with key components of the CCM [1]. Six items drawn from the ‘5A’ (ask, advise, agree, assist, and arrange) model of behavioural counselling were included (all 26 items scored on a 5-point scale, 1=never to 5=always). Acceptance of health technology was evaluated by adapting and combining questionnaires from Vanhoof [2] and Halwas [3]. German- and French-speaking SSc patients (>18 years) were recruited from university/cantonal hospitals and the Swiss scleroderma patients’ association. Participants completed anonymous paper/online questionnaires. Data were analysed descriptively. Results: Of 101 SSc patients, most were female (76%), spoke German (78%) and had a median age of 60 years (IQR: 50-68). Median disease duration was 8 years (IQR: 5-15), spanning a range of severity (31% limited SSc, 36% diffuse SSc, 3% overlap syndrome). One-quarter (25%) did not know their disease subset. The mean overall PACIC score was relatively low (2.91±0.95), indicating that care was ‘never’ to ‘generally not’ aligned with the CCM. The lowest mean subscale scores related to Follow-up/Coordination (2.64±1.02), Goal setting (2.68±1.07) and Problem-solving/Contextual Counselling (2.94±1.22). The single items ‘Given a copy of my treatment plan’ (1.99±1.38) and ‘Encouraged to attend programs in the community’ (1.89±1.16) were given the lowest ratings. The ‘5A’ summary score was 2.84±0.97. In terms of technology readiness, 43% completed the survey online. Most participants owned a smartphone (81%), laptop (63%) and/or desktop computer (46%). The overwhelming majority of patients (91%) reported using the Internet in the last year – primarily for communication (e.g. emails, text messages). Participants indicated relatively little experience with e-health applications and with participating in SSc online forums or self-help groups. Conclusion: To improve chronic disease management of SSc patients in Switzerland, current care practices warrant reengineering that takes CCM components into account. Specific unmet needs relate to supporting self-management, helping patients set individualized goals, and coordinating continuous care. Web-based technologies incorporating user-centred design principles may be a reasonable option for improving care. References: [1] Glasgow, RE, et al. Development and validation of the Patient Assessment of Chronic Illness Care (PACIC). Med Care 2005; 43(5): 436-44. [2] Vanhoof, JM, et al. Technology Experience of Solid Organ Transplant Patients and Their Overall Willingness to Use Interactive Health Technology. J Nurs Scholarsh 2018; 50(2): 151-62. [3] Halwas, N, et al. eHealth literacy, Internet and eHealth service usage: a survey among cancer patients and their relatives. J Cancer Res Clin Oncol 2017; 143(11): 2291-99. Disclosure of Interests: Agnes Kocher Grant/research support from: Sandoz to support the development of an eLearning module for patients with rheumatic diseases., Michael Simon: None declared, Carlo Chizzolini Consultant of: Boehringer Ingelheim, Roche, Oliver Distler Grant/research support from: Grants/Research support from Actelion, Bayer, Boehringer Ingelheim, Competitive Drug Development International Ltd. and Mitsubishi Tanabe; he also holds the issued Patent on mir-29 for the treatment of systemic sclerosis (US8247389, EP2331143)., Consultant of: Consultancy fees from Actelion, Acceleron Pharma, AnaMar, Bayer, Baecon Discovery, Blade Therapeutics, Boehringer, CSL Behring, Catenion, ChemomAb, Curzion Pharmaceuticals, Ergonex, Galapagos NV, GSK, Glenmark Pharmaceuticals, Inventiva, Italfarmaco, iQvia, medac, Medscape, Mitsubishi Tanabe Pharma, MSD, Roche, Sanofi and UCB, Speakers bureau: Speaker fees from Actelion, Bayer, Boehringer Ingelheim, Medscape, Pfizer and Roche, Andrew A. Dwyer: None declared, Peter Villiger Consultant of: MSD, Abbvie, Roche, Pfizer, Sanofi, Speakers bureau: Roche, MSD, Pfizer, Ulrich Walker Grant/research support from: Ulrich Walker has received an unrestricted research grant from Abbvie, Consultant of: Ulrich Walker has acted as a consultant for Abbvie, Actelion, Boehringer Ingelheim, Bristol-Myers Squibb, Celgene, MSD, Novartis, Pfizer, Phadia, Roche, Sandoz, Sanofi, and ThermoFisher, Paid instructor for: Abbvie, Novartis, and Roche, Speakers bureau: Abbvie, Actelion, Bristol-Myers Squibb, Celgene, MSD, Novartis, Pfizer, Phadia, Roche, Sandoz, and ThermoFisher, Dunja Nicca: None declared
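Since the PACIC subscale scores reported above are simple item means, the following minimal Python sketch shows how such scores might be computed from raw survey data; the column names and the item-to-subscale mapping are invented for illustration and do not reproduce the instrument's actual 20-item layout.

# Minimal sketch of PACIC-style subscale scoring (illustrative only).
# Column names and the item-to-subscale mapping below are assumptions;
# the real PACIC has a published 20-item, five-subscale structure.
import pandas as pd

# Each row is one respondent; items are scored 1 (never) to 5 (always).
responses = pd.DataFrame({
    "item1": [2, 3, 1], "item2": [4, 2, 3],
    "item3": [1, 2, 2], "item4": [3, 3, 4],
})

subscales = {
    "goal_setting": ["item1", "item2"],              # assumed grouping
    "follow_up_coordination": ["item3", "item4"],    # assumed grouping
}

for name, items in subscales.items():
    per_respondent = responses[items].mean(axis=1)   # subscale score per person
    print(name, round(per_respondent.mean(), 2), "+/-",
          round(per_respondent.std(ddof=1), 2))      # mean +/- SD, as in the abstract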
APA, Harvard, Vancouver, ISO, and other styles
30

Su, Ming-Feng, Kuo-Chih Cheng, Shao-Hsi Chung, and Der-Fa Chen. "Innovation capability configuration and its influence on the relationship between perceived innovation requirement and organizational performance." Journal of Manufacturing Technology Management 29, no. 8 (December 10, 2018): 1316–31. http://dx.doi.org/10.1108/jmtm-03-2018-0097.

Full text
Abstract:
Purpose When the management of an information technology (IT) manufacturing firm perceives a need for innovation due to threats in the external environment, it will be prompted to use organizational resources to support innovation and to improve organizational performance through the implementation of that innovation. The purpose of this paper is to explore whether an IT manufacturing firm’s budget slack, the information quality of its information system (IS), process innovation and product innovation interact to collectively form an innovation capacity, termed the “innovation capability configuration (ICC)”, and whether ICC mediates the relationship between perceived innovation requirement and organizational performance. Design/methodology/approach To answer these questions, a structural equation model was built and a questionnaire survey was conducted to collect data from research and development and production managers of IT manufacturing companies listed on the Taiwan Stock Exchange and Over-The-Counter markets. Findings The results showed that budget slack, IS information quality, process innovation and product innovation are all significantly related to ICC, with high-quality information and a low level of budget slack being the key factors underpinning the innovation capacity. In addition, ICC has a full mediation effect; that is, perceived innovation requirement positively influences ICC, which, in turn, improves organizational performance. Research limitations/implications Because all items in each questionnaire were answered by a single manager, common method variance might exist in this study. In addition, the effective response rate of the questionnaire was not high, so non-response bias might occur. In light of these limitations, several recommendations for future research are proposed. Practical implications This study offers managerial implications for the development of an IT manufacturing firm’s innovation strategy and structure to smooth the implementation of innovation in a harsh environment. Originality/value The study is the first attempt to integrate the four elements that constitute the ICC, a more complete innovation strategy, thus helping to consolidate previously fragmented studies and to clarify some controversial points in the extant innovation research.
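As a rough illustration of how a full-mediation model of this kind can be specified, the sketch below uses the Python package semopy with lavaan-style syntax; the variable names, indicator set and data file are assumptions for illustration, not the authors' actual model or data.

# Hypothetical sketch of the mediation SEM described above (pip install semopy).
# Variable names (PIR, PERF, the four ICC indicators) are illustrative assumptions.
import pandas as pd
import semopy

MODEL_DESC = """
# measurement part: latent ICC formed by the four capability indicators
ICC =~ budget_slack + is_info_quality + process_innovation + product_innovation
# structural part: ICC mediates perceived innovation requirement -> performance
ICC ~ PIR
PERF ~ ICC + PIR
"""

data = pd.read_csv("manager_survey.csv")   # assumed file of questionnaire responses
model = semopy.Model(MODEL_DESC)
model.fit(data)
print(model.inspect())                     # parameter estimates and p-values

Full mediation, as reported in the abstract, would correspond to a significant PIR -> ICC -> PERF path together with a non-significant direct PIR -> PERF coefficient.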
APA, Harvard, Vancouver, ISO, and other styles
31

Zhu, Ruoning, Zhengqi Guo, and Xiaoli Zhang. "Forest 3D Reconstruction and Individual Tree Parameter Extraction Combining Close-Range Photo Enhancement and Feature Matching." Remote Sensing 13, no. 9 (April 22, 2021): 1633. http://dx.doi.org/10.3390/rs13091633.

Full text
Abstract:
An efficient and accurate forest sample plot survey is of great significance for understanding the current status of forest resources at the stand or regional scale and is the basis of scientific forest management. Close-range photogrammetry (CRP) technology can easily and quickly collect highly overlapping image sequences to reconstruct 3D models of forest scenes and extract individual tree parameters automatically; it can therefore greatly improve the efficiency of forest investigation and has great application potential in forestry visualization management. However, it has some issues in practical forestry applications. First, imaging quality is affected by the illumination in the forest, resulting in difficult feature matching and low accuracy of parameter extraction. Second, the efficiency of 3D forest model reconstruction is limited under complex understory vegetation or topographic conditions in the forest. In addition, the density of the point clouds produced by dense matching directly affects the accuracy of individual tree parameter extraction. This research collected sequence images of sample plots of four tree species with smartphones in Gaofeng Forest Farm in Guangxi and Wangyedian Forest Farm in Inner Mongolia to analyze the effects of image enhancement, feature detection and dense point cloud algorithms on the efficiency of 3D forest reconstruction and the accuracy of individual tree parameter extraction, and then proposed a strategy for 3D reconstruction and parameter extraction suited to different forest scenes. First, we compared the image enhancement effects of the median–Gaussian (MG) filtering, single-scale retinex (SSR) and multi-scale retinex (MSR) algorithms. Then, an improved algorithm combining Harris corner detection with speeded-up robust features (SURF) feature detection (Harris+SURF) is proposed, and its feature matching effect is compared with that of the scale invariant feature transform (SIFT) operator. Third, according to the morphological characteristics of the trees in the sequence images, we used an iterative interpolation algorithm for a planar triangulation network based on geometric constraints (GC-based IIPTN) to increase the density of the point clouds and reconstruct the 3D forest model, and then extracted the position and DBH of individual trees. The results show that MSR image enhancement can significantly increase the number of matched point pairs, the improved Harris+SURF method can reduce the reconstruction time of the 3D forest model, and the GC-based IIPTN algorithm can improve the accuracy of individual tree parameter extraction. The extracted positions of individual trees agree with the measured positions within a bias of 0.2 m. The accuracy of the extracted DBH of Eucalyptus grandis, Taxus chinensis, Larix gmelinii and Pinus tabuliformis is 94%, 95%, 96% and 90%, respectively, which demonstrates that the proposed image-enhancement-based 3D model reconstruction method has great potential for tree position and DBH extraction and provides effective support for future forest resource investigation and visualization management.
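For readers unfamiliar with the enhancement and matching steps named above, here is a compact Python/OpenCV sketch of multi-scale retinex (MSR) enhancement followed by Harris-seeded SURF description; the parameter values and file name are illustrative assumptions, and SURF ships only in opencv-contrib builds (it may require non-free support), so this is a sketch of the general approach rather than the paper's implementation.

# Illustrative sketch (not the paper's code): MSR enhancement, then Harris
# corner detection with SURF descriptors computed at the detected corners.
import cv2
import numpy as np

def multi_scale_retinex(img, sigmas=(15, 80, 250)):
    img = img.astype(np.float32) + 1.0            # avoid log(0)
    msr = np.zeros_like(img)
    for s in sigmas:                               # average single-scale retinex
        blur = cv2.GaussianBlur(img, (0, 0), s)
        msr += np.log(img) - np.log(blur)
    msr /= len(sigmas)
    return cv2.normalize(msr, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

gray = cv2.imread("plot_photo.jpg", cv2.IMREAD_GRAYSCALE)   # assumed input image
enhanced = multi_scale_retinex(gray)

# Harris response seeds the keypoints; SURF then describes them for matching.
harris = cv2.cornerHarris(np.float32(enhanced), blockSize=2, ksize=3, k=0.04)
ys, xs = np.where(harris > 0.01 * harris.max())
keypoints = [cv2.KeyPoint(float(x), float(y), 20) for x, y in zip(xs, ys)]

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)     # opencv-contrib only
keypoints, descriptors = surf.compute(enhanced, keypoints)   # descriptors for matching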
APA, Harvard, Vancouver, ISO, and other styles
32

Hang, Nguyen Thi Thu, Erdinc Oksum, Le Huy Minh, and Do Duc Thanh. "An improved space domain algorithm for determining the 3-D structure of the magnetic basement." VIETNAM JOURNAL OF EARTH SCIENCES 41, no. 1 (January 8, 2019): 69–80. http://dx.doi.org/10.15625/0866-7187/41/1/13550.

Full text
Abstract:
The paper presents an improved algorithm based on Bhaskara Rao and Ramesh Babu’s algorithm to invert the magnetic anomalies of three-dimensional basement structures. The magnetic basement is approximated by an ensemble of juxtaposed vertical prisms whose bottom surfaces coincide with the Curie surface, the depth of which is known. A computer program implementing the proposed algorithm was built in the Matlab environment. Test applications show that the proposed method computes with a fast and stable convergence rate, and the results coincide well with the actual model structures. The effectiveness of the method is demonstrated by inverting magnetic anomalies of the southeastern part of the Vietnam continental shelf. The calculated magnetic basement relief of the study area provides useful additional information for studies dealing with the geological structure of the area.
References
Beiki M., 2010. Analytic signals of gravity gradient tensor and their application to estimate source location, Geophysics, 75(6), I59–I74.
Bui C.Q. (chief author), Le T., Tran T.D., Nguyen T.H., Phi T.T., 2007. Map of deep structure of the Earth’s crust, Atlas of the characteristics of natural conditions and environment in Vietnam’s waters and adjacent region. Publisher of Science and Technology, Ha Noi.
Do D.T., Nguyen T.T.H., 2011. Attempt the improvement of inversion of magnetic anomalies of two-dimensional polygonal cross sections to determine the depth of magnetic basement in some data profiles of the middle shelf of Vietnam. Journal of Science and Technology, Vietnam Academy of Science and Technology, 49(2), 125–132.
Do D.T., 2013. Study for application of 3D magnetic and gravity methods to determine the density contribution of basement rock and the depth of magnetic basement on Vietnam’s shelf for oil research and prospecting. Vietnam National University, Hanoi, Project code QG-11-04.
Keating P., Pilkington M., 2000. Euler deconvolution of the analytic signal, 62nd Annual International Meeting, EAGE, Session P0193.
Keating P., Zerbo L., 1996. An improved technique for reduction to the pole at low latitudes, Geophysics, 61, 131–137.
Le H.M., Luu V.H., 2003. Preliminary interpretation of the magnetic anomalies of the Eastern Vietnam sea and adjacent regions. J. Sci. of the Earth, 25(2), 173–181.
Mai T.T., Pham V.T., Dang V.B., Le D.B., Nguyen B., Le V.D., 2011. Characteristics of Pliocene–Quaternary geology and geoengineering in the center and southeast parts of the continental shelf of Vietnam. J. Sci. of the Earth, 33(2), 109–118.
Mushayandebvu M.F., Lesur V., Reid A.B., Fairhead J.D., 2004. Grid Euler deconvolution with constraints for 2D structures, Geophysics, 69, 489–496.
Nguyen N.T., Bui V.N., Nguyen T.T.H., Than D.L., 2014a. Application of the power density spectrum of magnetic anomalies to estimate the structure of the magnetic layer of the Earth’s crust in the Bac Bo gulf. Journal of Marine Science and Technology, 14(4A), 137–148.
Nguyen N.T., Bui V.N., Nguyen T.T.H., 2014b. Determining the depth to the magnetic basement and fault systems in the Tu Chinh–Vung May area by magnetic data interpretation. Journal of Marine Science and Technology, 14(4A), 16–25.
Nguyen T.T.H., Pham T.L., Do D.T., Le H.M., 2018. Improving the algorithm of determining the coordinates of the vertices of the polygon to invert magnetic anomalies of two-dimensional basement structures in the space domain, Journal of Marine Science and Technology (in press).
Parker R.L., 1973. The rapid calculation of potential anomalies, Geophys. J. Roy. Astron. Soc., 31, 447–455.
Pilkington M., Gregotski M.E., Todoeschuck J.P., 1994. Using fractal crustal magnetization models in magnetic interpretation, Geophysical Prospecting, 42, 677–692.
Pilkington M., 2006. Joint inversion of gravity and magnetic data for two-layer models, Geophysics, 71, L35–L42.
Rao D.B., Babu N.R., 1993. A Fortran 77 computer program for three-dimensional inversion of magnetic anomalies resulting from multiple prismatic bodies, Computers & Geosciences, 19(8), 781–801.
Tanaka A., Okubo Y., Matsubayashi O., 1999. Curie point depth based on spectrum analysis of the magnetic anomaly data in East and Southeast Asia, Tectonophysics, 306, 461–470.
Thompson D.T., 1982. EULDPH: A new technique for making computer-assisted depth estimates from magnetic data, Geophysics, 47, 31–37.
Vo T.S., Le H.M., Luu V.H., 2005. Determining the horizontal position and depth of density discontinuities in the Red River Delta by using the vertical derivative and Euler deconvolution of gravity anomaly data, Vietnam. Journal of Geology, Series A, 287(3–4), 39–52.
Werner S., 1955. Interpretation of magnetic anomalies of sheet-like bodies, Sveriges Geologiska Undersokning, Series C, Arsbok, 43, 6.
Xu S.Z., 2006. The integral-iteration method for continuation of potential fields, Chinese Journal of Geophysics (in Chinese), 49(4), 1176–1182.
Zhang C., Huang D.N., Zhang K., Pu Y.T., Yu P., 2016. Magnetic interface forward and inversion method based on Padé approximation, Applied Geophysics, 13(4), 712–720.
CCOP, 1996. Magnetic anomaly map of East Asia, scale 1:4,000,000, Geological Survey of Japan and Committee for Co-ordination of Joint Prospecting for Mineral Resources in Asian Offshore Areas.
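To make the iterative prism-depth inversion concrete, here is a deliberately simplified Python sketch of the general update loop such algorithms perform; the placeholder forward model, damping factor and data values are assumptions for illustration and stand in for the full prism magnetic-field computation of Rao and Babu (1993).

# Toy sketch of iterative inversion for prism top depths (illustrative only).
# forward_model is a placeholder; a real code evaluates the magnetic field of
# each vertical prism (Rao and Babu, 1993) at every station.
import numpy as np

def forward_model(depths):
    # Placeholder physics: anomaly assumed inversely proportional to depth.
    return 100.0 / depths

observed = np.array([12.0, 20.0, 35.0, 18.0])  # observed anomaly (nT), invented
depths = np.full_like(observed, 5.0)           # initial depth to basement (km)
damping = 0.5                                  # step-size factor, assumed

for _ in range(50):
    residual = observed - forward_model(depths)
    # A shallower basement strengthens the anomaly, so a positive residual
    # (computed field too weak) pushes the prism top upward.
    depths -= damping * residual * depths / forward_model(depths)
    if np.max(np.abs(residual)) < 0.1:         # stop when misfit is small (nT)
        break

print(depths)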
APA, Harvard, Vancouver, ISO, and other styles
33

Nesrine, Lenchi, Kebbouche Salima, Khelfaoui Mohamed Lamine, Laddada Belaid, Khemili Souad, Gana Mohamed Lamine, Akmoussi Sihem, and Ferioune Imène. "Phylogenetic characterization and screening of halophilic bacteria from Algerian salt lake for the production of biosurfactant and enzymes." World Journal of Biology and Biotechnology 5, no. 2 (August 15, 2020): 1. http://dx.doi.org/10.33865/wjb.005.02.0294.

Full text
Abstract:
Environments containing significant concentrations of NaCl, such as salt lakes, harbor extremophilic microorganisms of great biotechnological interest. To explore the diversity of Bacteria in Chott Tinsilt (Algeria), an isolation program was performed. Water samples were collected from the saltern during the pre-salt-harvesting phase. This Chott is high in salt (22.47% (w/v)). Seven halophilic Bacteria were selected for further characterization. The isolated strains were able to grow optimally in media with 10–25% (w/v) total salts. Molecular identification of the isolates was performed by sequencing the 16S rRNA gene. It showed that these cultured isolates included members belonging to the Halomonas, Staphylococcus, Salinivibrio, Planococcus and Halobacillus genera, with less than 98% similarity to their closest phylogenetic relatives. The halophilic bacterial isolates were also characterized for the production of biosurfactants and industrially important enzymes. Most isolates produced hydrolases and biosurfactants at high salt concentrations. In fact, this is the first report on bacterial strains (A4 and B4) that were good biosurfactant and coagulase producers at 20% and 25% (w/v) NaCl. In addition, the biosurfactant produced by strain B4 at high salinity (25%) was also stable at high temperature (30-100°C) and high alkalinity (pH 11). Keywords: Salt Lake, Bacteria, biosurfactant, Chott, halophiles, hydrolases, 16S rRNA.
INTRODUCTION
Saline lakes cover approximately 10% of the Earth’s surface area. The microbial populations of many hypersaline environments have already been studied in different geographical regions, such as the Great Salt Lake (USA), the Dead Sea (Israel), Wadi Natrun Lake (Egypt), Lake Magadi (Kenya), Soda Lake (Antarctica) and Big Soda Lake and Mono Lake (California). Hypersaline regions differ from each other in terms of geographical location, salt concentration and chemical composition, which determine the nature of the inhabitant microorganisms (Gupta et al., 2015). Nevertheless, low taxonomic diversity is common to all these saline environments (Oren et al., 1993). Halophiles are found in nearly all major microbial clades, including prokaryotic (Bacteria and Archaea) and eukaryotic forms (DasSarma and Arora, 2001). They are classified as slight halophiles when they grow optimally at 0.2–0.85 M (2–5%) NaCl, as moderate halophiles when they grow at 0.85–3.4 M (5–20%) NaCl, and as extreme halophiles when they grow at 3.4–5.1 M (20–30%) NaCl. Hypersaline environments are inhabited by extremely halophilic and halotolerant microorganisms such as Halobacillus sp., Halobacterium sp., Haloarcula sp., Salinibacter ruber, Haloferax sp. and Bacillus spp. (Solomon and Viswalingam, 2013). There is a tremendous demand for halophilic bacteria due to their biotechnological importance as sources of halophilic enzymes. Enzymes derived from halophiles are endowed with unique structural features and the catalytic power to sustain metabolic and physiological processes under high salt conditions. Some of these enzymes have been reported to be active and stable under more than one extreme condition (Karan and Khare, 2010). Applications are being considered in a range of industries such as food processing, washing, biosynthetic processes and environmental bioremediation. Halophilic proteases are widely used in the detergent and food industries (DasSarma and Arora, 2001). Esterases and lipases have also been useful in laundry detergents for the removal of oil stains and are widely used as biocatalysts because of their ability to produce pure compounds. Likewise, amylases are used industrially in the first step of the production of high-fructose corn syrup (hydrolysis of corn starch). They are also used in the textile industry in the de-sizing process and are added to laundry detergents. Furthermore, for environmental applications, the use of halophiles for the bioremediation and biodegradation of various materials, from industrial effluents to soil contaminants and accidental spills, is being widely explored. In addition to enzymes, halophilic/halotolerant microorganisms living in saline environments offer other potential applications in various fields of biotechnology, such as the production of biosurfactants. Biosurfactants are amphiphilic compounds synthesized by plants and microorganisms. They reduce surface tension and interfacial tension between individual molecules at the surface and interface, respectively (Akbari et al., 2018). Compared to chemical surfactants, biosurfactants are promising alternative molecules due to their low toxicity, high biodegradability, environmental compatibility, mild production conditions, lower critical micelle concentration, higher selectivity, availability of resources and ability to function over wide ranges of pH, temperature and salinity (Rocha et al., 1992). They are used in various industries, including pharmaceuticals, petroleum, food, detergents, cosmetics, paints, paper products and water treatment (Akbari et al., 2018). The search for biosurfactants in extremophiles is particularly promising since these biomolecules can adapt to and remain stable in the harsh environments in which they are to be applied in biotechnology.
OBJECTIVES
Eastern Algeria features numerous ecosystems, including hypersaline environments, which are an important source of salt for food. The microbial diversity in Chott Tinsilt, a shallow salt lake with more than 200 g/L salt concentration and a surface area of 2,154 ha, has never yet been studied. The purpose of this research was to chemically analyse water samples collected from the Chott, isolate novel extremely or moderately halophilic Bacteria, and examine their phenotypic and phylogenetic characteristics with a view to screening for biosurfactants and enzymes of industrial interest.
MATERIALS AND METHODS
Study area: The area is 5 km from the Commune of Souk-Naâmane and 17 km south of the town of Aïn-Melila. It skirts trunk road 3 serving Constantine and Batna and the Constantine–Biskra railway. It is part of the administrative jurisdiction of the Wilaya of Oum El Bouaghi. The Chott belongs to the wetlands of the High Plains of Constantine, with a depth that varies fairly regularly and never exceeds 0.5 m. It extends 4 km in length and 2.5 km in width (figure 1).
Water samples and physico-chemical analysis: In February 2013, water samples were collected from various places in Chott Tinsilt at Global Positioning System (GPS) coordinates 35°53’14”N lat. and 06°28’44”E long. Samples were collected randomly in sterile polythene bags and transported immediately to the laboratory for the isolation of halophilic microorganisms. All samples were treated within 24 h of collection. Temperature, pH and salinity were measured in situ using a multi-parameter probe (Hanna Instruments, Smithfield, RI, USA).
The analytical methods used in this study to measure ion concentrations (Ca2+, Mg2+, Fe2+, Na+, K+, Cl−, HCO3−, SO42−) were based on the 4500-S-2 F standard methods described elsewhere (Association et al., 1920).
Isolation of halophilic bacteria from water samples: The medium (M1) used in the present study contained (g/L): 2.0 g of KCl, 100.0/200.0 g of NaCl, 1.0 g of MgSO4·7H2O, 3.0 g of sodium citrate, 0.36 g of MnCl2, 10.0 g of yeast extract and 15.0 g of agar. The pH was adjusted to 8.0. Different dilutions of the water samples were added to the above medium and incubated at 30°C for 2–7 days or more, depending on growth. The appearance and growth of halophilic bacteria were monitored regularly. Cultures were diluted 10-fold and plated on complete medium agar (g/L): glucose 10.0; peptone 5.0; yeast extract 5.0; KH2PO4 5.0; agar 30.0; and NaCl 100.0/200.0. The resulting colonies were purified by repeated streaking on complete medium agar. The pure cultures were preserved in 20% glycerol vials and stored at −80°C for long-term preservation.
Biochemical characterisation of halophilic bacterial isolates: Bacterial isolates were studied for Gram’s reaction, cell morphology and pigmentation. Enzymatic assays (catalase, oxidase, nitrate reductase and urease) and assays for the fermentation of lactose and mannitol were done as described by Smibert (1994).
Optimization of growth conditions: Temperature, pH and salt concentration were optimized for the growth of the halophilic bacterial isolates. These growth parameters were studied quantitatively by growing the bacterial isolates in M1 medium with shaking at 200 rpm and measuring the cell density at 600 nm after 8 days of incubation. To study the effect of NaCl on growth, bacterial isolates were inoculated on M1 medium supplemented with different concentrations of NaCl: 1%–35% (w/v). The effect of pH on the growth of the halophilic bacterial strains was studied by inoculating isolates on the above-described growth media containing NaCl and adjusted to acidic pH values of 5 and 6 using 1 N HCl and alkaline pH values of 8, 9, 10, 11 and 12 using 5 N NaOH. The effect of temperature was studied by culturing the bacterial isolates in M1 medium at different incubation temperatures (4°C–55°C).
Screening of halophilic bacteria for hydrolytic enzymes: Hydrolase-producing bacteria among the isolates were screened by plate assay on starch, tributyrin, gelatin and DNA agar plates for amylase, lipase, protease and DNAse activities, respectively. The amylolytic activity of the cultures was screened on starch nutrient agar plates containing (g/L): starch 10.0; peptone 5.0; yeast extract 3.0; agar 30.0; NaCl 100.0/250.0. The pH was 7.0. After incubation at 30°C for 7 days, the zone of clearance was determined by flooding the plates with iodine solution. Potential amylase producers were selected based on the ratio of the zone-of-clearance diameter to the colony diameter. The lipase activity of the cultures was screened on tributyrin nutrient agar plates containing 1% (v/v) tributyrin. Isolates that showed clear zones of tributyrin hydrolysis were identified as lipase-producing bacteria. The proteolytic activity of the isolates was similarly screened on gelatin nutrient agar plates containing 10.0 g/L gelatin. Isolates showing zones of gelatin clearance upon treatment with acidic mercuric chloride were selected and designated as protease-producing bacteria. DNAse activity was determined on DNAse test agar (BBL) plates containing 10%–25% (w/v) total salt. After incubation for 7 days, the plates were flooded with 1 N HCl solution. Clear halos around the colonies indicated DNAse activity (Jeffries et al., 1957). The milk-clotting (coagulase) activity of the isolates was also determined following the procedure described by Berridge (1952). Skim milk powder was reconstituted in 10 mM aqueous CaCl2 (pH 6.5) to a final concentration of 0.12 kg/L. Enzyme extracts were added at a rate of 0.1 mL per mL of milk. The coagulation point was determined by manually rotating the test tube periodically, at short time intervals, and checking for visible clot formation.
Screening of halophilic bacteria for biosurfactant production. Oil spread assay: The Petri dish base was filled with 50 mL of distilled water. Onto the water surface, 20 μL of diesel and 10 μL of culture were added, respectively. The culture was introduced at different spots on the diesel coating the water surface. The occurrence of a clear zone indicated a positive result (Morikawa et al., 2000). The diameter of the oil-expelling circles was measured with a slide caliper (accuracy of 0.02 mm).
Surface tension and emulsification index (E24): Isolates were cultivated at 30°C for 7 days in enrichment medium containing 10–25% NaCl and diesel oil as the sole carbon source. The medium was centrifuged (7,000 rpm for 20 min) and the surface tension of the cell-free culture broth was measured with a TS90000 surface tensiometer (Nima, Coventry, England) as a qualitative indicator of biosurfactant production. The culture broth was collected with a Pasteur pipette to remove the non-emulsified hydrocarbons. The emulsifying capacity was evaluated by the emulsification index (E24). The E24 of culture samples was determined by adding 2 mL of diesel oil to the same amount of culture, mixing for 2 min with a vortex, and allowing the mixture to stand for 24 h. The E24 index is defined as the percentage given by the height of the emulsified layer (mm) divided by the total height of the liquid column (mm).
Biosurfactant stability studies: After growth on diesel oil as the sole carbon source, culture supernatants obtained by centrifugation at 6,000 rpm for 15 min were used as the source of crude biosurfactant. Stability was determined by subjecting the culture supernatant to various temperatures (30, 40, 50, 60, 70, 80 and 100°C) for 30 min and then cooling to room temperature. Similarly, the effect of different pH values (2–11) on the activity of the biosurfactant was tested. The activity of the biosurfactant was investigated by measuring the emulsification index (El-Sersy, 2012).
Molecular identification of potential strains. DNA extraction and PCR amplification of 16S rDNA: Total cellular DNA was extracted from the strains and purified as described by Sambrook et al. (1989). DNA was purified using Geneclean® Turbo (Q-BIO gene, Carlsbad, CA, USA) before use as a template in polymerase chain reaction (PCR) amplification. For the 16S rDNA gene sequence, the purified DNA was amplified using a universal primer set: forward primer (27f; 5′-AGA GTT TGA TCM TGG CTC AG) and reverse primer (1492r; 5′-TAC GGY TAC CTT GTT ACG ACT T) (Lane, 1991). Agarose gel electrophoresis confirmed the amplification product as a 1,400-bp DNA fragment.
16S rDNA sequencing and phylogenetic analysis: Amplicons generated using the primer pair 27f–1492r were sequenced using an automatic sequencer system at Macrogen (Seoul, Korea). The sequences were compared with those in the NCBI BLAST GenBank nucleotide sequence databases.
Phylogenetic trees were constructed by the neighbor-joining method using MEGA version 5.05 software (Tamura et al., 2011). Bootstrap resampling analysis with 1,000 replicates was performed to estimate the confidence of the tree topologies.
Nucleotide sequence accession numbers: The nucleotide sequences reported in this work have been deposited in the EMBL Nucleotide Sequence Database. The accession numbers are given in table 5.
Statistics: All experiments were conducted in triplicate. Results were evaluated for statistical significance using ANOVA.
RESULTS
Physico-chemical parameters of the collected water samples: The physicochemical properties of the collected water samples are reported in table 1. At the time of sampling, the temperature was 10.6°C and the pH 7.89. The salinity of the sample, as determined in situ, was 224.70 g/L (22.47% (w/v)). Chemical analysis of the water sample indicated that Na+ and Cl− were the most abundant ions (table 1). SO42− and Mg2+ were present in much smaller amounts compared to the Na+ and Cl− concentrations. Low levels of calcium, potassium and bicarbonate were also detected, often at less than 1 g/L.
Characterization of isolates. Morphological and biochemical characteristics of the halophilic bacterial isolates: Among the 52 strains isolated from the water of Chott Tinsilt, seven distinct bacteria (A1, A2, A3, A4, B1, B4 and B5) were chosen for further characterization (table 2). The colour of the isolates varied from beige, pale yellow and yellowish to orange. The bacterial isolates A1, A2, A4, B1 and B5 were rod-shaped and Gram-negative (except B5), whereas A3 and B4 were cocci and Gram-positive. All strains were oxidase- and catalase-positive except B1. Nitrate reductase and urease activities were observed in all the bacterial isolates except B4. All the bacterial isolates were negative for H2S formation. B5 was the only strain positive for mannitol fermentation (table 2). We isolated halophilic bacteria on growth medium supplemented with NaCl at pH 7 and a temperature of 30°C, and studied the effects of NaCl, temperature and pH on the growth of the bacterial isolates. All the isolates exhibited growth only in the presence of NaCl, indicating that these strains are halophilic. The optimum growth of isolates A3 and B1 was observed in the presence of 10% NaCl, whereas it was 15% NaCl for A1, A2 and B5. A4 and B4 showed optimum growth in the presence of 20% and 25% NaCl, respectively. Strains A4, B4 and B5 can tolerate up to 35% NaCl. Isolate B1 showed growth in medium supplemented with 10% NaCl over a pH range of 7–10. The optimum pH for the growth of B1 was 9, and it showed no detectable growth at or below pH 6 (table 2), which indicates the alkaliphilic nature of the B1 isolate. The bacterial isolates A1, A2 and A4 exhibited growth in the range of pH 6–10, while A3 and B4 did not show any growth at pH greater than 8. The optimum pH for the growth of all strains (except B1) was 7.0 (table 2). These results indicate that A1, A2, A3, A4, B4 and B5 are neutrophilic in nature. All the bacterial isolates exhibited optimal growth at 30°C and no detectable growth at 55°C. Detectable growth of isolates A1, A2 and A4 was also observed at 4°C. However, none of the bacterial strains could grow below 4°C or above 50°C (table 2).
Screening of the halophilic enzymes: To characterize the diversity of halophiles able to produce hydrolytic enzymes among the microorganisms inhabiting the hypersaline habitats of Eastern Algeria (Chott Tinsilt), a screening was performed. As described in Materials and Methods, samples were plated on solid media containing 10%–25% (w/v) total salts and different substrates for the detection of amylase, protease, lipase and DNAse activities. Coagulase activity, however, was determined in liquid medium using milk as substrate (figure 3). The distribution of hydrolytic activity among the isolates is summarized in table 4. Of the seven bacterial isolates, four strains (A1, A2, A4 and B5) showed combined hydrolytic activities; they were positive for gelatinase, lipase and coagulase. Strain A3 showed gelatinase and lipase activities. DNAse activities were detected in isolates A1, A4, B1 and B5. B4 presented lipase and coagulase activity. Surprisingly, no amylase activity was detected in any of the isolates.
Screening for biosurfactant-producing isolates. Oil spread assay: The results showed that all the strains could produce notable (>4 cm diameter) oil-expelling circles (ranging from 4.11 cm to 4.67 cm). The average diameter for strain B5 was 4.67 cm, significantly (P < 0.05) higher than for the other strains.
Surface tension and emulsification index (E24): The assimilation of hydrocarbons as the sole carbon source by the isolated strains led to the production of biosurfactants, indicated by the emulsification index and the lowering of the surface tension of the cell-free supernatant. Based on rapid growth on media containing diesel oil as the sole carbon source, the seven isolates were tested for biosurfactant production and emulsification activity. The values obtained for the surface tension measurements as well as the emulsification index (E24) are shown in table 3. The greatest reduction of surface tension was achieved with the B5 and A3 isolates, with values of 25.3 mN m−1 and 28.1 mN m−1, respectively. The emulsifying capacity evaluated by the E24 emulsification index was highest in the cultures of isolates B4 (78%), B5 (77%) and A3 (76%), as shown in table 3 and figure 2. These emulsions were stable even after 4 months. Bacteria with emulsification indices higher than 50% and/or a reduction in surface tension below 30 mN/m have been defined as potential biosurfactant producers. Based on the surface tension and E24 index results, isolates B5, B4, A3 and A4 are the best candidates for biosurfactant production. It is important to note that strains B4 and A4 produce biosurfactants in media containing 25% and 20% (w/v) NaCl, respectively.
Stability of biosurfactant activities: The applicability of biosurfactants in several biotechnological fields depends on their stability under different environmental conditions (temperature, pH and NaCl). For this study, strain B4 appeared very interesting (it can produce biosurfactant at 25% NaCl) and was chosen for further analysis of biosurfactant stability. The effects of temperature and pH on biosurfactant production by strain B4 are shown in figure 4.
The biosurfactant produced by this strain was shown to be thermostable, giving an E24 index value greater than 78% (figure 4A). Heating the biosurfactant to 100°C caused no significant effect on its performance. Likewise, the surface activity of the crude biosurfactant supernatant remained relatively stable to pH changes between pH 6 and 11. At pH 11, the E24 value showed almost 76% activity, whereas below pH 6 the activity decreased by up to 40% (figure 4A). The decrease in emulsification activity as the pH moves from the basic to the acidic region may be due to partial precipitation of the biosurfactant. This result indicated that the biosurfactant produced by strain B4 shows higher stability under alkaline than under acidic conditions.
Molecular identification and phylogeny of potential isolates: To identify the halophilic bacterial isolates, the 16S rDNA gene was amplified using gene-specific primers. A PCR product of ≈1.3 kb was detected in all seven isolates. The 16S rDNA amplicons of each bacterial isolate were sequenced on both strands using the 27F and 1492R primers. Complete nucleotide sequences of 1336, 1374, 1377, 1313, 1305, 1308 and 1273 bp were obtained from the A1, A2, A3, A4, B1, B4 and B5 isolates, respectively, and subjected to BLAST analysis. The 16S rDNA sequence analysis showed that the isolated strains belong to the genera Halomonas, Staphylococcus, Salinivibrio, Planococcus and Halobacillus, as shown in table 5. The halophilic isolates A2 and A4 showed 97% similarity with the Halomonas variabilis strain GSP3 (accession no. AY505527) and the Halomonas sp. M59 (accession no. AM229319), respectively. A1 showed 96% similarity with the Halomonas venusta strain GSP24 (accession no. AY553074). B1 and B4, for their part, showed 96% similarity with the Salinivibrio costicola subsp. alcaliphilus strain 18AG DSM4743 (accession no. NR_042255) and Planococcus citreus (accession no. JX122551), respectively. The bacterial isolate B5 showed 98% sequence similarity with Halobacillus trueperi (accession no. HG931926), while A3 showed only 95% similarity with Staphylococcus arlettae (accession no. KR047785). The 16S rDNA nucleotide sequences of all seven halophilic bacterial strains have been submitted to the NCBI GenBank database under the accession numbers presented in table 5. The phylogenetic associations of the isolates are shown in figure 5.
DISCUSSION
The physicochemical properties of the collected water samples indicated that this water was relatively neutral (pH 7.89), similar to the Dead Sea and the Great Salt Lake (USA) and in contrast to more basic lakes such as Lake Wadi Natrun (Egypt) (pH 11) and El Golea Salt Lake (Algeria) (pH 9). The salinity of the sample was 224.70 g/L (22.47% (w/v)). This range of salinity (20–30%) for Chott Tinsilt is comparable to a number of well-characterized hypersaline ecosystems, including both natural and man-made habitats, such as the Great Salt Lake (USA) and the solar salterns of Puerto Rico. Thus, Chott Tinsilt is a hypersaline environment, i.e. an environment with a salt concentration well above that of seawater. Chemical analysis of the water sample indicated that Na+ and Cl− were the most abundant ions, as in most hypersaline ecosystems (with some exceptions, such as the Dead Sea). These chemical water characteristics are consistent with previously reported data on other hypersaline ecosystems (DasSarma and Arora, 2001; Oren, 2002; Hacěne et al., 2004). Among the 52 strains isolated from this Chott, seven distinct bacteria (A1, A2, A3, A4, B1, B4 and B5) were chosen for phenotypic, genotypic and phylogenetic characterization. The 16S rDNA sequence analysis showed that the isolated strains belong to the genera Halomonas, Staphylococcus, Salinivibrio, Planococcus and Halobacillus. The genera obtained in the present study commonly occur in various saline habitats across the globe. Staphylococci have the ability to grow in a wide range of salt concentrations (Graham and Wilkinson, 1992; Morikawa et al., 2009; Roohi et al., 2014). For example, in Pakistan, Staphylococcus strains were isolated from various salt samples during the study conducted by Roohi et al. (2014), and these results agreed with previous reports. Halomonas, halophilic and/or halotolerant Gram-negative bacteria, are typically found in saline environments (Kim et al., 2013). The presence of Planococcus and Halobacillus has been reported in studies of hypersaline lakes, such as La Sal del Rey (USA) (Phillips et al., 2012) and the Great Salt Lake (Spring et al., 1996), respectively. Salinivibrio costicola has served as a representative model for studies of osmoregulatory and other physiological mechanisms of moderately halophilic bacteria (Oren, 2006). However, it is interesting to note that all strains shared less than 98.7% identity (the usual species cut-off proposed by Yarza et al. (2014)) with their closest phylogenetic relatives, suggesting that they could be considered new species. Phenotypic, genetic and phylogenetic analyses have been suggested for the complete identification of these strains. These bacterial strains were tested for the production of industrially important enzymes (amylase, protease, lipase, DNAse and coagulase). These isolates are good candidates as sources of novel enzymes with biotechnological potential, as they can be used in different industrial processes at high salt concentrations (up to 25% NaCl for B4). Prominent amylase, lipase, protease and DNAse activities have been reported from different hypersaline environments across the globe; e.g., Spain (Sánchez‐Porro et al., 2003), Iran (Rohban et al., 2009), Tunisia (Baati et al., 2010) and India (Gupta et al., 2016). However, to the best of our knowledge, coagulase activity has never before been detected in extremely halophilic bacteria. Isolation and characterization of the crude enzymes (especially the coagulase) to investigate their properties and stability are in progress. The finding of novel enzymes with optimal activities over various ranges of salt concentrations is of great importance. Besides being intrinsically stable and active at high salt concentrations, halophilic and halotolerant enzymes offer great opportunities in biotechnological applications, such as environmental bioremediation (marine, oilfield) and food processing. The bacterial isolates were also characterized for the production of biosurfactants by the oil-spread assay, measurement of surface tension and the emulsification index (E24). There are few reports on biosurfactant producers in hypersaline environments, and in recent years there has been growing interest in halophilic bacteria as sources of biomolecules (Donio et al., 2013; Sarafin et al., 2014). Halophiles, which have a unique lipid composition, may have an important role to play as surface-active agents. The archaeal ether-linked phytanyl membrane lipid of the extremely halophilic bacteria has been shown to have surfactant properties (Post and Collins, 1982).
Yakimov et al. (1995) reported the production of a biosurfactant by the halotolerant Bacillus licheniformis strain BAS 50, which was able to produce a lipopeptide surfactant when cultured at salinities up to 13% NaCl. From solar salt, Halomonas sp. BS4 and Kocuria marina BS-15 were found to be able to produce biosurfactants when cultured at salinities of 8% and 10% NaCl, respectively (Donio et al., 2013; Sarafin et al., 2014). In the present work, strains B4 and A4 produced biosurfactants in media containing 25% and 20% NaCl, respectively. To our knowledge, this is the first report on biosurfactant production by bacteria at such salt concentrations. Biosurfactants have a wide variety of industrial and environmental applications (Akbari et al., 2018), but their applicability depends on their stability under different environmental conditions. Strain B4, which can produce biosurfactant at 25% NaCl, showed good stability at alkaline pH and over a temperature range of 30°C–100°C. Because of the extensive use of biosurfactants in detergent manufacture, alkaline-stable biosurfactants are actively sought (Elazzazy et al., 2015). On the other hand, an interesting finding was the thermostability of the produced biosurfactant even after heat treatment (100°C for 30 min), which suggests the use of this biosurfactant in industries where heating is of paramount importance (Khopade et al., 2012). To date, increasing attention has been focused on biosurfactant-producing bacteria under extreme conditions for industrial and commercial usefulness. In fact, the biosurfactant produced by strain B4 has promising applications in the pharmaceutical, cosmetics and food industries and for bioremediation in marine environments and microbial enhanced oil recovery (MEOR), where salinity, temperature and pH are high.
CONCLUSION
This is the first study of the culturable halophilic bacterial community inhabiting Chott Tinsilt in Eastern Algeria. Different genera of halotolerant bacteria with different phylogenetic characteristics have been isolated from this Chott. Culturing bacteria and analysing them molecularly provides an opportunity to obtain a wide range of cultured microorganisms from extreme habitats such as hypersaline environments. Enzymes produced by halophilic bacteria show interesting properties, such as their ability to remain functional in extreme conditions, including high temperatures, wide ranges of pH, and high salt concentrations. These enzymes have great economic potential in industrial, agricultural, chemical, pharmaceutical and biotechnological applications. Thus, the halophiles isolated from Chott Tinsilt offer important potential for application in microbial and enzyme biotechnology. In addition, the halobacterial biosurfactant producers isolated from this Chott may help to develop more valuable eco-friendly products for the pharmaceutical and food industries and will be useful for bioremediation in marine environments and the petroleum industry.
ACKNOWLEDGMENTS
Our thanks to Professor Abdelhamid Zoubir for proofreading the English composition of the present paper.
CONFLICT OF INTEREST
The authors declare that they have no conflict of interest.
References
Akbari, S., N. H. Abdurahman, R. M. Yunus, F. Fayaz and O. R. Alara, 2018. Biosurfactants—a new frontier for social and environmental safety: A mini review. Biotechnology research innovation, 2(1): 81-90.
Association, A. P. H., A. W. W. Association, W. P. C. Federation and W. E. Federation, 1920. Standard methods for the examination of water and wastewater. American Public Health Association.
Baati, H., R. Amdouni, N. Gharsallah, A. Sghir and E. Ammar, 2010. Isolation and characterization of moderately halophilic bacteria from Tunisian solar saltern. Current microbiology, 60(3): 157-161.
Berridge, N., 1952. Some observations on the determination of the activity of rennet. Analyst, 77(911): 57b-62.
DasSarma, S. and P. Arora, 2001. Halophiles. Encyclopedia of life sciences. Nature publishing group: 1-9.
Donio, M. B. S., F. A. Ronica, V. T. Viji, S. Velmurugan, J. S. C. A. Jenifer, M. Michaelbabu, P. Dhar and T. Citarasu, 2013. Halomonas sp. BS4, a biosurfactant producing halophilic bacterium isolated from solar salt works in India and their biomedical importance. SpringerPlus, 2(1): 149.
El-Sersy, N. A., 2012. Plackett-Burman design to optimize biosurfactant production by marine Bacillus subtilis N10. Roman biotechnol lett, 17(2): 7049-7064.
Elazzazy, A. M., T. Abdelmoneim and O. Almaghrabi, 2015. Isolation and characterization of biosurfactant production under extreme environmental conditions by alkali-halo-thermophilic bacteria from Saudi Arabia. Saudi journal of biological sciences, 22(4): 466-475.
Graham, J. E. and B. Wilkinson, 1992. Staphylococcus aureus osmoregulation: Roles for choline, glycine betaine, proline, and taurine. Journal of bacteriology, 174(8): 2711-2716.
Gupta, S., P. Sharma, K. Dev and A. Sourirajan, 2016. Halophilic bacteria of Lunsu produce an array of industrially important enzymes with salt tolerant activity. Biochemistry research international, 1: 1-10.
Gupta, S., P. Sharma, K. Dev, M. Srivastava and A. Sourirajan, 2015. A diverse group of halophilic bacteria exist in Lunsu, a natural salt water body of Himachal Pradesh, India. SpringerPlus, 4(1): 274.
Hacěne, H., F. Rafa, N. Chebhouni, S. Boutaiba, T. Bhatnagar, J. C. Baratti and B. Ollivier, 2004. Biodiversity of prokaryotic microflora in El Golea salt lake, Algerian Sahara. Journal of arid environments, 58(3): 273-284.
Jeffries, C. D., D. F. Holtman and D. G. Guse, 1957. Rapid method for determining the activity of microorganisms on nucleic acids. Journal of bacteriology, 73(4): 590.
Karan, R. and S. Khare, 2010. Purification and characterization of a solvent-stable protease from Geomicrobium sp. EMB2. Environmental technology, 31(10): 1061-1072.
Khopade, A., R. Biao, X. Liu, K. Mahadik, L. Zhang and C. Kokare, 2012. Production and stability studies of the biosurfactant isolated from marine Nocardiopsis sp. B4. Desalination, 3: 198-204.
Kim, K. K., J.-S. Lee and D. A. Stevens, 2013. Microbiology and epidemiology of Halomonas species. Future microbiology, 8(12): 1559-1573.
Lane, D., 1991. 16S/23S rRNA sequencing. In: Nucleic acid techniques in bacterial systematics. Stackebrandt, E., and Goodfellow, M. (eds). Chichester, UK: John Wiley & Sons.
Morikawa, K., R. L. Ohniwa, T. Ohta, Y. Tanaka, K. Takeyasu and T. Msadek, 2009. Adaptation beyond the stress response: Cell structure dynamics and population heterogeneity in Staphylococcus aureus. Microbes environments, 25: 75-82.
Morikawa, M., Y. Hirata and T. Imanaka, 2000. A study on the structure–function relationship of lipopeptide biosurfactants. Biochimica et biophysica acta, 1488(3): 211-218.
Oren, A., 2002. Diversity of halophilic microorganisms: Environments, phylogeny, physiology, and applications. Journal of industrial microbiology biotechnology, 28(1): 56-63.
Oren, A., 2006. Halophilic microorganisms and their environments. Springer science & business media.
Oren, A., R. Vreeland and L. Hochstein, 1993. Ecology of extremely halophilic microorganisms. The biology of halophilic bacteria, 2(1): 1-8.
Phillips, K., F. Zaidan, O. R. Elizondo and K. L. Lowe, 2012. Phenotypic characterization and 16S rDNA identification of culturable non-obligate halophilic bacterial communities from a hypersaline lake, La Sal del Rey, in extreme south Texas (USA). Aquatic biosystems, 8(1): 1-5.
Post, F. and N. Collins, 1982. A preliminary investigation of the membrane lipid of Halobacterium halobium as a food additive. Journal of food biochemistry, 6(1): 25-38.
Rocha, C., F. San-Blas, G. San-Blas and L. Vierma, 1992. Biosurfactant production by two isolates of Pseudomonas aeruginosa. World journal of microbiology biotechnology, 8(2): 125-128.
Rohban, R., M. A. Amoozegar and A. Ventosa, 2009. Screening and isolation of halophilic bacteria producing extracellular hydrolases from Howz Soltan lake, Iran. Journal of industrial microbiology biotechnology, 36(3): 333-340.
Roohi, A., I. Ahmed, N. Khalid, M. Iqbal and M. Jamil, 2014. Isolation and phylogenetic identification of halotolerant/halophilic bacteria from the salt mines of Karak, Pakistan. International journal of agriculture and biology, 16: 564-570.
Sambrook, J., E. F. Fritsch and T. Maniatis, 1989. Molecular cloning: A laboratory manual, 2nd edn. Cold Spring Harbor Laboratory, Cold Spring Harbor, New York.
Sánchez-Porro, C., S. Martin, E. Mellado and A. Ventosa, 2003. Diversity of moderately halophilic bacteria producing extracellular hydrolytic enzymes. Journal of applied microbiology, 94(2): 295-300.
Sarafin, Y., M. B. S. Donio, S. Velmurugan, M. Michaelbabu and T. Citarasu, 2014. Kocuria marina BS-15, a biosurfactant producing halophilic bacteria isolated from solar salt works in India. Saudi journal of biological sciences, 21(6): 511-519.
Smibert, R., 1994. Phenotypic characterization. In: Methods for general and molecular bacteriology. American society for microbiology: 611-651.
Solomon, E. and K. Viswalingam, 2013. Isolation, characterization of halotolerant bacteria and its biotechnological potentials. International journal scientific research paper publication sites, 4: 1-7.
Spring, S., W. Ludwig, M. Marquez, A. Ventosa and K.-H. Schleifer, 1996. Halobacillus gen. nov., with descriptions of Halobacillus litoralis sp. nov. and Halobacillus trueperi sp. nov., and transfer of Sporosarcina halophila to Halobacillus halophilus comb. nov. International journal of systematic evolutionary microbiology, 46(2): 492-496.
Tamura, K., D. Peterson, N. Peterson, G. Stecher, M. Nei and S. Kumar, 2011. MEGA5: Molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods. Molecular biology evolution, 28(10): 2731-2739.
Yakimov, M. M., K. N. Timmis, V. Wray and H. L. Fredrickson, 1995. Characterization of a new lipopeptide surfactant produced by thermotolerant and halotolerant subsurface Bacillus licheniformis BAS50. Applied and environmental microbiology, 61(5): 1706-1713.
Yarza, P., P. Yilmaz, E. Pruesse, F. O. Glöckner, W. Ludwig, K.-H. Schleifer, W. B. Whitman, J. Euzéby, R. Amann and R. Rosselló-Móra, 2014. Uniting the classification of cultured and uncultured bacteria and archaea using 16S rRNA gene sequences. Nature reviews microbiology, 12(9): 635-645.
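As a small worked example of the E24 emulsification index defined in the methods above, the following Python sketch applies the stated formula; the measurement values are invented.

# E24 = height of the emulsified layer / total height of the liquid column x 100,
# read after the 24 h standing period described in the methods.
def emulsification_index(emulsion_height_mm: float, column_height_mm: float) -> float:
    return 100.0 * emulsion_height_mm / column_height_mm

# Invented example: a 39 mm emulsified layer in a 50 mm column gives E24 = 78%,
# matching the order of magnitude reported for strain B4.
print(f"E24 = {emulsification_index(39.0, 50.0):.0f}%")

Similarly, the neighbour-joining tree construction performed here in MEGA can be reproduced in Biopython, as the hedged sketch below shows; the alignment file name is an assumption.

# Hypothetical sketch of neighbour-joining tree construction with Biopython,
# as an alternative to the MEGA 5.05 workflow used in the paper.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("16s_alignment.aln", "clustal")  # assumed input file
distance_matrix = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)      # neighbour-joining
Phylo.draw_ascii(tree)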
APA, Harvard, Vancouver, ISO, and other styles
34

Brassil, Ellen, Bridget Gunn, Anant M. Shenoy, and Rebecca Blanchard. "Unanswered clinical questions: a survey of specialists and primary care providers." Journal of the Medical Library Association 105, no. 1 (January 17, 2017). http://dx.doi.org/10.5195/jmla.2017.101.

Full text
Abstract:
Objective: With the myriad of cases presented to clinicians every day at the authors’ integrated academic health system, clinical questions are bound to arise. Clinicians need to recognize these knowledge gaps and act on them. However, for many reasons, clinicians might not seek answers to these questions. Our goal was to investigate the rationale and process behind these unanswered clinical questions. Subsequently, we explored the use of biomedical information resources among specialists and primary care providers and identified ways to promote more informed clinical decision making. Methods: We conducted a survey to assess how practitioners identify and respond to information gaps, their background knowledge of search tools and strategies, and their usage of and comfort level with technology. Results: Most of the 292 respondents encountered clinical questions at least a few times per week. While the vast majority often or always pursued answers, time was the biggest barrier for not following through on questions. Most respondents did not have any formal training in searching databases, were unaware of many digital resources, and indicated a need for resources and services that could be provided at the point of care. Conclusions: While the reasons for unanswered clinical questions varied, thoughtful review of the responses suggested that a combination of educational strategies, embedded librarian services, and technology applications could help providers pursue answers to their clinical questions, enhance patient safety, and contribute to patient-based, self-directed learning.
APA, Harvard, Vancouver, ISO, and other styles
35

Mitropoulos, Sarandis, Christos Mitsis, Petros Valacheas, and Christos Douligeris. "An online emergency medical management information system using mobile computing." Applied Computing and Informatics ahead-of-print, ahead-of-print (June 15, 2021). http://dx.doi.org/10.1108/aci-03-2021-0069.

Full text
Abstract:
Purpose: The purpose of this paper is to investigate the way technology affects the provision of prehospital emergency care, upgrading the quality of services offered and significantly reducing the risk of premature death of patients. Design/methodology/approach: The paper presents the development of the eEKAB, a pilot emergency medical information system that simulates the main services offered by the Greek National Instant Aid Centre (EKAB). The eEKAB was developed on an agile system methodology. From a technical perspective, the features and the technology were mainly chosen to provide reliable and user-friendly interfaces that will attract many users. eEKAB is based on three important pillars for offering health care to patients: "On-time Incident Reporting", "On-time Arrival at the Incident" and "Transfer to the Health Center". According to the literature review, very few emergency medical services (EMS) systems combine all these features. Findings: It reduces the total time of the EMS procedures and allows for easier management of EMS by providing a better allocation of human resources and a better geographical distribution of ambulances. The evaluation displayed that it is a very helpful application for ambulance drivers, as it reduces the ambulance response time to arrive at the patient's location and contributes significantly to the general performance of the prehospital medical care system. The survey also verified the importance of implementing eEKAB on a larger scale beyond the pilot usage. It is worth mentioning that the younger ambulance drivers had a more positive view of the application's purpose. Research limitations/implications: The paper clearly identifies implications for further research. Regarding interoperability, the mobile app cooperates with the Operational Center of EKAB, while further collaboration could be achieved with other operational ambulance handling centers, mainly in the private sector. The system can evolve to include better communications among the EKAB departments. In particular, the ambulance crew as well as the doctors should be informed of more incident features, such as the emergency signal (so that they know whether to turn on the siren), the patient's name, etc. The authors are currently working on implementing features to provide effective medical health services to the patient in the ambulance. Practical implications: eEKAB will have very significant implications if deployed, such as a reduction of the total time of EMS procedures with a corresponding reduction of the operating costs of an accident management system and an ambulance fleet handling system, while informing the doctors/clinics in time. It will provide a better distribution of ambulances as well as of total human resources. It will greatly assist ambulance drivers, while reducing the ambulance response time to reach the patient's location. In other words, the whole prehospital care system will perform better. Social implications: Providing emergency care before the hospital is of great importance for upgrading the quality of health services provided at the accident site, thus significantly reducing the risk of premature death of patients. This in itself has a significant social implication. Originality/value: The paper demonstrates a solid understanding of the field of EMS systems and the corresponding medical services offered.
It proposes the development of an effective, feasible and innovative EMS information system that will improve the existing emergency health care system in Greece (EKAB). An in-depth literature review and presentation of the adopted new technologies and the respective architecture take place. An evaluation and statistical validation were conducted to demonstrate the high applicability of eEKAB in real-life operation.
APA, Harvard, Vancouver, ISO, and other styles
36

"DDOS Attack Detection Strategies in Cloud A Comparative Stud." VFAST Transactions on Software Engineering, November 1, 2017, 35–42. http://dx.doi.org/10.21015/vtse.v12i3.502.

Full text
Abstract:
Cloud is known as a highly available platform that has become most popular among businesses for all information technology needs. Being a widely used platform, it is also a hot target for cyber-attacks. Distributed Denial of Service (DDoS) is a great threat to a cloud, in which cloud bandwidth, resources and applications are attacked to cause service unavailability. In a DDoS attack, multiple botnets attack a victim using spoofed IPs, flooding a server with a huge number of requests. Since the discovery of such attacks in 1980, numerous methods have been proposed for the detection and prevention of network anomalies. This study provides a background on DDoS attack detection methods from the past decade and a survey of some of the latest proposed strategies to detect DDoS attacks in the cloud; the methods are further compared for their detection accuracy.
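As a concrete illustration of the anomaly-based strategies such surveys compare, the sketch below flags traffic windows whose destination-IP entropy collapses, a classic signal that requests are concentrating on a single victim. This is a generic textbook heuristic, not a method from the surveyed papers; the window contents, threshold and IP addresses are all illustrative assumptions.

```python
import math
from collections import Counter

def shannon_entropy(items):
    """Shannon entropy (bits) of a sequence of discrete items."""
    counts = Counter(items)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def detect_ddos(windows, threshold=1.0):
    """Flag traffic windows whose destination-IP entropy falls below a threshold.

    Traffic concentrating on a single victim lowers destination entropy,
    the signal exploited by entropy-based DDoS detectors.
    """
    alerts = []
    for i, dst_ips in enumerate(windows):
        h = shannon_entropy(dst_ips)
        if h < threshold:
            alerts.append((i, h))
    return alerts

# Toy usage: window 0 is mixed traffic, window 1 is an attack on 10.0.0.5.
normal = ["10.0.0.%d" % (i % 8) for i in range(200)]
attack = ["10.0.0.5"] * 190 + ["10.0.0.%d" % i for i in range(10)]
print(detect_ddos([normal, attack]))  # expect only window 1 to be flagged
```

In practice the threshold would be calibrated on baseline traffic rather than fixed by hand.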
APA, Harvard, Vancouver, ISO, and other styles
37

"Genetic Load Balancing Algorithms in Cloud Environment." International Journal of Innovative Technology and Exploring Engineering 8, no. 9S4 (October 1, 2019): 98–103. http://dx.doi.org/10.35940/ijitee.i1114.0789s419.

Full text
Abstract:
Cloud services are broadly used in business, logistics and computerized applications. Cloud computing is not a simple technology: it involves many issues, such as virtual machine management, scheduling of virtual machines, data security, provisioning of resources (hardware and software) and load balancing. The issue of load balancing arises in abundant applications, but it plays an essential role in the cloud environment. Load balancing divides a task into subtasks that can be performed in parallel and maps each of these subtasks to a computational resource such as a computer or a processor; the total processing time is thereby decreased and processor usage improved. To solve the issue of load balancing, various algorithms have been proposed in the recent past, among them genetic algorithms. This paper presents an insightful survey of genetic load balancing algorithms used in the cloud environment, taking different factors into consideration; we further analyze and correlate all these factors in order to make a comparative assessment based on different parameters, so as to identify the proficiency of the different genetic algorithms.
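To make the approach concrete, here is a minimal sketch of a genetic load balancer: each chromosome assigns every task to a virtual machine, and fitness is the makespan of the busiest machine. It is an illustrative toy under assumed task lengths and VM speeds, not any specific algorithm from the surveyed literature.

```python
import random

def makespan(assign, task_len, vm_speed):
    """Completion time of the busiest VM under a task->VM assignment."""
    load = [0.0] * len(vm_speed)
    for t, vm in enumerate(assign):
        load[vm] += task_len[t] / vm_speed[vm]
    return max(load)

def ga_balance(task_len, vm_speed, pop=30, gens=100, mut=0.1):
    """Minimal genetic algorithm: a chromosome maps each task to a VM index."""
    n, m = len(task_len), len(vm_speed)
    popu = [[random.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda a: makespan(a, task_len, vm_speed))
        elite = popu[: pop // 2]                 # selection: keep the fitter half
        children = []
        while len(elite) + len(children) < pop:
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, n)         # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n):                   # per-gene mutation
                if random.random() < mut:
                    child[i] = random.randrange(m)
            children.append(child)
        popu = elite + children
    best = min(popu, key=lambda a: makespan(a, task_len, vm_speed))
    return best, makespan(best, task_len, vm_speed)

tasks = [8, 3, 7, 2, 5, 9, 4]   # task lengths (e.g., million instructions), assumed
vms = [1.0, 2.0, 1.5]           # relative VM speeds, assumed
print(ga_balance(tasks, vms))
```

The surveyed algorithms differ mainly in the fitness function and the selection, crossover and mutation operators layered on this same skeleton.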
APA, Harvard, Vancouver, ISO, and other styles
38

Aghaebrahimian, Ahmad, Andy Stauder, and Michael Ustaszewski. "Automatically extracted parallel corpora enriched with highly useful metadata? A Wikipedia case study combining machine learning and social technology." Digital Scholarship in the Humanities, March 3, 2020. http://dx.doi.org/10.1093/llc/fqaa002.

Full text
Abstract:
The extraction of large amounts of multilingual parallel text from web resources is a widely used technique in natural language processing. However, automatically collected parallel corpora usually lack precise metadata, which are crucial to accurate data analysis and interpretation. The combination of automated extraction procedures and manual metadata enrichment may help address this issue. Wikipedia is a promising candidate for the exploration of the potential of said combination of methods because it is a rich source of translations in a large number of language pairs and because its open and collaborative nature makes it possible to identify and contact the users who produce translations. This article tests to what extent translated texts automatically extracted from Wikipedia by means of neural networks can be enriched with pertinent metadata through a self-submission-based user survey. Special emphasis is placed on data usefulness, defined in terms of a catalogue of previously established assessment criteria, most prominently metadata quality. The results suggest that from a quantitative perspective, the proposed methodology is capable of capturing metadata otherwise not available. At the same time, the crowd-based collection of data and metadata may face important technical and social limitations.
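The extraction step the article describes can be approximated with off-the-shelf tools. The sketch below, assuming the sentence-transformers package and its multilingual LaBSE model are available, pairs sentences across languages by embedding cosine similarity; the threshold and example sentences are illustrative, and the authors' actual neural pipeline may differ.

```python
# pip install sentence-transformers  (assumed available)
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")  # multilingual encoder

def mine_pairs(src_sents, tgt_sents, threshold=0.8):
    """Pair source/target sentences whose embeddings are most similar."""
    src_emb = model.encode(src_sents, convert_to_tensor=True, normalize_embeddings=True)
    tgt_emb = model.encode(tgt_sents, convert_to_tensor=True, normalize_embeddings=True)
    sims = util.cos_sim(src_emb, tgt_emb)        # |src| x |tgt| similarity matrix
    pairs = []
    for i in range(len(src_sents)):
        j = int(sims[i].argmax())                # best target for each source
        if float(sims[i][j]) >= threshold:
            pairs.append((src_sents[i], tgt_sents[j], float(sims[i][j])))
    return pairs

english = ["The cat sat on the mat.", "Wikipedia is a free encyclopedia."]
german = ["Wikipedia ist eine freie Enzyklopädie.", "Es regnet heute."]
print(mine_pairs(english, german))  # only the true translation pair should pass
```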
APA, Harvard, Vancouver, ISO, and other styles
39

Bao, Haixu, Chunhsien Wang, and Ronggen Tao. "Examining the effects of governmental networking with environmental turbulence on the geographic searching of business model innovation generations." Journal of Knowledge Management ahead-of-print, ahead-of-print (November 11, 2020). http://dx.doi.org/10.1108/jkm-06-2020-0484.

Full text
Abstract:
Purpose: This study aims to explore the relationship between geographic search and business model innovation, proposing a contingent framework that focuses on how governmental networking and environmental turbulence interdependently moderate the relationship between geographic search and business model innovation. Design/methodology/approach: A large-scale questionnaire survey was carried out among the firms in three high-tech parks of the Pearl River Delta, with a total of 287 firms as empirical samples. Hypotheses are tested using ordinary least squares analyses in hierarchical multiple regression to find out how geographic search can drive the generation of business model innovations; a sketch of this kind of analysis follows the entry. Findings: The empirical results showed that the more frequent geographic search is, the more favorable it is for firms to generate innovative business models, and firms may be more effective in geographic search and business model innovation with better governmental networking. However, the above relationship may be weakened if the environmental turbulence of emerging markets is further considered. It is argued that firms must take into account both the positive effects of governmental networking and the negative effects of environmental turbulence when conducting a geographic search for external knowledge resources to generate innovative business models. The results show how and why governmental networking can be a key catalyst for firms to generate innovative business models. Research limitations/implications: This study contributes to the business model innovation literature by documenting large-scale survey evidence that confirms the practicality of geographic search in the generation of business model innovations. The findings advance previous studies of business model innovation by identifying the moderating roles of governmental networking and environmental turbulence, which predict business model innovation behaviors in the emerging market. Practical implications: The results indicate that geographic search can be easily operationalized for external resource acquisition by managers generating business model innovation. This has applications for external resource acquisition on the basis of business model innovation in the emerging China market. In addition, to facilitate the generation of business model innovations, the focus should be on critical contingency factors: to promote the continued use of external resources, the focus should be on enhancing benefits such as governmental networking. Originality/value: The findings extend existing theory in three ways. First, the results show that geographic search is an important driver of business model innovation generation in an emerging market context. Second, this study is the first to take an organizational learning and open innovation perspective to examine geographic search as a boundary-spanning search for external resources in the generation of business model innovations. Third, this study also explores the moderating roles of governmental networking and environmental turbulence in strengthening or impairing geographic search and the generation of business model innovations.
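As referenced above, the following sketch reproduces the logic of hierarchical (blockwise) OLS moderation testing on simulated stand-in data: the interaction term is entered in a second block and the change in R² is inspected. Variable names and effect sizes are invented for illustration and are not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 287                                    # sample size reported in the study
search = rng.normal(size=n)                # geographic search intensity (simulated)
gov_net = rng.normal(size=n)               # governmental networking (simulated)
bm_innov = 0.5 * search + 0.3 * search * gov_net + rng.normal(size=n)

# Step 1: main effects only; Step 2: add the interaction term.
step1 = sm.OLS(bm_innov, sm.add_constant(np.column_stack([search, gov_net]))).fit()
step2 = sm.OLS(bm_innov, sm.add_constant(
    np.column_stack([search, gov_net, search * gov_net]))).fit()
print(f"R2 step1={step1.rsquared:.3f}, R2 step2={step2.rsquared:.3f}")
# A significant R2 increase at step 2 indicates the moderating effect.
```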
APA, Harvard, Vancouver, ISO, and other styles
40

Borowicz, Alex, Heather J. Lynch, Tyler Estro, Catherine Foley, Bento Gonçalves, Katelyn B. Herman, Stephanie K. Adamczak, Ian Stirling, and Lesley Thorne. "Social Sensors for Wildlife: Ecological Opportunities in the Era of Camera Ubiquity." Frontiers in Marine Science 8 (May 26, 2021). http://dx.doi.org/10.3389/fmars.2021.645288.

Full text
Abstract:
Expansive study areas, such as those used by highly-mobile species, provide numerous logistical challenges for researchers. Community science initiatives have been proposed as a means of overcoming some of these challenges but often suffer from low uptake or limited long-term participation rates. Nevertheless, there are many places where the public has a much higher visitation rate than do field researchers. Here we demonstrate a passive means of collecting community science data by sourcing ecological image data from the digital public, who act as “eco-social sensors,” via a public photo-sharing platform—Flickr. To achieve this, we use freely-available Python packages and simple applications of convolutional neural networks. Using the Weddell seal (Leptonychotes weddellii) on the Antarctic Peninsula as an example, we use these data with field survey data to demonstrate the viability of photo-identification for this species, supplement traditional field studies to better understand patterns of habitat use, describe spatial and sex-specific signals in molt phenology, and examine behavioral differences between the Antarctic Peninsula’s Weddell seal population and better-studied populations in the species’ more southerly fast-ice habitat. While our analyses are unavoidably limited by the relatively small volume of imagery currently available, this pilot study demonstrates the utility of an eco-social sensors approach, the value of ad hoc wildlife photography, the role of geographic metadata for the incorporation of such imagery into ecological analyses, the remaining challenges of computer vision for ecological applications, and the viability of pelage patterns for use in individual recognition for this species.
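The abstract notes that the pipeline relies on freely-available Python packages and simple applications of convolutional neural networks. As a hedged sketch of that kind of step, the code below scores a downloaded photo with a pretrained torchvision network; a real study would fine-tune on labeled seal imagery, and the file name here is hypothetical.

```python
# Minimal sketch: label a downloaded photo with a pretrained CNN to filter
# candidate wildlife images. Assumes torch/torchvision are installed; this is
# not the authors' pipeline, only an illustration of the general technique.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()            # matching normalization/resizing

def top_label(path):
    """Return the pretrained network's best ImageNet label for one photo."""
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    idx = int(logits.argmax())
    return weights.meta["categories"][idx]

# print(top_label("flickr_photo_0001.jpg"))  # hypothetical local file
```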
APA, Harvard, Vancouver, ISO, and other styles
41

Hammond, Michelle M., Caroline Murphy, and Caitlin A. Demsky. "Stress mindset and the work–family interface." International Journal of Manpower ahead-of-print, ahead-of-print (June 23, 2020). http://dx.doi.org/10.1108/ijm-05-2018-0161.

Full text
Abstract:
Purpose: The current study aims to examine stress mindset as a moderator of the relationship between the work–family interface – work–family conflict (WFC) and enrichment (WFE) – and two work outcomes: job satisfaction and turnover intentions. Design/methodology/approach: To examine these relationships, a cross-sectional online survey was conducted in Ireland (N = 314). Bootstrapping in SPSS was used to test the proposed hypotheses. Findings: In addition to direct relationships between WFC/WFE and job satisfaction and turnover intentions, analyses showed that stress mindset is a moderator of the relationships between WFC and job satisfaction and turnover intentions, as well as of the relationship between WFE and job satisfaction, but not WFE and turnover intentions. Research limitations/implications: Providing general support for the propositions of conservation of resources theory, stress mindset was found to act as a personal resource affecting the relationships between WFC/WFE and most outcomes. The study findings indicate a need to further examine stress mindset in relation to employees’ work and family interface. Practical implications: In line with other research, this study recommends organizational efforts to reduce WFC and increase WFE. Further, as stress mindsets can be altered, practitioners may consider implementing stress mindset training to encourage employees’ view of stress as enhancing rather than debilitating, to reduce the negative impact of stress on employees in the workplace. Social implications: Beliefs about the enhancing aspects of stress may allow employees to more effectively navigate transitions between work and family domains and maximize beneficial aspects of participating in both work and family roles. Originality/value: This is the first paper to investigate the role of stress mindset as a moderator of the associations between the work–family interface and employee work-related outcomes. The findings are relevant to work–family researchers, managers and human resource professionals.
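The bootstrapped moderation test mentioned in the design can be sketched outside SPSS as well. The following snippet, on simulated stand-in variables (not the study's data), bootstraps the WFC × stress-mindset interaction coefficient and reports a percentile confidence interval.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 314                                   # sample size reported in the study
wfc = rng.normal(size=n)                  # work-family conflict (simulated)
mindset = rng.normal(size=n)              # stress mindset (simulated)
job_sat = -0.4 * wfc + 0.2 * mindset + 0.3 * wfc * mindset + rng.normal(size=n)

X = sm.add_constant(np.column_stack([wfc, mindset, wfc * mindset]))
boot = []
for _ in range(2000):                     # resample cases, refit, keep interaction
    idx = rng.integers(0, n, n)
    boot.append(sm.OLS(job_sat[idx], X[idx]).fit().params[3])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"interaction 95% CI: [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 => moderation
```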
APA, Harvard, Vancouver, ISO, and other styles
42

Suasti Alcívar, Kenny Orlando, Sandy Raúl Chun Molina, Enrique Javier Macías Arias, and Nexar Bolívar Lucas Ostaiza. "Modelo de desarrollo esbelto de aplicaciones informáticas confiables." Revista Científica Sinapsis 2, no. 7 (June 1, 2015). http://dx.doi.org/10.37117/s.v2i7.72.

Full text
Abstract:
The article presents a proposal for lean manufacturing tools applied to the development of software with logical security, within the framework of agile methods and, in particular, secure software development; this allows development teams to achieve systemic software quality (product, processes and the people involved). It is recommended to employ these tools and to evaluate them constantly through iterative and incremental application of the quality cycle proposed by Deming: plan, do, check and act. This research was technological, documentary and bibliographic in nature, descriptive in character, and survey-based; the surveys were administered to various professionals in the field, programmers and companies dedicated to creating software. The proposal contributes to the development of quality software projects in scientific and academic environments, adjusted to the planned schedule and the budgeted resources, using the tools of software engineering. The research concludes that software developed with logical security is more reliable, benefiting several public and private entities of our country. Keywords: secure software; software quality; agile method; software engineering; systemic quality; logical security.
APA, Harvard, Vancouver, ISO, and other styles
43

Raman, Prashant, and Kumar Aashish. "To continue or not to continue: a structural analysis of antecedents of mobile payment systems in India." International Journal of Bank Marketing ahead-of-print, ahead-of-print (January 1, 2021). http://dx.doi.org/10.1108/ijbm-04-2020-0167.

Full text
Abstract:
Purpose: Consumers in India are increasingly using mobile payment systems (MPSs) to make online and offline payments. Digital payment applications are gradually being used as surrogates for cash, checks and plastic money. The motive behind this research is to analyze the different antecedents that impact users' willingness to continue using an MPS in India. Design/methodology/approach: An extensive review of the literature supports the creation of a framework that describes the continuance intention of using an MPS. Data from a survey of 612 respondents from India were collected to assess the research model. The study used the partial least squares (PLS) structural equation modeling (SEM) technique to empirically validate the framework developed. Findings: The outcomes of the research suggest that service quality, attitude, effort expectancy and perceived risk act as influencing antecedents of continuance intention to use an MPS. Determinants like perceived trust, convenience and social value have no influence on users' continuance intention. SEM analysis verified the proposed model, which explains 50.7% of the variance in users' continuance intention to use MPSs. Research limitations/implications: The research is built upon cross-sectional data collected in India; hence, the outcomes of the study are limited to this region only. Practical implications: Engaging with consumers for a long time and enabling their continued usage are extremely important for firms offering mobile payment services. The managerial implications provide insights into the different ways to capture new business opportunities for firms rendering mobile payment services in the wake of changing consumer behavior. Originality/value: This research analyzes users' continuance intention to use MPSs in India. Although many research studies have investigated the willingness of individuals to adopt novel technology in different frameworks, there are hardly any empirical studies analyzing the antecedents of users' continuance intention to use MPSs.
APA, Harvard, Vancouver, ISO, and other styles
44

Nair, Lekhaa A. "Self-Tracking Technology as an Extension of Man." M/C Journal 22, no. 5 (October 9, 2019). http://dx.doi.org/10.5204/mcj.1594.

Full text
Abstract:
“Man has, as it were, become a kind of prosthetic God. When he puts on all his auxiliary organs he is truly magnificent; but those organs have not grown on to him and they still give him much trouble at times” (Freud 37-39). Introduction and Background: Self-tracking is not a new phenomenon. For centuries, people have used self-examination and monitoring as a means to attain knowledge and understanding about themselves. People would often record their daily activities (like food consumption, sleep and physical exercise) and write down accompanying thoughts and reflections. However, the advent of digital technology in the past decades has drastically changed the self-tracking sphere. In fact, the popularisation of self-tracking technology (STT) in mobile applications and wearable devices has allowed users to track daily activities on a closer and more accurate scale than previously affordable. Gary Wolf, the founder of a niche movement called the ‘Quantified Self’, suggested that “if you want to replace the vagaries of intuition with something more reliable, you first need to gather data. Once you know the facts, you can live by them” (Wolf). This reveals that STT has the capacity to guide users by virtue of the data collected and insights provided by the technology. Thus, instead of using intuition, which is potentially unreliable and subjective, data – finite and objective by nature – can be used to guide the process by providing definitive facts, figures and patterns. Arguably, this technologises users, allowing them to enhance their performance and capabilities by using STTs to regulate and monitor their behaviour. Hence, in this article, I position self-tracking technology (STT) as an interactive media technology, a tool for surveillance and regulation, and an “extension of man”. However, the use of and reliance on STT can compromise personal autonomy, and this journal article will investigate how users’ personal autonomy has been affected by STT’s function as an extension of man, or a “prosthetic”. I use case study vignettes to investigate impacts on personal autonomy in three spheres: the workspace, relationships and the physical environment. Extending Man: STTs reconfigure our bodies in data form and implicate our personhood and autonomy. Human physicality has changed now that technology and data have become so integral to how we experience and view our bodies. STTs technologise human bodies, transforming them into data bodies, augmented and reliant on digital media. As Marshall McLuhan (63) put it: “In this electric age we see ourselves being translated more and more into the form of information, moving toward the technological extension of consciousness”. With the integration of STT into our daily lives, consumers increasingly rely on cues from their devices and applications to inform them about their bodies. This potentially affects the autonomy of an individual – since STT becomes an extension of the human body. In the 1960s, when the mass media was burgeoning, Marshall McLuhan proposed the idea that the media acted as an extension of man. STTs similarly act as an extension of users’ embodied capabilities and senses, since the data collected by these technologies is shared with users, allowing them to alter their bodies and minds, aiming to be as productive and effective as possible. In Understanding Media, McLuhan’s interpretation of electronic media was prescient.
He anticipated the development of so-called “smart” devices, noting that, in the information age man “wears [his] brain outside [his] skull and [his] nerves outside [his] hide” (63). This is reflective of STT’s heavy reliance on sensor technology and smart technology. Simply examining how a Fitbit – a popular wearable self-tracking device – operates is illustrative. For instance, some Fitbits have an altimeter sensor that detects when the wearer is elevated, and hence counts floors. Fitbits also count steps using a three-axis accelerometer, which turns the wearer’s movements into data. Furthermore, Fitbit devices are capable of analysing and interpreting this acceleration data to provide insights about “frequency, duration, intensity, and patterns of movement to determine [users’] steps taken, distance travelled, calories burned, and sleep quality” (“Fitbit”). Fitbit relies on sensor technologies (“nerves”) to detect and interpret activities, and such insights are then transmitted to users’ smart devices (“brains”) for storage, to be analysed at a time of convenience. This modus operandi is not exclusive to Fitbit, and in fact, is the framework for many STTs. Hence, STTs have the potential to extend the natural capabilities of the human body to regulate behaviour. The Workplace: This notion of STT as a regulatory prosthetic is seen in its ability to enforce standardised norms on individuals by using surveillance as a disciplinary measure. STTs can enforce norms on users by transforming the workplace into a panopticon, which is an institutional structure that allows a watchman to observe individuals without them knowing whether they are being watched or not. STTs are used to gather data about performance and behaviour, and users are monitored constantly. As a result, they adjust their behaviour accordingly. US retail titan Amazon has repeatedly drawn criticism over the past years for its use of wearables to surveil workers during shifts. Adam Littler, an Amazon employee, came forward in 2013 accusing his employers of forcing him to walk 11 miles during a single work shift. His distance travelled was measured and tracked using a pedometer, while a handheld scanner guided him around the warehouse and notified him if he was meeting his targets (Aspinall). Amazon also recently designed and patented a wristband that is capable of tracking wearers’ (employees’) movements, including hand placement (Kelly). The reliance on such tracking technology to guide actions and supplement users with information to increase productivity reveals how STT can serve as a prosthetic that is used to enhance man’s abilities and performance. However, the flipside of such enhancement is exploitation – employers augment users with technology and force them to adhere to standards of performance that are difficult to achieve. For instance, documents have recently surfaced that suggest Amazon terminates employees based on productivity statistics. It was reported that around 300 full-time employees were fired for “failing to meet productivity quotas”. According to the documents, “Amazon’s system tracks the rates of each individual associate’s productivity and automatically generates any warnings or terminations regarding quality or productivity without input from supervisors” (Lecher). This is reflective of how actors that are in power, like employers, can impose self-tracking practices onto employees that compromise their personal autonomy.
Foucault finds that the panopticon’s utility and potency as a discipline mechanism lies in its efficiency, as enforcers do not have to constantly survey people to ensure they conform. Thus, it manoeuvres existing power structures to achieve a particular goal – for instance, higher productivity or economic growth. Foucault also notes: The discipline of the workshop, while remaining a way of enforcing respect for the regulations and authorities, of preventing thefts and losses, tends to increase aptitudes, speeds, output and therefore profits; it still exerts a moral influence over behaviour, but more and more it treats actions in terms of their results, introduces bodies into a machinery, forces into an economy. (210) STTs in the workspace (or workshop) can act as prostheses, allowing employers to enhance their employees’ capabilities. Such technology creates an environment in which workers feel pressured to perform in adherence to certain set standards. Thus, employees are disciplined by STTs, and by the surveillance of their employers that follows. Arguably, such surveillance is detrimental to personal autonomy, as the surveyed feel that they have to behave in compliance with standards enforced by those in power (i.e. their employers). Physical Environment: With the aim of productivity and efficiency in mind, users grow dependent on devices to augment their realities with helpful technology. As mentioned earlier, McLuhan’s (90) idea that “technologies are extensions of our physical and nervous systems to increase power and speed” is particularly significant. The iPhone is an example that illustrates this point very clearly, as it is built with complex technology that includes a variety of sensors. The iPhone 7, for example, has a range of sensors including an accelerometer, a gyroscope, a magnetometer, a GPS, a barometer, and an ambient light sensor (Nield). These gather information about users’ surroundings and feed it back to them, and they are then able to make informed decisions. Hence, if a user wants to travel to a certain place, the phone has the ability to point out the quickest route possible, or which route to take if they would like to stop by a certain location along the way. This cultivates a reliance on navigational technologies that use automated self-tracking to direct users’ daily lives, functioning as an extension and enhancement of their geographical memory and sense of direction. However, using these technologies may in fact be dulling our body’s abilities. For instance, anthropologist Tim Ingold posits that relying on navigation technology has reduced humans’ inborn wayfaring capabilities (Ingold). These satellite navigation technologies are one of the most popular ways in which people track their movements and move through space; for instance, a whole market of rideshare applications like Uber and OlaCabs rely on this technology. Using this technology has allowed people to navigate and travel with ease. However, this can be seen to lead to a lack of “spatial awareness and cartographic literacy”. Essentially, traditional map skills are viewed as redundant, and this can encourage an over-reliance on technology (Speake and Axon). According to McKinlay, navigation is a “use-it-or-lose-it skill”, and “automatic wayfinding” reduces natural navigation abilities.
A UCL neuroscience study found that licensed London taxi drivers have a larger than average hippocampus in their brains, as they are capable of storing a mental map of the city in their minds, by learning street layouts and locations of places of interest. The hippocampus is the part of the brain that is linked to spatial memory and navigation skills (Maguire, Woollett and Spiers 1093). Dr Eleanor Maguire, the neuroscientist who led the study, noted that if the taxi drivers started “using GPS, that knowledge base will be less and possibly affect the brain changes we are seeing” (Dobson). In turn, an increasing reliance on GPS and navigation technologies in self-tracking devices may result in a diminishing hippocampus, according to neuroscientist Veronique Bohbot of McGill University. The atrophy of the hippocampus has also been linked to the risk of dementia (Weeks), which reveals how the technologies that augment space may atrophy the “natural abilities” (McKinlay) and thus the autonomy of users. Relationships: As with areas like the workspace and spatial environments, sociality and intimacy are increasingly being mediated by technology – the digital capabilities of new media have expanded users’ options and provided a variety of technological tools that allow us to streamline and reflect on social interactions and behaviour, serving as a social prosthetic. This is especially significant in the sphere of self-tracking. However, relying on STT to gain insight into sociality may alter the ways in which we think of intimacy and communication, and may also have an impact on users’ independence and trust. Hasinoff (497-98) notes that using tracking technologies within families and intimate relationships can have potentially harmful effects, such as a loss of trust. In particular, children who are pushed into self-tracking by their families may suffer from a loss of independence as well as an inability to perceive and react to risk. In such a situation, STT serves as a prosthetic that aims to ensure safety; however, surveillance through STTs enforces power disparities and simultaneously creates a dependency between the watched and watchers, and this would affect users’ personal autonomy as they are viewed under a panoptic lens. In fact, Hasinoff finds that “[family tracking and monitoring apps] exaggerate risks, offer illusory promises of safety, and normalize surveillance and excessive control in familial relationships”. I argue that this is the consequence of pushed self-tracking in the sphere of sociality and intimacy. Users may feel pressure from their families or partners to participate in self-tracking and allow their data to be accessed by them. However, the process of participating in such a mediated and monitored relationship could create “asymmetrical relations of visibility” (Trottier 320), as this sharing of information may not always be two-sided. For instance, on the app Life360, parents can enforce that their children share their locations at all times, while they are able to conceal their own locations. This intensifies the watcher’s control and diminishes the watched’s privacy and autonomy. Quite ironically, Life360’s tagline is “feel free, together”. As an app geared at family safety, Life360 assumes that the family is a safe space – however, families too may pose a significant risk to vulnerable users’ (such as young children and women) autonomy and privacy.
User complaints about inaccurate location information reveal “controlling, asymmetrical, and potentially abusive uses of the app” that can aggravate dysfunctional power dynamics in intimate and familial relationships. For instance, jealous partners or overprotective parents could grow increasingly suspicious or even aggressive (Hasinoff 504). Critical users who reviewed the app claimed that the app “ruined [their] social life” and enabled their “family to stalk [them] 24/7”. In another case, a user claimed the app was “toxic”, noting it would “destroy their [children’s] trust” (App Store; Life360). While the app asserts that each user does have control over the extent of location sharing, they may feel the need to remain visible because of familial pressure and expectations, since their family relies on visibility in the app as an indicator of safety. This too, is problematic – self-tracking one’s locations provides just that – a geolocation pin, which is not a clear measure or indicator of the well-being or safety of the user. Simpson argues that constructing location information as safety information is not reliable because it could “promote a false sense of security based on the sense that if you know where your child is then that means they are safe” (277). Additionally, this also sets an imperative that users need to be monitored or monitor themselves at all times to ensure safety, and such a use of surveillance technology could result in users being hyperalert and anxious (Hasinoff 497). Extending man’s awareness to this degree and engaging in such surveillance may create a false sense of security and dependency that ultimately puts everyone’s autonomy at risk. Conclusion: STT performs as an informational prosthetic for man. We conventionally tend to think of prostheses as extensions of our physical and sensory abilities, used to enhance or replace missing functions. In the case of STT, they have inbuilt decision-making and guidance capabilities, enhancing humans’ ability to process and understand information. This is a new type of digital prosthetic that has not existed before. It thus seems that the new generation of prostheses is no longer just physical and material – they operate as intellectual and cognitive extensions of our bodies. However, when users’ decision-making processes are increasingly displaced by informational prostheses, it is important to determine the extent to which they are impairing our organic capacity for orienting, sense-making and intimacy. References: App Store. Mobile app. Apple Inc. Accessed 1 Jun. 2019. Aspinall, Adam. “Amazon Forces Warehouse Staff to Walk 11 Miles per Shift Says Former Employee.” Mirror 25 Nov. 2013. <https://www.mirror.co.uk/money/city-news/amazon-worker-rights-retail-giant-2851079>. Dobson, Roger. “Cabbies Really Do Have More Grey Matter to Store All That Information, Scientists Say.” Independent 17 Dec. 2006. <https://www.independent.co.uk/life-style/health-and-families/health-news/taxi-drivers-knowledge-helps-their-brains-grow-428834.html>. Fitbit. “How Does My Fitbit Device Calculate My Daily Activity?” 1 June 2019 <https://help.fitbit.com/articles/en_US/Help_article/1141>. Foucault, Michel. Discipline and Punish: The Birth of the Prison. London: Penguin, 1977. Freud, Sigmund. Civilization and Its Discontents. New York: Picador, 1930. Hasinoff, Amy Adele. “Where Are You? Location Tracking and the Promise of Child Safety.” Television & New Media 18.6 (2016): 496-512. DOI: 10.1177/1527476416680450. Ingold, Tim.
Being Alive: Essays on Movement, Knowledge and Description. London: Routledge, 2011.Kelly, Heather. “Amazon's Idea for Employee-Tracking Wearables Raises Concerns.” CNN Business 2 Feb. 2018. <https://money.cnn.com/2018/02/02/technology/amazon-employee-tracker/index.html>. Lecher, Colin. “How Amazon Automatically Tracks and Fires Warehouse Workers for ‘Productivity’.” The Verge 25 Apr. 2019. <https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-productivity-firing-terminations>.Life360. “Life360 – Feel Free, Together.” 1 June 2019 <https://www.life360.com/>.Lupton, Deborah. The Quantified Self. Malden: Polity, 2016.Maguire, Eleanor, Katherine Woollett, and Hugo Spiers. “London Taxi Drivers and Bus Drivers: A Structural MRI and Neuropsychological Analysis.” Wiley Interscience 16.12 (2006): 1091-1101. DOI: 10.1002/hipo.20233.McLuhan, Marshall. Understanding Media: The Extensions of Man. London: Routledge & Kegan Paul, 1964.McKinlay, Roger. “Technology: Use or Lose Our Navigation Skills.” Nature 30 Mar. 2016. <https://www.nature.com/news/technology-use-or-lose-our-navigation-skills-1.19632>.Nield, David. “All the Sensors in Your Smartphone, and How They Work.” Gizmodo Australia 28 July 2017. <https://www.gizmodo.com.au/2017/07/all-the-sensors-in-your-smartphone-and-how-they-work/>.Satariano, Adam. “Would You Wear a FitBit So Your Boss Could Track Your Weight Loss?” Daily Herald 9 Jan. 2014. <https://www.dailyherald.com/article/20140901/business/140909985/>.Simpson, Brian. “Tracking Children, Constructing Fear: GPS and the Manufacture of Family Safety.” Information & Communications Technology Law 23.3 (2014): 273–285. DOI: 10.1080/13600834.2014.970377.Speake, Janet, and Stephen Axon. “‘I Never Use ‘Maps’ Anymore’: Engaging with Sat Nav Technologies and the Implications for Cartographic Literacy and Spatial Awareness.” The Cartographic Journal 49.4 (2013): 326-336. DOI: 10.1179/1743277412Y.0000000021.Trottier, Daniel. “Interpersonal Surveillance on Social Media.” Canadian Journal of Communication 37.2 (2012): 319–332. DOI: 10.22230/cjc.2012v37n2a2536.Weeks, Linton. “From Maps to Apps: Where Are We Headed?” NPR 4 May 2010. <https://www.npr.org/templates/story/story.php?storyId=124608376>.Wolf, Gary. “The Data-Driven Life.” The New York Times Magazine 28 Apr. 2010. <https://www.nytimes.com/2010/05/02/magazine/02self-measurement-t.html>.
APA, Harvard, Vancouver, ISO, and other styles
45

Paull, John. "Beyond Equal: From Same But Different to the Doctrine of Substantial Equivalence." M/C Journal 11, no. 2 (June 1, 2008). http://dx.doi.org/10.5204/mcj.36.

Full text
Abstract:
A same-but-different dichotomy has recently been encapsulated within the US Food and Drug Administration’s ill-defined concept of “substantial equivalence” (USFDA, FDA). By invoking this concept the genetically modified organism (GMO) industry has escaped the rigors of safety testing that might otherwise apply. The curious concept of “substantial equivalence” grants a presumption of safety to GMO food. This presumption has yet to be earned, and has been used to constrain labelling of both GMO and non-GMO food. It is an idea that well serves corporatism. It enables the claim of difference to secure patent protection, while upholding the contrary claim of sameness to avoid labelling and safety scrutiny. It offers the best of both worlds for corporate food entrepreneurs, and delivers the worst of both worlds to consumers. The term “substantial equivalence” has established its currency within the GMO discourse. As the opportunities for patenting food technologies expand, the GMO recruitment of this concept will likely be a dress rehearsal for the developing debates on the labelling and testing of other techno-foods – including nano-foods and clone-foods. “Substantial Equivalence” “Are the Seven Commandments the same as they used to be, Benjamin?” asks Clover in George Orwell’s “Animal Farm”. By way of response, Benjamin “read out to her what was written on the wall. There was nothing there now except a single Commandment. It ran: ALL ANIMALS ARE EQUAL BUT SOME ANIMALS ARE MORE EQUAL THAN OTHERS”. After this reductionist revelation, further novel and curious events at Manor Farm “did not seem strange” (Orwell, ch. X). Equality is a concept at the very core of mathematics, but beyond the domain of logic, equality becomes a hotly contested notion – and the domain of food is no exception. A novel food has a regulatory advantage if it can claim to be the same as an established food – a food that has proven its worth over centuries, perhaps even millennia – and thus does not trigger new, perhaps costly and onerous, testing, compliance, and even new and burdensome regulations. On the other hand, such a novel food has an intellectual property (IP) advantage only in terms of its difference. And thus there is an entrenched dissonance for newly technologised foods, between claiming sameness, and claiming difference. The same/different dilemma is erased, so some would have it, by appeal to the curious new dualist doctrine of “substantial equivalence” whereby sameness and difference are claimed simultaneously, thereby creating a win/win for corporatism, and a loss/loss for consumerism. This ground has been pioneered, and to some extent conquered, by the GMO industry. The conquest has ramifications for other cryptic food technologies, that is, technologies that are invisible to the consumer and that are not evident to the consumer other than via labelling. Cryptic technologies pertaining to food include GMOs, pesticides, hormone treatments, irradiation and, most recently, manufactured nano-particles introduced into the food production and delivery stream. Genetic modification of plants was reported as early as 1984 by Horsch et al. The case of Diamond v. Chakrabarty resulted in a US Supreme Court decision that upheld the prior decision of the US Court of Customs and Patent Appeals that “the fact that micro-organisms are alive is without legal significance for purposes of the patent law”, and ruled that the “respondent’s micro-organism plainly qualifies as patentable subject matter”.
This was a majority decision of nine judges, with four judges dissenting (Burger). It was this Chakrabarty judgement that has seriously opened the Pandora’s box of GMOs, because patent rights make GMOs an attractive corporate proposition by offering potentially unique monopoly rights over food. The rear guard action against GMOs has most often focussed on health repercussions (Smith, Genetic), food security issues, and also the potential for corporate malfeasance to hide behind a cloak of secrecy citing commercial confidentiality (Smith, Seeds). Others have tilted at the foundational plank on which the economics of the GMO industry sits: “I suggest that the main concern is that we do not want a single molecule of anything we eat to contribute to, or be patented and owned by, a reckless, ruthless chemical organisation” (Grist 22). The GMO industry exhibits bipolar behaviour, invoking the concept of “substantial difference” to claim patent rights by way of “novelty”, and then claiming “substantial equivalence” when dealing with other regulatory authorities including food, drug and pesticide agencies; a case of “having their cake and eating it too” (Engdahl 8). This is a clever sleight of rhetoric, laying claim to the best of both worlds for corporations, and the worst of both worlds for consumers. Corporations achieve patent protection and no concomitant specific regulatory oversight, while consumers pay the cost of patent monopolization, and are not necessarily apprised, by way of labelling or otherwise, that they are purchasing and eating GMOs, and thereby financing the GMO industry. The lemma of “substantial equivalence” does not bear close scrutiny. It is a fuzzy concept that lacks a tight testable definition. It is exactly this fuzziness that allows lots of wriggle room to keep GMOs out of rigorous testing regimes. Millstone et al. argue that “substantial equivalence is a pseudo-scientific concept because it is a commercial and political judgement masquerading as if it is scientific. It is moreover, inherently anti-scientific because it was created primarily to provide an excuse for not requiring biochemical or toxicological tests. It therefore serves to discourage and inhibit informative scientific research” (526). “Substantial equivalence” grants GMOs the benefit of the doubt regarding safety, and thereby leaves unexamined the ramifications for human consumer health, for farm labourer and food-processor health, for the welfare of farm animals fed a diet of GMO grain, and for the well-being of the ecosystem, both in general and in its particularities. “Substantial equivalence” was introduced into the food discourse by an Organisation for Economic Co-operation and Development (OECD) report: “safety evaluation of foods derived by modern biotechnology: concepts and principles”. It is from this document that the ongoing mantra of assumed safety of GMOs derives: “modern biotechnology … does not inherently lead to foods that are less safe … . Therefore evaluation of foods and food components obtained from organisms developed by the application of the newer techniques does not necessitate a fundamental change in established principles, nor does it require a different standard of safety” (OECD, “Safety” 10). This was at the time, and remains, an act of faith, a pro-corporatist and a post-cautionary approach. The OECD motto reveals where their priorities lean: “for a better world economy” (OECD, “Better”).
The term “substantial equivalence” was preceded by the 1992 USFDA concept of “substantial similarity” (Levidow, Murphy and Carr) and was adopted from a prior usage by the US Food and Drug Agency (USFDA) where it was used pertaining to medical devices (Miller). Even GMO proponents accept that “Substantial equivalence is not intended to be a scientific formulation; it is a conceptual tool for food producers and government regulators” (Miller 1043). And there’s the rub – there is no scientific definition of “substantial equivalence”, no scientific test of proof of concept, nor is there likely to be, since this is a ‘spinmeister’ term. And yet this is the cornerstone on which rests the presumption of safety of GMOs. Absence of evidence is taken to be evidence of absence. History suggests that this is a fraught presumption. By way of contrast, the patenting of GMOs depends on the antithesis of assumed ‘sameness’. Patenting rests on proven, scrutinised, challengeable and robust tests of difference and novelty. Lightfoot et al. report that transgenic plants exhibit “unexpected changes [that] challenge the usual assumptions of GMO equivalence and suggest genomic, proteomic and metanomic characterization of transgenics is advisable” (1). GMO Milk and Contested Labelling: Pesticide company Monsanto markets the genetically engineered hormone rBST (recombinant Bovine Somatotropin; also known as: rbST; rBGH, recombinant Bovine Growth Hormone; and the brand name Posilac) to dairy farmers who inject it into their cows to increase milk production. This product is not approved for use in many jurisdictions, including Europe, Australia, New Zealand, Canada and Japan. Even Monsanto accepts that rBST leads to mastitis (inflammation and pus in the udder) and other “cow health problems”; however, it maintains that “these problems did not occur at rates that would prohibit the use of Posilac” (Monsanto). A European Union study identified an extensive list of health concerns of rBST use (European Commission). The US Dairy Export Council, however, entertains no doubt. In their background document they ask “is milk from cows treated with rBST safe?” and answer “Absolutely” (USDEC). Meanwhile, Monsanto’s website raises and answers the question: “Is the milk from cows treated with rbST any different from milk from untreated cows? No” (Monsanto). Injecting cows with genetically modified hormones to boost their milk production remains a contested practice, banned in many countries. It is the claimed equivalence that has kept consumers of US dairy products in the dark, shielded rBST dairy farmers from having to declare that their milk production is GMO-enhanced, and has inhibited non-GMO producers from declaring their milk as non-GMO, non-rBST, or not hormone enhanced. This is a battle that has simmered, and sometimes raged, for a decade in the US. Finally, there is a modest victory for consumers: the Pennsylvania Department of Agriculture (PDA) requires all labels used on milk products to be approved in advance by the department. The standard issued in October 2007 (PDA, “Standards”) signalled to producers that any milk labels claiming rBST-free status would be rejected. This advice was rescinded in January 2008 with new, specific, department-approved textual constructions allowed, ensuring that any “no rBST” style claim was paired with a PDA-prescribed disclaimer (PDA, “Revised Standards”).
However, parsimonious labelling is prohibited: No labeling may contain references such as ‘No Hormones’, ‘Hormone Free’, ‘Free of Hormones’, ‘No BST’, ‘Free of BST’, ‘BST Free’, ‘No added BST’, or any statement which indicates, implies or could be construed to mean that no natural bovine somatotropin (BST) or synthetic bovine somatotropin (rBST) are contained in or added to the product. (PDA, “Revised Standards” 3) Difference claims are prohibited: In no instance shall any label state or imply that milk from cows not treated with recombinant bovine somatotropin (rBST, rbST, RBST or rbst) differs in composition from milk or products made with milk from treated cows, or that rBST is not contained in or added to the product. If a product is represented as, or intended to be represented to consumers as, containing or produced from milk from cows not treated with rBST any labeling information must convey only a difference in farming practices or dairy herd management methods. (PDA, “Revised Standards” 3) The PDA-approved labelling text for non-GMO dairy farmers is specified as follows: ‘From cows not treated with rBST. No significant difference has been shown between milk derived from rBST-treated and non-rBST-treated cows’ or a substantial equivalent. Hereinafter, the first sentence shall be referred to as the ‘Claim’, and the second sentence shall be referred to as the ‘Disclaimer’. (PDA, “Revised Standards” 4) It is onto the non-GMO dairy farmer alone that the costs of compliance fall. These costs include label preparation and approval, proving non-usage of GMOs, and creating and maintaining an audit trail. In nearby Ohio, a similar consumer versus corporatist pantomime is playing out, this time with the Ohio Department of Agriculture (ODA) calling the shots, and again serving the GMO industry. The ODA-prescribed text allowed to non-GMO dairy farmers is “from cows not supplemented with rbST” and this is to be conjoined with the mandatory disclaimer “no significant difference has been shown between milk derived from rbST-supplemented and non-rbST supplemented cows” (Curet). These are “emergency rules”: they apply for 90 days, and are proposed as permanent. Once again, the onus is on the non-GMO dairy farmers to document and prove their claims. GMO dairy farmers face no such governmental requirements, including no disclosure requirement, and thus an asymmetric regulatory impost is placed on the non-GMO farmer, which opens up new opportunities for administrative demands and technocratic harassment. Levidow et al. argue, somewhat Eurocentrically, that from its 1990s adoption “as the basis for a harmonized science-based approach to risk assessment” (26) the concept of “substantial equivalence” has “been recast in at least three ways” (58). It is true that the GMO debate has evolved differently in the US and Europe, and with other jurisdictions usually adopting intermediate positions, yet the concept persists. Levidow et al. nominate their three recastings as: firstly an “implicit redefinition” by the appending of “extra phrases in official documents”; secondly, “it has been reinterpreted, as risk assessment processes have … required more evidence of safety than before, especially in Europe”; and thirdly, “it has been demoted in the European Union regulatory procedures so that it can no longer be used to justify the claim that a risk assessment is unnecessary” (58). Romeis et al. have proposed a decision tree approach to GMO risks based on cascading tiers of risk assessment.
However, what remains is that the defects of the concept of “substantial equivalence” persist. Schauzu identified that such decisions are a matter of “opinion”; that there is “no clear definition of the term ‘substantial’”; that because genetic modification “is aimed at introducing new traits into organisms, the result will always be a different combination of genes and proteins”; and that “there is no general checklist that could be followed by those who are responsible for allowing a product to be placed on the market” (2). Benchmark for Further Food Novelties? The discourse, contestation, and debate about “substantial equivalence” have largely focussed on the introduction of GMOs into food production processes. GM can best be regarded as the test case, and proof of concept, for establishing “substantial equivalence” as a benchmark for evaluating new and forthcoming food technologies. This is of concern, because the concept of “substantial equivalence” is scientific hokum, and yet its persistence, even entrenchment, within regulatory agencies may be a harbinger of forthcoming same-but-different debates for nanotechnology and other future bioengineering. The appeal of “substantial equivalence” has been a brake on the creation of GMO-specific regulations and on rigorous GMO testing. The food nanotechnology industry can be expected to look to the precedent of the GMO debate to head off specific nano-regulations and nano-testing. As cloning becomes economically viable, this may be another wave of food innovation that muddies the regulatory waters with the confused – and ultimately self-contradictory – concept of “substantial equivalence”. Nanotechnology engineers particles in the size range 1 to 100 nanometres – a nanometre is one billionth of a metre. This is interesting for manufacturers because at this size chemicals behave differently, or as the Australian Office of Nanotechnology expresses it, “new functionalities are obtained” (AON). Globally, government expenditure on nanotechnology research reached US$4.6 billion in 2006 (Roco 3.12). While there are now many patents (ETC Group; Roco), regulation specific to nanoparticles is lacking (Bowman and Hodge; Miller and Senjen). The USFDA advises that nano-manufacturers “must show a reasonable assurance of safety … or substantial equivalence” (FDA). A recent inventory of nano-products already on the market identified 580 products. Of these, 11.4% were categorised as “Food and Beverage” (WWICS). This is at a time when public confidence in regulatory bodies is declining (HRA). In an Australian consumer survey on nanotechnology, 65% of respondents indicated they were concerned about “unknown and long term side effects”, and 71% agreed that it is important “to know if products are made with nanotechnology” (MARS 22). Cloned animals are currently more expensive to produce than traditional animal progeny. In the course of 678 pages, the USFDA Animal Cloning: A Draft Risk Assessment has not a single mention of “substantial equivalence”. However, the Federation of Animal Science Societies (FASS), in its single-page “Statement in Support of USFDA’s Risk Assessment Conclusion That Food from Cloned Animals Is Safe for Human Consumption”, states that “FASS endorses the use of this comparative evaluation process as the foundation of establishing substantial equivalence of any food being evaluated.
It must be emphasized that it is the food product itself that should be the focus of the evaluation rather than the technology used to generate cloned animals” (FASS 1). Contrary to the FASS derogation of the importance of process in food production, for consumers both the process and provenance of production are an important and integral aspect of a food product’s value and identity. Some consumers will legitimately insist that their Kalamata olives are from Greece, or that their balsamic vinegar is from Modena. It was the British public’s growing awareness that their sugar was being produced by slave labour that enabled the boycotting of the product, and ultimately the outlawing of slavery (Hochschild). When consumers boycott Nestle, because of past or present marketing practices, or boycott produce of the USA because of, for example, US foreign policy or animal welfare concerns, they are distinguishing the food based on the narrative of the food, the production process and/or the production context, which are a part of the identity of the food. Consumers attribute value to food based on production process and provenance information (Paull). Products produced by slave labour, by child labour, by political prisoners, or by means of torture, theft, or immoral, unethical or unsustainable practices are different from their alternatives. The process of production is a part of the identity of a product, and consumers are increasingly interested in food narrative. It requires vigilance to ensure that these narratives are delivered with the product to the consumer, and are neither lost nor suppressed. Throughout the GM debate, the organic sector has successfully skirted the “substantial equivalence” debate by excluding GMOs from the certified organic food production process. This GMO-exclusion from the organic food stream is the one reprieve available to consumers worldwide who are keen to avoid GMOs in their diet. The organic industry carries the expectation of providing food produced without artificial pesticides and fertilizers, and, by extension, without GMOs. Most recently, the Soil Association, the leading organic certifier in the UK, claims to be the first organisation in the world to exclude manufactured nanoparticles from its products (Soil Association). There have been calls for engineered nanoparticles to be excluded from organic standards worldwide, given that there is no mandatory safety testing and no compulsory labelling in place (Paull and Lyons). The twisted rhetoric of oxymorons does not make the ideal foundation for policy. Setting food policy on the shifting sands of “substantial equivalence” seems foolhardy when we consider the potentially profound ramifications of globally mass marketing a dysfunctional food. There is a 2×2 matrix of terms – “substantial equivalence”, substantial difference, insubstantial equivalence, insubstantial difference – yet only one corner of this matrix is engaged for food policy; while its elements remain matters of opinion, rather than being testable by science or by some other regime, the public is the dupe, and potentially the victim. “Substantial equivalence” has served the GMO corporates well and the public poorly, and this asymmetry is slated to escalate if nano-food and clone-food are also folded into the “substantial equivalence” paradigm. Only in Orwellian Newspeak is war peace, or is same different. 
It is time to jettison the pseudo-scientific doctrine of “substantial equivalence”, as a convenient oxymoron, and embrace full disclosure of provenance, process and difference, so that consumers are not collateral in a continuing asymmetric knowledge war. References Australian Office of Nanotechnology (AON). Department of Industry, Tourism and Resources (DITR) 6 Aug. 2007. 24 Apr. 2008 <http://www.innovation.gov.au/Section/Innovation/Pages/AustralianOfficeofNanotechnology.aspx>. Bowman, Diana, and Graeme Hodge. “A Small Matter of Regulation: An International Review of Nanotechnology Regulation.” Columbia Science and Technology Law Review 8 (2007): 1-32. Burger, Warren. “Sidney A. Diamond, Commissioner of Patents and Trademarks v. Ananda M. Chakrabarty, et al.” Supreme Court of the United States, decided 16 June 1980. 24 Apr. 2008 <http://caselaw.lp.findlaw.com/cgi-bin/getcase.pl?court=US&vol=447&invol=303>. Curet, Monique. “New Rules Allow Dairy-Product Labels to Include Hormone Info.” The Columbus Dispatch 7 Feb. 2008. 24 Apr. 2008 <http://www.dispatch.com/live/content/business/stories/2008/02/07/dairy.html>. Engdahl, F. William. Seeds of Destruction. Montréal: Global Research, 2007. ETC Group. Down on the Farm: The Impact of Nano-Scale Technologies on Food and Agriculture. Ottawa: Action Group on Erosion, Technology and Conservation, November, 2004. European Commission. Report on Public Health Aspects of the Use of Bovine Somatotropin. Brussels: European Commission, 15-16 March 1999. Federation of Animal Science Societies (FASS). Statement in Support of FDA’s Risk Assessment Conclusion That Cloned Animals Are Safe for Human Consumption. 2007. 24 Apr. 2008 <http://www.fass.org/page.asp?pageID=191>. Grist, Stuart. “True Threats to Reason.” New Scientist 197.2643 (16 Feb. 2008): 22-23. Hochschild, Adam. Bury the Chains: The British Struggle to Abolish Slavery. London: Pan Books, 2006. Horsch, Robert, Robert Fraley, Stephen Rogers, Patricia Sanders, Alan Lloyd, and Nancy Hoffman. “Inheritance of Functional Foreign Genes in Plants.” Science 223 (1984): 496-498. HRA. Awareness of and Attitudes toward Nanotechnology and Federal Regulatory Agencies: A Report of Findings. Washington: Peter D. Hart Research Associates, 25 Sep. 2007. Levidow, Les, Joseph Murphy, and Susan Carr. “Recasting ‘Substantial Equivalence’: Transatlantic Governance of GM Food.” Science, Technology, and Human Values 32.1 (Jan. 2007): 26-64. Lightfoot, David, Rajsree Mungur, Rafiqa Ameziane, Anthony Glass, and Karen Berhard. “Transgenic Manipulation of C and N Metabolism: Stretching the GMO Equivalence.” American Society of Plant Biologists Conference: Plant Biology, 2000. MARS. “Final Report: Australian Community Attitudes Held about Nanotechnology – Trends 2005-2007.” Report prepared for Department of Industry, Tourism and Resources (DITR). Miranda, NSW: Market Attitude Research Services, 12 June 2007. Miller, Georgia, and Rye Senjen. “Out of the Laboratory and on to Our Plates: Nanotechnology in Food and Agriculture.” Friends of the Earth, 2008. 24 Apr. 2008 <http://nano.foe.org.au/node/220>. Miller, Henry. “Substantial Equivalence: Its Uses and Abuses.” Nature Biotechnology 17 (7 Nov. 1999): 1042-1043. Millstone, Erik, Eric Brunner, and Sue Mayer. “Beyond ‘Substantial Equivalence’.” Nature 401 (7 Oct. 1999): 525-526. Monsanto. “Posilac, Bovine Somatotropin by Monsanto: Questions and Answers about bST from the United States Food and Drug Administration.” 2007. 24 Apr. 
2008 <http://www.monsantodairy.com/faqs/fda_safety.html>. Organisation for Economic Co-operation and Development (OECD). “For a Better World Economy.” Paris: OECD, 2008. 24 Apr. 2008 <http://www.oecd.org/>. ———. “Safety Evaluation of Foods Derived by Modern Biotechnology: Concepts and Principles.” Paris: OECD, 1993. Orwell, George. Animal Farm. Adelaide: ebooks@Adelaide, 2004 (1945). 30 Apr. 2008 <http://ebooks.adelaide.edu.au/o/orwell/george>. Paull, John. “Provenance, Purity and Price Premiums: Consumer Valuations of Organic and Place-of-Origin Food Labelling.” Research Masters thesis, University of Tasmania, Hobart, 2006. 24 Apr. 2008 <http://eprints.utas.edu.au/690/>. Paull, John, and Kristen Lyons. “Nanotechnology: The Next Challenge for Organics.” Journal of Organic Systems (in press). Pennsylvania Department of Agriculture (PDA). “Revised Standards and Procedure for Approval of Proposed Labeling of Fluid Milk.” Milk Labeling Standards (2.0.1.17.08). Bureau of Food Safety and Laboratory Services, Pennsylvania Department of Agriculture, 17 Jan. 2008. ———. “Standards and Procedure for Approval of Proposed Labeling of Fluid Milk, Milk Products and Manufactured Dairy Products.” Milk Labeling Standards (2.0.1.17.08). Bureau of Food Safety and Laboratory Services, Pennsylvania Department of Agriculture, 22 Oct. 2007. Roco, Mihail. “National Nanotechnology Initiative – Past, Present, Future.” In William Goddard, Donald Brenner, Sergy Lyshevski and Gerald Iafrate, eds. Handbook of Nanoscience, Engineering and Technology. 2nd ed. Boca Raton, FL: CRC Press, 2007. Romeis, Jorg, Detlef Bartsch, Franz Bigler, Marco Candolfi, Marco Gielkins, et al. “Assessment of Risk of Insect-Resistant Transgenic Crops to Nontarget Arthropods.” Nature Biotechnology 26.2 (Feb. 2008): 203-208. Schauzu, Marianna. “The Concept of Substantial Equivalence in Safety Assessment of Food Derived from Genetically Modified Organisms.” AgBiotechNet 2 (Apr. 2000): 1-4. Soil Association. “Soil Association First Organisation in the World to Ban Nanoparticles – Potentially Toxic Beauty Products That Get Right under Your Skin.” London: Soil Association, 17 Jan. 2008. 24 Apr. 2008 <http://www.soilassociation.org/web/sa/saweb.nsf/848d689047cb466780256a6b00298980/42308d944a3088a6802573d100351790!OpenDocument>. Smith, Jeffrey. Genetic Roulette: The Documented Health Risks of Genetically Engineered Foods. Fairfield, Iowa: Yes! Books, 2007. ———. Seeds of Deception. Melbourne: Scribe, 2004. U.S. Dairy Export Council (USDEC). Bovine Somatotropin (BST) Backgrounder. Arlington, VA: U.S. Dairy Export Council, 2006. U.S. Food and Drug Administration (USFDA). Animal Cloning: A Draft Risk Assessment. Rockville, MD: Center for Veterinary Medicine, U.S. Food and Drug Administration, 28 Dec. 2006. ———. FDA and Nanotechnology Products. U.S. Department of Health and Human Services, U.S. Food and Drug Administration, 2008. 24 Apr. 2008 <http://www.fda.gov/nanotechnology/faqs.html>. Woodrow Wilson International Center for Scholars (WWICS). “A Nanotechnology Consumer Products Inventory.” Data set as at Sep. 2007. Woodrow Wilson International Center for Scholars, Project on Emerging Technologies, Sep. 2007. 24 Apr. 2008 <http://www.nanotechproject.org/inventories/consumer>.
APA, Harvard, Vancouver, ISO, and other styles
46

Antonio, Amy Brooke, and David Tuffley. "Promoting Information Literacy in Higher Education through Digital Curation." M/C Journal 18, no. 4 (August 10, 2015). http://dx.doi.org/10.5204/mcj.987.

Full text
Abstract:
This article argues that digital curation—the art and science of searching, analysing, selecting, and organising content—can be used to promote the development of digital information literacy skills among higher education students. Rather than relying on institutionally approved journal articles that have been pre-ordained as suitable for a given purpose, digital curation tools allow students to evaluate the quality of Web-based content and then present it in an attractive form, all of which contributes to the cultivation of their digital literacy skills. We draw on a case study in which first-year information and communications technology (ICT) students used the digital curation platform Scoop.it to curate an annotated collection of resources pertaining to a particular topic. The notion of curation has undergone a significant transformation in the wake of an increasingly digital society. To “curate”, which traditionally referred to “taking care”, has morphed into a process of cataloguing, accessing, and representing artefacts. In the digital age, curation is a way of sifting, organising, and making sense of the plethora of information; it has become an important life skill without which one cannot fully participate in digital life. Moreover, the ready availability of information, made possible by the ubiquity of Internet technology, makes digital curation an essential skill for the twenty-first century learner. In answer to this need, we are seeing the emergence of suites of digital tools, dubbed “curation” tools, that meet the perceived need to locate, select, and synthesise Web content into open, user-organised collections. With information overload a distinctive feature of the Internet, the ability to sift through the noise and dross to select high-quality, relevant content—selected on the basis of authority, currency, and fitness-for-purpose—is indeed a valuable skill. To examine this issue, we performed a case study in which a group of first-year Information and Communication Technology (ICT) students curated Web-based resources to inform an assessment task. We argue that curation platforms, such as Scoop.it, can be effective at cultivating the digital information literacy skills of higher education students. Digital Curation Traditionally, curation is a practice most commonly associated with the art world—something reserved for the curators of art exhibitions and museums. However, in today’s world, digital curation tools, such as Scoop.it, make it possible for the amateur curator to collect and arrange content pertaining to a particular topic in a professional way. While definitions of curation in the context of the online environment have been proposed (Scime; Wheeler; Rosenbaum), these have not been aligned to the building of core digital information literacy competencies. The digital curator must give due consideration to the materials they choose to include in a digital collection, which necessitates engaging in a certain amount of metacognitive reasoning. For the purpose of this article, the following definition of digital curation is proposed: “Curation can be summarised as an active process whereby content/artefacts are purposely selected to be preserved for future access. In the digital environment, additional elements can be leveraged, such as the inclusion of social media to disseminate collected content, the ability for other users to suggest content or leave comments and the critical evaluation and selection of the aggregated content”. 
(Antonio, Martin, and Stagg). This definition exemplifies the digital information literacy skills at work in the curation of digital content. It can be further broken down to elucidate the core competencies involved: “Curation can be summarised as an active process whereby content/artefacts are purposely selected” (Antonio, Martin, and Stagg). The user, who curates a particular topic, actively chooses the content they want to appear in their collection. The content must be relevant, up-to-date, and from reputable sources or databases. Achieving this requires a degree of information literacy, both in terms of justifying the content that is selected and, conversely, that which is not. The second part of the definition is: “In the digital environment, additional elements can be leveraged, such as the inclusion of social media to disseminate collected content, the ability for other users to suggest content or leave comments” (Antonio, Martin, and Stagg). The digital curator is engaged and immersed in Web 2.0 technologies, ranging from the curation tools themselves to social media platforms such as Facebook and Twitter. The use of these tools thus requires at least basic digital literacy skills, which can potentially be further developed through continued engagement with them. Finally, curation involves the “human-mediated automation of content collection” (Antonio, Martin, and Stagg). The curator must accept or reject the content generated by the search algorithm, which necessitates a level of metacognitive analysis to determine the value of a piece of content. While there are countless tools laying claim to the digital curation label, including Pinterest, Storify, and Pearltrees, Scoop.it was selected for this study, as the authors consider that it adheres most closely to the stated definition of curation. Scoop.it requires the user to define the sources from which content will be suggested and to make an informed decision about which pieces of content are appropriate for the collection they are creating. This requires the curator to critically evaluate the relevance, currency, and validity (information literacy) of the suggested materials. Additionally, users can include content from other Scoop.it pages, which is referred to as “re-scooping”. Scoop.it therefore relies on an active editorial role undertaken by the user in the selection, or rejection, of content. That is, the owner of a particular collection makes the final decision regarding what will appear on their Scoop.it page. The content is then displayed visually, with the collection growing as new content is added. The successful use of Scoop.it depends on the curator’s ability to interpret and critically assess digital information. This study is thus built on the premise that the metacognitive processes inherent in the discovery of traditional, non-Web-based information are transferable to the digital environment and that Scoop.it can, as such, be utilised for the cultivation of digital information literacy skills. Digital Information Literacy According to the Laboratory for Innovative Technology in Education at the University of Houston, “digital information literacy” refers to the ability to effectively analyse and evaluate evidence; to analyse and evaluate alternate points of view; to synthesise and make connections between information and arguments; and to reflect critically, interpret, and draw conclusions based on analysis. 
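The curation workflow described above (user-defined sources, algorithmic suggestion, and a human accept-or-reject decision) can be pictured in a few lines of code. The sketch below is our own illustration, not Scoop.it's actual API; every class, function and source name in it is hypothetical:

```python
# Illustrative sketch of human-mediated curation (hypothetical names;
# not Scoop.it's real API). A topic draws suggestions from user-defined
# sources; the curator's judgment decides what enters the collection.
from dataclasses import dataclass, field

@dataclass
class Item:
    title: str
    source: str
    url: str

@dataclass
class Topic:
    name: str
    sources: list                      # sources the user has chosen to follow
    collection: list = field(default_factory=list)

    def review(self, suggestions, accept):
        """The editorial step: keep only suggested items from followed
        sources that the curator's accept() judgment lets through."""
        for item in suggestions:
            if item.source in self.sources and accept(item):
                self.collection.append(item)   # 'scooped' onto the page

# Hypothetical usage: curating resources on an emerging technology
topic = Topic("Emerging Technology", sources=["wired.com", "abc.net.au"])
suggestions = [
    Item("Quantum computing explained", "wired.com", "https://example.com/q"),
    Item("Celebrity gossip", "gossip.example", "https://example.com/g"),
]
topic.review(suggestions, accept=lambda i: "quantum" in i.title.lower())
print([i.title for i in topic.collection])     # ['Quantum computing explained']
```

The design point the sketch makes is simply that the algorithm only proposes; the curator's accept() judgment, where the information literacy resides, decides what is kept.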
Research suggests that the digital information literacy skills of higher education students are inadequate (White; Antonio, Tuffley and Martin) and that further training in how to assess the value, credibility, and reliability of information is required. According to CIBER’s Information Behaviour report, students often believe that they are information literate (based on their ability to check the validity of sources) and yet, in reality, their methods may not be sufficiently rigorous to qualify. Students may not be adequately equipped with the information literacy skills required to retrieve and critically evaluate sources outside of those that are institutionally provided, such as textbooks and assigned readings. Moreover, a report by the Committee of Inquiry (Hughes) addresses both the digital divide among students and the responsibility of the higher education sector to ensure that students are equipped with the information literacy skills required to search, authenticate, and critically evaluate material from multiple sources. Throughout history, educators have been teaching traditional literacy skills—reading, writing, finding information in libraries—to students. However, in an increasingly digital society, where a wealth of information is available online, higher education institutions need to teach students how to apply these metacognitive skills—searching, retrieving, authenticating, critically evaluating, and attributing material—to the online environment. Many institutions continue to adhere to the age-old practice of exclusive use of peer-reviewed sources for assessment tasks (Antonio and Tuffley). We argue that this is an unnecessary limitation; when students are denied access to non-peer-reviewed Web-based resources, they are not developing the skills they need to determine the credibility of digital information. While it is not suggested that the solution is to simply allow students to use Wikipedia as their primary reference point, we acknowledge that printed texts and journal articles are not the only sources of credible, authoritative information. The current study is thus built on the premise that students need opportunities to help them develop their digital information literacy skills and, in order to do this, they must interact with and utilise Web-based content. The desirability of using curation tools for developing students’ digital information literacy skills thus forms the foundation of this article. Method For the purpose of this study, a group of 258 first-year students enrolled in a Communications for ICT course curated digital content for the research component of an assessment task. These ICT students were selected, firstly, because a level of proficiency with digital technology was assumed and, secondly, because previous course evaluations indicated a desire on the part of the students for technology to be integrated into the course, as a traditional essay was deemed unsuitable for ICT students. The assignment consisted of two parts: a written essay about an emerging technology and an annotated bibliography. The students were required to create a Scoop.it presentation on a particular area of technology and curate content that would assist the essay-writing component of the task. On completion of the assessment task, the students submitted their Scoop.it URL to the course lecturer and were invited to complete an anonymous online survey. 
The survey consisted of 20 questions—eight addressed demographic factors, three were open-ended (qualitative), and nine were multiple-choice items that specifically assessed the students’ beliefs about whether or not the digital curation task had helped them develop their digital information literacy skills. The analysis below pertains to these nine multiple-choice items. Results and Discussion Of the 258 students who completed the assessment task, 89 participated in the survey. The students were asked: “What were the primary benefits of using the curation tool Scoop.it?” The students were permitted to select multiple responses for this item: 69% of participants said the primary benefit of using Scoop.it was “Engaging with my topic”, while 62% said “Learning how to use a new tool”, and 53% said “Learning how to assess the value of Web-based content” was the primary benefit of the curation task. This suggests that the process of digital curation as described in this project could, potentially, be used to enhance students’ digital information literacy skills. It is noteworthy that the participants in this study were not given any specific instructions on how to assess online information before doing the assignment. They were presented with a one-page summary of what constitutes an annotated bibliography; however, a specific set of guidelines for the types of processes that could be considered indicative of digital information literacy skills was not provided. This might have included the date, for currency; author credentials; cross-checking with other sources, etc. It is therefore remarkable that more than 50% of respondents believed that the act of curation had positively impacted their performance on this assessment task and enhanced their ability to critically assess the value of Web-based content. This strongly suggests that the simple act of being exposed to online information, and using it in a purposeful way (in this case to research an emerging technology), can aid the development of critical thinking skills. The students were asked to indicate the extent to which they agreed or disagreed with a series of eight statements, each of which addressed a specific component of digital information literacy. Responses were presented on a Likert scale ranging from strongly agree to strongly disagree. For analysis, the students’ responses were conflated: strongly agree with agree, and strongly disagree with disagree. Statement 1: The use of Scoop.it helped me develop my critical thinking skills. 44% of respondents agreed that the curation tool Scoop.it had helped them develop their critical thinking skills and 30% disagreed. Statement 2: As a result of using Scoop.it, I feel I can make judgments about the value of digital content. 43% of respondents agreed, compared to 22% who disagreed, that the curation tool Scoop.it had helped them make judgments about the value of digital content. Statement 3: As a result of using Scoop.it, I feel I can synthesise and organise ideas and information. 58% of respondents agreed that curation via Scoop.it helped them synthesise and organise ideas and information, while 14% disagreed. Statement 4: As a result of using Scoop.it, I feel I can make judgments about the currency of information. 43% of respondents agreed and 21% disagreed that using Scoop.it had assisted them in their ability to make judgments about the currency of information. Statement 5: As a result of using Scoop.it, I feel I can analyse content in depth. 
37% of respondents agreed that the curation task had helped them analyse content in depth. In contrast, 21% disagreed. Statements 1 to 5 each address a specific component of digital information literacy—the ability to think critically; to make judgments about the value of content; to synthesise and organise ideas and information; to make judgments about the currency of the information; and to analyse content in depth. In response to each of these five components, a greater percentage of students agreed than disagreed that the Scoop.it task helped them develop their digital information literacy skills. By its very nature, Scoop.it generates content based on the keyword parameters entered by the user when creating a given topic. The user is then responsible for trawling through and evaluating this content in order to make an informed decision about what content they wish to appear on their Scoop.it page. As such, it is perhaps not particularly surprising that the students in this study indicated that the practice of curating content helped them develop their digital information literacy skills. It would, however, be interesting to explore whether or not these students were confident in their abilities prior to undertaking the Scoop.it task, as previous research (CIBER) suggests. Without this information, it is difficult to draw conclusions about the success, or otherwise, of the curation task for cultivating the digital information literacy skills of higher education students. Statement 6: As a result of using Scoop.it, I feel able to cite Web-based information. 48% of respondents agreed that using Scoop.it had assisted them in citing Web-based information, while 24% disagreed. Statement 7: As a result of using Scoop.it, I feel confident in my ability to use Web-based content in my assignments. 52% of respondents believed that using Scoop.it to curate resources had positively contributed to their confidence in using Web-based content for their assignments, compared to 17% who disagreed. The results of statements 6 and 7 indicate that further instruction in using and citing non-peer-reviewed online resources may be required; however, this will not be possible if higher education institutions continue to mandate the exclusive use of journal articles and textbooks, to the exclusion of other non-peer-reviewed Web-based information, such as blogs and wikis. More than half of the students were more confident using digital information following the Scoop.it task, which suggests that the opportunity to engage with the alternate sources of information generated by the Scoop.it platform (such as blogs and wikis and digital newspapers) encouraged the students to think critically about how such sources can be incorporated into academic writing. Statement 8: As a result of using Scoop.it, I feel I can distinguish between good and bad Web-based content. While 38% of students who responded to the survey said that using Scoop.it to curate content had enabled them to distinguish between high- and low-quality information, 25% did not believe that this was the case, and an additional 37% were neither confident nor unconfident about distinguishing between good and bad Web-based content. 
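The agreement figures quoted for each statement come from the conflation step noted in the Method: collapsing the five-point scale into agree, neutral, and disagree groups before computing percentages. As a minimal sketch of that tally (our illustration, not the study's code, and with entirely made-up response data):

```python
# Minimal sketch of conflating five-point Likert responses into
# agree / neutral / disagree percentages. All response data is hypothetical.
from collections import Counter

def conflate_likert(responses):
    groups = {
        "strongly agree": "agree",
        "agree": "agree",
        "neither agree nor disagree": "neutral",
        "disagree": "disagree",
        "strongly disagree": "disagree",
    }
    counts = Counter(groups[r] for r in responses)
    total = len(responses)
    # Percentage of respondents in each conflated group, rounded
    return {group: round(100 * n / total) for group, n in counts.items()}

# Made-up sample of 89 responses (matching the survey's respondent count)
sample = (["strongly agree"] * 20 + ["agree"] * 32
          + ["neither agree nor disagree"] * 25
          + ["disagree"] * 10 + ["strongly disagree"] * 2)
print(conflate_likert(sample))  # {'agree': 58, 'neutral': 28, 'disagree': 13}
```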
In keeping with previous research (White), the results of this case study suggest that, while many students believed that the Scoop.it task encouraged them to think critically about the quality of non-peer-reviewed digital resources, they were not necessarily confident in their ability to distinguish good from poor content. The implication, as Hughes contends, is that there is a need for educators to ensure that higher education students are equipped with these metacognitive skills prior to leaving university, as it is imperative that we produce graduates who can function in an increasingly digital society. Conclusion The rising tide of digital information in the twenty-first century necessitates the development of new approaches to making sense of the information found on the World Wide Web. Such is the exponentially expanding volume of this information on the Web—so-called ‘big data’—that, unless a new breed of tools for sifting and arranging information is made available to those who use the Web for information-gathering, their capacity to deal with the volume will be overwhelmed. The new breed of digital curation tools, such as Scoop.it, is a rational response to this emerging issue. We have made the case that, by using digital curation tools to make sense of data, users are able to discern, at least to some extent, the quality and reliability of information. While this is not a substitute for peer-reviewed, academically rigorous sources, digital curation tools arguably have a supplementary role in an educational context—perhaps as a preliminary method for gathering general information about a topic area before diving deeper with peer-reviewed articles in the second pass. This dual perspective may prove to be a beneficial approach, as it has the virtue of considering both the breadth and depth of a topic. Even without formal instruction on assessing the value of Web content, no less than 53% of participants felt the primary benefit of using the digital curation tool was assessing the value of such content. This result strongly indicates the potential benefits of combining digital curation tools with formal content-evaluation instruction. This represents a promising avenue for future research. References Antonio, A., N. Martin, and A. Stagg. “Engaging Higher Education Students via Digital Curation.” Future Challenges, Sustainable Futures (2012). 10 Feb. 2013 ‹http://www.ascilite.org.au/conferences/wellington12/2012/pagec16a.html›. Antonio, A., D. Tuffley, and N. Martin. “Creating Active Engagement and Cultivating Information Literacy Skills via Scoop.it.” Electric Dreams (2013). 4 May 2015 ‹http://www.ascilite.org.au/conferences/sydney13/program/proceedings.pdf›. Antonio, A., and D. Tuffley. “Creating Educational Networking Opportunities with Scoop.it.” Journal of Creative Communications 9.2 (2014): 185-97. CIBER. “Information Behaviour of the Researcher of the Future.” JISC (2008). 5 May 2015 ‹http://partners.becta.org.uk/index.php?section=rh&catcode=_re_rp_02&rid=15879›. Hughes, A. “Higher Education in a Web 2.0 World.” JISC (2009). 6 May 2015 ‹http://www.jisc.ac.uk/publications/generalpublications/2009/heweb2.aspx›. Laboratory for Innovative Technology in Education. “New Technologies and 21st Century Skills.” (2013). 5 May 2015 ‹http://newtech.coe.uh.edu/›. Rosenbaum, S. “Why Content Curation Is Here to Stay.” Mashable 3 Mar. 2010. 7 May 2015 ‹http://mashable.com/2010/05/03/content-curation-creation/›. Scime, E. 
“The Content Strategist as Digital Curator.” Dopedata 8 Dec. 2009. 30 May 2012 ‹http://www.dopedata.com/2009/12/08/the-content-strategits-as-digital-curator/›. Wheeler, S. “The Great Collective.” 2011. ‹http://stevewheeler.blogspot.com.au/2011/07/great-collective.html#!/2011/07/great-collective.html›. White, D. “Digital Visitors and Residents: Progress Report.” JISC (2012). 28 May 2012 ‹http://www.oclc.org/research/activities/vandr.html›.
APA, Harvard, Vancouver, ISO, and other styles
47

Wallace, Derek. "E-Mail and the Problems of Communication." M/C Journal 3, no. 4 (August 1, 2000). http://dx.doi.org/10.5204/mcj.1862.

Full text
Abstract:
The Language in the Workplace project, based in the School of Linguistics and Applied Language Studies at Victoria University of Wellington, New Zealand, has for most of its history concentrated on oral interaction in professional and manufacturing organisations. Recently, however, the project team widened its scope to include an introductory investigation of e-mail as a mode of workplace interaction. The ultimate intention is to extend the project's purview to encompass all written modes, thereby allowing a fuller focus on the complex interrelationships between communication media in the workplace. The Problems of Communication In an illuminating recent study, John Durham Peters explores problems that have dogged the notion of 'communication' (the term in this sense originating only in the late nineteenth century) from the time of Plato. The overarching historical problem he discusses is the recurrent desire for complete communication, the illusory dream of transferring completely and without modification any idea, thought, or intention from one mind to another. There are two further and related problems that are particularly germane to my purposes here. A belief, at one extreme, that communication 'technologies' will interfere with the 'natural' processes of oral face-to-face interaction; together with its obverse, that communications plural (new technologies) will solve the problems of communication singular (self-other relations). A notion that dissemination (communication from one to many) [1] is an inferior and distorting mode, inherently deterministic, compared with the openness of (preferably one-on-one) dialogue. Perhaps first formulated in Plato's Phaedrus, this lament has reverberated ever since, radio providing the instance par excellence [2]. Yet another problem lies in the oppositions creating and sustaining these perceived problems, and their resultant social polarisations. Peters argues eloquently that technologies will never solve the differences in intention and reception amongst socially and therefore differentially positioned interlocutors. (Indeed, he counts it as a benefit that human beings cannot exempt themselves from the recognition and negotiation of individual and collective difference.) And he demonstrates that dialogue and dissemination are equally subject to imperfections and benefits. However, the perceptions remain, and that brings its own problems, given that people continue to act on the basis of unrealistic assumptions about communication. Looked at in this context, electronic mail (which Peters does not include in his historical studies) is a particularly fruitful site of investigation. I will focus on discussing the two problems enumerated above with reference to some of the academic and business literature on e-mail in the workplace; a survey conducted in part of a relatively large organisation in Wellington; and a public e-mail forum of primarily scientists and business people concerning New Zealand's future development. Communicative Distortion The first communication technology to be extensively critiqued for its corruption of social intercourse was writing (by Socrates in Phaedrus). Significantly, e-mail has often been characterised, not unreasonably, as a hybrid of speech and writing, and as returning written communication in the workplace toward the 'immediacy' and 'simplicity' of speech. In fact, as many practitioners do not sufficiently appreciate, informality and intimacy in e-mail communication have to be worked at. 
Efforts are made by some to use friendly salutations; a chatty, colloquial style; typographical representations of body language; and to refrain from tidying up errors and poor expression (which backfires on them when addressing sticklers for correctness, or when, as often happens, the message is full of obscurities and lacunae). When these attempts are not made, receivers impute to the messages the coldness and impersonality of the most functional letters and notes -- and this is only enhanced by the fact that so much e-mail in the workplace is used for directives (instructions and requests) or announcements (more specifically, proclamations; see below). In contrast to the initial reception of some earlier communication technologies, e-mail was widely welcomed at first. It was predicted to usher in a new egalitarian and democratic order of communication by flattening out or even by-passing hierarchical relations (Sproull and Kiesler; any issue of Wired magazine [see Frau-Meigs]). The realisation that other commercial factors were also contributing to this flattening out no doubt helped to dispel the utopian view (Casey; Gee) [3]. Subsequent literature has given more emphasis to the sinister aspects of e-mail -- its deployment by managers in the surveillance, monitoring, and performance measurement of employees, its capacity to support convenient and efficient reporting regimes, its durability, and its traceability (Brigham and Corbett; Corbett). This historical trajectory in attitudes towards, and uses of, e-mail, together with the potential variation in readers' interpretations of the writer's feelings, means that people are quite as likely to conceive of e-mail as cold and impersonal as they are to impute to it more positive feelings. This is borne out in the organisational survey carried out as a part of this research. Of the respondents working in what I will call a professional capacity, 50 percent (the same proportion for both male and female) agreed that e-mail creates a friendlier environment, while only a small percentage of the remainder were neutral. Most disagreed. Interestingly, only a third of clerical staff agreed. One can readily speculate that the differences between these two occupational classes were a significant factor with regard to the uses e-mail is put to (more information sharing as equals on the part of professionals). Those who felt that e-mail contributed to a less friendly environment typically referred to the 'loss of personal contact', and to its ability to allow people to distance themselves from others or 'hide behind' the technology. In a somewhat paradoxical twist of this perceived characteristic, it appears that e-mail can reinforce the prevailing power relations in an organisation by giving employees a way of avoiding the (physical) brunt of these relations, and therefore of tolerating them. Employees have the sense that they can approach a superior through e-mail in a way that is both comfortable for the employee (not having to physically encounter their superior or, as one informant put it, "not have to cope with the boss's body language"), and convenient for the superior [4]. At the same time, interestingly, respondents to our surveys have generally been adamant that e-mail is not the medium for conflict resolution or discussion of significant or sensitive matters pertaining to a manager's relationship with an individual employee. 
In the large Wellington service organisation surveyed for this study, 70% of the sample said they never or almost never used e-mail for these purposes. It was notable, however, that for professional employees, where a gender distinction was used in the survey, 80% of women were of this view, compared with 60% of men. Indeed, nearly 10% of men reported using e-mail frequently for conflict resolution purposes. In sum, there is the potential in e-mail for a fundamental distortion; one that is seemingly the opposite of the anti-technologists' charge of corruption of communication by writing (but arguably with the same result), and one that very subtly contradicts, while appearing to support, the utopianism of the digerati. The conventions of e-mail can allow employees to have a sense of participation and equality while denying them any real power or influence over important matters or directions of the organisation. E-mail, in other words, may allow co-workers to communicate across underlying tensions and conflicts by effectively suppressing conflict. This may have advantages for enabling an organisation's work to continue in the face of inevitable personality differences. It may also damage the chances of sustaining effective workplace relationships, especially if individuals generalise their use of e-mail, rather than selecting strategically from all the communicational resources available to them. Dialogue and Dissemination Notwithstanding the point made earlier in relation to radio about the flexibility of technology as a societal accomplishment (see note 2), e-mail, I suggest, is unique in the extent of its inherent ability to alternate freely between both poles of the dialogue-dissemination dichotomy. It is equally adept at allowing one to broadcast to many as it is at enabling two or more people to conduct a conversation. What complicates this ambidexterity of e-mail is that, as Peters points out, in contradistinction to the contemporary tendency to valorise the reciprocity and interaction of dialogue, "dialogue can be tyrannical and dissemination can be just" (34). Consequently, one cannot make easy assumptions about the manner in which e-mail is being used. It is tempting, for example, to conclude from the preponderance of e-mail being used for announcements and simple requests that the supposed benefits of dialogue are not being achieved. This conclusion is demonstrably wrong on two related counts: If e-mail is encouraging widespread dissemination of information which could have been held back (and arguably would have been held back in large organisations lacking e-mail's facilitative qualities), then the workforce will be better informed, and hence more able -- and more inclined! -- to engage in dialogue. The uses to which e-mail is put must not be viewed in isolation from the associated use of other media. If communication per se (including dialogue) is increasing, it may be that e-mail (as dissemination) is making that possible. Indeed, our research showed a considerable unanimity of perception that communication overall has significantly increased since the introduction of e-mail. This is not necessarily to claim that the quality of communication has increased (there is a degree of e-mail communication that is regarded as unwanted). But the fact that a majority of respondents reported increases in use or stability of use across almost all media, including face-to-face interaction, suggests that a more communicative climate may be emerging. 
We need then to be more precise about the genre of announcements when discussing their organisational implications. Responses in focus group discussions indicate that the use of e-mail for homilies or 'feel good' messages from the CEO (rather than making the effort to talk face-to-face to employees) is not appreciated. Proclamations, too, are better delivered off-line. Similarly, instructions are better formulated as requests (i.e. with a dialogic tone). As I noted earlier, clerical staff, who are more likely to be on the receiving end of instructions, were less inclined to agree that e-mail creates a friendlier environment. Even more than face-to-face, group interaction by e-mail allows certain voices to be ignored. Where, as often, there are multiple responses to a particular message, subsequent contributors can use selective responses to strongly influence the direction of the discussion. An analysis of a lengthy portion of the corpus reveals that certain key participants -- often effectively in alliance with like-minded members who endorse their interventions -- will regularly turn the dialogue back to a preferred thread by swift and judicious responses. The conversation can move very quickly away from a new perspective not favoured by regular respondents. It is also possible for a participant sufficiently well regarded by a number of other members to leave the discussion for a time (as much as two or three weeks) and on their return resurrect their favoured perspective by retrieving and responding to a relatively old message. It is clear from this forum that individual reputation and status can carry as much weight on line as they can in face-to-face discussion. Conclusion Peters points out that since the late nineteenth century, of which the invention of the words 'telepathy' and 'solipsism' is emblematic, 'communication' "has simultaneously called up the dream of instantaneous access and the nightmare of the labyrinth of solitude" (5). The ambivalence shown towards e-mail by many of its users is clearly the result of the history of responses to communications technology, and of the particular flexibility of e-mail, which makes it an example of this technology par excellence. For the sake of the development of their communicational capabilities, it would be a pity if people continued to jump to the conclusions encouraged by dichotomous conceptions of e-mail (intimate/impersonal, democratic/autocratic, etc.), rather than consciously working to develop a reflexive, open, and case-specific relationship with the technology. Footnotes [1] This does not necessarily exclude oral face-to-face: Peters discusses Jesus's presentation of parables to the crowd as an instance of dissemination. [2] The point is not as transparent as it can now seem. As Peters writes: "It is a mistake to equate technologies with their societal applications. For example, 'broadcasting' (one-way dispersion of programming to an audience that cannot itself broadcast) is not inherent in the technology of radio; it was a complex social accomplishment ... . The lack of dialogue owes less to broadcasting technologies than to interests that profit from constituting audiences as observers rather than participants" (34). 
[3] That is, post-Fordist developments leading to downsizing of middle management, working in teams, valorisation of flexibility ('flexploitation'). [4] There is no doubt an irony here that escapes the individual employee: namely, every other employee is e-mailing the boss 'because it is convenient for the boss', and meanwhile the boss is gritting his or her teeth as an avalanche of e-mail descends. References Brigham, Martin, and J. Martin Corbett. "E-mail, Power and the Constitution of Organisational Reality." New Technology, Work and Employment 12.1 (1997): 25-36. Casey, Catherine. Work, Self and Society: After Industrialism. London and New York: Routledge, 1995. Corbett, Martin. "Wired and Emotional." People Management 3.13 (1997): 26-32. Gee, James Paul. "The New Literacy Studies: From 'Socially Situated' to the Work of the Social." Situated Literacies: Reading and Writing in Context. Eds. David Barton et al. London and New York: Routledge, 2000. 180-96. Frau-Meigs, Divina. "A Cultural Project Based on Multiple Temporary Consensus: Identity and Community in Wired." New Media and Society 2.2 (2000): 227-44. Peters, John Durham. Speaking into the Air: A History of the Idea of Communication. Chicago and London: U of Chicago P, 1999. Sproull, Lee, and Sara Kiesler. Connections: New Ways of Working in the Networked Organization. Cambridge, MA: MIT P, 1992.
APA, Harvard, Vancouver, ISO, and other styles
48

Cutler, Ella Rebecca Barrowclough, Jacqueline Gothe, and Alexandra Crosby. "Design Microprotests." M/C Journal 21, no. 3 (August 15, 2018). http://dx.doi.org/10.5204/mcj.1421.

Full text
Abstract:
Introduction This essay considers three design projects as microprotests. Reflecting on the ways design practice can generate spaces, sites and methods of protest, we use the concept of microprotest to consider how we, as designers ourselves, can protest by scaling down, focussing, slowing down and paying attention to the edges of our practice. Design microprotest is a form of design activism that is always collaborative, takes place within a community, and involves careful translation of a political conversation. While microprotest can manifest in any design discipline, in this essay we focus on visual communication design. In particular we consider the deep, reflexive practice of listening as the foundation of microprotests in visual communication design. While small in scale and fleeting in duration, these projects express rich and deep political engagements through conversations that create and maintain safe spaces. While many design theorists (Julier; Fuad-Luke; Clarke; Irwin et al.) have done important work to contextualise activist design as a broad movement with overlapping branches (social design, community design, eco-design, participatory design, critical design, transition design, etc.), the scope of our study takes 'micro' as a starting point. We focus on the kind of activism that takes shape in moments of careful design; these are moments when designers move politically, rather than necessarily within political movements. These microprotests respond to community needs through design more than they articulate a broad activist design movement. As such, the impacts of these microprotests often go unnoticed outside of the communities within which they take place. In his analysis of design activism, Julier proposes "four possible conceptual tactics for the activist designer that are also to be found in particular qualities in the mainstream design culture and economy" (Julier, Introduction 149). We use two of these tactics to begin exploring a selection of attributes common to design microprotests: temporality – which describes the way that speed, slowness, progress and incompletion are dealt with; and territorialisation – which describes the scale at which responsibility and impact is conceived (227). In each of the three projects to which we apply these tactics, one of us had a role as a visual communicator. As such, the research is framed by the knowledge-creating paradigm described by Jonas as "research through design". We also draw on other conceptualisations of design activism, and the rich design literature that has emerged in recent times to challenge the colonial legacies of design studies (Schultz; Tristan et al.; Escobar). Some analyses of design activism already focus on the micro or the minor. For example, in their design of social change within organisations as an experimental and iterative process, Lensjkold, Olander and Hasse refer to Deleuze and Guattari's minoritarian: minor design activism is "a position in co-design engagements that strives to continuously maintain experimentation" (67). Like minor activism, design microprotests are linked to the continuous mobilisation of actors and networks in processes of collective experimentation. However microprotests do not necessarily focus on organisational change. 
Rather, they create new (and often tiny) spaces of protest within which new voices can be heard and different kinds of listening can be done. In the first of our three cases, we discuss a representation of transdisciplinary listening. This piece of visual communication is a design microprotest in itself. This section helps to frame what we mean by a safe space by paying attention to the listening mode of communication. In the next sections we explore temporality and territorialisation through the design microprotests Just Spaces, which documents the collective imagining of safe places for LBPQ (Lesbian, Bisexual, Pansexual, and Queer) women and non-binary identities through a series of graphic objects, and Conversation Piece, a book written, designed and published over three days as a proposition for a collective future. A Representation of Transdisciplinary Listening The design artefact we present in this section is a representation of listening and can be understood as a microprotest emerging from a collective experiment that materialises firstly as a visual document asking questions of the visual communication discipline and its role in a research collaboration, and also as a mirror for the interdisciplinary team to reflexively develop transdisciplinary perspectives on the risks associated with the release of environmental flows in the upper reaches of the Hawkesbury Nepean River in NSW, Australia. This research project was funded through a Challenge Grant Scheme to encourage transdisciplinarity within the University. The project team worked with the Hawkesbury Nepean Catchment Management Authority in response to the question: What are the risks to maximising the benefits expected from increased environmental flows? Listening and visual communication design practice are inescapably linked. Renowned American graphic designer and activist Sheila de Bretteville describes a consciousness and a commitment to listening as an openness, rather than antagonism and argument. Fiumara describes listening as a nascent or emerging skill and points to listening as the antithesis of the Western culture of saying and expression. For a visual communication designer there is a very specific listening that can be described as visual hearing. This practice materialises the act of hearing through a visualisation of the information or knowledge that is shared. This act of visual hearing is a performative process tracing the actors' perspectives. This tracing is used as content, which is then translated into a transcultural representation constituted by the designerly act of perceiving multiple perspectives. The interpretation contributes to a shared project of transdisciplinary understanding. This transrepresentation (Fig. 1) is a manifestation of a small interaction among a research team comprised of a water engineer, sustainable governance researcher, water resource management researcher, environmental economist and a designer. This visualisation is a materialisation of a structured conversation in response to the question: What are the risks to maximising the benefits expected from increased environmental flows? It represents a small contribution that provides an opportunity for reflexivity and documents a moment in time in response to a significant challenge. In this translation of a conversation as a visual representation, a design microprotest is made against reduction, simplification, antagonism and argument. 
This may seem intangible, but as a protest through design, "it involves the development of artifacts that exist in real time and space, it is situated within everyday contexts and processes of social and economic life" (Julier 226). This representation locates conversation in a visual order that responds to particular categorisations of the political, the institutional, the socio-economic and the physical in a transdisciplinary process that focusses on multiple perspectives. Figure 1: Transrepresentation of responses by an interdisciplinary research team to the question: What are the risks to maximising the benefits expected from increased environmental flows in the Upper Hawkesbury Nepean River? (2006) Just Spaces: Translating Safe Spaces Listening is the foundation of design microprotest. Just Spaces emerged out of a collaborative listening project, It's OK! An Anthology of LBPQ (Lesbian, Bisexual, Pansexual and Queer) Women's and Non-Binary Identities' Stories and Advice. By visually communicating the way a community practices supportive listening (both in a physical form as a book and as an online resource), It's OK! opens conversations about how LBPQ women and non-binary identities can imagine and help facilitate safe spaces. These conversations led to thinking about the effects of breaches of safe spaces on young LBPQ women and non-binary identities. In her book The Cultural Politics of Emotion, Sara Ahmed presents Queer Feelings as a new way of thinking about Queer bodies and the way they use and impress upon space. She makes an argument for creating and imagining new ways of creating and navigating public and private spaces. As a design microprotest, Just Spaces opens up Queer ways of navigating space through a process Ahmed describes as "the 'non-fitting' or discomfort ... an opening up which can be difficult and exciting" (Ahmed 154). Just Spaces is a series of workshops, translated into a graphic design object, and presented at an exhibition in the stairwell of the library at the University of Technology Sydney. It protests the requirement of navigating heteronormative environments by suggesting 'Queer' ways of being in and designing in space. The work offers solutions, suggestions, and new ways of doing and making by offering design methods as tools of microprotest to its participants. For instance, Just Spaces provides a framework for sensitive translation, through the introduction of a structure that helps build personas based on the game Dungeons and Dragons (a game popular among certain LGBTQIA+ communities in Sydney). Figure 2: Exhibition: Just Spaces, held at UTS Library from 5 to 27 April 2018. By focussing the design process on deep listening and rendering voices into visual translations, these workshops responded to Linda Tuhiwai Smith's idea of the "outsider within", articulating the way research should be navigated in vulnerable groups that have a history of being exploited as part of research. Through reciprocity and generosity, trust was generated in the design process, which included a shared dinner, opening up participant-controlled safe spaces. To open up and explore ideas of discomfort and safety, two workshops were designed to provide safe and sensitive spaces for the group of seven LBPQ participants and collaborators. Design methods such as drawing, group imagining and futuring using a central prototype as a prompt drew out discussions of safe spaces. 
The prototype itself was a small folded house (representative of shelter) printed with a number of questions, such as: "Our spaces are often unsafe. We take that as a given. But where do these breaches of safety take place? How was your safe space breached in those spaces?"

The workshops resulted in tangible objects, made by the participants, but these could not be made public because of privacy implications. So the next step was to use visual communication design to create sensitive and honest visual translations of the conversations. The translations trace images from the participants' words, sketches and notes. For example, handwritten notes are transcribed and reproduced with a font chosen by the designer based on the tone of the comment and by considering how design can retain the essence of a person as well as their anonymity. The translations focus on the micro: the micro breaches of safety; the interactions that take place between participants and their environment; and the everyday denigrating experiences that LBPQ women and non-binary identities go through on an ongoing basis. This translation process requires precise skills, sensitivity, care and deep knowledge of context. These skills operate at the smallest of scales through minute observation and detailed work. This micro-ness translates into the potential for truthfulness and care within the community, as it establishes a precedent through the translations for others to use and adapt for their own communities.

The production of the work for exhibition also occurred on a micro level, using a Risograph, a screenprinting photocopier often found in schools, community groups and activist spaces. The machine (ME9350) used for this project is collectively owned by a co-op of Sydney creatives called Rizzeria. Each translation was printed only five times on butter paper. Butter paper is a sensitive surface but difficult to work with, making the process slow, painstaking and demanding of care. All aspects of this process and project are small: the pieced-together translations made by assembling segments of conversations; zines that can be kept in a pocket and read intimately; the group of participants; and the workshop and exhibition spaces. These small spaces of safety and their translations make conversations possible, but also enable other safe spaces that move and intervene as design microprotests.

Figure 3: Piecing the translations together.

Figure 4: Pulling the translation off the drum; this was done for every print, making the process slow and requiring gentleness.

This project was and is about slowing down, listening and visually translating in order to generate and imagine safe spaces. In this slowness, as Julier describes, "...the activist is working in a more open-ended way that goes beyond the materialization of the design" (229). It creates methods for listening and collaboratively generating ways to navigate spaces that are fraught with micro conflict. As an act of territorialisation, it created tiny and important spaces as a design microprotest.

Conversation Piece: A Fast and Slow Book
Conversation Piece is an experiment in collective self-publishing. It was made over three days by Frontyard, an activist space in Marrickville, NSW, involved in community "futuring". Futuring for Frontyard is intended to empower people with tools to imagine and enact preferred futures, in contrast to what design theorist Tony Fry describes as "defuturing", the systematic destruction of possible futures by design.
Materialised as a book, Conversation Piece is also an act of collective futuring. It is a carefully designed process for producing dialogues between unlikely parties using an image archive as a starting point. Conversation Piece was modelled on the book sprint format. Devised by software designer Adam Hyde, book sprints are a method of collectively generating a book in just a few days and then publishing it. Book sprints are related to the programming sprints common in agile software development or Scrum, which are often used to make FLOSS (Free and Open Source Software) manuals. Frontyard had used these techniques in a previous project to develop the Non Cash Arts Asset Platform. Conversation Piece was also modelled on two participatory books made during sprints that focussed on articulating alternative futures: Collaborative Futures, made during Transmediale in 2009, and Futurish: Thinking Out Loud about Futures (2015).

The design for Conversation Piece began when Frontyard was invited to participate in the Hobiennale in 2017, a free festival emerging from the "national climate of uncertainty within the arts, influenced by changes to the structure of major arts organisations and diminishing funding opportunities." The Hobiennale was the first Biennale held in Hobart, Tasmania, but rather than producing a standard large art survey, it focussed on artist-run spaces and initiatives, emergent practices, and marginalised voices in the arts. Frontyard is not an artist collective and does not work for commissions. Rather, the response to the invitation was based on how much energy there was in the group to contribute to Hobiennale. At Frontyard, one of the ways collective and individual energy is accounted for is spoon theory, a disability metaphor used to describe the planning that many people have to do to conserve and ration energy reserves in their daily lives (Miserandino). As outlined in the glossary of Conversation Piece, spoon theory is:

A way of accounting for our emotional or physical energy and therefore our ability to participate in activities. Spoon theory can be used to collaborate with care and avoid guilt and burn out. Usually spoon theory is applied at an individual level, but it can also be used by organisations. For example, Frontyard had enough spoons to participate in the Hobiennale so we decided to give it a go. (180)

To make the book, Frontyard invited visitors to Hobiennale to participate in a series of open conversations that began with the photographic archive of the organisation over the two years of its existence. During a prototyping session, Frontyard designed nine diagrams that proposed ways to begin conversations by combining images in different ways.

Figure 5: Diagram 9. Conversation Piece: pp. 32-33.

One of the purposes of the diagrams, and the book itself, was to bring attention to the micro dynamics of conversation over time, and to create a safe space to explore the implications of these. While the production process and the book itself are micro (ten copies were printed and immediately given away), the decisions made in regard to licensing (a Creative Commons licence is used), distribution (via the Internet Archive) and content generation (through participatory design processes), and the project's commitment to open design processes (Van Abel, Evers, Klaassen and Troxler), mean its impact is unpredictable.
Running counter to the conventional copyright of books, open design borrows its definition, and at times its technologies and, here, its methods, from open source software design, to advocate the production of design objects based on fluid and shared circulation of design information. The tension between the abundance produced by an open approach to making, and the attention to the detail of relationships produced by slowing down and scaling down communication processes, is made apparent in Conversation Piece:

We challenge ourselves at Frontyard to keep bureaucratic processes as minimal and open as possible. We don't have an application or acquittal process: we prefer to meet people over a cup of tea. A conversation is a way to work through questions. (7)

As well as focussing on the micro dynamics of conversations, this project protests the authority of archives. It works to dismantle the hierarchies of art and publishing through the design of an open, transparent, participatory publishing process. It offers a range of propositions about alternative economies, the agency of people working together at small scales, and the many possible futures in the collective imaginaries of people rethinking time, outcomes, results and progress.

The contributors to the book are those in conversation: a complex network of actors that are relationally configured and themselves in constant change, so, as Julier explains, "the object is subject to constant transformations, either literally or in its meaning. The designer is working within this instability" (230). This is true of all design, but in this design microprotest, Frontyard works within this instability in order to redirect it. The book functions as a series of propositions about temporality and territorialisation, focussing on micro interventions rather than radical political movements. In one section, two Frontyard residents offer a story of migration that also serves as a recipe for purslane soup, a traditional Portuguese dish (Rodriguez and Brison). Another lifts all the images of hand gestures from the Frontyard digital image archive and represents them in a photo essay.

Figure 6: Talking to Rocks. Conversation Piece: p. 143.

Conclusion
This article is an invitation to momentarily suspend the framing of design activism as a global movement in order to slow down the analysis of design protests and start paying attention to the brief moments and small spaces of protest that energise social change in design practice. We offered three examples of design microprotests, opening with a representation of transdisciplinary listening in order to frame design as a way of interpreting and listening as well as generating and producing. The two following projects we describe are collective acts of translation: small, momentary conversations designed into graphic forms that can be shared, reproduced, analysed, and remixed. Such protests have their limitations. Beyond the artefacts, the outcomes generated by design microprotests are difficult to identify. While they push and pull at the temporality and territorialisation of design, they operate at a small scale. How design microprotests connect to global networks of protest is an important question yet to be explored.
The design practices of transdisciplinary listening, Queer Feelings and translations, and collaborative book sprinting, identified in these design microprotests, change the thoughts and feelings of those who participate in ways that are impossible to measure in real time, and sometimes cannot be measured at all. Yet these practices are important now, as they shift the way designers design, and the way others understand what is designed. By identifying the common attributes of design microprotests, we can begin to understand the way necessary political conversations emerge in design practice, for instance about safe spaces, transdisciplinarity, and archives. Taking a research-through-design approach, these can be understood over time, rather than just in the moment, and in specific territories that belong to community. They can be reconfigured into different conversations that change our world for the better.

References
Ahmed, Sara. "Queer Feelings." The Cultural Politics of Emotion. Edinburgh: Edinburgh UP, 2004. 143-167.
Clarke, Alison J. "'Actions Speak Louder': Victor Papanek and the Legacy of Design Activism." Design and Culture 5.2 (2013): 151-168.
De Bretteville, Sheila L. Design beyond Design: Critical Reflection and the Practice of Visual Communication. Ed. Jan van Toorn. Maastricht: Jan van Eyck Akademie Editions, 1998. 115-127.
Evers, L., et al. Open Design Now: Why Design Cannot Remain Exclusive. Amsterdam: BIS Publishers, 2011.
Escobar, Arturo. Designs for the Pluriverse: Radical Interdependence, Autonomy, and the Making of Worlds. Duke UP, 2018.
Fiumara, G.C. The Other Side of Language: A Philosophy of Listening. London: Routledge, 1995.
Fuad-Luke, Alastair. Design Activism: Beautiful Strangeness for a Sustainable World. London: Routledge, 2013.
Frontyard Projects. Conversation Piece. Marrickville: Frontyard Projects, 2018.
Fry, Tony. A New Design Philosophy: An Introduction to Defuturing. Sydney: UNSW P, 1999.
Hanna, Julian, Alkan Chipperfield, Peter von Stackelberg, Trevor Haldenby, Nik Gaffney, Maja Kuzmanovic, Tim Boykett, Tina Auer, Marta Peirano, and Istvan Szakats. Futurish: Thinking Out Loud about Futures. Linz: Times Up, 2014.
Irwin, Terry, Gideon Kossoff, and Cameron Tonkinwise. "Transition Design Provocation." Design Philosophy Papers 13.1 (2015): 3-11.
Julier, Guy. "From Design Culture to Design Activism." Design and Culture 5.2 (2013): 215-236.
Julier, Guy. "Introduction: Material Preference and Design Activism." Design and Culture 5.2 (2013): 145-150.
Jonas, W. "Exploring the Swampy Ground." Mapping Design Research. Eds. S. Grand and W. Jonas. Basel: Birkhauser, 2012. 11-41.
Kagan, S. Art and Sustainability. Bielefeld: Transcript, 2011.
Lenskjold, Tau Ulv, Sissel Olander, and Joachim Halse. "Minor Design Activism: Prompting Change from Within." Design Issues 31.4 (2015): 67-78. doi:10.1162/DESI_a_00352.
Max-Neef, M.A. "Foundations of Transdisciplinarity." Ecological Economics 53.53 (2005): 5-16.
Miserandino, C. "The Spoon Theory." <http://www.butyoudontlooksick.com>.
Nicolescu, B. "Methodology of Transdisciplinarity – Levels of Reality, Logic of the Included Middle and Complexity." Transdisciplinary Journal of Engineering and Science 1.1 (2010): 19-38.
Palmer, C., J. Gothe, C. Mitchell, K. Sweetapple, S. McLaughlin, G. Hose, M. Lowe, H. Goodall, T. Green, D. Sharma, S. Fane, K. Brew, and P. Jones. "Finding Integration Pathways: Developing a Transdisciplinary (TD) Approach for the Upper Nepean Catchment." Proceedings of the 5th Australian Stream Management Conference: Australian Rivers: Making a Difference. Thurgoona, NSW: Charles Sturt University, 2008.
Rodriguez and Brison. "Purslane Soup." Conversation Piece. Eds. Frontyard Projects. Marrickville: Frontyard Projects, 2018. 34-41.
Schultz, Tristan, et al. "What Is at Stake with Decolonizing Design? A Roundtable." Design and Culture 10.1 (2018): 81-101.
Smith, Linda Tuhiwai. Decolonising Methodologies: Research and Indigenous Peoples. New York: ZED Books, 1998.
Van Abel, Bas, et al. Open Design Now: Why Design Cannot Remain Exclusive. Bis Publishers, 2014.
Wing Sue, Derald. Microaggressions in Everyday Life: Race, Gender, and Sexual Orientation. London: John Wiley & Sons, 2010. XV-XX.
APA, Harvard, Vancouver, ISO, and other styles
49

Cinque, Toija. "A Study in Anxiety of the Dark." M/C Journal 24, no. 2 (April 27, 2021). http://dx.doi.org/10.5204/mcj.2759.

Full text
Abstract:
Introduction
This article is a study in anxiety with regard to social online spaces (SOS) conceived of as dark. There are two possible ways to define 'dark' in this context. The first is that communication is dark because it either has limited distribution, is not open to all users (closed groups are a case example), or is hidden. The second definition, which follows from the first, is the way that communication via these means is interpreted and understood. Dark social spaces disrupt the accepted top-down flow by the 'gazing elite' (data aggregators including social media), but anxious users might need to strain to notice what is out there, and this in turn destabilises one's reception of the scene. In an environment where surveillance technologies are proliferating, this article examines contemporary, dark, interconnected, and interactive communications for the entangled affordances that might be brought to bear. A provocation is that resistance through counterveillance or "sousveillance" is one possibility. An alternative (or addition) is retreating to or building 'dark' spaces that are less surveilled and (perhaps counterintuitively) less fearful.

This article considers critically the notion of dark social online spaces via four broad socio-technical concerns connected to the big social media services that have helped increase a tendency for fearful anxiety produced by surveillance and the perceived implications for personal privacy. It also shines light on the aspect of darkness where some users are spurred to actively seek alternative, dark social online spaces.

Since the 1970s, public-key cryptosystems have typically preserved security for websites, emails, and sensitive health, government, and military data, but this protection is now reduced (Williams). We have seen such systems exploited via cyberattacks and misappropriated data acquired by affiliations such as Facebook-Cambridge Analytica for targeted political advertising during the 2016 US elections. Via the notion of "parasitic strategies", such events can be described as news/information hacks "whose attack vectors target a system's weak points with the help of specific strategies" (von Nordheim and Kleinen-von Königslöw, 88). In accord with Wilson and Serisier's arguments (178), emerging technologies facilitate rapid data sharing, collection, storage, and processing, wherein subsequent "outcomes are unpredictable". This would also include the effect of acquiescence.

In regard to our digital devices, for some, being watched overtly—through cameras encased in toys, computers, and closed-circuit television (CCTV), to digital street ads that determine the resonance of human emotions in public places including bus stops, malls, and train stations—is becoming normalised (McStay, Emotional AI). It might appear that consumers immersed within this Internet of Things (IoT) are themselves comfortable interacting with devices that record sound and capture images for easy analysis and distribution across the communications networks. A counter-claim is that mainstream social media corporations have cultivated a sense of digital resignation, "produced when people desire to control the information digital entities have about them but feel unable to do so" (Draper and Turow, 1824). Careful consumers' trust in mainstream media is waning, with readers observing a strong presence of big media players in the industry and carefully picking their publications and public intellectuals to follow (Mahmood, 6).
A number now also avoid the mainstream internet in favour of alternate dark sites. This is done by users with "varying backgrounds, motivations and participation behaviours that may be idiosyncratic (as they are rooted in the respective person's biography and circumstance)" (Quandt, 42). By way of connection with dark internet studies via Biddle et al. (1; see also Lasica), the "darknet" is:

a collection of networks and technologies used to share digital content … not a separate physical network but an application and protocol layer riding on existing networks.

As we note from the quote above, the "dark web" uses existing public and private networks that facilitate communication via the Internet. Gehl (1220; see also Gehl and McKelvey) has detailed that this includes "hidden sites that end in '.onion' or '.i2p' or other Top-Level Domain names only available through modified browsers or special software. Accessing I2P sites requires a special routing program ... . Accessing .onion sites requires Tor [The Onion Router]". For some, this gives rise to social anxiety, read here as stemming from that which is not known, and an exaggerated sense of danger, which makes fight or flight seem the only options. This is often justified or exacerbated by the changing media and communication landscape, and depicted in popular documentaries such as The Social Dilemma or The Great Hack, which affect public opinion on the unknown aspects of internet spaces and the uses of personal data.

The question for this article remains whether the fear of the dark is justified. Consider that most often one will choose to make one's intimate bedroom space dark in order to have a good night's rest. We might pleasurably escape into a cinema's darkness for the stories told therein, or walk along a beach at night enjoying unseen breezes. Most do not avoid these experiences, choosing to actively seek them out. Drawing this thread, then, is the case made here that agency can also be found in the dark by resisting socio-political structural harms.

1. Digital Futures and Anxiety of the Dark

Fear of the dark
I have a constant fear that something's always near
Fear of the dark
Fear of the dark
I have a phobia that someone's always there

In the lyrics to the song "Fear of the Dark" (1992) by British heavy metal group Iron Maiden is a sense that that which is unknown and unseen causes fear and anxiety. Holding a fear of the dark is not unusual and varies in degree for adults as it does for children (Fellous and Arbib). Such anxiety connected to the dark does not always concern darkness itself. It can also be a concern for the possible or imagined dangers that are concealed by the darkness itself, as a result of cognitive-emotional interactions (McDonald, 16). Extending this claim is this article's non-binary assertion: while for some, technology and what it can do is frequently misunderstood and shunned as a result, for others who embrace the possibilities and actively take it on, it is learning by attentively partaking. Mistakes, solecisms, and frustrations are part of the process. Such conceptual theorising falls along a continuum of thinking. Global interconnectivity of communications networks has certainly led to consequent concerns (Turkle, Alone Together).
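The hidden-service access Gehl describes above (modified browsers or special software) can be made concrete in a few lines. The following is a minimal, hedged sketch added for illustration, not part of the original article: it assumes a locally running Tor daemon exposing its standard SOCKS5 proxy on port 9050, and the Python requests[socks] package; the .onion address is a placeholder.

```python
import requests

# Tor's local SOCKS5 proxy; the 'socks5h' scheme asks the proxy itself to
# resolve hostnames, which is required for .onion names (assumes a running
# Tor daemon listening on its default port, 9050).
TOR_PROXY = "socks5h://127.0.0.1:9050"

session = requests.Session()
session.proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

# Placeholder hidden-service address -- purely illustrative.
response = session.get("http://exampleonionaddressxxxxxxxxxx.onion/")
print(response.status_code)
```

Requests routed through such a session reach the server without exposing the client's IP address, which is the anonymity property the abstract later contrasts with VPN use.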
Much focus for anxiety has been on the impact upon social and individual inner lives, levels of media concentration, and power over and commercialisation of the internet. Of specific note is that increasing commercial media influence—such as Facebook and its acquisition of WhatsApp, Oculus VR, Instagram, CTRL-labs (translating movements and neural impulses into digital signals), LiveRail (video advertising technology), and Chainspace (blockchain)—regularly changes the overall dynamics of the online environment (Turow and Kavanaugh). This provocation was borne out recently when Facebook disrupted the delivery of news to Australian audiences via its service.

Mainstream social online spaces (SOS) are platforms which provide more than the delivery of media alone and have been conceptualised predominantly in a binary light. On the one hand, they can be depicted as tools for the common good of society through notional widespread access and as places for civic participation and discussion, identity expression, education, and community formation (Turkle; Bruns; Cinque and Brown; Jenkins). On the other hand, this end of the continuum of thinking about SOS is set hard against the view that SOS are operating as businesses with strategies that manipulate consumers to generate revenue through advertising, data, venture capital for advanced research and development, and company profit. In between the two polar ends of this continuum are the range of other possibilities, the shades of grey, that add contemporary nuance to understanding SOS in regard to what they facilitate, what the various implications might be, and for whom.

By way of a brief summary, anxiety of the dark is steeped, first, in the practices of privacy-invasive social media giants such as Facebook and its ancillary companies. Second are the advertising technology companies, surveillance contractors, and intelligence agencies that collect and monitor our actions and related data, as well as the increased ease of use and interoperability brought about by Web 2.0, which has seen a disconnection between technological infrastructure and social connection that acts to limit user permissions and online affordances. Third are concerns for the negative effects associated with depressed mental health and wellbeing caused by "psychologically damaging social networks", through sleep loss, anxiety, poor body image, real-world relationships, and the fear of missing out (FOMO; Royal Society for Public Health (UK) and the Young Health Movement). Here the harms are both individual and societal. Fourth is the intended acceleration toward post-quantum IoT (Fernández-Caramés), as quantum computing's digital components are continually being miniaturised. This is coupled with advances in electrical battery capacity and interconnected telecommunications infrastructures. The result of such is that the ontogenetic capacity of the powerfully advanced network/s affords supralevel surveillance. What this means is that through devices and the services that they provide, individuals' data is commodified (Neff and Nafus; Nissenbaum and Patterson). Personal data is enmeshed in 'things', requiring that decisions, whether overt, subtle, or hidden (dark), are scrutinised for the various ways they shape social norms and create consequences for public discourse, cultural production, and the fabric of society (Gillespie). Data and personal information are retrievable from devices, sharable in SOS, and potentially exposed across networks.
For these reasons, some have chosen to go dark by being "off the grid", judiciously selecting their means of communication and their 'friends'.

2. Is There Room for Privacy Any More When Everyone in SOS Is Watching?
An interesting turn comes through counterarguments against overarching institutional surveillance that underscore the uses of technologies to watch the watchers. This involves a practice of counter-surveillance whereby technologies are tools of resistance to go 'dark', and are used by political activists in protest situations for both communication and avoiding surveillance. This is not new and has long existed in an increasingly dispersed media landscape (Cinque, Changing Media Landscapes). For example, counter-surveillance video footage has been accessed and made available via live-streaming channels, with commentary in SOS augmenting networking possibilities for niche interest groups or micropublics (Wilson and Serisier, 178). A further example is the WordPress site Fitwatch, appealing for an end to what the site claims are issues associated with police surveillance (fitwatch.org.uk and endpolicesurveillance.wordpress.com). Users of these sites are called to post police officers' identity numbers and photographs in an attempt to identify "cops" that might act to "misuse" UK anti-terrorism legislation against activists during legitimate protests. Others that might be interested in doing their own "monitoring" are invited to reach out to identified personal email addresses or other private (dark) messaging software and application services such as Telegram (freeware and cross-platform).

In their work on surveillance, Mann and Ferenbok (18) propose that there is an increase in "complex constructs between power and the practices of seeing, looking, and watching/sensing in a networked culture mediated by mobile/portable/wearable computing devices and technologies". By way of critical definition, Mann and Ferenbok (25) clarify that "where the viewer is in a position of power over the subject, this is considered surveillance, but where the viewer is in a lower position of power, this is considered sousveillance". It is the aspect of sousveillance that is empowering to those using dark SOS. One might consider that not all surveillance is "bad" nor institutionalised. It is neither overtly nor formally regulated—as yet. Like most technologies, many of the surveillant technologies are value-neutral until applied towards specific uses, according to Mann and Ferenbok (18). But this is part of the 'grey area' for understanding the impact of dark SOS in regard to which actors or what nations are developing tools for surveillance, where access and control lie, and with what effects into the future.

3. Big Brother Watches, So What Are the Alternatives: Whither the Gazing Elite in Dark SOS?
By way of conceptual genealogy, consideration of contemporary perceptions of surveillance in a visually networked society (Cinque, Changing Media Landscapes) might be usefully explored through a revisitation of Jeremy Bentham's panopticon, applied here as a metaphor for contemporary surveillance. Arguably, this is a foundational theoretical model for integrated methods of social control (Foucault, Surveiller et Punir, 192-211), realised in the "panopticon" (prison) design of 1787 by Jeremy Bentham (Bentham and Božovič, 29-95) during a period of social reformation aimed at the improvement of the individual.
Like the power for social control over the incarcerated in a panopticon, police power, in order that it be effectively exercised, "had to be given the instrument of permanent, exhaustive, omnipresent surveillance, capable of making all visible … like a faceless gaze that transformed the whole social body into a field of perception" (Foucault, Surveiller et Punir, 213–4). In grappling with the impact of SOS for the individual and the collective in post-digital times, we can trace out these early ruminations on the complex documentary organisation through state-controlled apparatuses (such as inspectors and paid observers including "secret agents") via Foucault (Surveiller et Punir, 214; Subject and Power, 326-7) for comparison to commercial operators like Facebook.

Today, artificial intelligence (AI), facial recognition technology (FRT), and closed-circuit television (CCTV) for video surveillance are used for social control of appropriate behaviours. Exemplified by governments and the private sector is the use of combined technologies to maintain social order, from ensuring citizens cross the street only on green lights, to putting rubbish in the correct recycling bin or being publicly shamed, to making cashless payments in stores. The actions see advantages for individual and collective safety, sustainability, and convenience, but also register forms of behaviour and attitudes with predictive capacities. This gives rise to suspicions about a permanent account of individuals' behaviour over time. Returning to Foucault (Surveiller et Punir, 135), the impact of this finds a dissociation of power from the individual, whereby they become unwittingly impelled into pre-existing social structures, leading to a 'normalisation' of, and acceptance of, such systems. If we are talking about the dark, anxiety is key for a Ministry of SOS. Following Foucault again (Subject and Power, 326-7), there is the potential for a crawling, creeping governance that was once distinct but is itself increasingly hidden and growing. A blanket call for some form of ongoing scrutiny of such proliferating powers might be warranted, but with it comes regulation that, while offering certain rights and protections, is not without consequences.

For their part, a number of SOS platforms had little to no moderation of explicit content prior to December 2018. In terms of power, and notwithstanding important anxiety connected to arguments that children and the vulnerable need protections from those that would seek to take advantage, this was a crucial aspect of community building and self-expression that underpinned freedom of expression. In unearthing the extent to which individuals are empowered by the capacity to post sexual self-images, Tiidenberg ("Bringing Sexy Back") considered that through dark SOS (read here as unregulated) some users could work in opposition to the mainstream consumer culture that provides select and limited representations of bodies and their sexualities. This links directly to Mondin's exploration of the abundance of queer and feminist pornography on dark SOS as a "counterpolitics of visibility" (288). This work resulted in a reasoned claim that the technological structure of dark SOS created a highly political and affective social space that users valued. What also needs to be underscored is that many users also believed that such a space could not be replicated on other mainstream SOS because of the differences in architecture and social norms.
Cho (47) worked with this theory to claim that dark SOS are modern-day examples in a history of queer individuals having to rely on "underground economies of expression and relation". Discussions such as these complicate what dark SOS might now become in the face of 'adult' content moderation and emerging tracking technologies to close sites or locate individuals that transgress social norms. Further, broader questions are raised about how content moderation fits in with the public space conceptualisations of SOS more generally. Increasingly, "there is an app for that", where being able to identify the poster of an image or the author of an unknown text is seen as crucial. While there is presently no standard approach, models for combining instance-based and profile-based features, such as support vector machines (SVMs), for determining authorship attribution are in development, with the result that potentially far less content will remain hidden in the future (Bacciu et al.; a minimal sketch of such a classifier appears below).

4. There's Nothing New under the Sun (Ecclesiastes 1:9)
For some, "[the] high hopes regarding the positive impact of the Internet and digital participation in civic society have faded" (Schwarzenegger, 99). My participant observation over some years in various SOS, however, finds that critical concern has always existed. Views move along the spectrum of thinking from deep scepticism (Stoll, Silicon Snake Oil) to wondrous techno-utopian promises (Negroponte, Being Digital). Indeed, concerns about the (then) new technologies of wireless broadcasting can be compared with today's anxiety over the possible effects of the internet and SOS. Inglis (7) recalls:

here, too, were fears that humanity was tampering with some dangerous force; might wireless waves be causing thunderstorms, droughts, floods? Sterility or strokes? Such anxieties soon evaporated; but a sense of mystery might stay longer with evangelists for broadcasting than with a laity who soon took wireless for granted and settled down to enjoy the products of a process they need not understand.

As the analogy above makes clear, just as audiences came to use 'the wireless' and later the internet regularly, it is reasonable to argue that dark SOS will also gain widespread understanding and find greater acceptance. Dark social spaces are simply the recent development of internet connectivity and communication more broadly. The dark SOS afford the choice to be connected beyond mainstream offerings, which some users avoid for their perceived manipulation of both content and users. As part of the wider array of dark web services, the resilience of dark social spaces is reinforced by the proliferation of users as opposed to decentralised replication. Virtual Private Networks (VPNs) can be used for anonymity in parallel to Tor access, but they guarantee anonymity only to the client. A VPN cannot guarantee anonymity to the server or the internet service provider (ISP). While users may use pseudonyms rather than actual names as seen on Facebook and other SOS, users continue to take to the virtual spaces they inhabit their off-line, 'real' foibles, problems, and idiosyncrasies (Chenault). To varying degrees, however, people also take their best intentions to their interactions in the dark. The hyper-efficient tools now deployed can intensify this, which is the great advantage attracting some users.
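As flagged above, the instance-based authorship-attribution models the abstract points to (Bacciu et al.) can be sketched in a few lines. The following is a hedged illustration added for this edition, not the authors' actual system: character n-gram features feed a linear SVM, and all texts and author labels are invented placeholders.

```python
# Minimal sketch of instance-based authorship attribution with a linear SVM,
# assuming scikit-learn is available; texts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "posting again about the harbour at dusk...",
    "the harbour again; dusk is the only honest hour...",
    "LOL no way, that thread was wild!!",
    "no WAY that thread happened, wild!!",
]
authors = ["author_a", "author_a", "author_b", "author_b"]

# Character n-grams capture stylistic habits (punctuation, casing, spelling)
# that persist across topics better than word-level features do.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LinearSVC(),
)
model.fit(texts, authors)

print(model.predict(["that harbour thread at dusk was wild"]))
```

At realistic scale, classifiers of this kind are what make pseudonymous posting traceable, which is the de-anonymising pressure the abstract anticipates.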
On balance, however, in regard to online information access and dissemination, critical examination of what is in the public's interest, and of whether content should be regulated or controlled versus allowing a free flow of information where users self-regulate their online behaviour, is fraught. O'Loughlin (604) was one of the first to claim that there will be voluntary loss through negative liberty or 'freedom from' (freedom from unwanted information or influence) and an increase in positive liberty or 'freedom to' (freedom to read or say anything); hence, freedom from surveillance and interference is a kind of negative liberty, consistent with both libertarianism and liberalism.

Conclusion
The early adopters of initial iterations of SOS were hopeful and liberal (utopian) in their beliefs about universality and 'free' spaces of open communication between like-minded others. This was a way of virtual networking using a visual motivation (led by images, text, and sounds) for consequent interaction with others (Cinque, Visual Networking). The structural transformation of the public sphere in a Habermasian sense—and now found in SOS and their darker, hidden or closed social spaces that might ensure a counterbalance to the power of those with influence—towards all having equal access to platforms for presenting their views, and doing so respectfully, is as ever problematised. Broadly, this is no more so, however, than for mainstream SOS or for communicating in the world.

References
Bacciu, Andrea, Massimo La Morgia, Alessandro Mei, Eugenio Nerio Nemmi, Valerio Neri, and Julinda Stefa. "Cross-Domain Authorship Attribution Combining Instance Based and Profile-Based Features." CLEF (Working Notes). Lugano, Switzerland, 9-12 Sep. 2019.
Bentham, Jeremy, and Miran Božovič. The Panopticon Writings. London: Verso Trade, 1995.
Biddle, Peter, et al. "The Darknet and the Future of Content Distribution." Proceedings of the 2002 ACM Workshop on Digital Rights Management. Vol. 6. Washington DC, 2002.
Bruns, Axel. Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. New York: Peter Lang, 2008.
Chenault, Brittney G. "Developing Personal and Emotional Relationships via Computer-Mediated Communication." CMC Magazine 5.5 (1998). 1 May 2020 <http://www.december.com/cmc/mag/1998/may/chenault.html>.
Cho, Alexander. "Queer Reverb: Tumblr, Affect, Time." Networked Affect. Eds. K. Hillis, S. Paasonen, and M. Petit. Cambridge, Mass.: MIT Press, 2015. 43-58.
Cinque, Toija. Changing Media Landscapes: Visual Networking. London: Oxford UP, 2015.
———. "Visual Networking: Australia's Media Landscape." Global Media Journal: Australian Edition 6.1 (2012): 1-8.
Cinque, Toija, and Adam Brown. "Educating Generation Next: Screen Media Use, Digital Competencies, and Tertiary Education." Digital Culture & Education 7.1 (2015).
Draper, Nora A., and Joseph Turow. "The Corporate Cultivation of Digital Resignation." New Media & Society 21.8 (2019): 1824-1839.
Fellous, Jean-Marc, and Michael A. Arbib, eds. Who Needs Emotions? The Brain Meets the Robot. New York: Oxford UP, 2005.
Fernández-Caramés, Tiago M. "From Pre-Quantum to Post-Quantum IoT Security: A Survey on Quantum-Resistant Cryptosystems for the Internet of Things." IEEE Internet of Things Journal 7.7 (2019): 6457-6480.
Foucault, Michel. Surveiller et Punir: Naissance de la Prison [Discipline and Punish—The Birth of the Prison]. Trans. Alan Sheridan. New York: Random House, 1977.
Foucault, Michel. "The Subject and Power." Michel Foucault: Power, the Essential Works of Michel Foucault 1954–1984. Vol. 3. Trans. R. Hurley and others. Ed. J.D. Faubion. London: Penguin, 2001.
Gehl, Robert W. Weaving the Dark Web: Legitimacy on Freenet, Tor, and I2P. Cambridge, Mass.: MIT Press, 2018.
Gehl, Robert, and Fenwick McKelvey. "Bugging Out: Darknets as Parasites of Large-Scale Media Objects." Media, Culture & Society 41.2 (2019): 219-235.
Gillespie, Tarleton. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. London: Yale UP, 2018.
Habermas, Jürgen. The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society. Trans. Thomas Burger with the assistance of Frederick Lawrence. Cambridge, Mass.: MIT Press, 1989.
Inglis, Ken S. This Is the ABC: The Australian Broadcasting Commission 1932–1983. Melbourne: Melbourne UP, 1983.
Iron Maiden. "Fear of the Dark." London: EMI, 1992.
Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006.
Lasica, J. D. Darknet: Hollywood's War against the Digital Generation. New York: John Wiley and Sons, 2005.
Mahmood, Mimrah. "Australia's Evolving Media Landscape." 13 Apr. 2021 <https://www.meltwater.com/en/resources/australias-evolving-media-landscape>.
Mann, Steve, and Joseph Ferenbok. "New Media and the Power Politics of Sousveillance in a Surveillance-Dominated World." Surveillance & Society 11.1/2 (2013): 18-34.
McDonald, Alexander J. "Cortical Pathways to the Mammalian Amygdala." Progress in Neurobiology 55.3 (1998): 257-332.
McStay, Andrew. Emotional AI: The Rise of Empathic Media. London: Sage, 2018.
Mondin, Alessandra. "'Tumblr Mostly, Great Empowering Images': Blogging, Reblogging and Scrolling Feminist, Queer and BDSM Desires." Journal of Gender Studies 26.3 (2017): 282-292.
Neff, Gina, and Dawn Nafus. Self-Tracking. Cambridge, Mass.: MIT Press, 2016.
Negroponte, Nicholas. Being Digital. New York: Alfred A. Knopf, 1995.
Nissenbaum, Helen, and Heather Patterson. "Biosensing in Context: Health Privacy in a Connected World." Quantified: Biosensing Technologies in Everyday Life. Ed. Dawn Nafus. 2016. 68-79.
O'Loughlin, Ben. "The Political Implications of Digital Innovations." Information, Communication and Society 4.4 (2001): 595-614.
Quandt, Thorsten. "Dark Participation." Media and Communication 6.4 (2018): 36-48.
Royal Society for Public Health (UK) and the Young Health Movement. "#Statusofmind." 2017. 2 Apr. 2021 <https://www.rsph.org.uk/our-work/campaigns/status-of-mind.html>.
Statista. "Number of IoT Devices 2015-2025." 27 Nov. 2020 <https://www.statista.com/statistics/471264/iot-number-of-connected-devices-worldwide/>.
Schwarzenegger, Christian. "Communities of Darkness? Users and Uses of Anti-System Alternative Media between Audience and Community." Media and Communication 9.1 (2021): 99-109.
Stoll, Clifford. Silicon Snake Oil: Second Thoughts on the Information Highway. Anchor, 1995.
Tiidenberg, Katrin. "Bringing Sexy Back: Reclaiming the Body Aesthetic via Self-Shooting." Cyberpsychology: Journal of Psychosocial Research on Cyberspace 8.1 (2014).
The Great Hack. Dirs. Karim Amer and Jehane Noujaim. Netflix, 2019.
The Social Dilemma. Dir. Jeff Orlowski. Netflix, 2020.
Turkle, Sherry. The Second Self: Computers and the Human Spirit. Cambridge, Mass.: MIT Press, 2005.
Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each Other. UK: Hachette, 2017.
Turow, Joseph, and Andrea L. Kavanaugh, eds. The Wired Homestead: An MIT Press Sourcebook on the Internet and the Family. Cambridge, Mass.: MIT Press, 2003.
Von Nordheim, Gerret, and Katharina Kleinen-von Königslöw. "Uninvited Dinner Guests: A Theoretical Perspective on the Antagonists of Journalism Based on Serres' Parasite." Media and Communication 9.1 (2021): 88-98.
Williams, Chris K. "Configuring Enterprise Public Key Infrastructures to Permit Integrated Deployment of Signature, Encryption and Access Control Systems." MILCOM 2005-2005 IEEE Military Communications Conference. IEEE, 2005.
Wilson, Dean, and Tanya Serisier. "Video Activism and the Ambiguities of Counter-Surveillance." Surveillance & Society 8.2 (2010): 166-180.
APA, Harvard, Vancouver, ISO, and other styles
50

Goggin, Gerard. "Broadband." M/C Journal 6, no. 4 (August 1, 2003). http://dx.doi.org/10.5204/mcj.2219.

Full text
Abstract:
Connecting
I've moved house on the weekend, closer to the centre of an Australian capital city. I had recently signed up for broadband, with a major Australian Internet company (my first contact, cf. Turner). Now I am the proud owner of a larger modem than I have ever owned: a white cable modem. I gaze out into our new street: two thick black cables cosseted in silver wire. I am relieved. My new home is located in one of those streets, double-cabled by Telstra and Optus in the data-rush of the mid-1990s. Otherwise, I'd be moth-balling the cable modem, and the thrill of my data percolating down coaxial cable. And it would be off to the computer supermarket to buy an ADSL modem, then to pick a provider, to squeeze some twenty-first century connectivity out of old copper (the phone network our grandparents and great-grandparents built). If I still lived in the country, or the outskirts of the city, or anywhere else more than four kilometres from the phone exchange, and somewhere that cable pay TV will never reach, it would be a dish for me — satellite.

Our digital lives are premised upon infrastructure, the networks through which we shape what we do, fashion the meanings of our customs and practices, and exchange signs with others. Infrastructure is not simply the material or the technical (Lamberton), but it is the dense, fibrous knotting together of social visions, cultural resources, individual desires, and connections. No more can one easily discern between 'society' and 'technology', 'carriage' and 'content', 'base' and 'superstructure', or 'infrastructure' and 'applications' (or 'services' or 'content'). To understand telecommunications in action, or the vectors of fibre, we need to consider the long and heterogeneous list of links among different human and non-human actors — the long networks, to take Bruno Latour's evocative concept, that confect our broadband networks (Latour). The co-ordinates of our infrastructure still build on a century-long history of telecommunications networks, on the nineteenth-century centrality of telegraphy preceding this, and on the histories of the public and private so inscribed. Yet we are in the midst of a long, slow dismantling of the posts-telegraph-telephone (PTT) model of the monopoly carrier for each nation that dominated the twentieth century, with its deep colonial foundations. Instead, our New World Information and Communication Order is not the decolonising UNESCO vision of the late 1970s and early 1980s (MacBride, Maitland). Rather it is the neoliberal, free trade, market access model, its symbol the 1984 US judicial decision to require the break-up of AT&T and the UK legislation in the same year that underpinned the Thatcherite twin move to privatise British Telecom and introduce telecommunications competition. Between 1984 and 1999, 110 telecommunications companies were privatised, and the 'acquisition of privatized PTOs [public telecommunications operators] by European and American operators does follow colonial lines' (Winseck 396; see also Mody, Bauer & Straubhaar). The competitive market has now been uneasily installed as the paradigm for convergent communications networks, not least with the World Trade Organisation's 1994 General Agreement on Trade in Services and Annex on Telecommunications. As the citizen is recast as consumer and customer (Goggin, 'Citizens and Beyond'), we rethink our cultural and political axioms as well as the axes that orient our understandings in this area.
Information might travel close to the speed of light, and we might fantasise about optical fibre to the home (or pillow), but our terrain, our band where the struggle lies today, is narrower than we wish. Begging for broadband, it seems, is a long way from warchalking for WiFi.

Policy Circuits
The dreary everyday business of getting connected plugs the individual netizen into a tangled mess of policy circuits, as much as tricky network negotiations. Broadband in mid-2003 in Australia is a curious chimera, welded together from a patchwork of technologies, old and newer communications industries, emerging economies and patterns of use. Broadband conjures up grander visions, however, of communication and cultural cornucopia. Broadband is high-speed, high-bandwidth, 'always-on', networked communications. People can send and receive video, engage in multimedia exchanges of all sorts, make the most of online education, realise the vision of home-based work and trading, and have access to telemedicine and entertainment. Broadband really entered the lexicon with the mass takeup of the Internet in the early to mid-1990s, and with the debates about something called the 'information superhighway'. The rise of the Internet, the deregulation of telecommunications, and the involuted convergence of communications and media technologies saw broadband positioned at the centre of policy debates nearly a decade ago. In 1993-1994, Australia had its Broadband Services Expert Group (BSEG), established by the then Labor government. The BSEG was charged with inquiring into 'issues relating to the delivery of broadband services to homes, schools and businesses'. Stung by criticisms of elite composition (a narrow membership, with only one woman among its twelve members, and no consumer or citizen group representation), the BSEG was prompted into wider public discussion and consultation (Goggin & Newell). The then Bureau of Transport and Communications Economics (BTCE), since transmogrified into the Communications Research Unit of the Department of Communications, Information Technology and the Arts (DCITA), conducted its large-scale Communications Futures Project (BTCE and Luck). The BSEG final report posed the question starkly:

As a society we have choices to make. If we ignore the opportunities we run the risk of being left behind as other countries introduce new services and make themselves more competitive: we will become consumers of other countries' content, culture and technologies rather than our own. Or we could adopt new technologies at any cost … This report puts forward a different approach, one based on developing a new, user-oriented strategy for communications. The emphasis will be on communication among people... (BSEG v)

The BSEG proposed a 'National Strategy for New Communications Networks' based on three aspects: education and community access, industry development, and the role of government (BSEG x). Ironically, while the nation, or at least its policy elites, pondered the weighty question of broadband, Australia's two largest telcos were doing it. The commercial decision of Telstra/Foxtel and Optus Vision, and their various television partners, was to nail their colours (black) to the mast, or rather telegraph pole, and to lay cable in the major capital cities. In fact, they duplicated the infrastructure in cities such as Sydney and Melbourne, then decided it would not be profitable to cable up even regional centres, let alone small country towns or settlements.
As Terry Flew and Christina Spurgeon observe:

This wasteful duplication contrasted with many other parts of the country that would never have access to this infrastructure, or to the social and economic benefits that it was perceived to deliver. (Flew & Spurgeon 72)

The implications of this decision for Australia's telecommunications and television were profound, but there was little, if any, public input into this. Then Minister Michael Lee was very proud of his anti-siphoning list of programs, such as national sporting events, that would remain on free-to-air television rather than screen on pay, but was unwilling, or unable, to develop policy on broadband and pay TV cable infrastructure (on the ironies of Australia's television history, see Given's masterly account). During this period also, it may be remembered, Australia's Internet was being passed into private hands, with the tendering out of AARNET (see Spurgeon for discussion). No such national strategy on broadband really emerged in the intervening years, nor has the market provided integrated, accessible broadband services. In 1997, landmark telecommunications legislation was enacted that provided a comprehensive framework for competition in telecommunications, as well as consolidating and extending consumer protection, universal service, customer service standards, and other reforms (CLC). Carrier and reseller competition had commenced in 1991, and the 1997 legislation gave it further impetus. Effective competition is now well established in long distance telephone markets, and in mobiles. Rivalrous competition exists in the market for local-call services, though viable alternatives to Telstra's dominance are still few (Fels). Broadband too is an area where there is symbolic rivalry rather than effective competition. This is most visible in advertised ADSL offerings in large cities, yet most of the infrastructure for these services is constituted by Telstra's copper, fixed-line network. Facilities-based duopoly competition exists principally where Telstra/Foxtel and Optus cable networks have been laid, though there are quite a number of ventures underway by regional telcos, power companies, and, most substantial perhaps, the ACT government's TransACT broadband network. Policymakers and industry have been greatly concerned about what they see as slow takeup of broadband, compared to other countries, and by barriers to broadband competition and access to 'bottleneck' facilities (such as Telstra or Optus's networks) by potential competitors. The government has alternated between trying to talk up broadband benefits and rates of takeup and recognising the real difficulties Australia faces as a large country with a relatively small and dispersed population. In March 2003, Minister Alston directed the ACCC to implement new monitoring and reporting arrangements on competition in the broadband industry. A key site for discussion of these matters has been the competition policy institution, the Australian Competition and Consumer Commission, and its various inquiries, reports, and considerations (consult the ACCC's telecommunications homepage at http://www.accc.gov.au/telco/fs-telecom.htm). Another key site has been the Productivity Commission (http://www.pc.gov.au), while a third is the National Office on the Information Economy (NOIE - http://www.noie.gov.au/projects/access/access/broadband1.htm). Others have questioned whether even the most perfectly competitive market in broadband will actually provide access to citizens and consumers.
A great deal of work on this issue has been undertaken by DCITA, NOIE, the regulators, and industry bodies, not to mention consumer and public interest groups. Since 1997, there have been a number of governmental inquiries undertaken or in progress concerning the takeup of broadband and networked new media (for example, a House of Representatives Wireless Broadband Inquiry), as well as important inquiries into the still most strategically important of Australia's companies in this area, Telstra. Much of this effort on an ersatz broadband policy has been piecemeal and fragmented. There are fundamental difficulties with the large size of the Australian continent and its harsh terrain, the small size of the Australian market, the number of providers, and the dominant position effectively still held by Telstra, as well as Singtel Optus (Optus's previous overseas investors included Cable & Wireless and Bell South), and the larger telecommunications and Internet companies (such as Ozemail). Many consumers living in metropolitan Australia still face real difficulties in realising the slogan 'bandwidth for all', but the situation in parts of rural Australia is far worse. Satellite 'broadband' solutions are available, through Telstra Countrywide or other providers, but these offer limited two-way interactivity. Data can be received at reasonable speeds (though at far lower data rates than 'broadband' used to denote), but can only be sent at far slower rates (Goggin, Rural Communities Online). The cultural implications of these digital constraints may well be considerable. Computer gamers, for instance, are frustrated by slow return paths. In this light, the final report of the January 2003 Broadband Advisory Group (BAG) is very timely. The BAG report opens with a broadband rhapsody:

Broadband communications technologies can deliver substantial economic and social benefits to Australia … As well as producing productivity gains in traditional and new industries, advanced connectivity can enrich community life, particularly in rural and regional areas. It provides the basis for integration of remote communities into national economic, cultural and social life. (BAG 1, 7)

Its prescriptions include:

Australia will be a world leader in the availability and effective use of broadband ... and to capture the economic and social benefits of broadband connectivity ... Broadband should be available to all Australians at fair and reasonable prices … Market arrangements should be pro-competitive and encourage investment ... The Government should adopt a National Broadband Strategy. (BAG 1)

And, like its predecessor nine years earlier, the BAG report does make reference to a national broadband strategy aiming to maximise "choice in work and recreation activities available to all Australians independent of location, background, age or interests" (17). However, the idea of a national broadband strategy is not something the BAG really comes to grips with. The final report is keen on encouraging broadband adoption, but not explicit on how barriers to broadband can be addressed. Perhaps this is not surprising, given that the membership of the BAG, dominated by representatives of large corporations and senior bureaucrats, was even less representative than its BSEG predecessor. Some months after the BAG report, the Federal government did declare a broadband strategy.
It did so, intriguingly enough, under the rubric of its response to the Regional Telecommunications Inquiry report (Estens), the second inquiry responsible for reassuring citizens nervous about the full privatisation of Telstra (the first being Besley). The government's grand $142.8 million National Broadband Strategy focusses on the 'broadband needs of regional Australians, in partnership with all levels of government' (Alston, 'National Broadband Strategy'). Among other things, the government claims that the Strategy will result in "improved outcomes in terms of services and prices for regional broadband access; [and] the development of national broadband infrastructure assets" (Alston, 'National Broadband Strategy'). At the same time, the government announced an overall response to the Estens Inquiry, with specific safeguards for Telstra's role in regional communications, a preliminary to the full Telstra sale (Alston, 'Future Proofing'). Less publicised was the government's further initiative in indigenous telecommunications, complementing its Telecommunications Action Plan for Remote Indigenous Communities (DCITA).

Indigenous people, it can be argued, were never really contemplated as citizens within the ken of the universal service policy taken to underpin the twentieth-century government monopoly PTT project. In Australia during the deregulatory and re-regulatory 1990s, there was a great reluctance on the part of Labor and Coalition Federal governments, Telstra, and other industry participants even to research issues of access to and use of telecommunications by indigenous communicators. Telstra, and to a lesser extent Optus (who had purchased AUSSAT as part of their licence arrangements), shrouded the issue of indigenous communications in a mystery that policymakers were very reluctant to uncover, let alone systematically address. The then regulator, the Australian Telecommunications Authority (AUSTEL), had raised grave concerns about indigenous telecommunications access in its 1991 Rural Communications inquiry. However, there was no government consideration of, nor research upon, these issues until Alston commissioned a study in 2001, the basis for the TAPRIC strategy (DCITA). The elision of indigenous telecommunications from mainstream industry and government policy is all the more puzzling if one considers the extraordinarily varied and significant experiments by indigenous Australians in telecommunications and the Internet (not least in the early work of the Tanami community, made famous in media and cultural studies by the writings of anthropologist Eric Michaels).

While the government's mid-2003 moves on a 'National Broadband Strategy' attend to some details of the broadband predicament, they fall well short of an integrated framework that grasps the shortcomings of the neoliberal communications model. The funding offered is a token amount. The view from the seat of government is a glance in the rear-view mirror: taking a snapshot of rural communications in the years 2000-2002 and projecting this tableau into a safety-net 'future proofing' for the inevitable turning away of a fully privately-owned Telstra from its previously universal, 'carrier of last resort' responsibilities. In this aetiolated, residualist policy gaze, citizens remain constructed as consumers in a very narrow sense, within an incremental, quietist version of the state's securing of market arrangements.
What is missing is any more expansive notion of citizens and their varied needs, expectations, uses, and cultural imaginings of 'always on' broadband networks.

Hybrid Networks

"Most people on earth will eventually have access to networks that are all switched, interactive, and broadband", wrote Frances Cairncross in 1998. 'Eventually' is a very appropriate word to describe the parlous state of broadband technology implementation. Broadband is in a slow state of evolution and invention. The story of broadband so far underscores the predicament for Australian access to bandwidth, when we lack any comprehensive, integrated, effective, and fair policy in communications and information technology. We have only begun to experiment with broadband technologies and understand their evolving uses, cultural forms, and the sense in which they rework us as subjects. Our communications networks are not superhighways, to invoke an enduring artefact from an older technology. Nor any longer are they a single 'public' switched telecommunications network, like those presided over by the post-telegraph-telephone monopolies of old. Like roads themselves, or the nascent postal system of the sixteenth century, broadband is a patchwork quilt. The 'fibre' of our communications networks is hybrid. To be sure, powerful corporations dominate, like the Tassis or Taxis who served as postmasters to the Habsburg emperors (Briggs & Burke 25). Activating broadband today provides a perspective on the path dependency of technology history, and on how we can open up new threads of a communications fabric. Our options for transforming our multitudinous networked lives emerge as much from everyday tactics and strategies as they do from grander schemes and unifying policies.

We may care to reflect on the waning potential of nation-building technology in the wake of globalisation. We no longer gather our imagined community around a Community Telephone Plan, as it was called in 1960 (Barr; Moyal; PMG). Yet we do require national and international strategies to get and stay connected (Barr), ideas and funding that concretely address the wider dimensions of access and use. We do need to debate the respective roles of Telstra, the state, community initiatives, and industry competition in fair telecommunications futures. Networks have global reach and require global and national integration. Here vision, co-ordination, and resources are urgently required for our commonweal and moral fibre. To feel the width of the band we desire, we need to plug into and activate the policy circuits.

Thanks to Grayson Cooke, Patrick Lichty, Ned Rossiter, John Pace, and an anonymous reviewer for helpful comments.

Works Cited

Alston, Richard. '"Future Proofing" Regional Communications.' Department of Communications, Information Technology and the Arts, Canberra, 2003. 17 July 2003 <http://www.dcita.gov.au/Article/0,,0_1-2_3-4_115485,00.php>.

—. 'A National Broadband Strategy.' Department of Communications, Information Technology and the Arts, Canberra, 2003. 17 July 2003 <http://www.dcita.gov.au/Article/0,,0_1-2_3-4_115486,00.php>.

Australian Competition and Consumer Commission (ACCC). Broadband Services Report March 2003. Canberra: ACCC, 2003. 17 July 2003 <http://www.accc.gov.au/telco/fs-telecom.htm>.

—. Emerging Market Structures in the Communications Sector. Canberra: ACCC, 2003. 15 July 2003 <http://www.accc.gov.au/pubs/publications/utilities/telecommunications/Emerg_mar_struc.doc>.

Barr, Trevor.
new media.com: The Changing Face of Australia's Media and Telecommunications. Sydney: Allen & Unwin, 2000.

Besley, Tim (Telecommunications Service Inquiry). Connecting Australia: Telecommunications Service Inquiry. Canberra: Department of Communications, Information Technology and the Arts, 2000. 17 July 2003 <http://www.telinquiry.gov.au/final_report.php>.

Briggs, Asa, and Burke, Peter. A Social History of the Media: From Gutenberg to the Internet. Cambridge: Polity, 2002.

Broadband Advisory Group (BAG). Australia's Broadband Connectivity: The Broadband Advisory Group's Report to Government. Melbourne: National Office on the Information Economy, 2003. 15 July 2003 <http://www.noie.gov.au/publications/NOIE/BAG/report/index.htm>.

Broadband Services Expert Group (BSEG). Networking Australia's Future: Final Report. Canberra: Australian Government Publishing Service (AGPS), 1994.

Bureau of Transport and Communications Economics (BTCE). Communications Futures Final Project. Canberra: AGPS, 1994.

Cairncross, Frances. The Death of Distance: How the Communications Revolution Will Change Our Lives. London: Orion Business Books, 1997.

Communications Law Centre (CLC). Australian Telecommunications Regulation: The Communications Law Centre Guide. 2nd ed. Sydney: Communications Law Centre, University of NSW, 2001.

Department of Communications, Information Technology and the Arts (DCITA). Telecommunications Action Plan for Remote Indigenous Communities: Report on the Strategic Study for Improving Telecommunications in Remote Indigenous Communities. Canberra: DCITA, 2002.

Estens, D. Connecting Regional Australia: The Report of the Regional Telecommunications Inquiry. Canberra: DCITA, 2002. 17 July 2003 <http://www.telinquiry.gov.au/rti-report.php>.

Fels, Alan. 'Competition in Telecommunications.' Speech to the Australian Telecommunications Users Group 19th Annual Conference, Sydney, 6 March 2003. 15 July 2003 <http://www.accc.gov.au/speeches/2003/Fels_ATUG_6March03.doc>.

Flew, Terry, and Spurgeon, Christina. 'Television After Broadcasting.' In The Australian TV Book. Ed. Graeme Turner and Stuart Cunningham. Sydney: Allen & Unwin, 2000. 69-85.

Given, Jock. Turning Off the Television. Sydney: UNSW Press, 2003.

Goggin, Gerard. 'Citizens and Beyond: Universal Service in the Twilight of the Nation-State.' In All Connected?: Universal Service in Telecommunications. Ed. Bruce Langtry. Melbourne: University of Melbourne Press, 1998. 49-77.

—. Rural Communities Online: Networking to Link Consumers to Providers. Melbourne: Telstra Consumer Consultative Council, 2003.

Goggin, Gerard, and Newell, Christopher. Digital Disability: The Social Construction of Disability in New Media. Lanham, MD: Rowman & Littlefield, 2003.

House of Representatives Standing Committee on Communications, Information Technology and the Arts (HoR). Connecting Australia!: Wireless Broadband. Report of Inquiry into Wireless Broadband Technologies. Canberra: Parliament House, 2002. 17 July 2003 <http://www.aph.gov.au/house/committee/cita/Wbt/report.htm>.

Lamberton, Don. 'A Telecommunications Infrastructure Is Not an Information Infrastructure.' Prometheus: Journal of Issues in Technological Change, Innovation, Information Economics, Communication and Science Policy 14 (1996): 31-38.

Latour, Bruno. Science in Action: How to Follow Scientists and Engineers Through Society. Cambridge, MA: Harvard University Press, 1987.

Luck, David.
'Revisiting the Future: Assessing the 1994 BTCE Communications Futures Project.' Media International Australia 96 (2000): 109-119.

MacBride, Sean (Chair, International Commission for the Study of Communication Problems). Many Voices, One World: Towards a New More Just and More Efficient World Information and Communication Order. Paris: UNESCO; London: Kogan Page, 1980.

Maitland Commission (Independent Commission on Worldwide Telecommunications Development). The Missing Link. Geneva: International Telecommunication Union, 1985.

Michaels, Eric. Bad Aboriginal Art: Tradition, Media, and Technological Horizons. Sydney: Allen & Unwin, 1994.

Mody, Bella, Bauer, Johannes M., and Straubhaar, Joseph D., eds. Telecommunications Politics: Ownership and Control of the Information Highway in Developing Countries. Mahwah, NJ: Erlbaum, 1995.

Moyal, Ann. Clear Across Australia: A History of Telecommunications. Melbourne: Thomas Nelson, 1984.

Postmaster-General's Department (PMG). Community Telephone Plan for Australia. Melbourne: PMG, 1960.

Productivity Commission (PC). Telecommunications Competition Regulation: Inquiry Report. Report No. 16. Melbourne: Productivity Commission, 2001. 17 July 2003 <http://www.pc.gov.au/inquiry/telecommunications/finalreport/>.

Spurgeon, Christina. 'National Culture, Communications and the Information Economy.' Media International Australia 87 (1998): 23-34.

Turner, Graeme. 'First Contact: Coming to Terms with the Cable Guy.' UTS Review 3 (1997): 109-21.

Winseck, Dwayne. 'Wired Cities and Transnational Communications: New Forms of Governance for Telecommunications and the New Media.' In The Handbook of New Media: Social Shaping and Consequences of ICTs. Ed. Leah A. Lievrouw and Sonia Livingstone. London: Sage, 2002. 393-409.

World Trade Organisation. General Agreement on Trade in Services: Annex on Telecommunications. Geneva: World Trade Organisation, 1994. 17 July 2003 <http://www.wto.org/english/tratop_e/serv_e/12-tel_e.htm>.

—. Fourth Protocol to the General Agreement on Trade in Services. Geneva: World Trade Organisation. 17 July 2003 <http://www.wto.org/english/tratop_e/serv_e/4prote_e.htm>.

Citation reference for this article:

MLA Style: Goggin, Gerard. "Broadband." M/C: A Journal of Media and Culture 6 (2003). <http://www.media-culture.org.au/0308/02-featurebroadband.php>.

APA Style: Goggin, G. (2003, Aug 26). Broadband. M/C: A Journal of Media and Culture, 6. <http://www.media-culture.org.au/0308/02-featurebroadband.php>.
APA, Harvard, Vancouver, ISO, and other styles
