
Journal articles on the topic 'Wireless communication systems. Estimation theory. Speed'

Consult the top 43 journal articles for your research on the topic 'Wireless communication systems. Estimation theory. Speed.'


1

Bessios, Anthony G., and Frank M. Caimi. "High-rate wireless data communications: An underwater acoustic communications framework at the physical layer." Mathematical Problems in Engineering 2, no. 6 (1996): 449–85. http://dx.doi.org/10.1155/s1024123x96000439.

Abstract:
A variety of signal processing functions are performed by underwater acoustic systems. These include: 1) detection, to determine the presence or absence of information signals in noise, or to decide which of a predetermined finite set of possible messages {m_i, i = 1, ..., M} the signal represents; 2) estimation of some parameter θ̂ associated with the received signal (e.g., range, depth, bearing angle); 3) classification and source identification; 4) dynamics tracking; 5) navigation (collision avoidance and terminal guidance); 6) countermeasures; and 7) communications. The focus of this paper is acoustic communications. There is a global current need to develop reliable wireless digital communications for the underwater environment, with sufficient performance and efficiency to substitute for costly wired systems. One possible goal is a wireless system implementation that ensures underwater terminal mobility. There is also a vital need to improve the performance of existing systems in terms of data rate, noise immunity, operational range, and power consumption, since, in practice, portable high-speed, long-range, compact, low-power systems are desired. We concede the difficulties associated with acoustic systems and concentrate on the development of robust data transmission methods, anticipating the eventual need for real-time or near-real-time video transmission. An overview of the various detection techniques and the general statistical digital communication problem is given based on a statistical decision theory framework. The theoretical formulation of the underwater acoustic data communications problem includes modeling of the stochastic channel to incorporate a variety of impairments and environmental uncertainties, and a proposal of new compensation strategies for an efficient and robust receiver design.
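As a concrete instance of the statistical decision theory framework this overview builds on (a textbook sketch, not the paper's receiver), optimal detection of one of M equiprobable messages in white Gaussian noise reduces to a minimum-distance, i.e., maximum-likelihood, rule; all sizes and waveforms below are assumed for illustration:

```python
import numpy as np

# Textbook ML detection of one of M equiprobable messages in AWGN:
# choose the candidate waveform closest (in Euclidean distance) to the received signal.
rng = np.random.default_rng(0)
M, n = 4, 32                                   # assumed alphabet size and samples per symbol
messages = rng.choice([-1.0, 1.0], (M, n))     # assumed candidate waveforms m_i

sent = 2                                       # index of the transmitted message
r = messages[sent] + rng.normal(0.0, 0.5, n)   # received signal with noise

ml_index = int(np.argmin(np.sum((messages - r) ** 2, axis=1)))
print(ml_index == sent)                        # True with high probability at this SNR
```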
2

Bhatt, Maharshi K., Bhavin S. Sedani, and Komal Borisagar. "Performance analysis of massive multiple input multiple output for high speed railway." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (December 1, 2021): 5180. http://dx.doi.org/10.11591/ijece.v11i6.pp5180-5188.

Abstract:
This paper analytically reviews the performance of massive multiple input multiple output (MIMO) systems for communication in high-mobility scenarios such as high speed railways (HSR). As the popularity of high speed trains increases day by day, a high-data-rate wireless communication system for high speed trains is urgently required. 5G wireless communication systems must be designed to meet the requirement of high speed broadband services at speeds of around 500 km/h, the expected speed achievable by HSR systems, at a data rate of 180 Mbps or higher. Significant challenges of high mobility communications are fast time-varying fading, channel estimation errors, Doppler diversity, carrier frequency offset, inter-carrier interference, high penetration loss, and fast and frequent handovers. Therefore, a crucial requirement to design high mobility communication channel models and systems prevails. Recently, massive MIMO techniques have been proposed to significantly improve the performance of wireless networks for upcoming 5G technology. Massive MIMO provides high throughput and high energy efficiency in the wireless communication channel. In this paper, key findings, challenges, and requirements for providing high speed wireless communication onboard high speed trains are pointed out after a thorough literature review. Finally, future research scope for bridging the research gap by designing an efficient channel model using massive MIMO and other optimization methods is mentioned.
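To make the mobility challenge concrete, a quick back-of-the-envelope Doppler calculation (ours, not the paper's; the 2.6 GHz carrier frequency is an assumed illustrative value) shows the shift a receiver on a 500 km/h train must track:

```python
# Maximum Doppler shift f_d = v * f_c / c for a train at 500 km/h.
# The 2.6 GHz carrier is an assumed illustrative value, not taken from the paper.
C = 3.0e8            # speed of light, m/s
v = 500 / 3.6        # 500 km/h in m/s (~138.9 m/s)
f_c = 2.6e9          # assumed carrier frequency, Hz

f_d = v * f_c / C
print(f"Max Doppler shift at 500 km/h and {f_c/1e9:.1f} GHz: {f_d:.0f} Hz")
# ~1204 Hz; the channel coherence time (~0.423/f_d) is then well under a millisecond,
# which is why fast time-varying fading dominates the HSR channel-estimation problem.
```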
3

Gao, Meilin, Bo Ai, Yong Niu, Zhewei Zhang, Yanqing Xu, and Dapeng Li. "Resource allocation in D2D-aided high-speed railway wireless communication systems: a matching theory approach." China Communications 14, no. 12 (December 2017): 87–99. http://dx.doi.org/10.1109/cc.2017.8246339.

4

Abuella, Hisham, Sabit Ekin, Samir Ahmed, Farshad Miramirkhani, Burak Kebapci, and Murat Uysal. "Wireless Sensing using Vehicle Headlamps for Intelligent Transportation Systems: Proof of Concept." MATEC Web of Conferences 271 (2019): 06004. http://dx.doi.org/10.1051/matecconf/201927106004.

Abstract:
Vehicular communication and sensing technologies are mainly based on the conventional radio frequency (RF) or laser technologies. These systems suffer from several issues such as RF interference and poor performance in scenarios where the incidence angle between the speed detector and the vehicle is rapidly varying. Introducing a new sensing technology will add diversity to these systems and enhance the reliability of the real-time data. In this study, we investigate our speed estimation sensing system named “Visible Light Detection and Ranging (ViLDAR)”. ViLDAR utilizes visible light sensing technology to measure the variation of the vehicle's headlamp light intensity and estimate the vehicle speed. The measurement settings of the ViLDAR experiments are presented. The preliminary results obtained in the real-world environment/setting are promising when compared to the simulations. Additional measurements using the ViLDAR prototype will be conducted under different conditions and scenarios to further optimize the system.
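As a rough sketch of the underlying idea (not the authors' ViLDAR algorithm), one can model the received headlamp intensity with a free-space inverse-square law and recover speed from a least-squares fit of distance versus time; everything below, including the path-loss model and all sample values, is an assumption for illustration:

```python
import numpy as np

# Hypothetical intensity-based speed estimation, assuming received power P(t) = k / d(t)^2
# (free-space inverse-square law) for a vehicle approaching the sensor at constant speed.
k = 1.0                      # assumed optical channel constant
v_true = 20.0                # m/s, ground truth used only to synthesize data
t = np.linspace(0.0, 2.0, 50)
d = 100.0 - v_true * t       # distance to the sensor, m
P = k / d**2                 # intensity samples
P += np.random.normal(0.0, 1e-6, P.shape)  # small measurement noise

d_est = np.sqrt(k / P)               # invert the assumed path-loss model
v_est, _ = np.polyfit(t, d_est, 1)   # slope of distance vs. time
print(f"estimated speed: {abs(v_est):.2f} m/s")
```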
5

Zhao, Yanrong, Xiyu Wang, Gongpu Wang, Ruisi He, Yulong Zou, and Zhuyan Zhao. "Channel estimation and throughput evaluation for 5G wireless communication systems in various scenarios on high speed railways." China Communications 15, no. 4 (April 2018): 86–97. http://dx.doi.org/10.1109/cc.2018.8357743.

6

Lian, Jie, Yan Gao, Peng Wu, and Dianbin Lian. "Orthogonal Frequency Division Multiplexing Techniques Comparison for Underwater Optical Wireless Communication Systems." Sensors 19, no. 1 (January 4, 2019): 160. http://dx.doi.org/10.3390/s19010160.

Abstract:
Optical wireless communication is an energy-efficient and cost-effective solution for high-speed and highly-secure wireless connections. In this paper, we compare, discuss, and analyze three popular optical orthogonal frequency division multiplexing (OFDM) techniques, namely DC-biased optical OFDM (DCO-OFDM), asymmetrically-clipped optical OFDM (ACO-OFDM), and unipolar OFDM (U-OFDM), for underwater optical wireless communication systems. The peak power constraint, bandwidth limit of the light source, turbulence fading underwater channel, and the channel estimation error are taken into account. To maximize the achievable data propagation distance, we propose to optimize the modulation index that controls the signal magnitude, and a bitloading algorithm is applied. This optimization process trades off the clipping distortion caused by the peak power constraint against the signal to noise ratio (SNR). The SNR and clipping effects of the three compared OFDM techniques are modeled in this paper. From the numerical results, DCO-OFDM outperforms ACO- and U-OFDM when the transmitted bit rate is high compared to the channel bandwidth. Otherwise, U-OFDM can provide a longer propagation distance or require less transmitted power.
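For readers unfamiliar with the intensity-modulation constraint these schemes address, a minimal DCO-OFDM transmitter sketch (a generic textbook construction, not the authors' simulator; all parameter values are assumptions) enforces a real, non-negative time-domain signal via Hermitian symmetry, a DC bias, and clipping:

```python
import numpy as np

N = 64                                # assumed number of subcarriers
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], N//2 - 1) + 1j * rng.choice([-1, 1], N//2 - 1)) / np.sqrt(2)

# Hermitian-symmetric spectrum -> real IFFT output (intensity modulation needs real signals).
X = np.zeros(N, dtype=complex)
X[1:N//2] = qpsk
X[N//2+1:] = np.conj(qpsk[::-1])      # X[k] = conj(X[N-k]); X[0] = X[N/2] = 0

x = np.fft.ifft(X).real               # bipolar real OFDM waveform
bias = 2.0 * np.std(x)                # assumed DC bias level
x_dco = np.clip(x + bias, 0.0, 3.0 * np.std(x) + bias)  # non-negativity + peak-power clip
print(x_dco.min() >= 0.0)             # True: a valid LED drive signal
```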
7

Chen, Hua-Ching, Chia-Lun Wu, Jwo-Shiun Sun, and Hsuan-Ming Feng. "Carrier Current Line Systems Technologies in M2M Architecture for Wireless Communication." Journal of Sensors 2016 (2016): 1–10. http://dx.doi.org/10.1155/2016/2652310.

Abstract:
This paper investigates Carrier Current Line Systems (CCLS) technologies in a Machine to Machine (M2M) architecture, applied to mobile station coverage in metro, high speed railway, and subway environments, and analyzed here for public transport as an indoor transition system. It is based on theory and practical engineering principles, providing guidelines and formulas for link budget design that help designers fully control and analyze the output power of the uplink and downlink between Fiber Repeaters (FR), the mobile station, and the base station. Finally, the results of this leaky cable system are successfully applied to indoor coverage design for a metro rapid transit system: an easily installed cellular-over-fiber solution in which WCDMA/LTE access becomes part of a real Ubiquitous Network to Internet of Things (IoT) telecommunication hierarchy.
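Since the paper's contribution centers on link budget formulas, a generic downlink budget in the usual dB form may help orient the reader (illustrative values only; none are taken from the paper):

```python
# Generic link-budget sketch: received power = transmitted power + gains - losses (in dB).
# Every numeric value below is an assumed illustrative figure, not from the paper.
p_tx_dbm = 30.0          # repeater output power
g_ant_db = 2.0           # antenna gain
l_cable_db = 3.0         # feeder/leaky-cable coupling loss
l_path_db = 75.0         # in-tunnel propagation loss

p_rx_dbm = p_tx_dbm + g_ant_db - l_cable_db - l_path_db
print(f"received power: {p_rx_dbm:.1f} dBm")   # -46.0 dBm
margin_db = p_rx_dbm - (-100.0)                # against an assumed -100 dBm sensitivity
print(f"link margin: {margin_db:.1f} dB")      # 54.0 dB
```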
8

Dhanasekaran, S., and J. Ramesh. "Channel estimation using spatial partitioning with coalitional game theory (SPCGT) in wireless communication." Wireless Networks 27, no. 3 (January 23, 2021): 1887–99. http://dx.doi.org/10.1007/s11276-020-02528-4.

9

Waseem, Athar, Aqdas Naveed, Sardar Ali, Muhammad Arshad, Haris Anis, and Ijaz Mansoor Qureshi. "Compressive Sensing Based Channel Estimation for Massive MIMO Communication Systems." Wireless Communications and Mobile Computing 2019 (May 27, 2019): 1–15. http://dx.doi.org/10.1155/2019/6374764.

Abstract:
Massive multiple-input multiple-output (MIMO) is believed to be a key technology for achieving 1000x data rates in wireless communication systems. Massive MIMO employs a large number of antennas at the base station (BS) to serve multiple users at the same time. It has appeared as a promising technique to realize high-throughput green wireless communications. Massive MIMO exploits the higher degree of spatial freedom to extensively improve the capacity and energy efficiency of the system. Thus, massive MIMO systems have been broadly accepted as an important enabling technology for 5th Generation (5G) systems. In massive MIMO systems, a precise acquisition of the channel state information (CSI) is needed for beamforming, signal detection, resource allocation, etc. Yet, with large antenna arrays at the BS, users have to estimate channels linked with hundreds of transmit antennas, and consequently the pilot overhead gets prohibitively high. Hence, realizing correct channel estimation with a reasonable pilot overhead has become a challenging issue, particularly for frequency division duplex (FDD) massive MIMO systems. In this paper, by taking advantage of the spatial and temporal common sparsity of massive MIMO channels in the delay domain, nonorthogonal pilot design and channel estimation schemes are proposed under the framework of structured compressive sensing (SCS) theory, which considerably reduces the pilot overhead for FDD massive MIMO systems. The proposed pilot design is fundamentally different from conventional orthogonal pilot designs based on the Nyquist sampling theorem. Finally, simulations have been performed to verify the performance of the proposed schemes: compared to their conventional counterparts, the proposed schemes improve system performance with a smaller pilot overhead.
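A quick way to see why non-orthogonal pilots can work under a compressive sensing framework (a generic illustration of the principle, not the paper's SCS scheme; sizes are assumed) is to check the mutual coherence of a random pilot measurement matrix, which governs sparse recoverability:

```python
import numpy as np

# Generic CS intuition: with sparse delay-domain channels, far fewer pilots than taps
# can suffice if the pilot (measurement) matrix has low column coherence.
rng = np.random.default_rng(1)
M, L = 32, 128                          # 32 pilot measurements, 128-tap delay-domain channel
A = rng.choice([-1.0, 1.0], (M, L)) / np.sqrt(M)   # random (non-orthogonal) pilot matrix

A_n = A / np.linalg.norm(A, axis=0)     # normalize columns
G = np.abs(A_n.T @ A_n)                 # column cross-correlations
np.fill_diagonal(G, 0.0)
print(f"mutual coherence: {G.max():.2f}")
# Markedly below 1 even though M << L; orthogonal pilots would instead require M >= L.
```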
10

Sun, Hui, Xianyu Wang, Kaixin Yang, and Tongrui Peng. "Analysis of Distributed Wireless Sensor Systems with a Switched Quantizer." Complexity 2021 (July 29, 2021): 1–14. http://dx.doi.org/10.1155/2021/6690761.

Abstract:
In this article, a switched quantizer is proposed to address the bandwidth limitation problem in distributed wireless sensor networks (WSNs). The proposed estimator, based on a switched quantized event-triggered Kalman consensus filtering (KCF) algorithm, is used to monitor aircraft cabin environmental parameters under packet loss and path loss during the WSN communication process. The quantization error of the novel switched quantizer structure is bounded, and the corresponding stability theory for the quantized estimation approach is proved. Compared with other methods, the simulation results for the introduced method verify that the environmental parameters can be estimated accurately and in a timely manner while reducing the burden on network communication bandwidth.
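The bounded-error property that such stability proofs lean on is easiest to see with a plain uniform quantizer (a generic sketch of the principle, not the paper's switched design; the step size is an assumed value):

```python
import numpy as np

def uniform_quantize(x: np.ndarray, step: float) -> np.ndarray:
    """Mid-tread uniform quantizer; the error is bounded by step/2 elementwise."""
    return step * np.round(x / step)

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 10_000)        # e.g., sensor innovations to be transmitted
step = 0.1                              # assumed quantization step (the bandwidth knob)
err = np.abs(x - uniform_quantize(x, step))
print(bool(err.max() <= step / 2 + 1e-12))  # True: |e| <= step/2, a bound a filter can absorb
# A "switched" scheme in this spirit would change `step` online (coarse when idle,
# fine when the event trigger fires), trading bits against estimation accuracy.
```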
11

Wang, Xiyu, Gongpu Wang, Rongfei Fan, and Bo Ai. "Channel Estimation With Expectation Maximization and Historical Information Based Basis Expansion Model for Wireless Communication Systems on High Speed Railways." IEEE Access 6 (2018): 72–80. http://dx.doi.org/10.1109/access.2017.2745708.

12

Zhang, Jie, Xin Guo, and Ying Xu. "Application of Wireless Communication Technology in Construction Project Information Management." Wireless Communications and Mobile Computing 2021 (August 28, 2021): 1–7. http://dx.doi.org/10.1155/2021/8174600.

Abstract:
With the changing times, electronic information technology in countries around the world has developed rapidly, and 4G mobile communication technology has also had the opportunity to develop, improving into the higher transmission speeds of 5G mobile communication technology, which has been widely used in the intelligent information management of construction engineering. Under this background, this paper selects the research methods of theory guiding practice and practice updating theory to guide the application of information technology. Based on actual engineering cases, this paper discusses the scientific process of construction project information management from the perspective of BIM technology, taking specific construction details as the starting point. Firstly, this paper systematically introduces the key technologies of BIM and the related concepts and basic contents of construction project management informatization, expounds and analyzes the three construction modes that construction project collaborative management informatization should follow, and puts forward the key and main contents of standardization construction. Then, taking the New York Freedom Building as an engineering case, this paper analyzes the application of building information modeling (BIM) in its architectural design. The related research can provide theoretical and practical references for realizing and ensuring the combination of project management and information technology.
13

Varotsos, George K., Hector E. Nistazakis, Konstantinos Aidinis, Fadi Jaber, Mohd Nasor, and Kanhira Kadavath Mujeeb Rahman. "Error Performance Estimation of Modulated Retroreflective Transdermal Optical Wireless Links with Diversity under Generalized Pointing Errors." Telecom 2, no. 2 (April 1, 2021): 167–80. http://dx.doi.org/10.3390/telecom2020011.

Abstract:
Recent developments in both optical wireless communication (OWC) systems and implanted medical devices (IMDs) have introduced transdermal optical wireless (TOW) technology as a viable candidate for extremely high-speed in-body to out-of-body wireless data transmissions, which are growing in demand for many vital biomedical applications, including telemetry with medical implants, health monitoring, neural recording and prostheses. Nevertheless, this emerging communication modality is primarily hindered by skin-induced attenuation of the propagating signal bit carrier along with its stochastic misalignment-induced fading. Thus, by considering a typical modulated retroreflective (MRR) TOW system with spatial diversity and optimal combining (OC) for signal reception in this work, we focus, for the first time in the MRR TOW literature, on the stochastic nature of generalized pointing errors with non-zero boresight (NZB). Specifically, under these circumstances, novel analytical mathematical expressions were derived for the total average bit error rate (BER) of various system configurations. Their results revealed significant outage performance enhancements when spatial diversity was utilized. Moreover, taking into consideration the total transdermal pathloss along with the effects of stochastic NZB pointing errors, the critical average signal-to-noise ratio (SNR) metric was evaluated for typical power spectral-density values.
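To illustrate the kind of average-BER evaluation the paper performs analytically (a generic Monte Carlo sketch with an assumed lognormal fading model; not the authors' NZB pointing-error statistics or their closed-form expressions):

```python
import numpy as np
from scipy.special import erfc

# Monte Carlo average BER of intensity-modulated OOK over a random channel gain h:
# BER = E_h[ Q(sqrt(SNR) * h / 2) ]. The lognormal gain model and all parameter
# values are illustrative assumptions.
rng = np.random.default_rng(3)
snr = 10 ** (20.0 / 10)                      # 20 dB electrical SNR
h = rng.lognormal(mean=-0.1, sigma=0.3, size=200_000)

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

print(f"average BER: {np.mean(Q(np.sqrt(snr) * h / 2.0)):.2e}")
```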
14

Li, Jie, Yang Pan, Shijian Ni, and Feng Wang. "A Hybrid Reliable Routing Algorithm Based on LQI and PRR in Industrial Wireless Networks." Wireless Communications and Mobile Computing 2021 (September 4, 2021): 1–16. http://dx.doi.org/10.1155/2021/6039900.

Abstract:
In Industrial Wireless Networks (IWNs), Machine-to-Machine (M2M) communication is often affected by noise in the industrial environment, which degrades communication reliability. In this paper, we investigate how to improve route stability for M2M in an industrial environment. We first compare different link quality estimations, such as Signal-to-Noise Ratio (SNR), Received Signal Strength Indicator (RSSI), Link Quality Indicator (LQI), Packet Reception Ratio (PRR), and Expected Transmission Count (ETX). We then propose a link quality estimation combining LQI and PRR. Finally, we propose a Hybrid Link Quality Estimation-Based Reliable Routing (HLQEBRR) algorithm for IWNs, with the objective of maximizing link stability. In addition, HLQEBRR provides a recovery mechanism to detect node failure, which improves the speed and accuracy of node recovery. OMNeT++-based simulation results demonstrate that our HLQEBRR algorithm significantly outperforms the Collection Tree Protocol (CTP) algorithm in terms of end-to-end transmission delay and packet loss ratio, and achieves higher reliability at a small additional cost.
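A minimal sketch of fusing two link-quality indicators into one routing metric (our generic weighted blend; the paper's actual HLQEBRR fusion rule may differ, and the LQI range, weight, and neighbor samples are assumptions):

```python
def hybrid_link_quality(lqi: float, prr: float, alpha: float = 0.5) -> float:
    """Blend hardware LQI (assumed 0-255 scale) with measured PRR in [0, 1].

    alpha is an assumed weighting knob; HLQEBRR's exact fusion rule may differ.
    """
    lqi_norm = max(0.0, min(lqi / 255.0, 1.0))
    return alpha * lqi_norm + (1.0 - alpha) * prr

# Pick the neighbor with the best blended estimate as the next hop.
neighbors = {"A": (200, 0.95), "B": (240, 0.70), "C": (180, 0.99)}
best = max(neighbors, key=lambda n: hybrid_link_quality(*neighbors[n]))
print(best)  # "A" under these assumed samples
```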
15

Gupta, Vivek Kumar, and Sadip Vijay. "A Summative Comparison of Blind Channel Estimation Techniques for Orthogonal Frequency Division Multiplexing Systems." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 5 (October 1, 2018): 2744. http://dx.doi.org/10.11591/ijece.v8i5.pp2744-2752.

Abstract:
The OFDM technique, i.e., orthogonal frequency division multiplexing, has become prominent in wireless communication since its introduction in the 1950s, owing to its ability to combat multipath fading and other losses. In an OFDM system, the available transmission bandwidth is divided among a large number of orthogonal, overlapping, narrowband subchannels or subcarriers transmitted in parallel. The separation of the subcarriers is theoretically optimal, so that spectral utilization is very compact. This paper reviews possible approaches to blind channel estimation in light of improved performance in terms of speed of convergence and complexity. Various studies have adopted methods for blind, semi-blind, and trained channel estimators and detectors, including subspace-based, iteration-based, LMSE/MSE-based (using statistical methods), SDR, maximum likelihood, cyclostationarity-based, redundancy-based, and cyclic-prefix-based approaches. The paper reviews all of the above approaches in order to summarize the outcomes of work aimed at optimum performance for channel estimation in OFDM systems.
16

Bhandari, Renuka, and Sangeeta Jadhav. "Spectral Efficient Blind Channel Estimation Technique for MIMO-OFDM Communications." International Journal of Advances in Applied Sciences 7, no. 3 (August 1, 2018): 286. http://dx.doi.org/10.11591/ijaas.v7.i3.pp286-297.

Abstract:
With the emergence of increasing research in the domain of future wireless communications, massive MIMO (multiple input multiple output) has attracted much of researchers' interest. Massive MIMO underpins high-speed wireless communication standards, and channel estimation technology plays an essential role in MIMO systems: efficient channel estimation leads to spectrally efficient wireless communications. Inter-symbol interference (ISI) is the challenging issue when designing channel estimation methods. To mitigate the challenges of ISI, we propose in this paper a novel blind channel estimation method based on independent component analysis (ICA). The proposed channel estimation works for both blind interference cancellation and ISI cancellation. The proposed hybrid ICA (HICA) method depends on pulse-shape filtering and ambiguity removal to improve the spectral efficiency and reliability of MIMO communications. A kurtosis measure is first applied to the complex data to estimate the common signals; we then exploit 3rd- and 4th-order higher order statistics (HOS) to prioritize the common signals during channel estimation. We present the detailed design and evaluation of the HICA blind channel estimation method and show simulation results for HICA against state-of-the-art channel estimation techniques using BER, MSE, and PAPR.
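The kurtosis step is the standard ICA measure of non-Gaussianity; a minimal real-valued illustration (ours, not the HICA pipeline, which operates on complex data with higher-order statistics):

```python
import numpy as np

def excess_kurtosis(x: np.ndarray) -> float:
    """Normalized fourth-order moment minus 3; approximately 0 for Gaussian data."""
    xc = x - x.mean()
    return float(np.mean(xc**4) / np.mean(xc**2) ** 2 - 3.0)

rng = np.random.default_rng(4)
print(f"{excess_kurtosis(rng.normal(size=100_000)):+.2f}")         # ~ +0.00 (Gaussian)
print(f"{excess_kurtosis(rng.choice([-1.0, 1.0], 100_000)):+.2f}") # -2.00 (BPSK, sub-Gaussian)
# ICA exploits exactly this gap: source signals are non-Gaussian, mixtures look Gaussian.
```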
17

Lee, Eui-Soo, et al. "Joint Timing and Frequency Synchronization Using Convolutional Neural Network in WLAN Systems." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 6 (April 10, 2021): 531–37. http://dx.doi.org/10.17762/turcomat.v12i6.1969.

Abstract:
In wireless communication systems, receiver performance is very sensitive to time and frequency offsets. In particular, orthogonal frequency division multiplexing (OFDM) systems are highly vulnerable to these offsets due to inter-carrier interference (ICI) and inter-symbol interference (ISI). To solve this problem, wireless local area network (WLAN) systems transmit a preamble for synchronization. In this paper, we propose a joint time and frequency offset estimation technique based on a convolutional neural network (CNN) for WLAN systems. In the proposed technique, the correlation between the received signal and the transmitted preamble is performed first. Then the frequency offset is coarsely compensated by several hypothesized offsets. The compensated signals are input to the proposed CNN, and the CNN predicts the time and frequency offsets. The estimation performance is examined through computer simulation. According to the results, the proposed time offset estimator shows a 3 dB to 6 dB performance gain, and the frequency offset estimator shows a much lower root mean square error (RMSE) than the conventional technique at low SNRs.
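The correlation front end the proposed technique starts from is standard; a minimal cross-correlation timing estimate (a generic sketch of that first stage, not the paper's CNN; preamble length, offset, and noise levels are assumed) looks like:

```python
import numpy as np

# Coarse timing: cross-correlate the received samples with the known preamble and
# pick the lag with the largest correlation magnitude.
rng = np.random.default_rng(5)
preamble = np.exp(1j * 2 * np.pi * rng.random(64))   # assumed unit-modulus preamble
true_offset = 23
rx = np.concatenate([rng.normal(0, 0.1, true_offset) + 0j,   # noise before the frame
                     preamble,
                     rng.normal(0, 0.1, 100) + 0j])          # trailing samples
rx += rng.normal(0, 0.1, rx.shape)                   # channel noise

corr = np.abs(np.correlate(rx, preamble, mode="valid"))  # numpy conjugates the 2nd arg
print(int(np.argmax(corr)))                          # 23, the estimated timing offset
```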
18

Weiss, Andreas Peter, and Franz Peter Wenzl. "Identification and Speed Estimation of a Moving Object in an Indoor Application Based on Visible Light Sensing of Retroreflective Foils." Micromachines 12, no. 4 (April 15, 2021): 439. http://dx.doi.org/10.3390/mi12040439.

Abstract:
Identification and sensing are two of the main tasks a wireless sensor node has to perform in an Internet of Things (IoT) environment. Placing active powered nodes on objects is the most usual approach for the fulfillment of these functions. With the expected massive increase of connected things, there are several issues on the horizon that hamper the further deployment of this approach in an energy efficient, sustainable way, like the usage of environmentally hazardous batteries or accumulators, as well as the required electrical energy for their operation. In this work, we propose a novel approach for performing the tasks of identification and sensing, applying visible light sensing (VLS) based on light emitting diode (LED) illumination and utilizing retroreflective foils mounted on a moving object. This low cost hardware is combined with a self-developed, low complex software algorithm with minimal training effort. Our results show that successful identification and sensing of the speed of a moving object can be achieved with a correct estimation rate of 99.92%. The used foils are commercially available and pose no threat to the environment and there is no need for active sensors on the moving object and no requirement of wireless radio frequency communication. All of this is achievable whilst undisturbed illumination is still provided.
19

Sarowa, Sandeep, Naresh Kumar, and Ram Sewak Singh. "Analysis of WOFDM over LTE 1.25 MHz Band." Wireless Communications and Mobile Computing 2020 (December 1, 2020): 1–9. http://dx.doi.org/10.1155/2020/8835879.

Abstract:
Orthogonal Frequency Division Multiplexing (OFDM) is one of the most preferred multiplexing techniques for realizing high-speed wireless communication, as in Long Term Evolution (LTE) and LTE-Advanced. In the era of digital wireless communication, wavelet theory has been applied favorably in many areas of signal processing. Orthogonality, flexible time-frequency analysis, and the ability to characterize signals accurately have attracted the attention of the telecommunication community to the use of wavelets as basis functions for OFDM. In this paper, the discrete wavelet transform (DWT) is proposed as an alternative signal analysis with multiple merits, such as support for high-speed applications, immunity to distortion, wavelet diversity, better error performance, and efficient bandwidth utilization. A simulative analysis of various wavelets, under different modulation techniques, over OFDM is presented to demonstrate the improvement in BER performance. Further, in accordance with the LTE parameterization over the 1.25 MHz band, the performance of wavelet-based OFDM (WOFDM) is found to be significantly higher in terms of maximum achievable data rate and system spectral efficiency.
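A minimal sketch of the DWT-as-modulator idea (ours, using the PyWavelets package and a single-level Haar wavelet as assumptions; the paper evaluates various wavelets): data symbols are placed in wavelet coefficient slots, the inverse DWT synthesizes the transmit signal, and the forward DWT recovers them.

```python
import numpy as np
import pywt  # PyWavelets

# WOFDM sketch: wavelet coefficients act as data-bearing "subchannels".
rng = np.random.default_rng(6)
cA = rng.choice([-1.0, 1.0], 32)          # assumed BPSK symbols on approximation slots
cD = rng.choice([-1.0, 1.0], 32)          # and on detail slots

tx = pywt.idwt(cA, cD, "haar")            # inverse DWT synthesizes the transmit signal
rxA, rxD = pywt.dwt(tx, "haar")           # forward DWT at the receiver
print(np.allclose(rxA, cA) and np.allclose(rxD, cD))  # True over an ideal channel
```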
20

Mustafa, Ali, Muhammad Najam ul Islam, Salman Ahmed, and Muhammad Ahsan Tufail. "Unreliable communication in high-performance distributed multi-agent systems: A ingenious scheme in high computing." International Journal of Distributed Sensor Networks 14, no. 2 (February 2018): 155014771875921. http://dx.doi.org/10.1177/1550147718759218.

Abstract:
Designing distributed consensus algorithms featuring accuracy, robustness, reliability, and speed of convergence is in high demand for various multi-agent applications. This research investigates devising a novel distributed estimation algorithm that can tackle the problem of unreliable communication among multiple agents, achieve consensus on the average of their initial values, and compute the total number of agents in the system under dynamically changing interaction topologies. A dynamically changing network topology with unreliable communication links is considered, and four scenarios are analyzed for the proposed consensus-based distributed estimation algorithm: a dynamically changing interaction topology among agents; the addition of agents to the network with dynamically switching topology at any instant in communication; the removal of agents from the network with dynamically switching topology at any instant in communication; and a fixed topology with link failure and reconnection with the same agent after each iteration. The proposed algorithm speeds up the rate of convergence by reducing the number of iterations, with convergence guaranteed using concepts from stochastic differential equation theory, control theory, algebraic graph theory, and algebraic matrix theory. Finally, simulation results are provided that validate the effectiveness of the theoretical results in comparison to previously known consensus algorithms in terms of different performance parameters.
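For reference, the baseline average-consensus iteration such algorithms build on (a textbook sketch with an assumed fixed 5-agent ring topology and reliable links, not the paper's algorithm) is a simple weighted neighbor update:

```python
import numpy as np

# Textbook average consensus: x(k+1) = W x(k), with W doubly stochastic.
# Here W implements equal-weight averaging over an assumed 5-agent ring topology.
n = 5
x = np.array([1.0, 4.0, 2.0, 8.0, 5.0])   # initial values; their average is 4.0
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, (i + 1) % n):      # self plus the two ring neighbors
        W[i, j] = 1.0 / 3.0                # negative index wraps the ring

for _ in range(100):
    x = W @ x
print(x.round(3))                          # all entries converge to 4.0, the average
```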
21

Sheikh, Muhammad Sameer, Jun Liang, and Wensong Wang. "An Improved Automatic Traffic Incident Detection Technique Using a Vehicle to Infrastructure Communication." Journal of Advanced Transportation 2020 (January 13, 2020): 1–14. http://dx.doi.org/10.1155/2020/9139074.

Abstract:
Traffic incident detection is one of the major research areas of intelligent transportation systems (ITSs). In recent years, many mega-cities suffer from heavy traffic flow and congestion, so monitoring traffic scenarios is a challenging issue due to the nature and characteristics of traffic incidents. Reliable detection of traffic incidents and congestion provides useful information for enhancing traffic safety and indicates the characteristics of traffic incidents, traffic violations, driving patterns, etc. This paper investigates the estimation of traffic incidents using a hybrid observer (HO) method and detects traffic incidents by using an improved automatic incident detection (AID) technique based on a lane-changing speed mechanism in the highway traffic environment. First, we develop the connection between vehicles and roadside units (RSUs) using a beacon mechanism; they exchange information once the vehicles get access to a wireless medium. Second, we utilize a probabilistic approach to collect traffic information data, using vehicle to infrastructure (V2I) communication. Third, we estimate the traffic incident by using an HO method, which can provide an accurate estimation of an occurring event. Finally, in order to detect traffic incidents accurately, we apply the probabilistic data collected through V2I communication based on the lane-changing speed mechanism. The experimental results and analysis obtained from simulations show that the proposed method outperforms other methods in terms of obtaining a better estimation of the traffic incident, agreeing well with the theoretical incident, with around 30% faster detection of traffic incidents and 25% faster dissipation of traffic congestion. With regard to the duration of an incident, the proposed system obtained a better Kaplan–Meier (KM) curve, influenced by the shortest time to clear the traffic incident, in comparison with the other methods.
22

Zhang, Aihua, Shouyi Yang, and Guan Gui. "Sparse Channel Estimation for MIMO-OFDM Two-Way Relay Network with Compressed Sensing." International Journal of Antennas and Propagation 2013 (2013): 1–6. http://dx.doi.org/10.1155/2013/914734.

Abstract:
An accurate channel impulse response (CIR) is required for equalization and can help improve communication service quality in next-generation wireless communication systems. An example of an advanced system is the amplify-and-forward multiple-input multiple-output two-way relay network, modulated by orthogonal frequency-division multiplexing. Linear channel estimation methods, for example, least squares and expectation conditional maximization, have been proposed previously for this system. However, these methods do not take advantage of channel sparsity, which degrades estimation performance. We propose a sparse channel estimation scheme, different from linear methods, at the end users of the relay channel, which enables us to exploit sparsity. First, we formulate the sparse channel estimation problem as a compressed sensing problem by using sparse decomposition theory. Second, the CIR is reconstructed by the CoSaMP and OMP algorithms. Finally, computer simulations are conducted to confirm the superiority of the proposed methods over traditional linear channel estimation methods.
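A compact OMP sketch (a generic implementation of the named algorithm with assumed problem sizes; not the authors' code) showing sparse CIR recovery from few measurements:

```python
import numpy as np

def omp(A: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))  # most correlated column
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # least-squares re-fit
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(7)
L, M, k = 128, 32, 4                        # 128-tap CIR, 32 measurements, 4 multipath taps
h = np.zeros(L)
h[rng.choice(L, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(M, L)) / np.sqrt(M)    # random pilot/measurement matrix
h_hat = omp(A, A @ h, k)
print(np.max(np.abs(h_hat - h)) < 1e-8)     # exact recovery is typical in the noiseless case
```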
23

Saiko, Volodymyr, Teodor Narytnyk, Valeriy Gladkykh, and Natalia Sivkova. "INNOVATIVE SOLUTION FOR LEO-SYSTEM WITH DISTRIBUTED SATELLITE ARCHITECTURE." Information systems and technologies security, no. 1 (2) (2020): 77–83. http://dx.doi.org/10.17721/ists.2020.1.77-83.

Abstract:
An innovative solution is proposed for practical implementation in a LEO system with a "distributed satellite" architecture, which can be used to provide low-orbit spacecraft communications with ground stations and users of 5G/IoT satellite services. The essence of the proposed development is to reduce the delay in signaling to consumers and the probability of network overload in a prospective low-orbit satellite communication system. The system contains artificial Earth satellites, each functioning in Earth orbit and equipped with onboard repeaters and inter-satellite communication links, a network of ground-based communication and control systems for the satellites, and a grouping of low-orbit spacecraft (the LEO system) that includes root (leading) satellites and repeater (slave) satellites. Around each root satellite, a micro-grouping of repeater satellites is formed; the functions of the root satellite in the selected orbital phase are performed by micro-satellites connected into a ring network by inter-satellite communication lines, while the repeater satellites are CubeSats. The novelty is the introduction of a multilevel boundary cloud system, a heterogeneous distributed computing cloud structure. The boundary clouds of the multilevel system are connected by ultra-high-speed wireless terahertz radio links and wireless optical communication systems. A technique is presented for estimating access time in the proposed "fog computing" structure, based on an access model in fog computing with collision resolution for data sources implementing a polling mode.
24

Romanuke, V. V. "COMBINED INFLUENCE OF DOPPLER EFFECT AND PILOT DE-ORTHOGONALIZATION ON 2×2 TO 4×4 MIMO SYSTEMS AND IMPROVEMENT OF ORTHOGONAL SEQUENCES." Proceedings of the O.S. Popov ОNAT 1, no. 2 (December 31, 2020): 50–64. http://dx.doi.org/10.33243/2518-7139-2020-1-2-50-64.

Abstract:
The Doppler effect in 2×2, 3×3, and 4×4 MIMO wireless communication systems with channel state estimation is studied. The orthogonal pilot signal approach is used for channel estimation, where Hadamard sequences are used for piloting along with the eight Romanuke orthogonal sets similar to the Walsh set. The Doppler effect is additionally aggravated by pilot signal de-orthogonalization, where two negative-to-positive symbol errors are assumed to occur while the signal is transmitted. MIMO transmissions are simulated for 10 cases of frame length and pilot symbols per frame, from no Doppler shift up to an 1100 Hz Doppler shift in steps of 100 Hz. Assuming a carrier frequency of 5.9 GHz, each step corresponds to a motion speed of about 18.3 km/h. Based on the simulations, it is ascertained that the Doppler effect negatively influences transmissions of long data packets: applying MIMO transmission of long packets at speeds exceeding 100 km/h is impracticable. To maintain an appropriate MIMO link data rate, the packet length should be shortened as the motion speed increases. On the other hand, MIMO performance is substantially improved by increasing the number of antennas, except in the case of transmitting long packets. Besides, under the de-orthogonalization caused by two negative-to-positive symbol errors, the MIMO Walsh pilot sequences are outperformed by the MIMO Romanuke pilot sequences, so the latter are considered an improvement of MIMO orthogonal sequences. However, the performance difference between the Romanuke and Walsh pilot sequences decays as a greater number of transmit-receive antenna pairs is used and the motion speed increases.
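The speed-per-step figure is easy to verify from the standard Doppler relation f_d = v f_c / c (our check of the abstract's arithmetic, not code from the paper):

```python
# Sanity check of the abstract's figure: at f_c = 5.9 GHz, a 100 Hz Doppler step maps to
# v = f_d * c / f_c = 100 * 3e8 / 5.9e9 ~ 5.08 m/s ~ 18.3 km/h, as stated.
f_c, f_d, c = 5.9e9, 100.0, 3.0e8
v = f_d * c / f_c
print(f"{v:.2f} m/s = {v * 3.6:.1f} km/h")   # 5.08 m/s = 18.3 km/h
```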
25

Sun, Zhidong, and Xueqing Li. "Construction of Live Broadcast Training Platform Based on “Cloud Computing” and “Big Data” and “Wireless Communication Technology”." Wireless Communications and Mobile Computing 2021 (September 14, 2021): 1–9. http://dx.doi.org/10.1155/2021/8971195.

Abstract:
With the rapid development of information technology, scientific theory has been driven forward by the rapid progress of science and technology. This progress has had an impact on every field, changing the way information is transmitted, and the advent of big data has played its part in the promotion and dissemination of resources, letting more and more people benefit. In the context of cloud computing, big data has ushered in another upsurge of development and growth. Given this, the live broadcast training platform, which focuses on enterprise staff training and network education, has arisen at the right moment. People favor its convenience, real-time performance, and high efficiency. However, the low value density of big data and the security problems of cloud computing create difficulties in constructing a live broadcast training platform. In this paper, the live broadcast training platform's structure is improved by constructing three modules: a live training module based on cloud computing, a user recommendation module based on big data, and a security policy guarantee module. In addition, to ensure that trainees can receive training anytime and anywhere, this paper uses wireless communication technology to ensure the quality and speed of all users' live video sources.
26

Chen, Kaiwen, Ka Wai Eric Cheng, Yun Yang, and Jianfei Pan. "Stability Improvement of Dynamic EV Wireless Charging System with Receiver-Side Control Considering Coupling Disturbance." Electronics 10, no. 14 (July 9, 2021): 1639. http://dx.doi.org/10.3390/electronics10141639.

Abstract:
Receiver-side control has been a reliable practice for regulating the energy transferred to the batteries in electric vehicle (EV) wireless power transfer (WPT) systems. Nonetheless, the unpredictable fluctuation of the mutual inductance in dynamic wireless charging brings extreme instability to the charging process. The overshoot that appears during instantaneous vibrations may greatly increase the voltage/current stress of the system, and even cause catastrophic failure in the battery load. In addition, the speed of the vehicles may lead to untraceable steady-state operation. However, existing solutions to the above two issues suffer from either long communication time delays or significantly compromised output regulation. In this paper, the slow dynamics and the overshoot issues of the WPT system are elaborated in theory, and a small-signal model mainly considering mutual inductance disturbance is established. A simple feedforward control is proposed for overshoot damping and fast system dynamics. Experimental results validate that the overshoot can be reduced by 13% and the settling time improved by 50% during vehicle braking or acceleration. In constant-speed driving, the battery charging ripple is decreased by 12%, ensuring better system stability.
27

Wang, Jinpeng, Ye Zhengpeng, Jeremy Gillbanks, Tarun M. Sanders, and Nianyu Zou. "A Power Control Algorithm Based on Chicken Game Theory in Multi-Hop Networks." Symmetry 11, no. 5 (May 27, 2019): 718. http://dx.doi.org/10.3390/sym11050718.

Abstract:
With the development of modern society, not only are many voice calls made over wireless communication systems, but there is also a great deal of demand for data services. There are increasing demands from the general public for more information data, especially for high-speed services at Gbps levels. As is well known, higher transmit power is needed as data rates increase. In order to solve this problem, virtual cellular networks (VCNs) can be employed to reduce these peak power shifts. If a VCN works well, mobile ports will receive their own wireless signals via individual cells, and the signals will then access core networks with the help of a central terminal. Power control can improve the power capacity in multi-hop networks. However, the use of power control will also have a negative impact on network connectivity, delay, and capacity. In order to address this problem, this paper compares specific control methods and capacities in multi-hop networks. Distributed chicken game algorithm power control (DCGAPC) methods are presented in order to reach acceptable minimum levels of network delay and maximum network capacity and connectivity. Finally, a computer simulation is implemented, and the results are shown.
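For orientation, the classical distributed power-control iteration that game-theoretic schemes compete against is the Foschini-Miljanic target-SINR update (a textbook baseline, plainly not the paper's chicken-game DCGAPC; link gains, noise, and the target are assumed values):

```python
import numpy as np

# Foschini-Miljanic target-SINR power control, a classical distributed baseline.
G = np.array([[1.00, 0.10, 0.05],
              [0.10, 1.00, 0.10],
              [0.05, 0.10, 1.00]])   # G[i, j]: gain from transmitter j to receiver i
noise, gamma = 0.01, 2.0             # noise power and target SINR per link
p = np.full(3, 0.1)                  # initial transmit powers

for _ in range(50):
    interference = G @ p - np.diag(G) * p + noise
    p = gamma * interference / np.diag(G)   # p_i <- (gamma / SINR_i) * p_i

sinr = np.diag(G) * p / (G @ p - np.diag(G) * p + noise)
print(sinr.round(3))                 # -> [2.0, 2.0, 2.0] when the target is feasible
```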
28

Pan, Jie, and Fu Jiang. "Low Complexity Beamspace Super Resolution for DOA Estimation of Linear Array." Sensors 20, no. 8 (April 15, 2020): 2222. http://dx.doi.org/10.3390/s20082222.

Abstract:
Beamspace processing has become very attractive in recent radar and wireless communication applications, owing to its advantages of complexity reduction and performance improvement in array signal processing. In this paper, we concentrate on beamspace DOA estimation for linear arrays via atomic norm minimization (ANM). Existing generalized-linear-spectrum-estimation ANM approaches suffer from high computational complexity for large-scale arrays, since their complexity depends on the number of sensors. To deal with this problem, we develop a low-dimensional semidefinite programming (SDP) implementation of the beamspace atomic norm minimization (BS-ANM) approach for the DFT beamspace, based on super resolution theory over a semi-algebraic set. Then, a computationally efficient iterative algorithm is proposed based on the alternating direction method of multipliers (ADMM). We develop covariance-based DOA estimation methods via BS-ANM and apply the BS-ANM-based DOA estimation method to the channel estimation problem for massive MIMO systems. Simulation results demonstrate that the proposed methods exhibit superior performance compared to state-of-the-art counterparts.
29

Celaya-Echarri, Azpilicueta, López-Iturri, Aguirre, and Falcone. "Performance Evaluation and Interference Characterization of Wireless Sensor Networks for Complex High-Node Density Scenarios." Sensors 19, no. 16 (August 11, 2019): 3516. http://dx.doi.org/10.3390/s19163516.

Abstract:
The uncontainable future development of smart regions, as assembled networks of smart cities, is directly associated with a growing demand for fully interactive and connected ubiquitous smart environments. To achieve this global connection goal, large numbers of transceivers and multiple wireless systems will be involved to provide user services and applications anytime and anyplace, regardless of the devices, networks, or systems they use. Adequate, efficient, and effective radio wave propagation tools, methodologies, and analyses in complex indoor and outdoor environments are crucially required to prevent communication limitations such as coverage, capacity, speed, or channel interference due to high node density or channel restrictions. In this work, radio wave propagation characterization in an urban indoor and outdoor wireless sensor network environment has been assessed at the ISM 2.4 GHz and 5 GHz frequency bands. The selected scenario is an auditorium placed in an open city area surrounded by inhomogeneous vegetation. User density within the scenario, in terms of inherent transceiver density, poses challenges in overall system operation, given that multiple-node operation increases overall interference levels. By means of an in-house developed 3D ray launching (3D-RL) algorithm with hybrid code operation, the impact of variable-density wireless sensor network operation is presented, providing coverage/capacity estimations, interference estimation, device-level performance, and precise characterization of multipath propagation components in terms of received power levels and time-domain characteristics. This analysis and the proposed simulation methodology can lead to an adequate interference characterization, extensible to a wide range of scenarios and considering conventional transceivers as well as wearables, which provides suitable information for overall network performance in crowded indoor and outdoor complex heterogeneous environments.
30

Albert, Sylvie, and Don Flournoy. "Think Global, Act Local." International Journal of Sociotechnology and Knowledge Development 2, no. 1 (January 2010): 59–79. http://dx.doi.org/10.4018/jskd.2010100804.

Abstract:
Being able to connect high-speed computing and other information technologies into broadband communication networks presents local communities with some of their best chances for renewal. Such technologies are now widely perceived to be not just a nice amenity among corporations and such non-profit organizations as universities but a social and economic necessity for communities struggling to find their place in a rapidly changing world. Today, citizens want and expect their local communities to be “wired” for broadband digital transactions, whether for family, business, education or leisure. Such networks have become a necessity for attracting and retaining the new “knowledge workforce” that will be key to transforming communities into digital societies where people will want to live and work. Since the Internet is a global phenomenon, some of the challenges of globalization for local communities and regions are introduced in this article and suggestions for turning those challenges into opportunities are offered. To attain maximum benefit from the new wired and wireless networks, local strategies must be developed for its implementation and applications must be chosen with some sensitivity to local needs. New Growth theory is used to show why communities must plan their development agenda, and case studies of the Intelligent Community Forum are included to show how strategically used ICTs are allowing local communities to be contributors in global markets.
31

Malyshev, Alexander, and Evgenii Burgov. "Revisiting Parameters of Bioinspired Behavior Models in Group Foraging Modeling." SPIIRAS Proceedings 19, no. 1 (February 7, 2020): 79–103. http://dx.doi.org/10.15622/sp.2020.19.1.3.

Abstract:
Using bioinspired models and methods is one approach to solving tasks in swarm robotics. In this paper one such task, the modeling of foraging, is considered, solved by creating analogues of the social structures of ants and models of feeding behavior. The most important characteristics of ant colonies for modeling were defined: the number of individuals in the society and its structure, the workers' speed, the communication distance, and the working area size. Besides, the existing experimental basis (a group of robots and a polygon) was assessed for use as a hardware platform for experiments. Several models of feeding behavior were considered: a model without differentiation of forager functions and a model with differentiation into active and passive foragers. Active foragers look for resources by themselves and then recruit passive foragers; passive foragers stay at a base while not involved in harvesting. A set of finite state machines describes the behavior of agents: basic automata (providing basic behavior functions) and a meta-automaton that switches execution of the basic automata under certain conditions. Basic movements were tested on the experimental basis. A complex test of the models was conducted in the simulation program Kvorum, in which an analogue of the real polygon was built. The modeling consists of series of experiments for every model in which agents must harvest resources; the series differ from each other in the number of agents. For estimating the models' quality, the ratio of received energy to average obtaining time was used. The experiments establish that the model with differentiation of functions works more effectively.
32

Nayyar, Anand, Rudra Rameshwar, and Piyush Kanti Dutta. "Special Issue on Recent Trends and Future of Fog and Edge Computing, Services and Enabling Technologies." Scalable Computing: Practice and Experience 20, no. 2 (May 2, 2019): iii—vi. http://dx.doi.org/10.12694/scpe.v20i2.1558.

Abstract:
Recent Trends and Future of Fog and Edge Computing, Services, and Enabling Technologies. Cloud computing has been established as the most popular and suitable computing infrastructure, providing on-demand, scalable, and pay-as-you-go computing resources and services for state-of-the-art ICT applications that generate massive amounts of data. Though the Cloud is certainly the most fitting solution for most applications with respect to processing capability and storage, it may not be so for real-time applications. The main problem with the Cloud is latency, as Cloud data centres are typically very far from the data sources as well as the data consumers. This latency is acceptable for application domains such as enterprise or web applications, but not for modern Internet of Things (IoT)-based pervasive and ubiquitous application domains such as autonomous vehicles, smart and pervasive healthcare, real-time traffic monitoring, unmanned aerial vehicles, smart buildings, smart cities, smart manufacturing, cognitive IoT, and so on. The prerequisite for these types of applications is that the latency between data generation and consumption should be minimal. For that, the generated data need to be processed locally, instead of being sent to the Cloud. This approach is known as Edge computing, where the data processing is done at the network edge in edge devices such as set-top boxes, access points, routers, switches, and base stations, which are typically located at the edge of the network. These devices are increasingly being equipped with significant computing and storage capacity to cater to the need for local Big Data processing. The enabling of Edge computing can be attributed to emerging network technologies, such as 4G and cognitive radios, high-speed wireless networks, and energy-efficient sophisticated sensors. Different Edge computing architectures have been proposed (e.g., Fog computing, mobile edge computing (MEC), cloudlets, etc.). All of these enable IoT and sensor data to be processed closer to the data sources. But, among them, Fog computing, a Cisco initiative, has attracted the most attention from both academia and industry and has emerged as a new computing-infrastructural paradigm in recent years. Though Fog computing has been proposed as a different computing architecture from the Cloud, it is not meant to replace the Cloud. Rather, Fog computing extends Cloud services to network edges to provide computation, networking, and storage services between end devices and data centres. Ideally, Fog nodes (edge devices) are supposed to pre-process the data, serve the needs of the associated applications preliminarily, and forward the data to the Cloud if the data need to be stored and analysed further. Fog computing enhances the benefits of smart devices operating not only at the network perimeter but also under cloud servers. Fog-enabled services can be deployed anywhere in the network, and with the provisioning and management of these services, huge potential can be realized to enhance intelligence within computing networks for context-awareness, fast response times, and network traffic offloading. Several possibilities of Fog computing are already established, for example, sustainable smart cities, smart grids, smart logistics, environment monitoring, video surveillance, etc.
To design and implement Fog computing systems, various challenges concerning system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, the design of efficient algorithms and protocols, availability and reliability, security and privacy, energy efficiency and sustainability, etc. need to be addressed. Also, to make Fog compatible with the Cloud, several factors such as Fog and Cloud system integration, service collaboration between Fog and Cloud, workload balance between Fog and Cloud, and so on need to be taken care of. It is our great privilege to present before you Volume 20, Issue 2 of Scalable Computing: Practice and Experience. We received 20 research papers, of which 14 papers were selected for publication. The aim of this special issue is to highlight recent trends and the future of Fog and Edge computing, services, and enabling technologies. The special issue will present new dimensions of research to researchers and industry professionals with regard to Fog computing, Cloud computing, and Edge computing. Sujata Dash et al. contributed a paper titled "Edge and Fog Computing in Healthcare - A Review", in which an in-depth review of fog and mist computing in the area of healthcare informatics is analysed, classified, and discussed. The review presented in this paper is primarily focused on three main aspects: the requirements of the IoT-based healthcare model and the description of the services provided by fog computing to address them; the architecture of an IoT-based healthcare system embedding a fog computing layer; and the implementation of fog computing layer services, along with performance and advantages. In addition to this, the researchers have highlighted the trade-off when allocating computational tasks to the level of the network and also elaborated various challenges and security issues of fog and edge computing related to healthcare applications. Parminder Singh et al., in the paper titled "Triangulation Resource Provisioning for Web Applications in Cloud Computing: A Profit-Aware", proposed a novel triangulation resource provisioning (TRP) technique with a profit-aware surplus VM selection policy to ensure fair resource utilization in an hourly billing cycle while giving quality of service to end-users. The proposed technique uses time-series workload forecasting, CPU utilization, and response time in the analysis phase. The proposed technique is tested using the CloudSim simulator, and the R language is used to implement the prediction model on the ClarkNet weblog. The proposed approach is compared with two baseline approaches, i.e., cost-aware (LRM) and (ARMA). The response time, CPU utilization, and predicted requests are applied in the analysis and planning phases for scaling decisions, and the profit-aware surplus VM selection policy is used in the execution phase to select the appropriate VM for scale-down. The results show that the proposed model for web applications provides fair utilization of resources at minimum cost, thus providing maximum profit to the application provider and QoE to the end users.
Akshi Kumar and Abhilasha Sharma, in the paper titled "Ontology driven Social Big Data Analytics for Fog enabled Sentic-Social Governance", utilized a semantic knowledge model to investigate public opinion towards the adoption of fog-enabled services for governance and to comprehend the significance of the two s-components (sentic and social) in the aforesaid structure that specifically visualizes fog-enabled Sentic-Social Governance. The results using conventional TF-IDF (Term Frequency-Inverse Document Frequency) feature extraction are empirically compared with ontology-driven TF-IDF feature extraction to find the best opinion mining model with optimal accuracy. The results concluded that the implementation of ontology-driven opinion mining for feature extraction in polarity classification outperforms the traditional TF-IDF method, validated over baseline supervised learning algorithms, with an average improvement of 7.3% in accuracy and approximately a 38% reduction in features. Avinash Kaur and Pooja Gupta, in the paper titled "Hybrid Balanced Task Clustering Algorithm for Scientific workflows in Cloud Computing", proposed a novel hybrid balanced task clustering algorithm using the impact factor of workflows along with the structure of the workflow; using this technique, tasks can be considered for clustering either vertically or horizontally based on the value of the impact factor. Testing of the proposed algorithm is done on WorkflowSim (an extension of CloudSim), and a DAG model of the workflow was executed. The algorithm was tested on the variables of workflow execution time and performance gain and compared with four clustering methods: Horizontal Runtime Balancing (HRB), Horizontal Clustering (HC), Horizontal Distance Balancing (HDB), and Horizontal Impact Factor Balancing (HIFB); the results stated that the proposed algorithm is almost 5-10% better in workflow makespan time, depending on the workflow used. Pijush Kanti Dutta Pramanik et al., in the paper titled "Green and Sustainable High-Performance Computing with Smartphone Crowd Computing: Benefits, Enablers and Challenges", presented a comprehensive statistical survey of the various commercial CPUs, GPUs, and SoCs for smartphones, confirming the capability of SCC as an alternative to HPC. An exhaustive survey is presented on the present and optimistic future of the continuous improvement and research on different aspects of smartphone batteries and other alternative power sources, which will allow users to use their smartphones for SCC without worrying about the battery running out. Dhanapal and P. Nithyanandam, in the paper titled "The Slow HTTP Distributed Denial of Service (DDOS) Attack Detection in Cloud", proposed a novel method to detect slow HTTP DDoS attacks in the cloud, to overcome the issue of such attacks consuming all available server resources and making them unavailable to real users. The proposed method is implemented using the OpenStack cloud platform with the slowHTTPTest tool; the results stated that the proposed technique detects the attack in an efficient manner. Mandeep Kaur and Rajni Mohana, in the paper titled "Static Load Balancing Technique for Geographically Partitioned Public Cloud", proposed a novel approach focused on load balancing in a partitioned public cloud by combining centralized and decentralized approaches, assuming the presence of a fog layer. A load balancer entity is used for decentralized load balancing at the partitions, and a controller entity is used at the centralized level to balance the overall load at the various partitions.
Results are compared with the First Come First Serve (FCFS) and Shortest Job First (SJF) algorithms. In this work, the researchers compared the waiting time, finish time and actual run time of tasks under these algorithms. To reduce the number of unhandled jobs, a new load state is introduced which checks load beyond the conventional load states. The major objective of this approach is to reduce the need for runtime virtual machine migration and to reduce the wastage of resources that may occur due to predefined threshold values. Mukta and Neeraj Gupta, in the paper titled “Analytical Available Bandwidth Estimation in Wireless Ad-Hoc Networks considering Mobility in 3-Dimensional Space”, proposed an analytical approach named Analytical Available Bandwidth Estimation Including Mobility (AABWM) to estimate the available bandwidth (ABW) on a link. The major contributions of the proposed work are: i) it uses mathematical models based on renewal theory to calculate the collision probability of data packets, which makes the process simple and accurate; ii) it considers mobility in 3-D space to predict link failures and provides accurate admission control. To test the proposed technique, the researchers used the NS-2 simulator to compare AABWM with AODV, ABE, IAB and IBEM on throughput, packet loss ratio and data delivery. Results show that AABWM performs better than the other approaches. R. Sridharan and S. Domnic, in the paper titled “Placement Strategy for Intercommunicating Tasks of an Elastic Request in Fog-Cloud Environment”, proposed a novel heuristic algorithm, IcAPER (Inter-communication Aware Placement for Elastic Requests). The proposed algorithm uses neighboring machines in the network for placement once the current resource is fully utilized by the application. The performance of the IcAPER algorithm is compared with the First Come First Serve (FCFS), Random and First Fit Decreasing (FFD) algorithms on the parameters of (a) resource utilization, (b) resource fragmentation and (c) the number of requests with intercommunicating tasks placed on the same PM, using the CloudSim simulator. Simulation results show that IcAPER maps 34% more tasks onto the same PM and also increases resource utilization by 13% while decreasing resource fragmentation by 37.8% when compared to the other algorithms. Velliangiri S. et al., in the paper titled “Trust factor based key distribution protocol in Hybrid Cloud Environment”, proposed a novel security protocol comprising two stages. In the first stage, groups are created using trust factors and a key distribution security protocol is developed; it handles the communication process among the virtual machine communication nodes, creating several groups based on clustering and trust-factor methods. In the second stage, an ECC (Elliptic Curve Cryptography) based distribution security protocol is developed. The performance of the trust factor based key distribution protocol is compared with the existing ECC and Diffie-Hellman key exchange techniques. The results show that the proposed security protocol provides more secure communication and better resource utilization than the ECC and Diffie-Hellman key exchange techniques in the hybrid cloud. Vivek Kumar Prasad et al., in the paper titled “Influence of Monitoring: Fog and Edge Computing”, discussed various techniques involved in monitoring for edge and fog computing and their advantages, in addition to a case study based on a healthcare monitoring system. Avinash Kaur et al.
elaborated a comprehensive view of existing data placement schemes proposed in the literature for cloud computing. Further, they classified data placement schemes based on their access capabilities and objectives, and in addition provided a comparison of the data placement schemes. Parminder Singh et al. presented a comprehensive review of auto-scaling techniques for web applications in cloud computing. A complete taxonomy of the reviewed articles is constructed on varied parameters such as auto-scaling approach, resources, monitoring tool, experiment, workload, metric, etc. Simar Preet Singh et al., in the paper titled “Dynamic Task Scheduling using Balanced VM Allocation Policy for Fog Computing Platform”, proposed a novel scheme to improve user contentment by improving the cost-to-operation-length ratio, reducing customer churn, and boosting operational revenue. The proposed scheme is shown to reduce the queue size by effectively allocating the resources, which results in quicker completion of user workflows. The results of the proposed method are evaluated against a state-of-the-art non-power-aware task scheduling mechanism. The results were analyzed using the parameters of energy, SLA infringement and workflow execution delay. The performance of the proposed scheme was analyzed in various experiments particularly designed to examine various aspects of workflow processing on the given fog resources. The LRR model (35.85 kWh) has been found the most efficient on the basis of average energy consumption in comparison to the LR (34.86 kWh), THR (41.97 kWh), MAD (45.73 kWh) and IQR (47.87 kWh) models. The LRR model has also been observed as the leader when compared on the basis of the number of VM migrations: LRR (2520 VMs) has been observed as the best contender on the basis of the mean number of VM migrations in comparison with LR (2555 VMs), THR (4769 VMs), MAD (5138 VMs) and IQR (5352 VMs).
APA, Harvard, Vancouver, ISO, and other styles
33

Авдєєнко, Гліб Леонідович, Сергій Георгійович Бунін, and Теодор Миколайович Наритник. "Terahertz Technologies in Telecommunication Systems. Part 1: Substantiation of the Frequency Range and Design of Functional Units of Terahertz-Range Telecommunication Systems" [in Ukrainian]. Aerospace technic and technology, no. 4 (October 14, 2018): 72–91. http://dx.doi.org/10.32620/aktt.2018.4.10.

Full text
Abstract:
The article presents the results of research conducted by the team of authors on creating, for the first time in Ukraine, real prerequisites for solving the fundamental problem of constructing digital telecommunication systems using terahertz technologies. The necessity of transitioning to the terahertz frequency range for deploying future telecommunication systems of ultra-high bandwidth is substantiated. An analysis of the characteristics of the signal propagation path and a determination of signal losses under the operating conditions of a radio relay system in the terahertz frequency range are carried out. The analysis has shown that in the frequency range of 30-300 GHz, the most important types of fading to consider during design are fading due to attenuation of the signal by hydrometeors, fading due to absorption of the radio signal in gases, and fading due to the influence of the antenna pattern. It is determined that operating radio relay lines in the terahertz range makes it possible to practically disregard the refraction and interference of electromagnetic waves reflected from obstacles in the radio signal propagation zone, which arise especially in conditions of dense urban development. This is due, firstly, to the fact that terahertz waves have a low ability to bend around obstacles and, secondly, to the fact that frequencies of 30 to 300 GHz are applied over relatively small distances (up to 5 km), which allows spatial planning to keep obstacles out of the line-of-sight zone between antennas and out of the first Fresnel zone. The main factors that lead to the emergence of fading in radio relay communication lines are considered. It is shown that in the terahertz range the greatest influence on the energy budget of radio relay lines comes from attenuation by hydrometeors and gases. The areas of the terahertz frequency range most suitable for application in radio relay communication lines are identified. The principles of forming signal-code constructions are considered, and methods and new technical solutions for choosing the type of signal construction are proposed in order to achieve the best bandwidth and performance in the communication channel of a wireless gigabit transmission system in the terahertz range. Physical simulation of the ultra-high-speed shaper based on multifrequency multiplexing of modulated OFDM digital streams has been carried out, and bench tests and optimization have been performed to achieve the maximum bandwidth of the digital data transmission channel in the Ethernet format using the developed software. The developed software and hardware allowed, for the first time, an overall full-duplex channel speed of up to 1.2 Gb/s to be reached. On the basis of a generalization of the results of theoretical research and experimental work and an analysis of the existing radio relay element base, the main nodes of the receiving and transmitting parts of a telecommunication system with gigabit throughput in the frequency range 130-134 GHz were designed, and the structural scheme of the transmitting and receiving parts of the system was developed: a frequency mixer with subharmonic pumping; a heterodyne (local oscillator) using a highly stable reference quartz oscillator with a subsequent multiplication chain and power stage; a bandpass filter using a thin metal plate in the E-plane of a 1.6x0.8 mm waveguide channel; and a horn antenna.
The results of experimental studies of the main nodes of the receiving and transmitting parts of a telecommunication system with gigabit throughput in the frequency range 130-134 GHz are presented. The scientific novelty of the work consists in the generalization and development of the theory of propagation, generation and measurement of terahertz signals, in the development of a method for multiple frequency multiplexing and generation of modulated OFDM digital streams in the terahertz frequency range, and in the development of the principles of functional design of the receiving and transmitting parts of a telecommunication system with gigabit throughput in the terahertz frequency range.
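A quick way to see why the 130-134 GHz link budget is dominated by path loss and atmospheric absorption is to compute them directly. A minimal Python sketch, assuming an illustrative dry-air specific attenuation near 130 GHz (the attenuation value and hop length are assumptions for order of magnitude, not figures from the article):

    import math

    def free_space_path_loss_db(distance_km, freq_ghz):
        # FSPL(dB) = 20*log10(d_km) + 20*log10(f_GHz) + 92.45
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

    distance_km = 2.0
    freq_ghz = 132.0        # centre of the 130-134 GHz band
    gas_db_per_km = 1.5     # assumed dry-air specific attenuation (illustrative)

    loss = free_space_path_loss_db(distance_km, freq_ghz) + gas_db_per_km * distance_km
    print(f"total path loss over {distance_km} km at {freq_ghz} GHz: {loss:.1f} dB")

Attenuation by hydrometeors (rain) would add a further, weather-dependent term on top of this baseline, which is why the abstract singles it out as a dominant fading mechanism.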
APA, Harvard, Vancouver, ISO, and other styles
34

Gupta, Akhil, and Ishfaq Bashir Sofi. "Performance Evaluation of Different Channel Estimation Techniques Using OFDM System on Large and Small Scale Fading Channel Model." International Journal of Sensors, Wireless Communications and Control 09 (April 5, 2019). http://dx.doi.org/10.2174/2210327909666190405160431.

Full text
Abstract:
The development of internet-related applications and their increasing demands have led to advancements in high-speed communication systems. To meet these increasing demands, MIMO channels with multiple transmitting and receiving antennas have been devised. MIMO systems enhance capacity through spatial multiplexing and reduce interference through spatial dimensioning. Furthermore, MIMO systems combined with orthogonal frequency division multiplexing (MIMO-OFDM) improve system capacity and mitigate fading. These systems form the basis of 4G and beyond wireless communication standards. This research work focuses on improving channel estimation performance using various techniques on standard communication models. A semi-blind channel estimation technique is employed, which includes the use of a training sequence in the form of pilots. A block-type pilot arrangement is used, since it suits frequency-selective channels. LSE and MMSE are the two basic estimation techniques that can be used efficiently to estimate channels in OFDM systems. MMSE implementation requires knowledge of the noise variance and the channel covariance; furthermore, it is considerably more complex than the LSE estimator. DFT- and DCT-based channel estimation methods are employed to increase performance and reduce complexity. Different interpolation methods have been used and analyzed for better performance. The number of subcarriers, which also has an impact on estimation performance, is also discussed. Overall channel estimation performance has been examined for both large- and small-scale fading channel models.
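To make the LSE/MMSE comparison concrete, here is a minimal NumPy sketch of block-type pilot-based estimation for one OFDM symbol. The channel model, pilot design and all parameter values are illustrative assumptions rather than the paper's setup; note how the MMSE estimator needs exactly the channel covariance and noise variance the abstract mentions:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 64                                  # subcarriers (block-type: all carry pilots)
    snr_db = 10.0
    sigma2 = 10 ** (-snr_db / 10)           # noise variance for unit-power pilots

    # Random multipath channel with an exponential power-delay profile
    L = 8
    pdp = np.exp(-np.arange(L) / 2.0); pdp /= pdp.sum()
    h_t = np.sqrt(pdp / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
    H = np.fft.fft(h_t, N)                  # true frequency response

    X = rng.choice([-1.0, 1.0], N)          # known BPSK pilots
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    Y = X * H + noise                       # received pilot symbol

    # LS estimate: just divide out the pilots
    H_ls = Y / X

    # MMSE estimate: H_mmse = R_hh (R_hh + sigma2*I)^-1 H_ls  (unit-power pilots)
    F = np.fft.fft(np.eye(N))[:, :L]        # DFT columns mapping taps to subcarriers
    R_hh = F @ np.diag(pdp) @ F.conj().T    # channel covariance in frequency domain
    H_mmse = R_hh @ np.linalg.solve(R_hh + sigma2 * np.eye(N), H_ls)

    mse = lambda e: np.mean(np.abs(e) ** 2)
    print(f"LS MSE:   {mse(H_ls - H):.4f}")
    print(f"MMSE MSE: {mse(H_mmse - H):.4f}")

This also shows why MMSE is costlier: it forms and solves an N x N covariance system, whereas LS is a single element-wise division, which is the motivation for the DFT/DCT-based simplifications the paper studies.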
APA, Harvard, Vancouver, ISO, and other styles
35

"Application of Fuzzy Logic to Cognitive Wireless Communications." International Journal of Recent Technology and Engineering 8, no. 3 (September 30, 2019): 2228–34. http://dx.doi.org/10.35940/ijrte.b2065.098319.

Full text
Abstract:
This paper mainly studies the data transmission rate (DTR) achievable through the application of fuzzy logic in modern cognitive wireless communication, improving the use of the radio frequency spectrum and the degree of intelligence of network and subscriber equipment. In this regard, methods of cryptographic protection of information with a public key are convenient in that they do not require an additional communication channel for exchanging a private key between the sender and the recipient. However, they often rely on complex mathematical calculations and are usually much less efficient than cryptosystems with a symmetric key. In this article, we focus on the implementation of fuzzy logic methods for asymmetric encryption keys. Fuzzy logic, in this case, is a problem-solving methodology for data transfer that can find application in various systems. The article considers an encryption method using the theory of fuzzy sets, the technology of constructing cognitive wireless data transmission systems (WDTS), and the use of fuzzy logic and fuzzy controllers in systems where the demand for high-quality transmission and increasing data transmission speed is growing.
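To make the role of a fuzzy controller in a cognitive WDTS concrete, here is a minimal Mamdani-style sketch that maps channel measurements to a data-transmission-rate adjustment. The membership functions, rule base and input ranges are illustrative assumptions, not taken from the paper:

    def tri(x, a, b, c):
        """Triangular membership function with corners a <= b <= c."""
        return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

    def rate_adjustment(snr_db, interference):
        # Fuzzify the inputs (SNR in dB, interference level in [0, 1])
        snr_low, snr_high = tri(snr_db, -5, 0, 12), tri(snr_db, 8, 20, 35)
        intf_low, intf_high = tri(interference, -0.1, 0.0, 0.5), tri(interference, 0.4, 1.0, 1.1)

        # Rule base: good channel -> raise the rate, bad channel -> lower it
        increase = min(snr_high, intf_low)
        decrease = max(snr_low, intf_high)

        # Defuzzify with a weighted average of rule outputs (+1 = raise, -1 = lower)
        total = increase + decrease
        return 0.0 if total == 0 else (increase - decrease) / total

    print(rate_adjustment(snr_db=25.0, interference=0.1))   # close to +1: raise DTR
    print(rate_adjustment(snr_db=2.0, interference=0.8))    # close to -1: lower DTR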
APA, Harvard, Vancouver, ISO, and other styles
36

Ge, Jungang, Ying-Chang Liang, Zhidong Bai, and Guangming Pan. "Large-dimensional random matrix theory and its applications in deep learning and wireless communications." Random Matrices: Theory and Applications, June 18, 2021, 2230001. http://dx.doi.org/10.1142/s2010326322300017.

Full text
Abstract:
Large-dimensional (LD) random matrix theory, RMT for short, which originates from the research field of quantum physics, has shown tremendous capability in providing deep insights into large-dimensional systems. Given that we have entered an unprecedented era full of massive amounts of data and large complex systems, RMT is expected to play more important roles in the analysis and design of modern systems. In this paper, we review the key results of RMT and its applications in two emerging fields: wireless communications and deep learning. In wireless communications, we show that RMT can be exploited to design spectrum sensing algorithms for cognitive radio systems and to perform the design and asymptotic analysis of large communication systems. In deep learning, RMT can be utilized to analyze the Hessian, the input–output Jacobian and the data covariance matrix of deep neural networks, and thereby to understand and improve the convergence and learning speed of the neural networks. Finally, we highlight some challenges and opportunities in applying RMT to practical large-dimensional systems.
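One of the spectrum-sensing applications the survey covers can be sketched in a few lines: under noise alone, the eigenvalues of the receive sample covariance matrix follow the Marchenko-Pastur law, so the ratio of the largest to the smallest eigenvalue stays near a known bound, while a primary-user signal inflates it. The asymptotic threshold below ignores finite-sample (Tracy-Widom) corrections and is an illustrative assumption, not the survey's exact detector:

    import numpy as np

    rng = np.random.default_rng(1)
    M, N = 8, 2000                         # receive antennas, samples

    def max_min_eig_ratio(Y):
        R = Y @ Y.conj().T / Y.shape[1]    # sample covariance matrix
        eig = np.linalg.eigvalsh(R)        # ascending eigenvalues
        return eig[-1] / eig[0]

    # Marchenko-Pastur support edges give the asymptotic noise-only ratio
    c = M / N
    threshold = (1 + np.sqrt(c)) ** 2 / (1 - np.sqrt(c)) ** 2

    noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
    signal = np.outer(rng.standard_normal(M), rng.standard_normal(N))  # rank-1 primary user

    print("noise only :", max_min_eig_ratio(noise), "vs threshold", threshold)
    print("with signal:", max_min_eig_ratio(0.3 * signal + noise), "vs threshold", threshold)

The noise-only ratio hovers near the threshold, while the rank-1 signal pushes the largest eigenvalue, and hence the ratio, well above it.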
APA, Harvard, Vancouver, ISO, and other styles
37

"A Novel Multiuser Detection and Channel Estimation Method for Uplink Communication in MIMO-OFDM IDMA System." International Journal of Innovative Technology and Exploring Engineering 8, no. 12S (December 26, 2019): 934–38. http://dx.doi.org/10.35940/ijitee.l1210.10812s19.

Full text
Abstract:
Demand for wireless cellular communication systems has grown rapidly, necessitating high-speed and reliable communication for various multimedia applications. Several technologies, such as 1G, 2G, 3G and 4G, have been introduced to meet the desired communication requirements, but these techniques suffer from multiple-access and interference issues. Hence, in this work we focus on these challenges to improve cellular communication performance. The main objective of this work is to develop a novel approach for multiuser detection and to introduce a new architecture for channel estimation. Several techniques exist to achieve these objectives, but computational complexity and reliable performance remain issues; moreover, interference remains a challenge. Hence, in order to overcome these issues, we present a joint approach to multiuser detection in which we compute the LLRs, find the Gaussian distribution, and measure the mean and variance of this distribution. Similarly, in the next phase, channel estimation is performed with the help of the ESE and DEC. The performance of the proposed approach is measured in terms of BER for varied simulation configurations. The experimental analysis reports a significant improvement in the performance of the MIMO-OFDM IDMA system.
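The Gaussian-approximation step behind the LLR computation the abstract mentions can be sketched in a few lines. In IDMA-style elementary signal estimation (ESE), each user's interference-plus-noise is summarized only by its mean and variance, from which a per-chip LLR follows; the two-user BPSK setup and all parameter values below are illustrative assumptions, not the paper's exact system:

    import numpy as np

    rng = np.random.default_rng(2)
    n, sigma2 = 10000, 0.5
    x1 = rng.choice([-1.0, 1.0], n)        # user 1 chips (BPSK)
    x2 = rng.choice([-1.0, 1.0], n)        # user 2 chips (interference)
    h1, h2 = 1.0, 0.8
    y = h1 * x1 + h2 * x2 + np.sqrt(sigma2) * rng.standard_normal(n)

    # Model user 2 plus noise as Gaussian: mean 0 (no prior on x2),
    # variance h2^2 * var(x2) + sigma2
    mean_itf = 0.0
    var_itf = h2 ** 2 * 1.0 + sigma2

    llr_x1 = 2 * h1 * (y - mean_itf) / var_itf   # LLR of user 1's chips

    ber = np.mean((llr_x1 > 0) != (x1 > 0))
    print(f"user 1 BER from first-pass ESE LLRs: {ber:.3f}")

In a full IDMA receiver these LLRs would be deinterleaved, refined by the decoder (DEC), and fed back so the ESE can subtract an updated interference mean and variance on each iteration.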
APA, Harvard, Vancouver, ISO, and other styles
38

Hamid, Kamal, and Nadim Chahine. "Fault Tree Analysis for a Modern Communication System." Pakistan Journal of Engineering, Technology & Science 7, no. 1 (June 30, 2017). http://dx.doi.org/10.22555/pjets.v7i1.2087.

Full text
Abstract:
Wireless communications have become one of the most widespread means of transferring information. Speed and reliability in transferring information are among the most important requirements of communication systems in general. Moreover, quality and reliability are the most important criteria of a system's efficiency in performing the task it is designed to do and of its ability to perform satisfactorily for a certain period of time. Therefore, fault tree analysis is needed in these systems in order to determine how to detect an error or defect when it occurs in a communication system and what conditions can cause such an error. This research studies the TETRA system components and the physical layer in theory and practice, as well as fault tree analysis in this system, and then draws on this study to propose improvements to the structure of the system, which led to an improved gain in the link budget. A simulation and test have been done using MATLAB, where the simulation results have shown that the constructed fault tree is able to detect faults in the system's operation at a rate of 82.4%.
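Quantitatively, a fault tree combines basic-event probabilities through AND/OR gates into a top-event probability. A minimal sketch assuming independent events; the events and failure probabilities are illustrative assumptions, not figures from the TETRA study:

    def p_or(*probs):
        """OR gate: at least one independent input event occurs."""
        q = 1.0
        for p in probs:
            q *= (1.0 - p)
        return 1.0 - q

    def p_and(*probs):
        """AND gate: all independent input events occur."""
        out = 1.0
        for p in probs:
            out *= p
        return out

    p_tx_amp = 0.02        # transmitter amplifier failure
    p_antenna = 0.01       # antenna/feeder fault
    p_main_psu = 0.05      # main power supply failure
    p_backup_psu = 0.10    # backup power supply failure

    # Power fails only if main AND backup fail; the link fails if any branch fails
    p_power = p_and(p_main_psu, p_backup_psu)
    p_top = p_or(p_tx_amp, p_antenna, p_power)
    print(f"top-event (link failure) probability: {p_top:.4f}")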
APA, Harvard, Vancouver, ISO, and other styles
39

Zhou, Jin. "Downlink channel estimation for millimeter wave communication combining low-rank and sparse structure characteristics." Annals of Telecommunications, September 10, 2020. http://dx.doi.org/10.1007/s12243-020-00802-2.

Full text
Abstract:
The acquisition of channel state information (CSI) is essential in millimeter wave (mmWave) multiple-input multiple-output (MIMO) systems. The mmWave channel exhibits sparse scattering characteristics and a meaningful low-rank structure, which can be employed simultaneously to reduce the complexity of channel estimation. Most existing works recover the low-rank structure of channels using nuclear norm theory. However, solving the nuclear norm-based convex problem often leads to a suboptimal solution of the rank minimization problem, thus degrading the accuracy of channel estimation. Previous contributions recover the channel using an over-complete dictionary, with the assumption that the mmWave channel can be sparsely represented under some dictionary; however, an over-complete dictionary may increase the computational complexity. To address these problems, we propose a channel estimation framework based on non-convex low-rank approximation and dictionary learning that explores the joint low-rank and sparse representations of wireless channels. We replace the widely used nuclear norm with a non-convex low-rank approximation method and design a dictionary learning algorithm based on channel feature classification employing a deep neural network (DNN). Our simulation results reveal that the proposed scheme outperforms the conventional dictionary learning algorithm, a Bayesian framework algorithm, and compressed sensing-based algorithms.
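The gap between the nuclear-norm surrogate and a non-convex alternative is easy to see on a synthetic low-rank channel. The sketch below contrasts soft singular-value thresholding (the convex, nuclear-norm route, which shrinks even the singular values it keeps) with simple rank truncation as a stand-in for a non-convex penalty. Dimensions, rank and noise level are illustrative assumptions, and the paper's actual algorithm and dictionary-learning stage are not reproduced here:

    import numpy as np

    rng = np.random.default_rng(3)
    Nr, Nt, r = 32, 32, 3                  # rx/tx antennas, number of dominant paths
    H = rng.standard_normal((Nr, r)) @ rng.standard_normal((r, Nt))  # rank-r channel
    Y = H + 0.5 * rng.standard_normal((Nr, Nt))                      # noisy observation

    U, s, Vt = np.linalg.svd(Y, full_matrices=False)

    # Convex surrogate: soft-threshold every singular value
    tau = 8.0
    H_soft = (U * np.maximum(s - tau, 0.0)) @ Vt

    # Non-convex surrogate: keep the r largest singular values unshrunk
    s_hard = np.where(np.arange(len(s)) < r, s, 0.0)
    H_hard = (U * s_hard) @ Vt

    err = lambda X: np.linalg.norm(X - H) / np.linalg.norm(H)
    print(f"soft (nuclear-norm) relative error: {err(H_soft):.3f}")
    print(f"hard (non-convex)   relative error: {err(H_hard):.3f}")

The shrinkage bias of the soft threshold is exactly the suboptimality the abstract attributes to nuclear-norm relaxations.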
APA, Harvard, Vancouver, ISO, and other styles
40

"An Effective and Active Bandwidth Distribution in Networked Control Systems." International Journal of Engineering and Advanced Technology 9, no. 4 (April 30, 2020): 2150–55. http://dx.doi.org/10.35940/ijeat.d8983.049420.

Full text
Abstract:
A Networked Control System (NCS) is composed of physically distributed smart devices that can observe their surroundings, act on them, and communicate with one another by means of a communication network to attain a common purpose. Characteristic examples that fall into this category are Wireless Sensor and Actuator Networks (WSANs) for environmental analysis and monitoring, multi-vehicle systems for coordinated exploration, camera systems for surveillance, multi-camera coordinated motion capture, smart grids for energy distribution and management, and so forth. NCSs differ from more traditional control systems because of their interdisciplinary nature, which requires the combination of control theory, communications, computer science and software engineering. Plenty of communication modes are available, from telephone lines, cellular networks and satellite networks to the most widely used, the internet. The choice of network depends upon the application to be served. The internet is the most suitable and inexpensive choice for many applications where the plant and the controller are far from each other. The difficulties lie in the design of control systems that are robust to communication parameters such as bandwidth, random delay and packet loss, to computational constraints arising from the tremendous amount of data to be handled or from the shared nature of sensing and control, to real-time execution on limited resources, and to the unpredictability of the huge number of unreliable agents present. With the limited amount of bandwidth available, it is best to use it optimally and efficiently. This further raises the priority-decision problem of controlling a series of actuators for a series of tasks. The proposed methodology is broadly developed in two distinct directions. The first direction aims at a control-theoretical analysis that treats the network as a given parameter, for example using special controllers and altering the sampling rate. The second direction aims at the design of new communication network infrastructures, algorithms or protocols, such as static and dynamic message scheduling algorithms. This method combines both directions and relies on well-recognized results in both communication networks and control theory.
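As a toy illustration of the dynamic scheduling direction, the sketch below splits a fixed message budget across control loops in proportion to each loop's current error, subject to a minimum rate. The loop names, error values and budget are illustrative assumptions, not taken from the paper:

    def allocate_rates(errors, total_rate, min_rate=1.0):
        """Split total_rate (msgs/s) across loops in proportion to |error|."""
        n = len(errors)
        spare = max(total_rate - min_rate * n, 0.0)
        weight = sum(abs(e) for e in errors.values()) or 1.0
        return {loop: min_rate + spare * abs(e) / weight for loop, e in errors.items()}

    errors = {"temperature": 0.2, "pressure": 1.5, "flow": 0.6}
    rates = allocate_rates(errors, total_rate=50.0)
    for loop, rate in rates.items():
        print(f"{loop:12s} -> {rate:5.1f} msgs/s")

A static schedule would fix these rates at design time; the dynamic version above re-runs the allocation each cycle so that loops far from their setpoints get more of the scarce bandwidth.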
APA, Harvard, Vancouver, ISO, and other styles
41

Malyshev, Alexander, and Evgenii Burgov. "Revisiting Parameters of Bioinspired Behavior Models in Group Foraging Modeling." SPIIRAS Proceedings, February 7, 2020, 79–103. http://dx.doi.org/10.15622/sp.2020.19.1.3.

Full text
Abstract:
Using bioinspired models and methods is one approach to solving swarm robotics tasks. This paper considers one such task, the modeling of foraging, and its solution by creating analogues of the social structures of ants and models of feeding behavior. The most important characteristics of ant colonies for modeling were defined: the number of individuals in the society and its structure, the workers' speed, the communication distance and the size of the working area. In addition, the existing experimental facilities (a group of robots and a test polygon) were assessed for use as a hardware platform for experiments. Several models of feeding behavior were considered: a model without differentiation of forager functions and a model with differentiation into active and passive foragers. Active foragers look for resources by themselves and then recruit passive foragers; passive foragers stay at the base while they are not involved in harvesting. A set of finite state machines describes the behavior of the agents: basic automata (providing basic behavior functions) and a meta-automaton that switches execution between the basic automata under certain conditions. Basic movements were tested on the experimental facilities. A comprehensive test of the models was conducted in the simulation program Kvorum, in which an analogue of the real polygon was built. The modeling consists of series of experiments for every model in which the agents must harvest resources; the series differ from each other in the number of agents. For estimating the models' quality, the ratio of received energy to average harvesting time was used. The experiments established that the model with function differentiation works more effectively.
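The basic-automaton/meta-automaton idea can be illustrated with a minimal finite state machine for one active forager. The states, transition flags and random environment below are illustrative assumptions, not the parameters calibrated in the paper:

    import random

    random.seed(4)

    TRANSITIONS = {
        # state: [(condition flag, next state)]
        "search":  [("found_resource", "recruit")],
        "recruit": [("passive_joined", "harvest")],
        "harvest": [("resource_empty", "return"), ("load_full", "return")],
        "return":  [("at_base", "search")],
    }

    def step(state, env):
        # Meta-automaton: switch to the first basic automaton whose condition holds
        for flag, nxt in TRANSITIONS[state]:
            if env.get(flag):
                return nxt
        return state                      # no condition met: stay in current state

    state = "search"
    for t in range(6):
        env = {flag: random.random() < 0.5
               for flags in TRANSITIONS.values() for flag, _ in flags}
        state = step(state, env)
        print(f"t={t}: state={state}")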
APA, Harvard, Vancouver, ISO, and other styles
42

Chesher, Chris. "Mining Robotics and Media Change." M/C Journal 16, no. 2 (March 8, 2013). http://dx.doi.org/10.5204/mcj.626.

Full text
Abstract:
Introduction Almost all industries in Australia today have adopted digital media in some way. However, uses in large scale activities such as mining may seem to be different from others. This article looks at mining practices with a media studies approach, and concludes that, just as in many other industries, mining and media have converged. Many Australian mine sites are adopting new media for communication and control: to manage communication, explore for ore bodies, simulate forces, automate drilling, keep records, and make transport and command robotic. Beyond sharing similar digital devices for communication and computation, new media in mining employ characteristic digital media operations, such as numerical operation, automation and managed variability. This article examines the implications of finding that some of the most material practices have become mediated by new media. Mining has become increasingly mediated through new media technologies such as GPS, visualisation and game-style remote operation, similar to those adopted in consumer home and mobile digital media. The growing and diversified adoption of digital media championed by companies like Rio Tinto aims not only to ‘improve’ mining, but to change it. Through remediating practices of digital mining, new media have become integral, powerful tools in prospective, real-time and analytical environments. This paper draws on two well-known case studies of mines in the Pilbara and Western NSW. These have been documented in press releases and media reports as representing changes in media and mining. First, the West Angelas mine in the Pilbara is an open-cut iron ore mine introducing automation and remote operation. This mine is located in the remote Pilbara, and is notable for being operated remotely from a control centre 2000km away, near Perth Airport, WA. A growing fleet of Komatsu 930E haul trucks, which can drive autonomously, traverses the site. Fitted with radars, lasers and GPS, these enormous vehicles navigate through the open pit mine with no direct human control. Introducing these innovations to mine sites became more viable after iron ore mining became increasingly profitable in the mid-2000s. A boom in steel building in China drove unprecedented demand. This growing income coincided with a change in public rhetoric from companies like Rio Tinto. They pointed towards substantial investments in research, infrastructure, and accelerated introduction of new media technologies into mining practices. Rio Tinto trademarked the term ‘Mine of the future’ (US Federal News Service 1), and publicised their ambitious project for the renewal of mining practice, including digital media. More recently, prices have been more volatile. The second case study site is a copper and gold underground mine at Northparkes in Western NSW. Northparkes uses substantial sensing and control, as well as hybrid autonomous and remote operated vehicles. The use of digital media extends from prospecting through to the logistics of transportation. Engineers place explosives in optimal positions using computer modelling of the underground rock formations. They make heavy use of software to coordinate layer-by-layer use of explosives in this advanced ‘box cut’ mine. After explosives disrupt the rock layer a kilometre underground, another specialised vehicle collects and carries the ore to the surface. The Sandvik loader-hauler-dumper (LHD) can be driven conventionally by a driver, but it can also travel autonomously in and out of the mine without a direct operator.
Once it reaches a collection point, where the broken-up ore has accumulated, a user on the surface can change the media mode to telepresence. The human operator then takes control using something like a games controller and multiple screens. The remote operator controls the LHD to fill the scoop with ore. The fully-loaded LHD backs up, and returns autonomously using laser sensors to follow a trail to the next drop-off point. The LHD has become a powerful mediator, reconfiguring technical, material and social practices throughout the mine. The Meanings of Mining and Media Are Converging Until recently, mining and media typically operated ontologically separately. The media, such as newspapers and television, often tell stories about mining, following regular narrative scripts. There are controversies and conflicts, narratives of ecological crises, and the economics of national benefit. There are heroic and tragic stories such as the Beaconsfield mine collapse (Clark). There are new industry policies (Middelbeek), which are politically fraught because of the lobbying power of miners. Almost completely separately, workers in mines were consumers of media, from news to entertainment. These media practices, while important in their own right, tell nothing of the approaching changes in many other sectors of work and everyday life. It is somewhat unusual for a media studies scholar to study mine sites. Mine sites are most commonly studied by Engineering (Bellamy & Pravica), Business and labour and cultural histories (McDonald, Mayes & Pini). Until recently, media scholarship on mining has related to media institutions, such as newspapers, broadcasters and websites, and their audiences. As digital media have proliferated, the phenomena that can be considered as media phenomena have changed. This article, pointing to the growing roles of media technologies, observes the growing importance that media, in these terms, have in the rapidly changing domain of mining. Another meaning for ‘media’ studies, from cybernetics, is that a medium is any technology that translates perception, makes interpretations, and performs expressions. This meaning is more abstract, operating with a broader definition of media — not only those institutionalised as newspapers or radio stations. It is well known that computer-based media have become ubiquitous in culture. This is true in particular within the mining company’s higher ranks. Rio Tinto’s ambitious 2010 ‘Mine of the Future’ (Fisher & Schnittger, 2) program was premised on an awareness that engineers, middle managers and senior staff were already highly computer literate. It is worth remembering that such competency was relatively uncommon until the late 1980s. The meanings of digital media have been shifting for many years, as computers become experienced more as everyday personal artefacts, and less as remote information systems. Their value has always been held with some ambivalence. Zuboff’s (387-414) picture of loss, intimidation and resistance to new information technologies in the 1980s seems to have dissipated by 2011. More than simply being accepted begrudgingly, the PC platform (and variants) has become a ubiquitous platform, a lingua franca for information workers. It became an intimate companion for many professions, and in many homes. It was an inexpensive, versatile and generalised convergent medium for communication and control. And yet, writers such as Gregg observe, the flexibility of networked digital work imposes upon many workers ‘unlimited work’.
The boundaries of the office wall break down, for better or worse. Emails, utilities and other work-related behaviours increasingly encroach onto domestic and public space and time. Its very attractiveness to users has tied them to these artefacts. The trail that leads the media studies discipline down the digital mine shaft has been cleared by recent work in media archaeology (Parikka), platform studies (Middelbeek; Montfort & Bogost; Maher) and new media (Manovich). Each of these redefined Media Studies practices addresses the need to diversify the field’s attention and methods. It must look at more specific, less conventional and more complex media formations. Mobile media and games (both computer-based) have turned out to be quite different from traditional media (Hjorth; Goggin). Kirschenbaum’s literary study of hard drives and digital fiction moves from materiality to aesthetics. In my study of digital mining, I present a reconfigured media studies, after the authors, that reveals heterogeneous media configurations, deserving new attention to materiality. This article also draws from the actor network theory approach and terminology (Latour). The uses of media / control / communications in the mining industry are very complex, and remain under constant development. Media such as robotics, computer modelling, remote operation and so on are bound together into complex practices. Each mine site is different — geologically, politically, and economically. Mines are subject to local and remote disasters. Mine tunnels and global prices can collapse, rendering active sites uneconomical overnight. Many technologies are still under development — including Northparkes and West Angelas. Both these sites are notable for their significant use of autonomous vehicles and remote operated vehicles. There is no doubt that the digital technologies modulate all manner of the mining processes: from rocks and mechanical devices to human actors. Each of these actors presents different forms of collusion and opposition. Within a mining operation, the budgets for computerised and even robotic systems are relatively modest for their expected return. Deep in a mine, we can still see media convergence at work. Convergence refers to processes whereby previously diverse practices in media have taken on similar devices and techniques. While high-end PCs in mining, running simulators, control data systems, visualisation, telepresence and so on, may be high-performance, ruggedised devices, they still share a common platform with the desktop PC. Conceptual resources developed in Media Ecology, New Media Studies, and the Digital Humanities can now inform readings of mining practices, even if their applications differ dramatically in size, reliability and cost. It is not entirely surprising that some observations by new media theorists about entertainment and media applications can also relate to features of mining technologies. Manovich argues that numerical representation is a distinctive feature of new media. Numbers have always already been key to mining engineering. However, computers visualise numerical fields in simulations that extend out of the minds of the calculators, and into visual and even haptic spaces. Specialists in geology, explosives, mechanical apparatuses, and so on, can use platforms that are common to everyday media.
As the significance of numbers is extended by computers in the field, more and more diverse sources of data provide apparently consistent and seamless images of multiple fields of knowledge. Another feature that Manovich identifies in new media is the capacity for automation of media operations. Automation of many processes in mechanical domains clearly occurred long before industrial technologies were ported into new media. The difference with new media in mine sites is that robotic systems must vary their performance according to feedback from their extra-system environments. For our purposes, the haul trucks in WA are software-controlled devices that already qualify as robots. They sense, interpret and act in the world based on their surroundings. They evaluate multiple factors, including the sensors, GPS signals, operator instructions and so on. They can repeat the path, by sensing the differences, day after day, even if the weather changes, the track wears away or the instructions from base change. Automation compensates for differences within complex and changing environments. Automation of an open-pit mine haulage system… provides more consistent and efficient operation of mining equipment, it removes workers from potential danger, it reduces fuel consumption significantly reducing greenhouse gas (GHG) emissions, and it can help optimize vehicle repairs and equipment replacement because of more-predictable and better-controlled maintenance. (Parreira and Meech 1-13) Material components in physical mines tend to become modular and variable, as their physical shape lines up with the logic of another of Manovich’s new media themes, variability. Automatic systems also make human drivers obsolete; drivers previously handled those environmental variations, for better or for worse, through the dangerous, dull and dirty spaces of the mine. Drivers’ capacity to control repeat trips is no longer needed. The Komatsu driverless truck, introduced to the WA iron ore mines from 2008, proved itself to be almost as quick as human drivers at many tasks. But the driverless trucks have deeper advantages: they can run 23 hours each day with no shift breaks; they drive more cautiously and wear the equipment less than human drivers. There is no need to put workers and their families up in town. The benefit most often mentioned is safety: even the worst accident won’t produce injuries to drivers. The other advantage less mentioned is that autonomous trucks don’t strike. Meanwhile, managers of human labour also need to adopt certain strategies of modulation to support the needs and expectations of their workers. Mobile phones, televisions and radio are popular modes of connecting workers to their loved ones, particularly in the remote and harsh West Angelas site. One solution — regular fly-in fly-out shifts — tends also to be alienating for workers and locals (Cheshire; Storey; Tonts). As with any operations, the cost of maintaining a safe and comfortable environment for workers requires trade-offs. Companies face risks from mobile phones, leaking computer networks, and espionage that expose the site to security breaches. Because of such risks, miners tend to be subject to disciplinary regimes. It is common to test alcohol and drug levels. There was some resistance from workers, who refused to change to saliva testing from urine testing (Latimer). Contesting these machines places the medium, in a different sense, at the centre of regulation of the workers’ bodies.
In Northparkes, the solution of hybrid autonomous and remote operation is also a solution for modulating labour. It is safer and more comfortable, while also being more efficient, as one experienced driver can control three trucks at a time. This more complex mode of mediation is necessary because underground mines are more complex in geology and working environment than suits full autonomy. These variations provide different relationships between operators and machines. The operator uses a games controller, and watches four video views from the cabin to make the vehicle fill the bucket with ore (Northparkes Mines, 9). Again, media have become a pivotal element in the mining assemblage. This combines the safety and comfort of autonomous operation (helping to retain staff) with the required use of human sensorimotor dexterity. Mine systems deserve attention from media studies because sites are combining large scale physical complexity with increasingly sophisticated computing. The conventional pictures of mining and media rarely address the specificity of subjective and artefactual encounters in and around mine sites. Any research on mining communication is typically within the instrumental frames of engineering (Duff et al.). Some of the developments in mechanical systems have contributed to efficiency and safety of many mines: larger trucks, more rock crushers, and so on. However, the single most powerful influence on mining has been adopting digital media to control and integrate mining systems. Rio Tinto’s transformative agenda is outlined in its high profile ‘Mine of the Future’ document (US Federal News Service). The media to which I refer are not only those in popular culture, but also those with digital control and communications systems used internally within mines and supply chains. The global mining industry began adopting digital communication automation (somewhat) systematically only in the 1980s. Mining companies hesitated to adopt digital media because the fundamentals of mining are so risky and bound to standard procedures. Large scale material operations, extracting and processing minerals from under the ground, are hardly an appropriate space for delicate digital electronics. Mining is also exposed to volatile economic conditions, so investing in anything major can be unattractive. High technology perhaps contradicts an industry ethos of risk-taking and masculinity. Digital media became domesticated, and familiar to a new generation of formally educated engineers for whom databases and algorithms (Manovich) were second nature. Digital systems become simultaneously controllers of objects, and mediators of meanings and relationships. They control movements, and express communications. Computers slide from using meanings to invoking direct actions over objects in the world. Even on an everyday scale, computer operations often control physical processes. Anti-lock Braking Systems regulate a vehicle’s braking pressure to avoid the danger of wheels locking up. Another example is the ATM, which involves both symbolic interactions and the exchange of physical objects. These operations are examples of the ‘asignifying semiotic’ (Guattari), in which meanings and non-meanings interact. There is no essential operational distinction between media and non-media digital operations; which are symbolic, attached or non-consequential is not clear. This trend towards using computation for both meanings and actions has accelerated since 2000.
Mines of the Future Beyond a relatively standard set of office and communications software, many fields, including mining, have adopted specialised packages for their domains. In 3D design, it is AutoCAD. In hard sciences, it is custom modelling. In audiovisual production, it may be Apple and Adobe products. Some practitioners define their subjectivity, professional identity and practices around these platforms. This platform orientation is apparent in areas of mining, with applications such as Gemcom and Rockware, Geological Database and Resource Estimation Modelling from Micromine, geology/mine design software from Runge and Minemap, and mine production data management software from Corvus. However, software is only a small proportion of overall costs in the industry. Agents in mining demand solutions to peculiar problems and requirements. They are bound by their enormous scale; the physical risks of environments with explosive and moving elements; the need to negotiate constant change, as mining literally takes the ground from under itself; the need to incorporate geological patterns; and the importance of logistics. When digital media are the solution, there can be what are perceived as rapid gains, including greater capacities for surveillance and control. Digital media do not provide more force. Instead, they modulate the direction, speed and timing of activities. This is not a complete solution, because too many uncontrolled elements are at play. Rather, there are moments and situations when the degree of control refigures the work that can be done. Conclusions In this article I have proposed a new conception of media change, by reading digital innovations in mining practices themselves as media changes. This involved developing an initial reading of the operations of mining as digital media. With this approach, the array of media components extends far beyond the conventional ‘mass media’ of newspapers and television. It offers a more molecular media environment which is increasingly heterogeneous. It sometimes involves materiality on a huge scale, and is sometimes apparently virtual. The mining media event can be a semiotic, a signal, a material entity and so on. It can be a command to a human. It can be a measurement of location, a rock formation, a pressure or an explosion. The mining media event, as discussed above, is subject to Manovich’s principles of media, being numerical, variable and automated. In the mining media event, these principles move from the aesthetic to the instrumental and physical domains of the mine site. The role of new media — operating at many levels, from the bottom of the mine site to the cruising altitude of the fly-in fly-out aeroplanes — has motivated significant changes in the Australian industry. When digital media and robotics come into play, they do not so much introduce change as reintroduce similarity. This inversion of media is less about meaning, and more about local mastery. Media modulation extends the kinds of influence that can be exerted by the actors in control. In these situations, the degrees of control, and of resistance, are yet to be seen. Acknowledgments Thanks to Mining IQ for a researcher's pass at the Mining Automation and Communication Conference, Perth, in August 2012. References Bellamy, D., and L. Pravica. “Assessing the Impact of Driverless Haul Trucks in Australian Surface Mining.” Resources Policy 2011. Cheshire, L. “A Corporate Responsibility?
The Constitution of Fly-In, Fly-Out Mining Companies as Governance Partners in Remote, Mine-Affected Localities.” Journal of Rural Studies 26.1 (2010): 12–20. Clark, N. “Todd and Brant Show PM Beaconsfield's Cage of Hell.” The Mercury, 6 Nov. 2008. Duff, E., C. Caris, A. Bonchis, K. Taylor, C. Gunn, and M. Adcock. “The Development of a Telerobotic Rock Breaker.” CSIRO 2009: 1–10. Fisher, B.S. and S. Schnittger. Autonomous and Remote Operation Technologies in the Mining Industry: Benefits and Costs. BAE Report 12.1 (2012). Goggin, G. Global Mobile Media. London: Routledge, 2010. Gregg, M. Work’s Intimacy. Cambridge: Polity, 2011. Guattari, F. Chaosmosis: An Ethico-Aesthetic Paradigm. Trans. Paul Bains and Julian Pefanis. Bloomington: Indiana UP, 1992. Hjorth, L. Mobile Media in the Asia-Pacific: Gender and the Art of Being Mobile. Taylor & Francis, 2008. Kirschenbaum, M.G. Mechanisms: New Media and the Forensic Imagination. Campridge, Mass.: MIT Press, 2008. Latimer, Cole. “Fair Work Appeal May Change Drug Testing on Site.” Mining Australia 2012. 3 May 2013 ‹http://www.miningaustralia.com.au/news/fair-work-appeal-may-change-drug-testing-on-site›. Latour, B. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press, 2007. Maher, J. The Future Was Here: The Commodore Amiga. Cambridge, Mass.: MIT Press, 2012. Manovich, Lev. The Language of New Media. Cambridge, Mass.: MIT Press, 2001. McDonald, P., R. Mayes, and B. Pini. “Mining Work, Family and Community: A Spatially-Oriented Approach to the Impact of the Ravensthorpe Nickel Mine Closure in Remote Australia.” Journal of Industrial Relations 2012. Middelbeek, E. “Australia Mining Tax Set to Slam Iron Ore Profits.” Metal Bulletin Weekly 2012. Montfort, N., and I. Bogost. Racing the Beam: The Atari Video Computer System. Cambridge, Mass.: MIT Press, 2009. Parikka, J. What Is Media Archaeology? London: Polity Press, 2012. Parreira, J., and J. Meech. “Autonomous vs Manual Haulage Trucks — How Mine Simulation Contributes to Future Haulage System Developments.” Paper presented at the CIM Meeting, Vancouver, 2010. 3 May 2013 ‹http://www.infomine.com/library/publications/docs/parreira2010.pdf›. Storey, K. “Fly-In/Fly-Out and Fly-Over: Mining and Regional Development in Western Australia.” Australian Geographer 32.2 (2010): 133–148. Storey, K. “Fly-In/Fly-Out: Implications for Community Sustainability.” Sustainability 2.5 (2010): 1161–1181. 3 May 2013 ‹http://www.mdpi.com/2071-1050/2/5/1161›. Takayama, L., W. Ju, and C. Nas. “Beyond Dirty, Dangerous and Dull: What Everyday People Think Robots Should Do.” Paper presented at HRI '08, Amsterdam, 2008. 3 May 2013 ‹http://www-cdr.stanford.edu/~wendyju/publications/hri114-takayama.pdf›. Tonts, M. “Labour Market Dynamics in Resource Dependent Regions: An Examination of the Western Australian Goldfields.” Geographical Research 48.2 (2010): 148-165. 3 May 2013 ‹http://onlinelibrary.wiley.com/doi/10.1111/j.1745-5871.2009.00624.x/abstract›. US Federal News Service, Including US State News. “USPTO Issues Trademark: Mine of the Future.” 31 Aug. 2011. Wu, S., H. Han, X. Liu, H. Wang, F. Xue. “Highly Effective Use of Australian Pilbara Blend Lump Ore in a Blast Furnace.” Revue de Métallurgie 107.5 (2010): 187-193. doi:10.1051/metal/2010021. Zuboff, S. In the Age of the Smart Machine: The Future of Work and Power. Heinemann Professional, 1988.
APA, Harvard, Vancouver, ISO, and other styles
43

Pedersen, Isabel, and Kirsten Ellison. "Startling Starts: Smart Contact Lenses and Technogenesis." M/C Journal 18, no. 5 (October 14, 2015). http://dx.doi.org/10.5204/mcj.1018.

Full text
Abstract:
On 17 January 2013, Wired chose the smart contact lens as one of “7 Massive Ideas That Could Change the World”, describing a Google-led research project. Wired explains that the inventor, Dr. Babak Parviz, wants to build a microsystem on a contact lens: “Using radios no wider than a few human hairs, he thinks these lenses can augment reality and incidentally eliminate the need for displays on phones, PCs, and widescreen TVs”. Explained further in other sources, the technology entails an antenna, circuits embedded into a contact lens, GPS, and an LED to project images on the eye, creating a virtual display (Solve for X). Wi-Fi would stream content through a transparent screen over the eye. One patent describes a camera embedded in the lens (Etherington). Another mentions medical sensing, such as glucose monitoring of tears (Goldman). In other words, Google proposes an imagined future when we use contact lenses to search the Internet (and be searched by it), shop online, communicate with friends, work, navigate maps, swipe through Tinder, monitor our health, watch television, and, by that time, probably engage in a host of activities not yet invented. Often referred to as a bionic contact, the smart contact lens would signal a weighty shift in the way we work, socialize, and frame our online identities. However, speculative discussion over this radical shift in personal computing rarely, if ever, includes consideration of how the body, acting as a host to digital information, will manage to assimilate not only significant affordances, but also significant constraints and vulnerabilities. At this point, for most people, the smart contact lens is just an idea. Is a new medium of communication started when it is launched in an advertising campaign? When we Like it on Facebook? If we chat about it during a party amongst friends? Or, do a critical mass of people actually have to be using it to say it has started? One might say that Apple’s Macintosh computer started as a media platform when the world heard about the famous 1984 television advertisement aired during the American NFL Super Bowl of that year. Directed by Ridley Scott, the ad entails an athlete running down a passageway and hurling a hammer at a massive screen depicting cold war style rulers expounding state propaganda. The screen explodes, freeing those imprisoned from their concentration camp existence. The direct reference to Orwell’s 1984 serves as a metaphor for IBM in 1984. PC users were made analogous to political prisoners and IBM served to represent the totalitarian government. The Mac became something that, at the time, challenged IBM, and suggested an alternative use for the desktop computer, which had previously been relegated to work rather than life. Not everyone bought a Mac, but the polemical ad fostered the idea that Mac was certainly the start of new expectations, civic identities, value-systems, and personal uses for computers. The smart contact lens is another startling start. News of it shocks us, initiates social media clicks and forwards, and instigates dialogue. But, it also indicates the start of a new media paradigm that is already undergoing popular adoption as it is announced in mainstream news and circulated algorithmically across media channels. Since 2008, news outlets like CNN, The New York Times, The Globe and Mail, Asian International News, United News of India, The Times of London and The Washington Post have carried it, feeding the buzz in circulation that Google intends.
Attached to the wave of current popular interest generated around any technology claiming to be “wearable,” a smart contact lens also seems surreptitious. We would no longer hold smartphones, but hide all of that digital functionality beneath our eyelids. Its emergence reveals the way commercial models have dramatically changed. The smart contact lens is a futuristic invention imagined for us and about us, but also a sensationalized idea socializing us to a future that includes it. It is also a real device that Parviz (with Google) has been inventing, promoting, and patenting for commercial applications. All of these workings speak to a broader digital culture phenomenon. We argue that the smart contact lens discloses a process of nascent posthuman adaptation, launched in an era that celebrates wearable media as simultaneously astonishing and banal. More specifically, we adopt technology based on our adaptation to it within our personal, political, medial, social, and biological contexts, which also function in a state of flux. N. Katherine Hayles writes that “Contemporary technogenesis, like evolution in general, is not about progress ... rather, contemporary technogenesis is about adaptation, the fit between organisms and their environments, recognizing that both sides of the engagement (human and technologies) are undergoing coordinated transformations” (81). This article attends to the idea that in these early stages, symbolic acts of adaptation signal an emergent medium through rhetorical processes that society both draws from and contributes to. In terms of project scope, this article contributes a focused analysis to a much larger ongoing digital rhetoric project. For the larger project, we conducted a discourse analysis on a collection of international publications concerning Babak Parviz and the invention. We searched for and collected newspaper stories, news broadcasts, YouTube videos from various sources, academic journal publications, inventors’ conference presentations, and advertising, all published between January 2008 and May 2014, generating a corpus of more than 600 relevant artifacts. Shortly after this time, Dr. Parviz, a Professor at the University of Washington, left the secretive GoogleX lab and joined Amazon.com (Mac). For this article we focus specifically on the idea of beginnings or genesis and how digital spaces increasingly serve as the grounds for emergent digital cultural phenomena that are rarely recognized as starting points. We searched through the corpus to identify a few exemplary international mainstream news stories to foreground predominant tropes in support of the claim we make that smart contacts lenses are a startling idea. Content producers deliberately use astonishment as a persuasive device. We characterize the idea of a smart contact lens cast in rhetorical terms in order to reveal how its allure works as a process of adaptation. Rhetorician and philosopher, Kenneth Burke writes that “rhetorical language is inducement to action (or to attitude)” (42). A rhetorical approach is instrumental because it offers a model to explain how we deploy, often times, manipulative meaning as senders and receivers while negotiating highly complex constellations of resources and contexts. Burke’s rhetorical theory can show how messages influence and become influenced by powerful hierarchies in discourse that seem transparent or neutral, ones that seem to fade into the background of our consciousness. 
For this article, we also concentrate on rhetorical devices such as ethos and the inventor’s own appeals through different modes of communication. Ethos was originally proposed by Aristotle to identify speaker credibility as a persuasive tactic. Addressed by scholars of rhetoric for centuries, ethos has been reconfigured by many critical theorists (Burke; Baumlin Ethos; Hyde). Baumlin and Baumlin suggest that “ethos describes an audience’s projection of authority and trustworthiness onto the speaker ... ethos suggests that the ethical appeal to be a radically psychological event situated in the mental processes of the audience – as belonging as much to the audience as to the actual character of a speaker” (Psychology 99). Discussed in the next section, our impression of Parviz and his position as inventor plays a dramatic role in the surfacing of the smart contact lens. Digital Rhetoric is an “emerging scholarly discipline concerned with the interpretation of computer-generated media as objects of study” (Losh 48). In an era when machine-learning algorithms become the messengers for our messages, which have become commodity items operating across globalized, capitalist networks, digital rhetoric provides a stable model for our approach. It leads us to demonstrate how this emergent medium and invention, the smart contact lens, is born amid new digital genres of speculative communication circulated in the everyday forums we engage on a daily basis. Smart Contact Lenses, Sensationalism, and Identity One relevant site for exploration into how an invention gains ethos is through writing or video penned or produced by the inventor. An article authored by Parviz in 2009 discusses his invention and the technical advancements that need to be made before the smart contact lens could work. He opens the article using a fictional and sensationalized analogy to encourage the adoption of his invention: The human eye is a perceptual powerhouse. It can see millions of colors, adjust easily to shifting light conditions, and transmit information to the brain at a rate exceeding that of a high-speed Internet connection. But why stop there? In the Terminator movies, Arnold Schwarzenegger’s character sees the world with data superimposed on his visual field—virtual captions that enhance the cyborg’s scan of a scene. In stories by the science fiction author Vernor Vinge, characters rely on electronic contact lenses, rather than smartphones or brain implants, for seamless access to information that appears right before their eyes. Identity building is made to correlate with smart contact lenses in a manner that frames them as exciting. Coming to terms with them often involves casting us as superhumans, wielding abilities that we do not currently possess. One reason for embellishment is that we do not need digital displays on the eyes, so the motive to use them must always be geared to transcending our assumed present condition as humans and society members. Consequently, imagination is used to justify a shift in human identity along a future trajectory. This passage above also instantiates a transformation from humanist to posthumanist posturing (i.e. “the cyborg”) in order to incent the adoption of smart contact lenses. It begins with the bold declarative statement, “The human eye is a perceptual powerhouse,” which is a comforting claim about our seemingly human superiority.
Indexing abstract humanist values, Parviz emphasizes skills we already possess, including seeing a plethora of colours, adjusting to light on the fly, and thinking fast, indeed faster than “a high-speed Internet connection.” However, the text goes on to summon the Terminator character and his optic feats from the franchise of films. Filmic cyborg characters fulfill the excitement that posthuman rhetoric often seems to demand, but there is more here than sensationalism. Parviz raises the issue of augmenting human vision using science fiction as his contextualizing vehicle because he lacks another way to imbricate the idea. Most interesting in this passage is the inventor’s query “But why stop there?”, which yokes the two claims, one biological (i.e., “The human eye is a perceptual powerhouse”) and one fictional (i.e., the Terminator and Vernor Vinge characters). The query suggests: why stop with human superiority when we may as well progress to the next level and embrace a smart contact lens, just as fictional cyborgs do? The non-threatening use of fiction makes the concept seem simultaneously exciting and banal, especially because the inventor follows with a clear description of the necessary scientific engineering in the rest of the article. This rhetorical act signifies the voice of a technoelite, a heavily funded cohort responding to global capitalist imperatives, armed with a team of technologists who can access technological advancements and imbue comments with an authority that may extend beyond their fields of expertise, such as communication studies, sociology, psychology, or medicine. The result is a powerful ethos. The idea behind the smart contact lens maintains a degree of respectability long before a public is invited to use it.
Parviz exhumes much cultural baggage when he brings to life the Terminator character to pitch smart contact lenses. The Terminator series of films has established the “Arnold Schwarzenegger” character as a cultural mainstay. Each new film reinvented him, but ultimately promoted him within a convincing dystopian future across the whole series: The Terminator (Cameron), Terminator 2: Judgment Day (Cameron), Terminator 3: Rise of the Machines (Mostow), Terminator Salvation (McG), and Terminator Genisys (Taylor), which appeared in 2015, after Parviz’s article. Recently, several writers have addressed how cyborg characters figure significantly in our cultural psyche (Haraway; Bukatman; Leaver). Tama Leaver’s Artificial Culture explores the way popular, contemporary, cinematic, science fiction depictions of embodied Artificial Intelligence, such as the Terminator cyborgs, “can act as a matrix which, rather than separating or demarcating minds and bodies or humanity and the digital, reinforce the symbiotic connection between people, bodies, and technologies” (31). Pointing out the violent and ultimately technophobic motive of the Terminator films, Leaver reads across them to conclude nevertheless that science fiction “proves an extremely fertile context in which to address the significance of representations of Artificial Intelligence” (63).
Posthumanism and Technogenesis
One reason this invention enters the public’s consciousness is its announcement alongside a host of other technologies, which seem like parts of a whole. We argue that this constant grouping of technologies in the news is one process indicative of technogenesis.
For example, City A.M., London’s largest free commuter daily newspaper, reports on the future of business technology as a hodgepodge of what-ifs:
As Facebook turns ten, and with Bill Gates stepping down as Microsoft chairman, it feels like something is drawing to an end. But if so, it is only the end of the technological revolution’s beginning ... Try to look ahead ten years from now and the future is dark. Not because it is bleak, but because the sheer profusion of potential is blinding. Smartphones are set to outnumber PCs within months. After just a few more years, there are likely to be 3bn in use across the planet. In ten years, who knows – wearables? smart contact lenses? implants? And that’s just the start. The Internet of Things is projected to be a $300bn (£183bn) industry by 2020. (Sidwell)
This reporting is a common means to frame the commodification of technology in globalized business news that seeks circulation as much as it does readership. But as a text, it also posits how individuals frame the future and their participation with it (Pedersen). Smart contact lenses appear to move along this exciting, unstoppable trajectory where the “potential is blinding.” The motive is to excite and scare. Simultaneously, however, the effect is predictable. We are quite accustomed to this march of innovations that appears every day in the morning paper. We are asked to adapt rather than question; consequently, we never separate the parts from the whole (e.g., “wearables? smart contact lenses? implants?”) in order to look at them critically.
In coming to terms with Cary Wolfe’s definition of posthumanism, Greg Pollock writes that posthumanism is the questioning that goes on “when we can no longer rely on ‘the human’ as an autonomous, rational being who provides an Archimedean point for knowing about the world (in contrast to ‘humanism,’ which uses such a figure to ground further claims)” (208). With similar intent, N. Katherine Hayles, in formulating the term technogenesis, suggests that we are not really progressing to another level of autonomous human existence when we adopt media; we are, in effect, adapting to media, and media are also in a process of adapting to us. She writes:
As digital media, including networked and programmable desktop stations, mobile devices, and other computational media embedded in the environment, become more pervasive, they push us in the direction of faster communication, more intense and varied information streams, more integration of humans and intelligent machines, and more interactions of language with code. These environmental changes have significant neurological consequences, many of which are now becoming evident in young people and to a lesser degree in almost everyone who interacts with digital media on a regular basis. (11)
Following Hayles, three actions or traits characterize adaptation in a manner germane to the technogenesis of media like smart contact lenses. The first is “media embedded in the environment.” The trait of embedding technology in the form of sensors and chips into external spaces evokes the foundations of the Internet of Things (IoT). Extensive data-gathering sensors, wireless technologies, and mobile and wearable components integrated with the Internet all contribute to the IoT. Emerging from cloud computing infrastructures and data models, the IoT, in its most extreme, involves a scenario whereby people, places, animals, and objects are given unique “embedded” identifiers so that they can embark on constant data transfer over a network.
In a sense, the lenses are adapted artifacts responding to a world that expects ubiquitous networked access for both humans and machines. Smart contact lenses will essentially be attached to the user, who must adapt to these dynamic and heavily mediated contexts.
Following closely on the first, the second point Hayles makes is the “integration of humans and intelligent machines.” The camera embedded in the smart contact lens, really an adapted smartphone camera, turns the eye itself into an image capture device. By incorporating them under the eyelids, smart contact lenses signify integration in complex ways, a human-machine amalgamation that follows biological, cognitive, and social contexts.
Third, Hayles points to “more interactions of language with code.” We assert that with smart contact lenses, code will eventually govern interaction between countless agents in accordance with other smart devices, such as: (1) exchanges of code between people and external nonhuman networks of actors through machine algorithms and massive amalgamations of big data distributed on the Internet; (2) exchanges of code amongst people, human social actors in direct communication with each other over social media; and (3) exchanges of coding and decoding between people and their own biological processes (e.g., monitoring breathing, consuming nutrients, translating brainwaves) and phenomenological (but no less material) practices (e.g., remembering, grieving, or celebrating). The allure of the smart contact lens is the quietly pressing proposition that communication models such as these will be radically transformed, because they will have to be adapted for use with the human eye as the method of input and output of information.
Focusing on genetic engineering, Eugene Thacker fittingly defines biomedia as “entail[ing] the informatic recontextualization of biological components and processes, for ends that may be medical or nonmedical (economic, technical) and with effects that are as much cultural, social, and political as they are scientific” (123). He specifies, “biomedia are not computers that simply work on or manipulate biological compounds. Rather, the aim is to provide the right conditions, such that biological life is able to demonstrate or express itself in a particular way” (123). Smart contact lenses sit on the cusp of emergence as a biomedia device that will enable us to decode bodily processes in significant new ways. The bold, technical discourse that announces it, however, has not yet begun to attend to the seemingly dramatic “cultural, social, and political” effects percolating under the surface. Through technogenesis, media acclimatize rapidly to change without establishing a logic of the consequences or a design plan for emergence.
Following from this, we should mention the risks this invention poses, such as the intrusion of surveillance algorithms deployed by corporations, governments, and other hegemonic entities. If smart contact lenses are biomedia devices inspiring us to decode bodily processes and communicate that data for analysis, for ourselves, and for others in our trust (e.g., doctors, family, friends), we also need to be wary of them. David Lyon warns:
Surveillance has spilled out of its old nation-state containers to become a feature of everyday life, at work, at home, at play, on the move. So far from the single all-seeing eye of Big Brother, myriad agencies now trace and track mundane activities for a plethora of purposes. Abstract data, now including video, biometric, and genetic as well as computerized administrative files, are manipulated to produce profiles and risk categories in a liquid, networked system. The point is to plan, predict, and prevent by classifying and assessing those profiles and risks. (13)
In simple terms, the smart contact lens might disclose the most intimate information we possess and leave us vulnerable to profiling, tracking, and theft. Irma van der Ploeg presupposed this predicament when she wrote: “The capacity of certain technologies to change the boundary, not just between what is public and private information but, on top of that, between what is inside and outside the human body, appears to leave our normative concepts wanting” (71). The smart contact lens, with its implied motive to encode and disclose internal bodily information, needs consideration on many levels.
Conclusion
The smart contact lens has made a digital beginning. We accept it through the mass consumption of the idea, which acts as a rhetorical motivator for media adoption, taking place long before the device materializes in the marketplace. This occurrence may also be a sign of our “posthuman predicament” (Braidotti). We have argued that the smart contact lens concept reveals our posthuman adaptation to media rather than our reasoned acceptance of or agreement with it as a logical proposition. By the time we actually squabble over the price, express fears for our privacy, and buy them, smart contact lenses will long be part of our everyday culture.
References
Baumlin, James S., and Tita F. Baumlin. “On the Psychology of the Pisteis: Mapping the Terrains of Mind and Rhetoric.” Ethos: New Essays in Rhetorical and Critical Theory. Eds. James S. Baumlin and Tita F. Baumlin. Dallas: Southern Methodist University Press, 1994. 91-112.
Baumlin, James S., and Tita F. Baumlin, eds. Ethos: New Essays in Rhetorical and Critical Theory. Dallas: Southern Methodist University Press, 1994.
Bilton, Nick. “A Rose-Colored View May Come Standard.” The New York Times, 4 Apr. 2012.
Braidotti, Rosi. The Posthuman. Cambridge: Polity, 2013.
Bukatman, Scott. Terminal Identity: The Virtual Subject in Postmodern Science Fiction. Durham: Duke University Press, 1993.
Burke, Kenneth. A Rhetoric of Motives. Berkeley: University of California Press, 1950.
Cameron, James, dir. The Terminator. Orion Pictures, 1984. DVD.
Cameron, James, dir. Terminator 2: Judgment Day. Artisan Home Entertainment, 2003. DVD.
Etherington, Darrell. “Google Patents Tiny Cameras Embedded in Contact Lenses.” TechCrunch, 14 Apr. 2014.
Goldman, David. “Google to Make Smart Contact Lenses.” CNN Money, 17 Jan. 2014.
Haraway, Donna. Simians, Cyborgs and Women: The Reinvention of Nature. London: Free Association Books, 1991.
Hayles, N. Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: University of Chicago Press, 2012.
Hyde, Michael. The Ethos of Rhetoric. Columbia: University of South Carolina Press, 2004.
Leaver, Tama. Artificial Culture: Identity, Technology, and Bodies. New York: Routledge, 2012.
Losh, Elizabeth. Virtualpolitik: An Electronic History of Government Media-Making in a Time of War, Scandal, Disaster, Miscommunication, and Mistakes. Boston: MIT Press, 2009.
Lyon, David, ed. Surveillance as Social Sorting: Privacy, Risk and Digital Discrimination. New York: Routledge, 2003.
Mac, Ryan. “Amazon Lures Google Glass Creator Following Phone Launch.” Forbes.com, 14 July 2014.
McG, dir. Terminator Salvation. Warner Brothers, 2009. DVD.
Mostow, Jonathan, dir. Terminator 3: Rise of the Machines. Warner Brothers, 2003. DVD.
Parviz, Babak A. “Augmented Reality in a Contact Lens.” IEEE Spectrum, 1 Sep. 2009.
Pedersen, Isabel. Ready to Wear: A Rhetoric of Wearable Computers and Reality-Shifting Media. Anderson, South Carolina: Parlor Press, 2013.
Pollock, Greg. “What Is Posthumanism? by Cary Wolfe (2009).” Rev. of What Is Posthumanism?, by Cary Wolfe. Journal for Critical Animal Studies 9.1/2 (2011): 235-241.
Sidwell, Marc. “The Long View: Bill Gates Is Gone and the Dot-com Era Is Over: It’s Only the End of the Beginning.” City A.M., 7 Feb. 2014.
“Solve for X: Babak Parviz on Building Microsystems on the Eye.” YouTube, 7 Feb. 2012.
Taylor, Alan, dir. Terminator Genisys. Paramount Pictures, 2015. DVD.
Thacker, Eugene. “Biomedia.” Critical Terms for Media Studies. Eds. W.J.T. Mitchell and Mark Hansen. Chicago: University of Chicago Press, 2010. 117-130.
Van der Ploeg, Irma. “Biometrics and the Body as Information.” Surveillance as Social Sorting: Privacy, Risk and Digital Discrimination. Ed. David Lyon. New York: Routledge, 2003. 57-73.
Wired Staff. “7 Massive Ideas That Could Change the World.” Wired.com, 17 Jan. 2013.