
Journal articles on the topic 'Applicative level packets processing'


Consult the top 50 journal articles for your research on the topic 'Applicative level packets processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Fais, Alessandra, Giuseppe Lettieri, Gregorio Procissi, Stefano Giordano, and Francesco Oppedisano. "Data Stream Processing for Packet-Level Analytics." Sensors 21, no. 5 (March 3, 2021): 1735. http://dx.doi.org/10.3390/s21051735.

Abstract:
One of the most challenging tasks for network operators is implementing accurate per-packet monitoring, looking for signs of performance degradation, security threats, and so on. Upon critical event detection, corrective actions must be taken to keep the network running smoothly. Implementing this mechanism requires the analysis of packet streams in a real-time (or close to) fashion. In a softwarized network context, Stream Processing Systems (SPSs) can be adopted for this purpose. Recent solutions based on traditional SPSs, such as Storm and Flink, can support the definition of general complex queries, but they show poor performance at scale. To handle input data rates in the order of gigabits per seconds, programmable switch platforms are typically used, although they offer limited expressiveness. With the proposed approach, we intend to offer high performance and expressive power in a unified framework by solely relying on SPSs for multicores. Captured packets are translated into a proper tuple format, and network monitoring queries are applied to tuple streams. Packet analysis tasks are expressed as streaming pipelines, running on general-purpose programmable network devices, and a second stage of elaboration can process aggregated statistics from different devices. Experiments carried out with an example monitoring application show that the system is able to handle realistic traffic at a 10 Gb/s speed. The same application scales almost up to 20 Gb/s speed thanks to the simple optimizations of the underlying framework. Hence, the approach proves to be viable and calls for the investigation of more extensive optimizations to support more complex elaborations and higher data rates.
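The tuple-based monitoring idea summarized in this abstract (captured packets translated into a tuple format, then queried as streams) can be sketched in a few lines. This is a toy illustration, not the authors' framework: the `Pkt` fields, the window size, and the per-source byte-count query are all assumed for the example.

```python
from collections import Counter, namedtuple

# Hypothetical tuple format for a captured packet; field names are illustrative.
Pkt = namedtuple("Pkt", "src dst sport dport length")

def to_tuples(raw_packets):
    """Translate captured packets (here: plain dicts) into the fixed tuple format."""
    for p in raw_packets:
        yield Pkt(p["src"], p["dst"], p["sport"], p["dport"], p["len"])

def bytes_per_source(tuples, window=4):
    """A toy monitoring query: per-source byte counts over tumbling count windows."""
    acc, out = Counter(), []
    for i, t in enumerate(tuples, 1):
        acc[t.src] += t.length
        if i % window == 0:          # window boundary: emit and reset
            out.append(dict(acc))
            acc.clear()
    if acc:                          # flush a partial final window
        out.append(dict(acc))
    return out

raw = [{"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 80, "len": 100},
       {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 80, "len": 200},
       {"src": "10.0.0.3", "dst": "10.0.0.2", "sport": 4321, "dport": 80, "len": 50},
       {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 80, "len": 300}]
windows = bytes_per_source(to_tuples(raw))
print(windows)
```

In the paper the pipeline runs on a multicore stream processing system at multi-gigabit rates; the sketch only shows the packet-to-tuple translation and one streaming aggregation.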
2

Ermakov, R. N. "CLASSIFICATION OF NETWORK PROTOCOLS WITH APPLICATION OF MACHINE LEARNING METHODS AND FUZZY LOGIC ALGORITHMS IN TRAFFIC ANALYSIS SYSTEMS." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 189 (March 2020): 37–48. http://dx.doi.org/10.14489/vkit.2020.03.pp.037-048.

Abstract:
This paper presents a new, effective approach to analyzing network traffic in order to determine the protocol of information exchange. A brief description is given of the structure of the algorithm for classifying network packets by their belonging to one of the known network protocols. To identify the protocol, the principle of high-speed single-packet classification is used, which consists of analyzing the information transmitted in each particular packet. Elements of behavioral analysis are used; namely, the transition states of information exchange protocols are classified, which makes it possible to achieve higher classification accuracy and a higher degree of generalization on new test samples. The topic of the article is relevant in connection with the rapid growth of transmitted traffic, including malicious traffic, and the emergence of new technologies for transmitting and processing information. The article analyzes the place of traffic analysis systems among other information security systems and describes the tasks they make it possible to solve. It is shown that when recognizing the internal state in which a particular protocol may be during information exchange at the handshake stage, an application-level classifier of network packets can be useful. To classify network packets, we used fuzzy logic algorithms (the Mamdani model) and machine learning methods (neural network solutions based on logistic regression). The paper presents four stages of developing a network packet classifier: monitoring and collecting packet statistics for the best-known network traffic protocols, preprocessing the primary packet statistics, building the classifier, and testing. Test results are demonstrated for the constructed software module, which is capable of identifying the network protocols used for information exchange.
3

Kamble, Suchita, and N. N. Mhala. "Controller for Network Interface Card on FPGA." International Journal of Reconfigurable and Embedded Systems (IJRES) 1, no. 2 (July 1, 2012): 55. http://dx.doi.org/10.11591/ijres.v1.i2.pp55-58.

Abstract:
The continuing advances in the performance of network servers make it essential for network interface cards (NICs) to provide more sophisticated services and data processing. Modern network interfaces provide fixed functionality and are optimized for sending and receiving large packets. Network interface cards allow the operating system to send and receive packets through the main memory to the network. The operating system stores and retrieves data from the main memory and communicates with the NIC over the local interconnect, usually a peripheral component interconnect (PCI) bus. Most NICs have a PCI hardware interface to the host server, use a device driver to communicate with the operating system, and use local receive and transmit storage buffers. NICs typically have a direct memory access (DMA) engine to transfer data between host memory and the network interface memory. In addition, NICs include a medium access control (MAC) unit to implement the link-level protocol for the underlying network such as Ethernet, and use signal processing hardware to implement the physical (PHY) layer defined for the network. To execute and synchronize the above operations, NICs also contain a controller whose architecture is customized for network data transfer. In this paper we present the architecture of an application-specific controller that can be used in NICs.
4

Kim, Beom-Su, Sangdae Kim, Kyong Hoon Kim, Tae-Eung Sung, Babar Shah, and Ki-Il Kim. "Adaptive Real-Time Routing Protocol for (m,k)-Firm in Industrial Wireless Multimedia Sensor Networks." Sensors 20, no. 6 (March 14, 2020): 1633. http://dx.doi.org/10.3390/s20061633.

Abstract:
Many applications can obtain enriched information by employing a wireless multimedia sensor network (WMSN) in industrial environments, which consists of nodes capable of processing multimedia data. However, as many aspects of WMSNs still need to be refined, this remains a promising research area. An efficient application needs the ability to capture and store the latest information about an object or event, which requires real-time multimedia data to be delivered to the sink in a timely manner. Motivated by this goal, we developed a new adaptive QoS routing protocol based on the (m,k)-firm model. The proposed model processes captured information by employing a multimedia stream in the (m,k)-firm format. In addition, the model includes a new adaptive real-time protocol and traffic handling scheme that transmit event information by selecting the next hop according to the flow status as well as the requirements of the (m,k)-firm model. Unlike previous approaches, two-level adjustment in the routing protocol and traffic management increases the number of packets delivered successfully within the deadline, and a path setup scheme along the previous route reduces packet loss until a new path is established. Our simulation results demonstrate that the proposed schemes improve the stream dynamic success ratio and network lifetime compared to previous work by meeting the requirements of the (m,k)-firm model regardless of the amount of traffic.
5

MUROOKA, TAKAHIRO, AKIRA NAGOYA, TOSHIAKI MIYAZAKI, HIROYUKI OCHI, and YUKIHIRO NAKAMURA. "NETWORK PROCESSOR FOR HIGH-SPEED NETWORK AND QUICK PROGRAMMING." Journal of Circuits, Systems and Computers 16, no. 01 (February 2007): 65–79. http://dx.doi.org/10.1142/s0218126607003502.

Abstract:
The paper describes the concept, architecture, and prototype test results of a packet processor that enables us to implement an application-specific high-speed packet processing system without expert-level programming skills. This processor has a pipelined processing architecture and features coarse-grained instructions that are based on the data formats of the telecommunication packet. Using this processor, target applications can be implemented within a short working period without degrading the processing performance. We implemented a prototype system to evaluate its packet propagation delay and packet forwarding performance. The measured results suggest that the architecture is useful for packet processing on high-speed telecommunication networks.
6

Omran Alkaam, Nora. "Image Compression by Wavelet Packets." Oriental journal of computer science and technology 11, no. 1 (March 20, 2018): 24–28. http://dx.doi.org/10.13005/ojcst11.01.05.

Abstract:
This research applies image processing to reduce the size of an image without losing its important information. The paper aims to determine the best wavelet for compressing a still image at a particular decomposition level using wavelet packet transforms.
7

Ahmadi, Mahmood, and Stephan Wong. "A Cache Architecture for Counting Bloom Filters: Theory and Application." Journal of Electrical and Computer Engineering 2011 (2011): 1–10. http://dx.doi.org/10.1155/2011/475865.

Abstract:
Within packet processing systems, lengthy memory accesses greatly reduce performance. To overcome this limitation, network processors utilize many different techniques, for example, multilevel memory hierarchies, special hardware architectures, and hardware threading. In this paper, we introduce a multilevel memory architecture for counting Bloom filters. Based on the probabilities of the counters in the counting Bloom filter being incremented, a multi-level cache architecture called the cached counting Bloom filter (CCBF) is presented, in which each cache level stores the items with the same counter values. To test the CCBF architecture, we implement a software packet classifier that utilizes basic tuple space search using a 3-level CCBF. The results of mathematical analysis and of the implementation of the CCBF for packet classification show that the proposed cache architecture decreases the number of memory accesses compared to a standard Bloom filter. Based on the mathematical analysis of the CCBF, the number of accesses is decreased by at least 53%. The implementation results of the software packet classifier are at most 7.8% (3.5% on average) below the corresponding mathematical analysis results. This difference is due to parameters of the packet classification application such as the number of tuples, the distribution of rules across the tuples, and the hashing functions used.
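The core data structure behind the CCBF, a counting Bloom filter, can be illustrated compactly. This sketch shows only the plain counting Bloom filter (counters instead of bits, so deletion is supported); the paper's multilevel cache organization on top of it is omitted, and the sizes m and k are arbitrary illustrative choices.

```python
import hashlib

class CountingBloomFilter:
    """Counting Bloom filter: each item maps to k hashed positions, each holding
    a small counter, so items can be removed as well as inserted (a plain Bloom
    filter's single bits cannot support deletion)."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.counters = [0] * m

    def _positions(self, item):
        # Derive k positions by salting the item with the hash index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.counters[p] += 1

    def remove(self, item):
        for p in self._positions(item):
            if self.counters[p] > 0:
                self.counters[p] -= 1

    def __contains__(self, item):
        # May report false positives, never false negatives.
        return all(self.counters[p] > 0 for p in self._positions(item))

cbf = CountingBloomFilter()
cbf.add("10.0.0.1:80")
print("10.0.0.1:80" in cbf)   # membership query on an inserted flow key
cbf.remove("10.0.0.1:80")
print("10.0.0.1:80" in cbf)   # removed, so the counters return to zero
```

The CCBF idea is to cache the (rare) high-valued counters in faster memory levels, since the probability of a counter exceeding a given value drops sharply.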
8

Korobeynikov, A. V. "Synthesis of a packet of pulse with phase manipulation with side lobes level 1/N at incoherent accumulation." Issues of radio electronics 49, no. 5 (July 5, 2020): 28–34. http://dx.doi.org/10.21778/2218-5453-2020-5-28-34.

Abstract:
The paper is devoted to the problem of choosing a probing radar signal using optimal processing of a packet of pulses with unknown initial phases. A method for synthesizing a packet of phase-coded pulses is proposed whose total autocorrelation function (ACF), with matched filtering and incoherent accumulation, has a side lobe level (SLL) of 1/N. Search criteria are formulated and justified for binary phase-manipulation codes that are potentially capable of forming packets of pulses with a relative SLL of 1/N. An algorithm has been developed for finding codes with a given ACF by exhaustive search. A method is proposed for forming the composition of a packet of pulses based on exhaustive search. A number of values of the code duration N were determined for which there exist packets of pulses whose total ACF has a relative SLL equal to 1/N under matched filtering and incoherent accumulation.
9

ČASTOVÁ, NINA, DAVID HORÁK, and ZDENĚK KALÁB. "DESCRIPTION OF SEISMIC EVENTS USING WAVELET TRANSFORM." International Journal of Wavelets, Multiresolution and Information Processing 04, no. 03 (September 2006): 405–14. http://dx.doi.org/10.1142/s0219691306001336.

Abstract:
This paper deals with an engineering application of the wavelet transform to the processing of real seismological signals. A methodology for processing these slight signals using the wavelet transform is presented. Briefly, three basic aims are connected with this procedure: 1. Selection of the optimal wavelet and optimal wavelet basis B_opt for the selected data set based on minimal entropy: B_opt = arg min_B E(X,B); the best results were reached by symmetric complex wavelets with scaling coefficients SCD-6. 2. Wavelet packet decomposition and filtering of the data using a universal thresholding criterion of the form [Formula: see text], where σ is the minimal variance of the sum of the packet decomposition of the chosen level. 3. Cluster analysis of the decomposed data. All programs were elaborated in MATLAB 5.
10

Neeb, C., M. J. Thul, and N. Wehn. "Application driven evaluation of network on chip architectures for parallel signal processing." Advances in Radio Science 2 (May 27, 2005): 181–86. http://dx.doi.org/10.5194/ars-2-181-2004.

Abstract:
Today’s signal processing applications exhibit steadily increasing throughput requirements, which can be met by parallel architectures. However, efficient communication is mandatory to fully exploit their parallelism. Turbo-Codes, as an instance of highly efficient forward-error correction codes, are a very good application for demonstrating the communication complexity in parallel architectures. We present a network-on-chip approach to derive an optimal communication architecture for a parallel Turbo-Decoder system. The performance of such a system significantly depends on the efficiency of the underlying interleaver network in distributing data among the parallel units. We focus on the strictly orthogonal n-dimensional mesh, torus and k-ary n-cube networks, comparing deterministic dimension-order and partially adaptive negative-first and planar-adaptive routing algorithms. For each network topology and routing algorithm, input- and output-queued packet switching schemes are compared at the architectural level. The evaluation of candidate network architectures is based on performance measures and implementation cost to allow a fair trade-off.
11

Li, Junnan, Zhigang Sun, Jinli Yan, Xiangrui Yang, Yue Jiang, and Wei Quan. "DrawerPipe: A Reconfigurable Pipeline for Network Processing on FPGA-Based SmartNIC." Electronics 9, no. 1 (December 31, 2019): 59. http://dx.doi.org/10.3390/electronics9010059.

Abstract:
In the public cloud, FPGA-based SmartNICs are widely deployed to accelerate network functions (NFs) for datacenter operators. We argue that, with the trend toward network as a service (NaaS) in the cloud, it is also meaningful to accelerate tenant NFs to meet performance requirements. However, in pursuit of high performance, existing work such as AccelNet is carefully designed to accelerate specific NFs for datacenter providers, which sacrifices the flexibility of rapidly deploying new NFs. For most tenants with limited hardware design ability, it is time-consuming to develop NFs from scratch due to the lack of a rapidly reconfigurable framework. In this paper, we present a reconfigurable network processing pipeline, DrawerPipe, which abstracts packet processing into multiple “drawers” connected by the same interface. NF developers can easily share existing modules with other NFs and simply load core application logic into the appropriate “drawer” to implement new NFs. Furthermore, we propose a programmable module indexing mechanism, PMI, which can connect “drawers” in any logical order to perform distinct NFs for different tenants or flows. Finally, we implemented several highly reusable modules for low-level packet processing and extended four example NFs (firewall, stateful firewall, load balancer, IDS) based on DrawerPipe. Our evaluation shows that DrawerPipe can easily offload customized packet processing to FPGA with high performance, up to 100 Mpps, and ultra-low latency (<10 µs). Moreover, DrawerPipe enables modular development of NFs, which is suitable for rapid deployment. Compared with individual NF development, DrawerPipe reduces the lines of code (LoC) of the four NFs above by 68%.
12

Dhulekar, P. A., S. L. Nalbalwar, and J. J. Chopade. "Spectral Splitting of Speech by Wavelet Packets to Shrink Simultaneous Masking." Advanced Materials Research 403-408 (November 2011): 970–75. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.970.

Abstract:
Simultaneous masking occurs when a sound is made inaudible by a masker: a noise or unwanted sound of the same duration as the original sound. An innovative approach to speech processing in the cochlea is investigated. Unlike traditional filter-bank spectral analysis strategies, the proposed method analyses the speech signal by means of wavelet packets. Splitting the speech signal by filtering and downsampling at each decomposition level, using wavelet packets with different wavelet functions, helps to shrink the effect of simultaneous masking. The performance of the proposed method is evaluated experimentally with vowel-consonant-vowel syllables for fifteen English consonants. The dichotic presentation of the processed speech signals effectively reduces simultaneous masking, thereby improving auditory perception.
13

Simon, Judy, Aishwarya A, Mahalakshmi K, and A. Naveen Kumar. "A Novel Signal Processing Based Driver Drowsiness Detection System." September 2021 3, no. 3 (August 6, 2021): 176–90. http://dx.doi.org/10.36548/jismac.2021.3.001.

Abstract:
Drowsiness is a major cause of vehicle collisions and, in most cases, of traffic accidents. This condition necessitates the development of a drowsiness detection system. Generally, the degree of sleepiness may be assessed by the number of eye blinks, yawning, gripping power on the steering wheel, and so on. These methods simply measure the actions of the driver. Hence, this research work proposes Brain Computer Interface (BCI) technology to evaluate the mental state of the brain by utilizing EEG signals. Brain signal analysis is the main process involved in this work. The neuron pattern differs depending on the mental state of the driver, and each neuron pattern produces different electrical brain signals. The attention level of the brain signal differs from the general state when the driver is mentally sleeping with eyes open. EEG-based brain signals of various frequencies and amplitudes are collected using a brain wave sensor, and the attention level is analyzed by a level splitter section (LSS), to which the brain signals are sent as packets through a transmission medium. The LSS determines the driver's state, raises a drowsiness alarm, and keeps the vehicle in a self-controlled mode until the driver wakes up. Additionally, this research work alerts the user and controls the vehicle by employing the proposed model.
14

Ahmed, O., S. Areibi, and G. Grewal. "Hardware Accelerators Targeting a Novel Group Based Packet Classification Algorithm." International Journal of Reconfigurable Computing 2013 (2013): 1–33. http://dx.doi.org/10.1155/2013/681894.

Abstract:
Packet classification is a ubiquitous and key building block for many critical network devices. However, it remains one of the main bottlenecks faced when designing fast network devices. In this paper, we propose a novel Group Based Search packet classification Algorithm (GBSA) that is scalable, fast, and efficient. GBSA consumes an average of 0.4 megabytes of memory for a 10 k rule set. The worst-case classification time per packet is 2 microseconds, and the preprocessing speed is 3 M rules/second on a Xeon processor operating at 3.4 GHz. When compared with other state-of-the-art classification techniques, the results showed that GBSA outperforms the competition with respect to speed, memory usage, and processing time. Moreover, GBSA is amenable to implementation in hardware. Three different hardware implementations are also presented in this paper, including an Application Specific Instruction Set Processor (ASIP) implementation and two pure Register-Transfer Level (RTL) implementations based on Impulse-C and Handel-C flows, respectively. Speedups achieved with these hardware accelerators ranged from 9x to 18x compared with a pure software implementation running on a Xeon processor.
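The general idea of group-based classification (partition the rule set during preprocessing so that a lookup scans only a small group) can be sketched as follows. This is not the GBSA algorithm itself, whose grouping criteria and data structures are described in the paper; the rule fields and the protocol-based grouping are assumptions made for the illustration.

```python
from collections import defaultdict

# Each rule: (priority, protocol, dst_port_range, action). Illustrative fields only.
RULES = [
    (1, "tcp", (80, 80),    "allow"),
    (2, "tcp", (0, 1023),   "inspect"),
    (3, "udp", (53, 53),    "allow"),
    (4, "any", (0, 65535),  "drop"),
]

def build_groups(rules):
    """Preprocessing: partition rules by protocol so a lookup scans a small group."""
    groups = defaultdict(list)
    for r in rules:
        groups[r[1]].append(r)
    return groups

def classify(groups, proto, dport):
    """Scan the protocol group plus the wildcard group; lowest priority number wins."""
    candidates = groups.get(proto, []) + groups.get("any", [])
    matching = [r for r in candidates if r[2][0] <= dport <= r[2][1]]
    return min(matching)[3] if matching else "drop"

g = build_groups(RULES)
print(classify(g, "tcp", 80))    # matches rules 1, 2 and 4; rule 1 wins
print(classify(g, "udp", 9999))  # only the wildcard rule matches
```

The point of grouping is that classification cost scales with the size of the matched group rather than the whole rule set, which is what makes hardware pipelining of each group's search attractive.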
15

Michel, Oliver, Roberto Bifulco, Gábor Rétvári, and Stefan Schmid. "The Programmable Data Plane." ACM Computing Surveys 54, no. 4 (July 2021): 1–36. http://dx.doi.org/10.1145/3447868.

Abstract:
Programmable data plane technologies enable the systematic reconfiguration of the low-level processing steps applied to network packets and are key drivers toward realizing the next generation of network services and applications. This survey presents recent trends and issues in the design and implementation of programmable network devices, focusing on prominent abstractions, architectures, algorithms, and applications proposed, debated, and realized over the past years. We elaborate on the trends that led to the emergence of this technology and highlight the most important pointers from the literature, casting different taxonomies for the field, and identifying avenues for future research.
16

Shelkovoy, D. V., and A. A. Chernikov. "Simulation modeling of packet switching network segment functioning." Issues of radio electronics, no. 12 (December 28, 2019): 75–82. http://dx.doi.org/10.21778/2218-5453-2019-12-75-82.

Abstract:
The article presents testing results for mathematical models that estimate the channel resource required to serve a given multimedia load in packet-switched communication networks. The attainable level of quality of service at the data packet transport level was assessed by means of simulation modeling of the functioning of a switching node of a communication network. The developed modeling algorithm differs from existing ones by taking into account the delay introduced in processing each data stream packet arriving at the switching node, depending on the size of the reserved buffer and the channel resource allocated for its maintenance. A joint examination of the probability of packet loss and the delay introduced in processing data packets in the border router allows a comprehensive assessment of end-to-end quality of service, which in turn yields more accurate values of the effective data rate achieved by aggregating flows at the entrance to the transport network.
17

Freeman, Walter J. "A Neurobiological Theory of Meaning in Perception Part II: Spatial Patterns of Phase in Gamma EEGs from Primary Sensory Cortices Reveal the Dynamics of Mesoscopic Wave Packets." International Journal of Bifurcation and Chaos 13, no. 09 (September 2003): 2513–35. http://dx.doi.org/10.1142/s0218127403008156.

Abstract:
Domains of cooperative neural activity called "wave packets" have been discovered in the visual, auditory, and somatomotor cortices of rabbits that were trained to discriminate conditioned stimuli in these modalities. Each domain forms by a first order state transition, which strongly resembles a phase transition from vapor to liquid. In this view, raw sense data injected into cortex by sensory axons drive cortical action potentials in swarms like water molecules in steam. The increased activity destabilizes the cortex. Within 3 to 7 milliseconds of transition onset, the activity binds together into a state resembling a scintillating rain drop, which lasts ~80 to 100 milliseconds, then dissolves. Wave packets form at rates of 2 to 7/second in all sensory areas, overlapping in space and time. Results of sensory information processing are seen in spatial patterns of amplitude modulation (AM) of wave packets with carrier waves in the gamma range (20 to 80 Hz in rabbits). The AM patterns correspond to categories of CSs that the rabbits can discriminate. The patterns are found in electroencephalographic (EEG) potentials generated by dendrites and recorded with high-density electrode arrays. The state transitions by which AM patterns form are manifested in the spatial pattern of phase modulation (PM), which have the radial symmetry of a cone. The apex of a PM cone marks the site of nucleation of an AM pattern. The phase gradient gives a soft boundary condition, where the axonal delay in spread gives sufficient phase dispersion to reach the half-power level. The size of the wave packets (10 to 30 mm in diameter in rabbits) is determined largely by the conduction velocities of intracortical axons through which the neural cooperation is maintained. 
The findings show that significant cortical activity takes the form of mesoscopic interactions of millions of neurons in broad areas of cortex, which are more clearly detected in graded dendritic potentials than in action potentials. The distinction is analogous to the difference between statistical mechanical and thermodynamic descriptions of particle behavior. Both types of neural activity show spatial and temporal discontinuities but at distinctive scales of microns and msec versus mm and tenths of a second. The aim of measurement here is to establish the wave packet as the information carrier at the mesoscopic level in brain dynamics, comparable to the role of the action potential as the information carrier at the microscopic level in neuron dynamics.
18

Rodríguez-Prieto, Álvaro, Ana Maria Camacho, and Miguel Ángel Sebastián. "Development of a Computer Tool to Support the Teaching of Materials Technology." Materials Science Forum 903 (August 2017): 17–23. http://dx.doi.org/10.4028/www.scientific.net/msf.903.17.

Abstract:
Materials technology is a matter of great applicative and crosscutting interest, as evidenced by its presence in most curriculums of current industrial engineering degrees. During the study of this subject, it is crucial that the student assimilates not only the relationship among composition, processing and mechanical properties, but also how all these technological features interact with the in-service behavior of the material. For this reason, within a doctoral dissertation developed at the Department of Construction and Manufacturing Engineering at the National Distance Education University (UNED), a computer tool has been designed to quantify the stringency level of the technological requirements of materials (especially suitable for highly demanding applications), characterized by its suitability as interactive teaching material in the teaching of materials engineering. As a case study, we have chosen the selection of materials for nuclear reactor pressure vessels, because it is a very representative example of the relationship between chemical composition, mechanical properties and in-service behavior.
19

Periša, Marko, Ivan Cvitić, Dragan Peraković, and Siniša Husnjak. "BEACON TECHNOLOGY FOR REAL-TIME INFORMING THE TRAFFIC NETWORK USERS ABOUT THE ENVIRONMENT." Transport 34, no. 3 (June 3, 2019): 373–82. http://dx.doi.org/10.3846/transport.2019.10402.

Abstract:
Informing users about their environment is of extreme importance for their full and independent functioning in the traffic system. Today's technology gives users access to information about their environment through a smartphone at any moment, provided a suitable applicative solution is defined. For this, it is necessary to define the user's environment according to the Ambient Assisted Living (AAL) concept, which entails adequate technology for gathering, processing and distributing information. This paper presents a proposed solution, based on beacon technology, for informing traffic network users about the environment for a defined group of users. The solution is based on the results of two separate studies of the needs of users who move along a part of the traffic network. The aim of the proposed solution is to provide the user with precise, real-time information and to raise the level of safety during movement.
20

Wang, Yi Ming, and Jian Cheng Tan. "Design of a Synthesized Merging Unit Based on IEC 61850-9-2." Applied Mechanics and Materials 241-244 (December 2012): 2223–27. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.2223.

Abstract:
To meet the requirements of smart substations for digital information, integrated functions and compact structure, the definitions of merging units in IEC 60044-8 and IEC 61850-9-1/2 were analyzed. Based on the analysis, this paper described a realization of merging units used in the electronic transformers. The hardware and software architectures of merging unit were proposed. According to the design, the merging unit realizes synchronization of sampling pulse, receiving and processing of sampled values, and data transmission. An additional function of phasor measurement was included, which makes it a synthesized device between the process level and the bay level of the digital substation. The data packets were captured and analyzed by Wireshark, corresponding with IEC 61850-9-2, which demonstrates the high flexibility and utility value of the merging unit.
21

Yang, Bao Liang, Jia Yong Ye, and Zheng Fu Cheng. "Design of Multi Parameter Water Quality Online Monitoring System Base on FPGA." Advanced Materials Research 1061-1062 (December 2014): 945–49. http://dx.doi.org/10.4028/www.scientific.net/amr.1061-1062.945.

Abstract:
With increasingly serious pollution of the surrounding environment, water quality is deteriorating gradually, which to a certain degree threatens our daily life. In order to detect problems in a timely manner, a multi-parameter water quality online monitoring system based on FPGA is designed. Taking a Cyclone II FPGA that supports the Nios II processor as its core, the system is responsible for data acquisition, analysis and processing. Data communication adopts General Packet Radio Service (GPRS) to realize data transmission from the acquisition point to the monitoring center. After analyzing the problems of existing water quality monitoring systems, we present the structure and working principle of this system, and then illustrate it from two aspects: hardware and software. Practice shows that the system, with its simple structure, is steady, reliable and easy to operate in multi-point data acquisition and remote monitoring, and finds wide application in the water supply plants of villages and small towns.
Keywords: FPGA; water quality; Nios II; remote monitoring
APA, Harvard, Vancouver, ISO, and other styles
22

Tung, Yung-Hao, Hung-Chuan Wei, Yen-Wu Ti, Yao-Tung Tsou, Neetesh Saxena, and Chia-Mu Yu. "Counteracting UDP Flooding Attacks in SDN." Electronics 9, no. 8 (August 1, 2020): 1239. http://dx.doi.org/10.3390/electronics9081239.

Full text
Abstract:
Software-defined networking (SDN) is a new networking architecture with a centralized control mechanism. SDN has proven to be successful in improving not only the network performance, but also security. However, centralized control in the SDN architecture is associated with new security vulnerabilities. In particular, user-datagram-protocol (UDP) flooding attacks can be easily launched and cause serious packet-transmission delays, controller-performance loss, and even network shutdown. In response to applications in the Internet of Things (IoT) field, this study considers UDP flooding attacks in SDN and proposes two lightweight countermeasures. The first method sometimes sacrifices address-resolution-protocol (ARP) requests to achieve a high level of security. In the second method, although packets must sometimes be sacrificed when undergoing an attack before starting to defend, the detection of the network state can prevent normal packets from being sacrificed. When blocking a network attack, attacks from the affected port are directly blocked without affecting normal ports. The performance and security of the proposed methods were confirmed by means of extensive experiments. Compared with the situation where no defense is implemented, or similar defense methods are implemented, after simulating a UDP flooding attack, our proposed method performed better in terms of the available bandwidth, central-processing-unit (CPU) consumption, and network delay time.
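The abstract's second countermeasure hinges on detecting an abnormal per-port packet rate and blocking only the affected port. A toy sliding-window rate check conveys the idea; the class name, threshold and window parameters here are illustrative assumptions, not the paper's actual algorithm:

```python
from collections import defaultdict, deque

class UdpFloodGuard:
    """Sliding-window per-port rate check; a simplified sketch, not the paper's method."""

    def __init__(self, threshold, window):
        self.threshold = threshold       # max packets tolerated per window
        self.window = window             # window length in seconds
        self.events = defaultdict(deque) # port -> recent packet timestamps
        self.blocked = set()

    def packet(self, port, t):
        """Record a UDP packet from `port` at time `t`; return False if dropped."""
        if port in self.blocked:
            return False
        q = self.events[port]
        q.append(t)
        while q and t - q[0] > self.window:
            q.popleft()                  # expire timestamps outside the window
        if len(q) > self.threshold:
            self.blocked.add(port)       # block only the offending port
            return False
        return True
```

Traffic from ports that stay under the threshold is forwarded untouched, mirroring the paper's goal of not affecting normal ports while an attack is blocked.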
APA, Harvard, Vancouver, ISO, and other styles
23

Miruta, Radu-Dinel, Cosmin Stanuica, and Eugen Borcoci. "New Fields in Classifying Algorithms for Content Awareness." International Journal of Information Technology and Web Engineering 7, no. 2 (April 2012): 1–15. http://dx.doi.org/10.4018/jitwe.2012040101.

Full text
Abstract:
The content aware (CA) packet classification and processing at network level is a new approach leading to significant increase of delivery quality of the multimedia traffic in Internet. This paper presents a solution for a new multi-dimensional packet classifier of an edge router, based on content - related new fields embedded in the data packets. The technique is applicable to content aware networks. The classification algorithm is using three new packet fields named Virtual Content Aware Network (VCAN), Service Type (STYPE), and U (unicast/multicast) which are part of the Content Awareness Transport Information (CATI) header. A CATI header is inserted into the transmitted data packets at the Service/Content Provider server side, in accordance with the media service definition, and enables the content awareness features at a new overlay Content Aware Network layer. The functionality of the CATI header within the classification process is then analyzed. Two possibilities are considered: the adaptation of the Lucent Bit vector algorithm and, respectively, of the tuple space search, in order to respond to the suggested multi-fields classifier. The results are very promising and they prove that theoretical model of inserting new packet fields for content aware classification can be implemented and can work in a real time classifier.
APA, Harvard, Vancouver, ISO, and other styles
24

Mochalov, Valery P., Gennady I. Linets, Natalya Yu Bratchenko, and Svetlana V. Govorova. "An Analytical Model of a Corporate Software-Controlled Network Switch." Scalable Computing: Practice and Experience 21, no. 2 (June 27, 2020): 337–46. http://dx.doi.org/10.12694/scpe.v21i2.1698.

Full text
Abstract:
Implementing the almost limitless possibilities of a software-defined network requires additional study of its infrastructure level and assessment of the telecommunications aspect. The aim of this study is to develop an analytical model for analyzing the main quality indicators of modern network switches. Based on the general theory of queuing systems and networks, generating functions and Laplace-Stieltjes transforms, a three-phase model of a network switch was developed. Given that, in this case, the relationship between processing steps is not significant, quality indicators were obtained by taking into account the parameters of single-phase networks. This research identified the dependencies of service latency and service time of incoming network packets on load, as well as equations for finding the volume of a switch’s buffer memory with an acceptable probability of message loss.
APA, Harvard, Vancouver, ISO, and other styles
25

FAROOQ, OMAR, and SEKHARJIT DATTA. "EVALUATION OF A WAVELET BASED ASR FRONT-END." International Journal of Wavelets, Multiresolution and Information Processing 05, no. 04 (July 2007): 641–54. http://dx.doi.org/10.1142/s021969130700194x.

Full text
Abstract:
In this paper, we propose the use of the wavelet transform for the extraction of features for phonemes in order to overcome some of the shortcomings of the short-time Fourier transform. New log-energy based features are proposed using the discrete wavelet transform as well as wavelet packets, and their recognition performance has been evaluated. These features overcome the problem of shift variance encountered in features based on the discrete wavelet transform coefficients. The effect on recognition performance of choosing different mother wavelets for the decomposition and different window durations is also studied. Finally, a scheme based on the admissible wavelet packet has also been proposed, and the results are discussed and compared with the frequently used Mel Frequency Cepstral Coefficient based features. The recognition performance of these features is further evaluated in the presence of different levels of additive white Gaussian noise.
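Log-energy features over a wavelet-packet tree, as described above, can be sketched with a hand-rolled Haar decomposition (the paper evaluates several mother wavelets and window durations; the Haar filter and depth here are illustrative assumptions):

```python
import math

def haar_split(x):
    """One Haar analysis step: orthonormal approximation and detail halves."""
    n = len(x) // 2
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2.0) for i in range(n)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2.0) for i in range(n)]
    return a, d

def wp_log_energy(signal, level=3):
    """Log-energy feature per terminal node of a Haar wavelet-packet tree.

    A full decomposition at depth `level` yields 2**level frequency sub-bands.
    """
    nodes = [list(signal)]
    for _ in range(level):
        nxt = []
        for node in nodes:
            a, d = haar_split(node)
            nxt.extend([a, d])
        nodes = nxt
    # Small epsilon keeps log() finite for silent sub-bands.
    return [math.log(sum(c * c for c in node) + 1e-12) for node in nodes]
```

For a 256-sample frame at depth 3 this yields an 8-dimensional feature vector, one log-energy per sub-band.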
APA, Harvard, Vancouver, ISO, and other styles
26

Shukur, Marwan Ihsan. "S-CDCA: a semi-cluster directive-congestion protocol for priority-based data in WSNs." Indonesian Journal of Electrical Engineering and Computer Science 23, no. 1 (July 1, 2021): 438. http://dx.doi.org/10.11591/ijeecs.v23.i1.pp438-444.

Full text
Abstract:
The internet of things (IoT) protocols and regulations are being developed for various applications, including habitat monitoring, machinery control, general health care, smart homes and more. A great part of IoT is comprised of sensor nodes in connected networks (i.e., sensor networks). A sensor network is a group of nodes with sensory modules and computational elements connected through network interfaces. The most interesting type of sensor network is the wireless sensor network, in which nodes are connected through wireless interfaces. The shared medium between these nodes creates different challenges, and congestion in such networks is inevitable. Different models and methods have been proposed to alleviate congestion in wireless sensor networks. This paper presents a semi-cluster directive congestion method that alleviates network congestion for priority-based data transmission. The method improves network performance by implementing a temporary cluster for low-priority data packets while providing a clear link between the high-priority data source node and the network base station. Simulation results show that the proposed method outperforms the ad hoc on-demand distance vector (AODV) reactive protocol approach and priority-based congestion control dynamic clustering (PCCDC), a cluster-based method, in network energy consumption and control packet overhead during network operation. The proposed method also shows comparative improvements in end-to-end delays versus PCCDC.
APA, Harvard, Vancouver, ISO, and other styles
27

RHO, MIN-JEONG, MYUNG-SUB CHUNG, JEE-HAE LEE, and JIYONG PARK. "Monitoring of Microbial Hazards at Farms, Slaughterhouses, and Processing Lines of Swine in Korea." Journal of Food Protection 64, no. 9 (September 1, 2001): 1388–91. http://dx.doi.org/10.4315/0362-028x-64.9.1388.

Full text
Abstract:
This study was executed to investigate microbiological hazards at swine farms, slaughterhouses, dressing operations, and local markets for the application of the hazard analysis critical control point system in Korea by analyzing total aerobic plate count (APC) and presence of pathogens. Six integrated pig farms and meat packers were selected from six different provinces, and samples were collected from pig carcasses by swabbing and excision methods at the slaughterhouses, processing rooms, and local markets, respectively. APCs of water in water tanks were relatively low, 1.9 to 3.1 log10 CFU/ml; however, they were increased to 4.6 to 6.9 log10 CFU/ml when sampled from water nipples in the pigpen. APCs of feeds in the feed bins and in the pigpens were 4.4 to 5.4 and 5.2 to 6.7 log10 CFU/g, respectively. Salmonella spp., Staphylococcus aureus, and Clostridium perfringens were detected from water and feed sampled in pigpens and pigpen floors. S. aureus was the most frequently detected pathogenic bacteria in slaughterhouses and processing rooms. Listeria monocytogenes and Yersinia enterocolitica were also detected from the processing rooms of the Kyonggi, Kyongsang, and Cheju provinces. Even though APCs were maintained at the low level of 3.0 log10 CFU/g during slaughtering and processing steps, those of final pork products produced by the same companies showed relatively high numbers when purchased from the local market. These results indicated that the cold chain system for transporting and merchandising of pork products was deficient in Korea. Water supply and feed bins in swine farms and individual operations can be identified as critical control points to reduce microbiological hazards in swine farms, slaughterhouses, and processing plants.
APA, Harvard, Vancouver, ISO, and other styles
28

Hernandez, C., Z. De La Lande Dolce, R. Bensaied, and M. Mitrea. "Neural network-based assessment of the impact induced in video quality assessment by the semantic labels." Electronic Imaging 2021, no. 9 (January 18, 2021): 224–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.9.iqsp-224.

Full text
Abstract:
Subjective video quality assessment generally comes across semantically labeled evaluation scales (e.g., Excellent, Good, Fair, Poor and Bad on a single-stimulus, 5-level grading scale). While suspicions about a possible bias these labels induce in the quality evaluation frequently occur, to the best of our knowledge very few state-of-the-art studies target an objective assessment of such an impact. Our study presents a neural network solution in this respect. We designed a 5-class classifier, with 2 hidden layers and a softmax output layer. An ADAM optimizer coupled with a sparse categorical cross-entropy loss is subsequently considered. The experimental results are obtained by processing a database composed of 440 observers scoring about 7 hours of video content of 4 types (high-quality stereoscopic video content, low-quality stereoscopic video content, high-quality 2D video, and low-quality 2D video). The experimental results are discussed and compared with the reference given by a probability-based estimation method. They show an overall good convergence between the two types of methods while pointing out some inner applicative differences that are discussed and explained.
APA, Harvard, Vancouver, ISO, and other styles
29

FARAG, EMAD N., MOHAMED I. ELMASRY, MOHAMED N. SALEH, and NABIL M. ELNADY. "A TWO-LEVEL HIERARCHICAL MOBILE NETWORK: STRUCTURE AND NETWORK CONTROL." International Journal of Reliability, Quality and Safety Engineering 03, no. 04 (December 1996): 325–51. http://dx.doi.org/10.1142/s0218539396000211.

Full text
Abstract:
The increase in demand for mobile telecommunication systems, and the limited bandwidth allocated to these systems, have led to systems with smaller cell dimensions, which in turn has led to an increase in control messages. In order to prevent controller bottlenecks, it is desirable to distribute the network control functions throughout the network. To satisfy this requirement, a mobile network structure characterized by its hierarchical and decentralized network control is presented in this paper. The area served by the mobile system is divided into regions, and the regions are further divided into cells. Each cell is served by a base station, and each base station is connected to a regional network through a base station interface unit (BIU). Each region has its own regional network. Connected to each regional network are the cellular controller, the home database, the visitor database, the trunk interface unit (TIU) and the gateway interface unit (GIU). The TIU connects the regional network to the public switched telephone network (PSTN). The GIU connects the regional network to other regional networks through the gateway network. This architecture distributes the network control functions among a large number of processing elements, thus preventing controller bottlenecks, a problem faced by centrally controlled systems. The information and network control messages are transferred in the form of packets across this network. Processes inherent to the operation of this network structure are illustrated and discussed. These processes include the location update process, the setting up of a call, the handoff process (both the intra-region and inter-region handoff processes are considered), and the process of terminating a call.
APA, Harvard, Vancouver, ISO, and other styles
30

Peng, Guang-Qian, Guangtao Xue, and Yi-Chao Chen. "Network Measurement and Performance Analysis at Server Side." Future Internet 10, no. 7 (July 16, 2018): 67. http://dx.doi.org/10.3390/fi10070067.

Full text
Abstract:
Network performance diagnostics is an important topic that has been studied since the Internet was invented. However, it remains a challenging task, as the network evolves and becomes more and more complicated over time. One of the main challenges is that all network components (e.g., senders, receivers, and relay nodes) make decisions based only on local information, and they are all likely to be performance bottlenecks. Although Software Defined Networking (SDN) proposes to embrace centralized network intelligence for better control, the cost to collect complete network states at the packet level is not affordable in terms of collection latency, bandwidth, and processing power. With the emergence of new types of networks (e.g., Internet of Everything, mission-critical control, data-intensive mobile apps, etc.), network demands are getting more diverse. It is critical to provide finer-granularity, real-time diagnostics to serve various demands. In this paper, we present EVA, a network performance analysis tool that guides developers and network operators to fix problems in a timely manner. EVA passively collects packet traces near the server (hypervisor, NIC, or top-of-rack switch) and pinpoints the location of the performance bottleneck (sender, network, or receiver). EVA works without detailed knowledge of the application or network stack and is therefore easy to deploy. We use three types of real-world network datasets and perform trace-driven experiments to demonstrate EVA’s accuracy and generality. We also present the problems observed in these datasets by applying EVA.
APA, Harvard, Vancouver, ISO, and other styles
31

Hu, Tianyu, Jinhui Zhao, Ruifang Zheng, Pengfeng Wang, Xiaolu Li, and Qichun Zhang. "Ultrasonic based concrete defects identification via wavelet packet transform and GA-BP neural network." PeerJ Computer Science 7 (August 31, 2021): e635. http://dx.doi.org/10.7717/peerj-cs.635.

Full text
Abstract:
Concrete is the main material in building. Since its poor structural integrity may cause accidents, it is significant to detect defects in concrete. However, it is a challenging topic as the unevenness of concrete would lead to the complex dynamics with uncertainties in the ultrasonic diagnosis of defects. Note that the detection results mainly depend on the direct parameters, e.g., the time of travel through the concrete. The current diagnosis accuracy and intelligence level are difficult to meet the design requirement for automatic and increasingly high-performance demands. To solve the mentioned problems, our contribution of this paper can be summarized as establishing a diagnosis model based on the GA-BPNN method and ultrasonic information extracted that helps engineers identify concrete defects. Potentially, the application of this model helps to improve the working efficiency, diagnostic accuracy and automation level of ultrasonic testing instruments. In particular, we propose a simple and effective signal recognition method for small-size concrete hole defects. This method can be divided into two parts: (1) signal effective information extraction based on wavelet packet transform (WPT), where mean value, standard deviation, kurtosis coefficient, skewness coefficient and energy ratio are utilized as features to characterize the detection signals based on the analysis of the main frequency node of the signals, and (2) defect signal recognition based on GA optimized back propagation neural network (GA-BPNN), where the cross-validation method has been used for the stochastic division of the signal dataset and it leads to the BPNN recognition model with small bias. Finally, we implement this method on 150 detection signal data which are obtained by the ultrasonic testing system with 50 kHz working frequency. The experimental test block is a C30 class concrete block with 5, 7, and 9 mm penetrating holes. 
The experimental environment, algorithmic parameter settings and signal processing procedure are described in detail. According to the experimental results, the average recognition accuracy for identifying small-size concrete defects is 91.33%, which verifies the feasibility and efficiency of the method.
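The five per-node statistics named in the abstract (mean value, standard deviation, kurtosis coefficient, skewness coefficient and energy ratio) can be computed from a node's wavelet-packet coefficients roughly as follows; the exact normalizations (population moments, excess-kurtosis convention) are assumptions, not taken from the paper:

```python
import math

def node_features(coeffs, total_energy):
    """Mean, std, kurtosis, skewness and energy ratio of one node's coefficients."""
    n = len(coeffs)
    mean = sum(coeffs) / n
    var = sum((c - mean) ** 2 for c in coeffs) / n   # population variance
    std = math.sqrt(var)
    z = [(c - mean) / (std + 1e-12) for c in coeffs] # standardized coefficients
    skewness = sum(v ** 3 for v in z) / n
    kurtosis = sum(v ** 4 for v in z) / n - 3.0      # excess-kurtosis convention
    energy_ratio = sum(c * c for c in coeffs) / (total_energy + 1e-12)
    return [mean, std, kurtosis, skewness, energy_ratio]
```

Concatenating these five values across the selected frequency nodes gives a fixed-length feature vector suitable as input to a BP neural network classifier.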
APA, Harvard, Vancouver, ISO, and other styles
32

Ho Kwon, Tae, Jai Eun Kim, Ki Soo An, Rappy Saha, and Ki Doo Kim. "Visual-MIMO for Software-Defined Vehicular Networks." International Journal of Engineering & Technology 7, no. 4.4 (September 15, 2018): 13. http://dx.doi.org/10.14419/ijet.v7i4.4.19596.

Full text
Abstract:
The paradigm of the software-defined network (SDN) is being applied to vehicle scenarios in order to eliminate the heterogeneity of vehicular network infrastructure and to manage packet flow in a flexible, efficient, application- and user-centric manner. However, owing to the random mobility of vehicles and the unpredictable road communication environment, efficient vehicle-based SDN development needs further research. In this study, we propose the concept of a sub-control plane for supporting and backing up, at the data plane level, various functions of the control plane, which plays a key role in SDN. The sub-control plane can be intuitively understood through the image processing techniques used in color-independent visual-MIMO (multiple input multiple output) networking, and the function of the control plane can be backed up through various vehicle-based recognition and tracking algorithms in the event of disconnection between the data plane and the control plane. The proposed sub-control plane is expected to facilitate efficient management of the software-defined vehicular network (SDVN) and improve vehicular communication performance and service quality.
APA, Harvard, Vancouver, ISO, and other styles
33

Rhim, Hana, Damien Sauveron, Ryma Abassi, Karim Tamine, and Sihem Guemara. "A Secure Protocol against Selfish and Pollution Attacker Misbehavior in Clustered WSNs." Electronics 10, no. 11 (May 24, 2021): 1244. http://dx.doi.org/10.3390/electronics10111244.

Full text
Abstract:
Wireless sensor networks (WSNs) have been widely used for applications in numerous fields. One of the main challenges is the limited energy resources when designing secure routing in such networks. Hierarchical organization of nodes in the network can make efficient use of their resources. In this case, a subset of nodes, the cluster heads (CHs), is entrusted with transmitting messages from cluster nodes to the base station (BS). However, the existence of selfish or pollution attacker nodes in the network causes data transmission failure and damages the network availability and integrity. Mainly, when critical nodes like CH nodes misbehave by refusing to forward data to the BS, by modifying data in transit or by injecting polluted data, the whole network becomes defective. This paper presents a secure protocol against selfish and pollution attacker misbehavior in clustered WSNs, known as (SSP). It aims to thwart both selfish and pollution attacker misbehaviors, the former being a form of a Denial of Service (DoS) attack. In addition, it maintains a level of confidentiality against eavesdroppers. Based on a random linear network coding (NC) technique, the protocol uses pre-loaded matrices within sensor nodes to conceive a larger number of new packets from a set of initial data packets, thus creating data redundancy. Then, it transmits them through separate paths to the BS. Furthermore, it detects misbehaving nodes among CHs and executes a punishment mechanism using a control counter. The security analysis and simulation results demonstrate that the proposed solution is not only capable of preventing and detecting DoS attacks as well as pollution attacks, but can also maintain scalable and stable routing for large networks. The protocol means 100% of messages are successfully recovered and received at the BS when the percentage of lost packets is around 20%. 
Moreover, when the number of misbehaving nodes executing pollution attacks reaches a certain threshold, SSP achieves a reception rate of correctly reconstructed messages equal to 100%. If the SSP protocol is not applied, the rate of reception of correctly reconstructed messages is reduced by 90% in the same case.
APA, Harvard, Vancouver, ISO, and other styles
34

Reddy, K. Shirisha, M. Balaraju, and Ramananaik . "Development of new optimal cloud computing mechanism for data exchange based on link selectivity, link reliability and data exchange efficiency." International Journal of Engineering & Technology 7, no. 1.2 (December 28, 2017): 199. http://dx.doi.org/10.14419/ijet.v7i1.2.9066.

Full text
Abstract:
Reorganization of virtual machines (VMs) presents an extraordinary opportunity for parallel, cluster, grid, cloud and distributed computing. Virtualization technology benefits the computing and IT industries by enabling clients to share expensive hardware through multiplexing virtual machines on the same set of hardware hosts. While providing higher data accessibility, this data sharing approach raises challenges with respect to data security and data integrity. In this paper, a service level agreement approach for data access resource management is developed for the selected network. During the exchange of packets over the selected link, it is required that data be accessed at a faster rate. The overhead may have serious negative effects on cluster utilization, throughput, and quality of service. Therefore, the challenge is to develop a VM control approach which governs rate allocation, in terms of data access and bandwidth, to speed up data exchange performance. The throughput of the network is monitored through the traffic link rate, wherein the processing overhead is observed during quality assessment.
APA, Harvard, Vancouver, ISO, and other styles
35

Saeed, Ahmed, Ali Ahmadinia, and Mike Just. "Secure On-Chip Communication Architecture for Reconfigurable Multi-Core Systems." Journal of Circuits, Systems and Computers 25, no. 08 (May 17, 2016): 1650089. http://dx.doi.org/10.1142/s0218126616500894.

Full text
Abstract:
Security is becoming the primary concern in today’s embedded systems. Network-on-chip (NoC)-based communication architectures have emerged as an alternative to shared bus mechanism in multi-core system-on-chip (SoC) devices and the increasing number and functionality of processing cores have made such systems vulnerable to security attacks. In this paper, a secure communication architecture has been presented by designing an identity and address verification (IAV) security module, which is embedded in each router at the communication level. IAV module verifies the identity and address range to be accessed by incoming and outgoing data packets in an NoC-based multi-core shared memory architecture. Our IAV module is implemented on an FPGA device for functional verification and evaluated in terms of its area and power consumption overhead. For FPGA-based systems, the IAV module can be reconfigured at run-time through partial reconfiguration. In addition, a cycle-accurate simulation is carried out to analyze the performance and total network energy consumption overhead for different network configurations. The proposed IAV module has presented reduced area and power consumption overhead when compared with similar existing solutions.
APA, Harvard, Vancouver, ISO, and other styles
36

Kam, Z., T. Volberg, and B. Geiger. "Mapping of adherens junction components using microscopic resonance energy transfer imaging." Journal of Cell Science 108, no. 3 (March 1, 1995): 1051–62. http://dx.doi.org/10.1242/jcs.108.3.1051.

Full text
Abstract:
Quantitative microscopic imaging of resonance energy transfer (RET) was applied for immunological high resolution proximity mapping of several cytoskeletal components of cell adhesions. To conduct this analysis, a microscopic system was developed, consisting of a highly stable field illuminator, computer-controlled filter wheels for rapid multiple-color imaging and a sensitive, high resolution CCD camera, enabling quantitative data recording and processing. Using this system, we have investigated the spatial inter-relationships and organization of four adhesion-associated proteins, namely vinculin, talin, alpha-actinin and actin. Cultured chick lens cells were double labeled for each of the junctional molecules, using fluorescein- and rhodamine-conjugated antibodies or phalloidin. RET images were acquired with fluorescein excitation and rhodamine emission filter setting, corrected for fluorescein and rhodamine fluorescence, and normalized to the fluorescein image. The results pointed to high local densities of vinculin, talin and F-actin in focal adhesions, manifested by mean RET values of 15%, 12% and 10%, respectively. On the other hand, relatively low values (less than 1%) were observed following double immunofluorescence labeling of the same cells for alpha-actinin. Double indirect labeling for pairs of these four proteins (using fluorophore-conjugated antibodies or phalloidin) resulted in RET values of 5% or lower, except for the pair alpha-actinin and actin, which yielded significantly higher values (13-15%). These results suggest that despite their overlapping staining patterns, at the level of resolution of the light microscope, the plaque proteins vinculin and talin are not homogeneously interspersed at the molecular level but form segregated clusters. alpha-Actinin, on the other hand, does not appear to form such clusters but, rather, closely interacts with actin. 
We discuss here the conceptual and applicative aspects of RET measurements and the implications of the results on the subcellular molecular organization of adherens-type junctions.
APA, Harvard, Vancouver, ISO, and other styles
37

Latif, Khalid, Amir-Mohammad Rahmani, Tiberiu Seceleanu, and Hannu Tenhunen. "Cluster Based Networks-on-Chip." International Journal of Adaptive, Resilient and Autonomic Systems 4, no. 3 (July 2013): 25–41. http://dx.doi.org/10.4018/jaras.2013070102.

Full text
Abstract:
Partial Virtual channel Sharing (PVS) architecture has been proposed to enhance the performance of Networks-on-Chip (NoC) based systems. In this paper, the authors present an efficient and reliable Network Interface (NI) assisted routing strategy for NoC using PVS architecture. For this purpose, NoC system is divided into clusters. Each cluster is a group of two nodes comprising Processing Elements (PE), switches, links, etc. Each PE in a cluster can inject data to the network through a router, which is closer to the destination. This helps to reduce the network load by reducing the average hop count of the network. The proposed architecture can recover the PE disconnected from the network due to network level faults by allowing the PE to transmit and receive the packets through the other router in the cluster. 5×6 crossbar is used for the proposed architecture which requires one more 5×1 multiplexer without increasing the critical path delay of the router as compared to the 5×5 crossbar. The proposed router has been simulated for uniform, transpose and negative exponential distribution (NED) traffic patterns. The simulation results show the significant reduction in average packet latency at the expense of negligible area overhead.
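Choosing the injection router "closer to the destination", as the PVS abstract describes, reduces to a Manhattan hop-count comparison in a 2D mesh. The coordinate representation and tie-breaking below are illustrative assumptions:

```python
def hops(src, dst):
    """Manhattan hop count between two (x, y) mesh coordinates."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

def pick_injection_router(cluster_routers, dst):
    """Return the cluster router minimizing hop count to the destination node.

    min() keeps the first router on ties, a simple deterministic tie-break.
    """
    return min(cluster_routers, key=lambda r: hops(r, dst))
```

A two-node cluster whose PEs always inject through the nearer router lowers the average hop count, which is the mechanism behind the reported latency reduction; the same lookup also gives the fallback router when one router's link to its PE fails.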
APA, Harvard, Vancouver, ISO, and other styles
38

Fu, Wenwen, Tao Li, and Zhigang Sun. "FAS: Using FPGA to Accelerate and Secure SDN Software Switches." Security and Communication Networks 2018 (2018): 1–13. http://dx.doi.org/10.1155/2018/5650205.

Full text
Abstract:
Software-Defined Networking (SDN) promises the vision of more flexible and manageable networks but requires certain level of programmability in the data plane to accommodate different forwarding abstractions. SDN software switches running on commodity multicore platforms are programmable and are with low deployment cost. However, the performance of SDN software switches is not satisfactory due to the complex forwarding operations on packets. Moreover, this may hinder the performance of real-time security on software switch. In this paper, we analyze the forwarding procedure and identify the performance bottleneck of SDN software switches. An FPGA-based mechanism for accelerating and securing SDN switches, named FAS (FPGA-Accelerated SDN software switch), is proposed to take advantage of the reconfigurability and high-performance advantages of FPGA. FAS improves the performance as well as the capacity against malicious traffic attacks of SDN software switches by offloading some functional modules. We validate FAS on an FPGA-based network processing platform. Experiment results demonstrate that the forwarding rate of FAS can be 44% higher than the original SDN software switch. In addition, FAS provides new opportunity to enhance the security of SDN software switches by allowing the deployment of bump-in-the-wire security modules (such as packet detectors and filters) in FPGA.
APA, Harvard, Vancouver, ISO, and other styles
39

Jiang, Chao, Jinlin Wang, and Yang Li. "An Efficient Indexing Scheme for Network Traffic Collection and Retrieval System." Electronics 10, no. 2 (January 15, 2021): 191. http://dx.doi.org/10.3390/electronics10020191.

Full text
Abstract:
Historical network traffic retrieval, at both the packet and flow level, has been applied in many fields of network security, such as network traffic analysis and network forensics. To retrieve specific packets from a vast number of packet traces, building indexes over the query attributes is an effective solution. However, it brings challenges of storage consumption and construction time overhead for packet indexing. To address these challenges, we propose an efficient indexing scheme called IndexWM, based on the wavelet matrix data structure, for packet indexing. Moreover, we design a packet storage format based on the PcapNG format for our network traffic collection and retrieval system, which can speed up the extraction of index data from packet traces. Offline experiments on randomly generated network traffic and actual network traffic are performed to evaluate the performance of the proposed indexing scheme. We choose an open-source and widely used bitmap indexing scheme, FastBit, for comparison. Apart from the native bitmap compression method Word-Aligned Hybrid (WAH), we implement an efficient bitmap compression method, Scope-Extended COMPAX (SECOMPAX), in FastBit for performance evaluation. The comparison results show that our scheme outperforms the selected bitmap indexing schemes in terms of time consumption, storage consumption and retrieval efficiency.
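For contrast with the wavelet-matrix and compressed-bitmap schemes compared above, the baseline idea of a bitmap index over one packet attribute is tiny: one bit vector per attribute value, with query answering by bit scans. WAH and SECOMPAX add run-length compression on top of exactly this structure. A minimal uncompressed sketch:

```python
from collections import defaultdict

class BitmapIndex:
    """Uncompressed bitmap index: one bit vector (a Python int) per attribute value."""

    def __init__(self):
        self.bitmaps = defaultdict(int)  # value -> bit vector over record positions
        self.count = 0                   # number of records indexed so far

    def add(self, value):
        """Index the next record, whose attribute equals `value`."""
        self.bitmaps[value] |= 1 << self.count
        self.count += 1

    def lookup(self, value):
        """Return positions of all records whose attribute equals `value`."""
        bits = self.bitmaps.get(value, 0)
        return [i for i in range(self.count) if (bits >> i) & 1]
```

Multi-attribute queries then reduce to bitwise AND of the per-attribute bit vectors, which is why bitmap compression quality dominates both storage and retrieval cost.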
40

Beshley, Mykola, Natalia Kryvinska, Halyna Beshley, Oleg Yaremko, and Julia Pyrih. "Virtual Router Design and Modeling for Future Networks with QoS Guarantees." Electronics 10, no. 10 (May 11, 2021): 1139. http://dx.doi.org/10.3390/electronics10101139.

Full text
Abstract:
A virtual router model with static and dynamic resource reconfiguration for future internet networking was developed. This technique allows us to create efficient virtual devices with optimal parameters (queue length, queue overflow management discipline, number of serving devices, mode of serving devices) to ensure the required level of quality of service (QoS). An analytical model of a network device with virtual routers is proposed. By means of this mathematical representation, it is possible to determine the main parameters of the virtual queue system, which is based on the first in, first out (FIFO) algorithm, in order to analyze the efficiency of network resource utilization, as well as to determine the QoS parameters of flows, for a given intensity of packet arrivals at the input interface of the network element. In order to research the guaranteed level of QoS in future telecommunications networks, a simulation model of a packet router with resource virtualization was developed. This model will allow designers to choose the optimal parameters of network equipment for the organization of virtual routers, which, in contrast to the existing principle of service, will provide the necessary quality of service to end users in the future network. It is shown that standard static network device virtualization technology is not able to fully provide a guaranteed level of QoS to all flows present in the network by the criterion of minimum delay. An approach for dynamic reconfiguration of network device resources for virtual routers has been proposed, which allows more flexible resource management at certain points in time depending on the input load. Based on the results of the study, it is shown that dynamic virtualization of the network device provides a guaranteed level of QoS for all transmitted flows.
Thus, the obtained results confirm the feasibility of using dynamic reconfiguration of network device resources to improve the quality of service for end users.
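The FIFO queue analysis described above can be approximated with the classical M/M/1/K formulas; the sketch below (textbook queueing theory, not the authors' model) shows how the queue length K and service rate trade packet loss against mean delay for a given packet arrival intensity.

```python
# Analytic M/M/1/K sketch for a bounded FIFO queue: given arrival rate
# lam, service rate mu and buffer size K, compute the blocking (loss)
# probability and the mean sojourn time via Little's law. Standard
# queueing-theory formulas, used here only to illustrate the trade-off.

def mm1k(lam, mu, K):
    rho = lam / mu
    if rho == 1.0:
        p_block = 1.0 / (K + 1)          # uniform state distribution
        L = K / 2.0                      # mean number in system
    else:
        p_block = (1 - rho) * rho**K / (1 - rho**(K + 1))
        L = rho / (1 - rho) - (K + 1) * rho**(K + 1) / (1 - rho**(K + 1))
    thr = lam * (1 - p_block)            # carried load, packets/s
    W = L / thr                          # mean sojourn time (Little's law)
    return {"loss": p_block, "mean_delay": W}

# 800 pkt/s offered to a 1000 pkt/s server with room for 10 packets:
m = mm1k(800.0, 1000.0, 10)
print(f"loss={m['loss']:.4f}  delay={m['mean_delay']*1000:.2f} ms")
```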
41

Beley, O. I., and K. K. Kolesnyk. "Modeling of attack detection system based on hybridization of binary classifiers." Artificial Intelligence 25, no. 3 (October 10, 2020): 14–25. http://dx.doi.org/10.15407/jai2020.03.014.

Full text
Abstract:
The study considers the development of methods for detecting anomalous network connections based on the hybridization of computational intelligence methods. An analysis of approaches to detecting anomalies and abuses in computer networks is given, and within this analysis a classification of methods for detecting network attacks is proposed. The main results amount to the construction of multi-class models that increase the efficiency of the attack detection system and can be used to build systems for classifying network parameters during an attack. A model of an artificial immune system based on an evolutionary approach, an algorithm for genetic-competitive learning of the Kohonen network, and a method of hierarchical hybridization of binary classifiers for detecting anomalous network connections have been developed. The architecture of a distributed network attack detection system has been developed. The architecture is two-tier: the first level provides the primary analysis of individual packets and network connections using signature analysis, while the second level handles aggregated network data streams using adaptive classifiers. Signature analysis of network traffic was implemented with the Aho-Corasick and Boyer-Moore algorithms and their improved analogues, accelerated using OpenMP and CUDA technologies. The architecture is presented and the main points of operation of the network attack generator are shown. A system for generating network attacks has been developed; it consists of two components: an asynchronous transparent proxy server for TCP sessions and a frontend interface for the network attack generator. The results of the experiments confirmed that the functional and non-functional requirements, as well as the requirements for computing intelligent systems, are met by the developed attack detection system.
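The Aho-Corasick signature matching mentioned above can be sketched in a few dozen lines of plain Python (the paper's versions are OpenMP/CUDA-accelerated; this is only an illustrative single-threaded form).

```python
# Illustrative multi-pattern matcher in the Aho-Corasick style: build a
# trie over the signature set, add BFS failure links, then scan the
# input once, reporting every (start_index, pattern) occurrence.
from collections import deque

def build_automaton(patterns):
    goto = [{}]                  # trie edges per state
    fail = [0]                   # failure links
    out = [set()]                # patterns ending at each state
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    q = deque(goto[0].values())
    while q:                     # BFS sets failure links level by level
        s = q.popleft()
        for ch, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]   # inherit matches from the suffix state
    return goto, fail, out

def search(text, automaton):
    goto, fail, out = automaton
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            hits.append((i - len(pat) + 1, pat))
    return hits

ac = build_automaton(["attack", "tack", "ck"])
print(sorted(search("anattack", ac)))
```

All three overlapping signatures are reported in a single pass over the input, which is the property that makes the algorithm attractive for packet payload inspection.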
42

Kadochnikov, Alexey. "Experience in the development of a regional geoportal for the Krasnoyarsk Region." InterCarto. InterGIS 26, no. 1 (2020): 203–14. http://dx.doi.org/10.35595/2414-9179-2020-1-26-203-214.

Full text
Abstract:
The paper presents the experience of developing a subsystem of the state geographic information system Yenisei-GIS. Yenisei-GIS is a software package designed to solve the problems of creating, collecting, updating, processing and analyzing spatial data, in accordance with the requirements of the concept of creating a regional segment of the spatial data infrastructure of the Russian Federation. Yenisei-GIS is a technological platform of the Krasnoyarsk Region for integration projects using spatial data, the storage and publication of which are provided by the spatial data storage subsystem. When developing the Yenisei-GIS system, the problem of creating a spatial data bank for a geographically oriented decision-support information system at the level of a constituent subject of the federation was solved, using the example of the Krasnoyarsk Region. From a technological point of view, the solution to this problem is provided by a set of interconnected software elements, among which there are both properly configured packaged software and original in-house developments. From an organizational point of view, the solution is based on technological regulations for information interaction and on regulatory documents. The paper describes the web services and tools of the Yenisei-GIS system designed for interagency electronic interaction between different information systems. The system architecture is described, attention is paid to the creation of base maps, and examples of application systems developed on top of the Yenisei-GIS system are also considered. In Yenisei-GIS, base maps are the maps or satellite images that can be used as a background layer against which thematic maps are displayed.
43

Mursidin, Mursidin, Ulfiah Ulfiah, and Ening Ningsih. "RELEVANSI KURIKULUM FAKULTAS PSIKOLOGI UIN SUNAN GUNUNG DJATI BANDUNG DENGAN DUNIA KERJA." Psympathic : Jurnal Ilmiah Psikologi 1, no. 1 (February 26, 2018): 1–16. http://dx.doi.org/10.15575/psy.v1i1.2114.

Full text
Abstract:
Curriculum is one of the important components in education and plays a large role in determining the direction and goals an academic institution will pursue. A curriculum must therefore be designed according to the principles of curriculum development, one of which is the relevance of the curriculum to employment. Accordingly, a curriculum, including that of the Faculty of Psychology at UIN Sunan Gunung Djati (SGD) Bandung, can be evaluated by examining how relevant it is to the world of work, and alumni feedback is a natural way to trace this relevance. This research therefore examined the relevance of the Faculty of Psychology UIN SGD Bandung curriculum to employment from the alumni's point of view, the fit between the alumni's jobs and their competences and the psychology profession, and the alumni's hopes for curriculum change. The data were processed with both qualitative and quantitative approaches, using descriptive statistics (percentage calculations and level categories) and a contingency coefficient correlation test. The results show that most alumni consider the curriculum of the Faculty of Psychology UIN SGD Bandung still relevant to employment. This is reinforced by data showing that most alumni hold jobs that generally fall within the scope of the psychology profession, and that alumni found work relatively quickly after graduating, indicating that graduates of the faculty remain competitive and in demand among employers. The correlation test found no respondent characteristic with a significant relation to their view of the curriculum's relevance to employment;
in other words, respondent characteristics do not significantly shape that view. Although alumni view the curriculum as still relevant to employment, all respondents nevertheless hope for changes that would make the curriculum more applicative and more relevant to people's needs, especially for employment.
44

Kurzon, Ittai, Ran N. Nof, Michael Laporte, Hallel Lutzky, Andrey Polozov, Dov Zakosky, Haim Shulman, Ariel Goldenberg, Ben Tatham, and Yariv Hamiel. "The “TRUAA” Seismic Network: Upgrading the Israel Seismic Network—Toward National Earthquake Early Warning System." Seismological Research Letters 91, no. 6 (August 26, 2020): 3236–55. http://dx.doi.org/10.1785/0220200169.

Full text
Abstract:
Following the recommendations of an international committee (Allen et al., 2012), since October 2017 the Israeli Seismic Network has been undergoing significant upgrades, with 120 stations being added or upgraded throughout the country and two new datacenters being added. These enhancements are the backbone of the TRUAA project, assigned to the Geological Survey of Israel (GSI) by the Israeli Government to provide earthquake early warning (EEW) capabilities for the state of Israel. The GSI contracted Nanometrics (NMX), supported by Motorola Solutions Israel, to deliver these upgrades through a turnkey project, including detailed design, equipment supply, and deployment of the network and two datacenters. The TRUAA network was designed and tailored by the GSI, in collaboration with the NMX project team, specifically to achieve efficient and robust EEW. Several significant features comprise the pillars of this network: Coverage: station distribution has a high density (5–10 km spacing) along the two main fault systems, the Dead Sea Fault and the Carmel Fault System. Instrumentation: high-quality strong-motion accelerometers and broadband seismometers with modern three-channel and six-channel dataloggers sampling at 200 samples per second. Low-latency acquisition: data are encapsulated in small packets (<1 s), with primary routing via high-speed, high-capacity telemetry links (<1 s latency). Robustness: a high level of redundancy throughout the system design, comprising dual active-active redundant acquisition routes from each station, each utilizing multicast streaming over an IP security Virtual Private Network tunnel via independent high-bandwidth telemetry systems; two active-active, independent, geographically separate datacenters; and dual active-active redundant independent automatic seismic processing tool chains within each datacenter, implemented in a high-availability protected virtual environment.
At this time, both datacenters and over 100 stations are operational. The system is currently being commissioned, with initial early warning operation targeted for early 2021.
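The dual active-active acquisition routes imply that each datacenter receives two copies of every packet; a minimal way to merge such redundant feeds, sketched below under assumed (arrival_time, sequence, payload) tuples, is to keep whichever copy of each sequence number arrives first and drop the late duplicate.

```python
# Hedged sketch (not from the paper): merging two redundant low-latency
# packet streams by sequence number. Each path yields tuples sorted by
# arrival time; the first copy of each sequence number wins.
import heapq

def merge_redundant(path_a, path_b):
    seen = set()
    merged = []
    # heapq.merge interleaves the two time-sorted streams by arrival time.
    for t, seq, payload in heapq.merge(path_a, path_b):
        if seq not in seen:
            seen.add(seq)
            merged.append((seq, payload, t))
    return merged

a = [(0.10, 1, "s1"), (0.35, 2, "s2"), (0.60, 3, "s3")]
b = [(0.12, 1, "s1"), (0.20, 2, "s2")]   # path B delivered seq 2 first
out = merge_redundant(a, b)
print(out)
```

Whichever path is faster for a given packet determines the effective latency, which is the point of running both routes active-active.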
45

Nayyar, Anand, Rudra Rameshwar, and Piyush Kanti Dutta. "Special Issue on Recent Trends and Future of Fog and Edge Computing, Services and Enabling Technologies." Scalable Computing: Practice and Experience 20, no. 2 (May 2, 2019): iii—vi. http://dx.doi.org/10.12694/scpe.v20i2.1558.

Full text
Abstract:
Recent Trends and Future of Fog and Edge Computing, Services, and Enabling Technologies Cloud computing has been established as the most popular as well as suitable computing infrastructure providing on-demand, scalable and pay-as-you-go computing resources and services for the state-of-the-art ICT applications which generate a massive amount of data. Though Cloud is certainly the most fitting solution for most of the applications with respect to processing capability and storage, it may not be so for the real-time applications. The main problem with Cloud is the latency, as the Cloud data centres typically are very far from the data sources as well as the data consumers. This latency is acceptable for application domains such as enterprise or web applications, but not for the modern Internet of Things (IoT)-based pervasive and ubiquitous application domains such as autonomous vehicles, smart and pervasive healthcare, real-time traffic monitoring, unmanned aerial vehicles, smart building, smart city, smart manufacturing, cognitive IoT, and so on. The prerequisite for these types of applications is that the latency between data generation and consumption should be minimal. For that, the generated data need to be processed locally, instead of being sent to the Cloud. This approach is known as Edge computing, where the data processing is done at the network edge in edge devices such as set-top boxes, access points, routers, switches, base stations etc., which are typically located at the edge of the network. These devices are increasingly being incorporated with significant computing and storage capacity to cater to the need for local Big Data processing. The enabling of Edge computing can be attributed to emerging network technologies, such as 4G and cognitive radios, high-speed wireless networks, and energy-efficient sophisticated sensors. Different Edge computing architectures have been proposed (e.g., Fog computing, mobile edge computing (MEC), cloudlets, etc.).
All of these enable the IoT and sensor data to be processed closer to the data sources. But, among them, Fog computing, a Cisco initiative, has attracted the most attention from both academia and industry, and has emerged as a new computing-infrastructural paradigm in recent years. Though Fog computing has been proposed as a different computing architecture from Cloud, it is not meant to replace the Cloud. Rather, Fog computing extends the Cloud services to network edges for providing computation, networking, and storage services between end devices and data centres. Ideally, Fog nodes (edge devices) are supposed to pre-process the data, serve the needs of the associated applications preliminarily, and forward the data to the Cloud if the data need to be stored and analysed further. Fog computing enhances the benefits of smart devices operating not only at the network perimeter but also under cloud servers. Fog-enabled services can be deployed anywhere in the network, and with the provisioning and management of these services, there is huge potential to enhance intelligence within computing networks to realize context-awareness, fast response times, and network traffic offloading. Several possibilities of Fog computing are already established, for example sustainable smart cities, smart grid, smart logistics, environment monitoring, video surveillance, etc. To design and implement Fog computing systems, various challenges concerning system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, designing efficient algorithms and protocols, availability and reliability, security and privacy, energy-efficiency and sustainability, etc. need to be addressed.
Also, to make Fog compatible with Cloud, several factors such as Fog and Cloud system integration, service collaboration between Fog and Cloud, workload balance between Fog and Cloud, and so on need to be taken care of. It is our great privilege to present before you Volume 20, Issue 2 of Scalable Computing: Practice and Experience. We received 20 research papers, of which 14 were selected for publication. The aim of this special issue is to highlight recent trends and the future of Fog and Edge computing, services and enabling technologies. The special issue will present new dimensions of research to researchers and industry professionals with regard to Fog computing, Cloud computing and Edge computing. Sujata Dash et al. contributed a paper titled “Edge and Fog Computing in Healthcare- A Review” in which work on fog and mist computing in the area of health care informatics is analysed, classified and discussed. The review presented in this paper is primarily focused on three main aspects: the requirements of an IoT-based healthcare model and the description of the services provided by fog computing to address them; the architecture of an IoT-based health care system embedding a fog computing layer; and the implementation of fog computing layer services along with their performance and advantages. In addition to this, the researchers have highlighted the trade-offs when allocating computational tasks across network levels and also elaborated various challenges and security issues of fog and edge computing related to healthcare applications. Parminder Singh et al. in the paper titled “Triangulation Resource Provisioning for Web Applications in Cloud Computing: A Profit-Aware” proposed a novel triangulation resource provisioning (TRP) technique with a profit-aware surplus VM selection policy to ensure fair resource utilization in the hourly billing cycle while providing quality of service to end-users.
The proposed technique uses time series workload forecasting, CPU utilization and response time in the analysis phase. It is tested using the CloudSim simulator, and the R language is used to implement the prediction model on the ClarkNet weblog. The proposed approach is compared with two baseline approaches, i.e., cost-aware (LRM) and (ARMA). The response time, CPU utilization and predicted requests are applied in the analysis and planning phases for scaling decisions. The profit-aware surplus VM selection policy is used in the execution phase to select the appropriate VM for scale-down. The results show that the proposed model for web applications provides fair utilization of resources with minimum cost, thus providing maximum profit to the application provider and QoE to the end users. Akshi Kumar and Abhilasha Sharma in the paper titled “Ontology driven Social Big Data Analytics for Fog enabled Sentic-Social Governance” utilized a semantic knowledge model for investigating public opinion towards the adoption of fog-enabled services for governance and comprehending the significance of the two s-components (sentic and social) in the aforesaid structure that specifically visualize fog-enabled Sentic-Social Governance. The results using conventional TF-IDF (Term Frequency-Inverse Document Frequency) feature extraction are empirically compared with ontology-driven TF-IDF feature extraction to find the best opinion mining model with optimal accuracy. The results concluded that ontology-driven opinion mining for feature extraction in polarity classification outperforms the traditional TF-IDF method, validated over baseline supervised learning algorithms, with an average improvement of 7.3% in accuracy; a reduction of approximately 38% in the number of features has also been reported.
Avinash Kaur and Pooja Gupta in the paper titled “Hybrid Balanced Task Clustering Algorithm for Scientific workflows in Cloud Computing” proposed a novel hybrid balanced task clustering algorithm that uses the impact factor of workflows along with the structure of the workflow; using this technique, tasks can be considered for clustering either vertically or horizontally based on the value of the impact factor. The proposed algorithm was tested on WorkflowSim, an extension of CloudSim, and a DAG model of the workflow was executed. The algorithm was evaluated on two variables, execution time of the workflow and performance gain, and compared with four clustering methods: Horizontal Runtime Balancing (HRB), Horizontal Clustering (HC), Horizontal Distance Balancing (HDB) and Horizontal Impact Factor Balancing (HIFB); the results state that the proposed algorithm achieves an almost 5-10% better makespan time depending on the workflow used. Pijush Kanti Dutta Pramanik et al. in the paper titled “Green and Sustainable High-Performance Computing with Smartphone Crowd Computing: Benefits, Enablers and Challenges” presented a comprehensive statistical survey of the various commercial CPUs, GPUs and SoCs for smartphones, confirming the capability of SCC as an alternative to HPC. An exhaustive survey is presented on the present state and optimistic future of continuous improvement and research on different aspects of smartphone batteries and other alternative power sources, which will allow users to use their smartphones for SCC without worrying about the battery running out. Dhanapal and P. Nithyanandam in the paper titled “The Slow HTTP Distributed Denial of Service (DDOS) Attack Detection in Cloud” proposed a novel method to detect slow HTTP DDoS attacks in the cloud, which would otherwise consume all available server resources and make them unavailable to real users. The proposed method is implemented using the OpenStack cloud platform with the slowHTTPTest tool.
The results stated that the proposed technique detects the attack efficiently. Mandeep Kaur and Rajni Mohana in the paper titled “Static Load Balancing Technique for Geographically partitioned Public Cloud” proposed a novel approach focused on load balancing in a partitioned public cloud by combining centralized and decentralized approaches, assuming the presence of a fog layer. A load balancer entity is used for decentralized load balancing at the partitions, and a controller entity is used at the centralized level to balance the overall load across the partitions. Results are compared with the First Come First Serve (FCFS) and Shortest Job First (SJF) algorithms. In this work, the researchers compared the waiting time, finish time and actual run time of tasks under these algorithms. To reduce the number of unhandled jobs, a new load state is introduced which checks load beyond the conventional load states. The major objective of this approach is to reduce the need for runtime virtual machine migration and to reduce the wastage of resources that may occur due to predefined threshold values. Mukta and Neeraj Gupta in the paper titled “Analytical Available Bandwidth Estimation in Wireless Ad-Hoc Networks considering Mobility in 3-Dimensional Space” propose an analytical approach named Analytical Available Bandwidth Estimation Including Mobility (AABWM) to estimate ABW on a link. The major contributions of the proposed work are: i) it uses mathematical models based on renewal theory to calculate the collision probability of data packets, which makes the process simple and accurate; ii) it considers mobility in 3-D space to predict link failure and provides accurate admission control. To test the proposed technique, the researchers used the NS-2 simulator to compare AABWM with AODV, ABE, IAB and IBEM on throughput, packet loss ratio and data delivery. Results stated that AABWM performs better than the other approaches.
R. Sridharan and S. Domnic in the paper titled “Placement Strategy for Intercommunicating Tasks of an Elastic Request in Fog-Cloud Environment” proposed a novel heuristic IcAPER (Inter-communication Aware Placement for Elastic Requests) algorithm. The proposed algorithm uses a neighboring machine in the network for placement once the current resource is fully utilized by the application. The performance of the IcAPER algorithm is compared with the First Come First Serve (FCFS), Random and First Fit Decreasing (FFD) algorithms on the parameters of (a) resource utilization, (b) resource fragmentation and (c) the number of requests with intercommunicating tasks placed on the same PM, using the CloudSim simulator. Simulation results show that IcAPER maps 34% more tasks onto the same PM, increases resource utilization by 13% and decreases resource fragmentation by 37.8% when compared to the other algorithms. Velliangiri S. et al. in the paper titled “Trust factor based key distribution protocol in Hybrid Cloud Environment” proposed a novel security protocol comprising two stages: in the first stage, groups are created using trust factors and a key distribution security protocol is developed, which handles the communication process among the virtual machine nodes by creating several groups based on clustering and trust-factor methods; in the second stage, an ECC (Elliptic Curve Cryptography) based distribution security protocol is developed. The performance of the trust factor based key distribution protocol is compared with the existing ECC and Diffie-Hellman key exchange techniques. The results state that the proposed security protocol achieves more secure communication and better resource utilization than the ECC and Diffie-Hellman key exchange techniques in the hybrid cloud. Vivek Kumar Prasad et al.
in the paper titled “Influence of Monitoring: Fog and Edge Computing” discussed various monitoring techniques for edge and fog computing and their advantages, in addition to a case study based on a healthcare monitoring system. Avinash Kaur et al. elaborated a comprehensive view of existing data placement schemes proposed in the literature for cloud computing, classified them based on their capabilities and objectives, and compared the schemes. Parminder Singh et al. presented a comprehensive review of auto-scaling techniques for web applications in cloud computing. A complete taxonomy of the reviewed articles is given over varied parameters such as auto-scaling approach, resources, monitoring tool, experiment, workload and metric, etc. Simar Preet Singh et al. in the paper titled “Dynamic Task Scheduling using Balanced VM Allocation Policy for Fog Computing Platform” proposed a novel scheme to improve user contentment by improving the cost-to-operation-length ratio, reducing customer churn, and boosting operational revenue. The proposed scheme reduces the queue size by effectively allocating the resources, which results in quicker completion of user workflows. The proposed method is evaluated against a state-of-the-art scheme with a non-power-aware task scheduling mechanism. The results were analyzed using the parameters energy, SLA infringement and workflow execution delay. The performance of the proposed scheme was analyzed in various experiments particularly designed to examine various aspects of workflow processing on the given fog resources. The LRR (35.85 kWh) model has been found most efficient on the basis of average energy consumption in comparison to the LR (34.86 kWh), THR (41.97 kWh), MAD (45.73 kWh) and IQR (47.87 kWh) models. The LRR model has also been observed to be the leader when compared on the basis of the number of VM migrations.
The LRR (2520 VMs) has been observed as the best contender on the basis of the mean number of VM migrations in comparison with LR (2555 VMs), THR (4769 VMs), MAD (5138 VMs) and IQR (5352 VMs).
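Several of the papers above use FCFS and SJF as scheduling baselines; for a batch of jobs arriving together, their mean waiting times can be compared in a few lines (a generic textbook sketch, not any paper's implementation).

```python
# Mean waiting time for a batch of jobs, all arriving at t=0, under
# FCFS (run in submission order) vs. SJF (run shortest burst first).
# SJF minimizes mean waiting time for such a batch.

def mean_wait(burst_times, discipline="FCFS"):
    order = sorted(burst_times) if discipline == "SJF" else list(burst_times)
    elapsed, total = 0.0, 0.0
    for b in order:
        total += elapsed     # this job waited for all jobs before it
        elapsed += b
    return total / len(order)

jobs = [8, 4, 9, 5]          # burst times in arbitrary units
print("FCFS:", mean_wait(jobs, "FCFS"))   # waits 0, 8, 12, 21
print("SJF: ", mean_wait(jobs, "SJF"))    # waits 0, 4, 9, 17
```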
46

Dvornikov, S. V., A. V. Pshenichnikov, S. S. Dvornikov, V. V. Borisov, and G. S. Potapov. "Ultra-wideband ultra-short pulse communication system." Radio industry (Russia) 31, no. 1 (April 7, 2021): 16–27. http://dx.doi.org/10.21778/2413-9599-2021-31-1-16-27.

Full text
Abstract:
Problem statement. The development and design of radio communication systems with enhanced structural and energy stealth properties are of the greatest interest in modern radio engineering. One method of implementing such radio systems is the use of ultra-wideband signals. Despite advances in radio engineering theory, the development and design of ultra-wideband radio systems are at an initial stage. The results obtained in this subject area are not systematized, which limits their practical application and indicates the relevance of the chosen research problem. The study's objective is to formalize an approach to the design and performance assessment of ultra-wideband radio systems based on statistical radio engineering methods, and to bring the obtained theoretical solutions to the level of practical implementation in a radio station prototype. Results. The analysis of available theoretical and practical solutions in the subject area of ultra-wideband radio systems is carried out. The principles of development and evaluation, taking into account the characteristics of radio equipment elements, are justified. A model of an ultra-wideband radio pulse is presented. The requirements of guiding documents are summarized, on the basis of which the requirements for radio equipment are clarified. The criteria for the formation and processing of ultra-wideband signals are determined. An approach to controlling the parameters of the applied signals is considered. A criterion for increasing the efficiency of radio systems is justified. An approach to calculating the size of the pulse packets defining the signal symbols is developed, and analytical calculations following the developed approach are presented. Practical implications. The authors developed a prototype of an ultra-wideband radio station on the basis of LLC Scientific Production Enterprise "New Telecommunications Technologies".
The obtained practical solutions can be used in the practical implementation of ultra-wideband radio communication systems.
47

Stefanović, Aleksandar, Emina Čolak, Gordana Stanojević, and Ljubinka Nikolić. "Effects of introducing Type&Screen system on rational use of transfusions." Hospital Pharmacology - International Multidisciplinary Journal 8, no. 2 (2021): 1051–57. http://dx.doi.org/10.5937/hpimj2102051s.

Full text
Abstract:
Introduction: The number of blood donors at the global level has decreased, primarily due to ethical and age-related changes in the structure of the planet's population. In addition, there is over-ordering of blood for surgical patients. Accordingly, there is a need to rationalize testing, i.e. to reduce the number of cross-matchings and the use of blood. A type and screen (T&S) upon admission is sufficient for most patients. Determination of the ABO blood group and Rh type, together with a screen for clinically significant alloantibodies, is denoted as type and screen (T&S). Aim: Comparison of the pharmaco-economic effect, using transfusion indices, on the number of performed cross-matches and the amount of packed red blood cells issued. Material and Methods: The authors present a comparison between the year 2010, before the introduction of the type and screen (T&S) system, and the year 2019, when the T&S system and a restrictive policy in transfusion practice were introduced. Data for 2010 were collected from the written transfusion protocols of the clinic, and data for 2019 were obtained from the hospital information system (Heliant) and written transfusion protocols. The difference between the two groups of data was examined with the Chi-square test and Fisher's exact test, with the reliability level set at p<0.05. Results: With the introduction of the T&S system, the number of cross-matches was reduced from 0.63 to 0.49 and the number of blood units was reduced from 0.21 to 0.11 per hospitalized patient, which at the level of one clinic represents a significant pharmacoeconomic contribution of approximately 50%. In our study, after introducing T&S into the ordering of blood, the indices (CTR, %T, TI) failed to improve. Despite the unsatisfactory transfusion indices, the application of a restrictive indication policy in accordance with national and international guidelines led to a highly significant reduction in total blood consumption, from 3243 to 1867 blood units.
The BOQ, as an overall assessment of the results after the introduction of the T&S procedure, indicated improvement. Conclusions: The introduction of validation in blood transfusion indirectly draws the attention of prescribing physicians to the significance of blood therapy. The effects of introducing the T&S method and a restrictive transfusion policy are savings in blood consumption, a decreased number of patients tested, and a significant reduction in the number of blood units used and cross-matches performed, despite the increased number of patients.
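The transfusion indices named in the abstract (CTR, %T, TI) are simple ratios. As an aside, a minimal sketch of how they are commonly computed, using the standard Boral-Henry definitions; the function name and the counts below are illustrative, not taken from the study:

```python
def transfusion_indices(units_crossmatched, units_transfused,
                        patients_crossmatched, patients_transfused):
    """Standard blood-ordering efficiency indices (Boral-Henry criteria)."""
    ctr = units_crossmatched / units_transfused                   # desirable: < 2.5
    pct_t = 100.0 * patients_transfused / patients_crossmatched   # desirable: > 30%
    ti = units_transfused / patients_crossmatched                 # desirable: > 0.5
    return ctr, pct_t, ti

# Illustrative (hypothetical) counts for one clinic
ctr, pct_t, ti = transfusion_indices(
    units_crossmatched=630, units_transfused=210,
    patients_crossmatched=400, patients_transfused=150)
print(f"CTR={ctr:.2f}, %T={pct_t:.1f}, TI={ti:.3f}")
```

By the same arithmetic, the per-patient drop from 0.21 to 0.11 units reported in the abstract corresponds to a saving of (0.21 - 0.11) / 0.21, roughly 48%, consistent with the "approximately 50%" figure.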
APA, Harvard, Vancouver, ISO, and other styles
48

Rozhon, Jan, Filip Rezac, Jakub Jalowiczor, and Ladislav Behan. "Augmenting Speech Quality Estimation in Software-Defined Networking Using Machine Learning Algorithms." Sensors 21, no. 10 (May 17, 2021): 3477. http://dx.doi.org/10.3390/s21103477.

Full text
Abstract:
With the increased number of Software-Defined Networking (SDN) installations, the data centers of large service providers are becoming more and more agile in terms of network performance efficiency and flexibility. While SDN is an active and obvious trend in modern data center design, the implications and possibilities it carries for effective and efficient network management are not yet fully explored and utilized. With most modern Internet traffic consisting of multimedia services and media-rich content sharing, the quality of multimedia communications is at the center of attention of many companies and research groups. Since SDN-enabled switches have an inherent feature of monitoring flow statistics in terms of packets and bytes transmitted/lost, these devices can be utilized to monitor the essential statistics of multimedia communications, allowing the provider to act in case the network fails to deliver the required service quality. The internal packet processing in the SDN switch enables the SDN controller to fetch the statistical information of a particular packet flow using the PacketIn and Multipart messages. This information, if preprocessed properly, can be used to estimate a higher-layer interpretation of the link quality, thus allowing the provided quality of service (QoS) to be related to the quality of user experience (QoE). This article discusses an experimental setup that can be used to estimate the quality of speech communication based on the information provided by the SDN controller. To achieve higher accuracy of the result, latency characteristics are added based on the injection of dummy packets into the packet stream and/or RTCP packet analysis.
The results of the experiment show that this innovative approach calculates the statistics of each individual RTP stream, yielding a method for dynamic measurement of speech quality: when quality decreases, it is possible to respond quickly by changing routing at the network level for each individual call. To improve the quality of call measurements, a Convolutional Neural Network (CNN) was also implemented. This model is based on two standard approaches to measuring speech quality: PESQ and the E-model. However, unlike PESQ/POLQA, the CNN-based model can take delay into account, and unlike the E-model, the resulting accuracy is much higher.
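The E-model the abstract refers to maps network-level QoS metrics (one-way delay and packet loss) onto an estimated mean opinion score. As an aside, a minimal sketch of the simplified ITU-T G.107 R-factor computation for a G.711 call; the constants used (default R of 93.2, Bpl = 4.3 for random loss, the ~177 ms delay knee) are commonly cited values, and the function names are my own, not from the article:

```python
def r_factor(delay_ms: float, loss_pct: float) -> float:
    """Simplified E-model R-factor for G.711 (no jitter-buffer modelling)."""
    # Delay impairment Id: linear term, plus an extra penalty above ~177 ms
    i_d = 0.024 * delay_ms
    if delay_ms > 177.3:
        i_d += 0.11 * (delay_ms - 177.3)
    # Effective equipment impairment Ie-eff for G.711 (Ie = 0, Bpl = 4.3)
    i_e_eff = 95.0 * loss_pct / (loss_pct + 4.3)
    return 93.2 - i_d - i_e_eff

def mos_from_r(r: float) -> float:
    """ITU-T G.107 mapping from R-factor to estimated MOS."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

print(round(mos_from_r(r_factor(20, 0.0)), 2))   # clean call: ~4.4
print(round(mos_from_r(r_factor(250, 5.0)), 2))  # degraded call: well below 3
```

In an SDN setting, the loss percentage can be derived from the per-flow packet counters the switch already exports, while the delay term is what the dummy-packet injection and RTCP analysis described above supply.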
APA, Harvard, Vancouver, ISO, and other styles
49

Haroun, EL Mahdi Ahmed, Tisser Khalid, Abdelazim Mohd Altawil, Gammaa A. M. Osman, and Eiman Elrashid Diab. "Potentiality of municipal sludge for biological gas production at Soba Station South of Khartoum (Sudan)." World Journal of Biology and Biotechnology 5, no. 2 (August 15, 2020): 11. http://dx.doi.org/10.33865/wjb.005.02.0300.

Full text
Abstract:
Biogas production is considered one of the most promising sources of renewable energy in Sudan, and anaerobic digestion is an efficient technique for producing biogas. The process is also a reliable method for the treatment of municipal wastes, and the digested discharge can be utilized as a soil conditioner to improve productivity. This research work examines the option of using domestic sludge from the wastewater treatment plant at the Soba municipal station (south of Khartoum, Sudan) to produce biological gas (biogas). A laboratory investigation was carried out using a five-liter bioreactor to generate biogas over 30 days. The total volume of gas produced was 270.25 Nml, with a yield of 20 Nml of biogas/mg of COD removed. Reductions in chemical oxygen demand, biological oxygen demand, and total solids were 89%, 91%, and 88.23%, respectively. Microbial activity declined from 1.8x107 (before starting the digestion process) to 1.1x105 germs/mL (after completion of the 30 days of digestion). This study offered a significant energy opportunity, with estimated power production of 35 kWh. Keywords: sludge, municipal plant, organic material, anaerobic process, breakdown, biological gas potential. INTRODUCTION: The growth of urban industry around the world has given rise to the production of huge amounts of effluents with abundant organic materials, which, if handled properly, can become a substantial source of energy. Although industrialization has undesirable environmental effects, the impact can be diminished and energy can be tapped by means of anaerobic digestion of the wastewater (Deshpande et al., 2012). A biological wastewater treatment plant (WWTP) is a station for the removal of mainly organic pollution from wastewaters. Organic materials are partly transformed into sludge that, with the use of up-to-date technologies, represents an important energy source.
Chemical, biological, and physical technologies applied throughout the handling of wastewater produce sludge as a by-product. In recent day-to-day totals, dry solids range from 60–90 g per population equivalent, i.e., the EU produces 10 million tons of dry sludge per year (Bodík et al., 2011). Sludge disposal (use as fertilizer, incineration, and landfills) is increasingly constrained because of restrictive environmental legislation (Fytili and Zabaniotou, 2008). The energy present in sludge can be recovered through anaerobic digestion. The anaerobic process is considered the most appropriate choice for the handling of high-strength organic effluents. This process has been upgraded significantly in the last few years with the application of differently configured high-rate treatment processes, particularly for dealing with industrial discharges (Bolzonella et al., 2005). The anaerobic process leads to the creation of biological gas with a high methane content, which can be recovered and used as an energy source, making it a great energy saver. The volume of gas produced during the breakdown process can vary over a wide range, from 0.5–0.9 m3 kg–1 VS degraded (for waste activated sludge) (Bolzonella et al., 2005). This range depends on the concentration of volatile solids in the sludge feed and the biological activity in the anaerobic breakdown process. The residue after the digestion process is stable, odorless, and free from the main portion of pathogenic microorganisms, and can finally be used as organic nourishment for different applications in agriculture. Digestion of sludge thus yields renewable energy that is cheap, readily obtainable, and non-polluting.
Sustainable development considers the production of biogas environmentally friendly and an economic key (Poh and Chong, 2009). OBJECTIVES: In Sudan, huge tonnes of sewage sludge from domestic sewage water accumulate daily in the lagoon of the Soba sewage treatment plant; this work was therefore carried out for energy production and treatment of the sludge, which constitutes a plentiful waste that has never received any sort of handling since a few years after the station was established. MATERIALS AND METHODS: Experimental apparatus: Anaerobic breakdown was carried out in a five-liter fermenter. The fermenter was maintained at 35oC in a thermostatic bath and stirred regularly. A U-shaped glass tube was connected to the fermenter, allowing measurement of the produced biogas volume and pressure. The water displacement technique was used to determine the volume of produced biological gas (biogas) at the beginning of each sampling. The combustibility of the biogas was tested by connecting one end of the tube to a gas collection and storage device (balloon) and the other end to a Bunsen burner. To reduce carbon dioxide (CO2) to maximum dissolution in the tube, the liquid must be a salty saturated acid solution (5% citric acid, 20% NaCl, pH = 2) (Connaughton et al., 2006). Substrate: About 5 L of sludge-containing culture medium was taken from the lowest part of the first settling tank at the Soba station. The moisture content of the initial substrate was 35%. The collected sample was preserved at 4oC prior to loading the biological reactor (Tomei et al., 2008). Table 1 shows the sludge features in the reactor, with a loading rate of 16 g TS/L (Connaughton et al., 2006; Tomei et al., 2008). Analytical methods: The pH was monitored using a HANNA HI 8314 model pH meter. An assay was used for the determination of alkalinity and volatile fatty acids (Kalloum et al., 2011). The standard method of analysis was used to determine the chemical oxygen demand (COD) (Raposo et al., 2009).
The titrimetric method was used for analyzing volatile fatty acids (VFA). An alkalinity assay was used for the determination of total alkalinity (TA). The Oxitop assay was used for measuring the biological oxygen demand. The ignition method was used for measuring volatile solids (VS) by weight loss of a dry sample at 550oC in the furnace, and total solids (TS) were determined by drying to constant weight at 104oC (Monou et al., 2009). The water displacement method was used to determine the total volume of biological gas produced (Moletta, 2005). Microbial species and activities were determined by standard microbial assay. Sample analysis was performed in three replicates, and the outcomes were the average of these replicates. The start-up of the experiments continued until a gas bubble was detected. RESULTS AND DISCUSSION: Measurement of pH: Figure 2 exhibits the pH trend during the 30 days, with a drop from 7.0 to 6.0 during the first five days; this was mainly because of the breakdown of organic materials and the development of VFA. Later, an increasing trend in pH up to 6.98 was noticed over the next week, and the pH then stabilized around this level until the completion of the 30-day breakdown period. These outcomes were also reported by other researchers (Raposo et al., 2008). Measurement of VFA: The development of VFA throughout the 30 days is depicted in figure 3, with an increase in volatile fatty acids up to 1400 milliequivalents per liter (meq/L) in the first ten days. This pattern of volatile fatty acid production is typical of reports identifying hydrolysis in the acidogenesis stage (Parawira et al., 2006). The decline in volatile fatty acids after the tenth day was owing to consumption by bacteria, which corresponds to the acetogenesis stage. Total alkalinity (TA): During the first ten days, we observed a rise in volatile fatty acid content followed by a drop in pH over the same period (figures 4 and 5).
In response to these changes, the total alkalinity in the medium increased, re-establishing alkaline conditions at the onset of the methanogenic stage (figure 4). Throughout the digestion period, the VFA/TA ratio remained at or below 0.6±0.1, as described in figure 6. These ratios indicate the feasibility of the procedure despite the substantial production of volatile fatty acids (Chen and Huang, 2006; Nordberg et al., 2007); the anaerobic digestion process may otherwise be hindered by volatile fatty acid production. Biogas production: Pressure measurement and biogas volume were used to monitor biogas production. Figure 7 shows the change in biogas pressure throughout the digestion period. Biogas with a minimum methane content of 40% was obtained (Bougrier et al., 2005; Lefebvre et al., 2006). The total volume of biological gas produced was 270.25 Nml. The yield of biological gas was 20.25 Nml/mg COD removed, which is in the range reported by other researchers (Tomei et al., 2008). Biogas production can be calculated from the following formula (Álvarez et al., 2006): Biogas production = (Total quantity of biogas produced)/(Total solids). COD and BOD removal: Chemical oxygen demand (COD) and biological oxygen demand (BOD) showed significant reductions of 89% and 91%, respectively (figures 8 and 9). These reductions in contaminants show that anaerobic digestion was an effective technique for the removal of organic pollution. Other researchers have reported the same results (Bolzonella et al., 2005; Álvarez et al., 2006; Wang et al., 2006). Another criterion demonstrating the removal of organic pollutants was the reduction of total solids (TS), where the drop approached 88.23% (figure 10). Several reports have approached the same drop (Hutnan et al., 2006; Linke, 2006; Raposo et al., 2009).
Therefore it can be concluded that anaerobic digestion necessarily reduces organic pollutant levels through the transformation of organic substances into biogas, which accordingly leads to the drop in chemical oxygen demand (COD). This is illustrated in figure 11 by the comparison of the two measures during the anaerobic digestion process: the drop in chemical oxygen demand (COD) is essentially followed by the drop in total solids (TS). Microbial activity: Figure 11 shows the microbial variation during anaerobic digestion. The total microflora (total germs) declined from 1.8x107 (before starting the digestion process) to 1.1x105 germs/mL (after completion of the 30 days of digestion). Moreover, figure 12 clearly shows what occurred during the digestion process in the reactor: microbial species such as streptococci and Escherichia coli vanished after the 30 days. Several reports explain that there is a relationship between the physicochemical and biological parameters of the microflora and the total solids (TS). Figure 13 clearly depicts this relationship: the drop in microflora accompanies the reduction in total solids. This means that the decline in the residual mass of organic materials at the end of digestion was the outcome of the transformation of organic materials into biological gas together with the reduction in microorganisms. This result demonstrates that the anaerobic digestion process is a good process for decontamination (Deng et al., 2006; Perez et al., 2006; Davidsson et al., 2007). CONCLUSION: The sludge of the Soba municipal station studied in this research paper proved effective for biological gas (biogas) production. During the first five days, the breakdown of organic materials and the formation of volatile acids started.
Volatile fatty acids increased up to 1400 milliequivalents per liter (meq/L) in the first ten days, then started to decline after the tenth day, owing to consumption by bacteria, which corresponds to the acetogenesis stage. Biogas production lasted until the 21st day and then decreased until the last day (day 30), owing to instability of the fermentation culture medium, which became completely depleted. COD and BOD showed significant reductions of 89% and 91%, respectively. Another criterion demonstrating the removal of organic pollutants was the reduction of total solids (TS), where the reduction rate approached 88.23%. The total volume of biological gas produced was 270.25 Nml. The yield of biological gas was 20.25 Nml/mg COD removed, which is in the range reported by other researchers. The total microflora (total germs) declined from 1.8x107 (before starting the digestion process) to 1.1x105 germs/mL (after completion of the 30 days of digestion). The study demonstrated that the anaerobic digestion process is a good process for decontamination and will be useful for bioremediation in the marine environment and the petroleum industry. ACKNOWLEDGMENTS: The authors wish to express their appreciation to the Soba treatment plant for their financial support of this research. REFERENCES:
Álvarez, J., I. Ruiz, M. Gómez, J. Presas and M. Soto, 2006. Start-up alternatives and performance of an UASB pilot plant treating diluted municipal wastewater at low temperature. Bioresource Technology, 97(14): 1640-1649.
Bodík, I., S. Sedláček, M. Kubaská and M. Hutňan, 2011. Biogas production in municipal wastewater treatment plants–current status in EU with a focus on the Slovak Republic. Chemical and Biochemical Engineering Quarterly, 25(3): 335-340.
Bolzonella, D., P. Pavan, P. Battistoni and F. Cecchi, 2005.
Mesophilic anaerobic digestion of waste activated sludge: Influence of the solid retention time in the wastewater treatment process. Process Biochemistry, 40(3-4): 1453-1460.
Bougrier, C., H. Carrere and J. Delgenes, 2005. Solubilisation of waste-activated sludge by ultrasonic treatment. Chemical Engineering Journal, 106(2): 163-169.
Chen, T.-H. and J.-L. Huang, 2006. Anaerobic treatment of poultry mortality in a temperature-phased leachbed–UASB system. Bioresource Technology, 97(12): 1398-1410.
Connaughton, S., G. Collins and V. O'Flaherty, 2006. Psychrophilic and mesophilic anaerobic digestion of brewery effluent: A comparative study. Water Research, 40(13): 2503-2510.
Davidsson, Å., C. Gruvberger, T. H. Christensen, T. L. Hansen and J. la Cour Jansen, 2007. Methane yield in source-sorted organic fraction of municipal solid waste. Waste Management, 27(3): 406-414.
Deng, L.-W., P. Zheng and Z.-A. Chen, 2006. Anaerobic digestion and post-treatment of swine wastewater using IC–SBR process with bypass of raw wastewater. Process Biochemistry, 41(4): 965-969.
Deshpande, D., P. Patil and S. Anekar, 2012. Biomethanation of dairy waste. Research Journal of Chemical Sciences, 2(4): 35-39.
Fytili, D. and A. Zabaniotou, 2008. Utilization of sewage sludge in EU application of old and new methods—a review. Renewable and Sustainable Energy Reviews, 12(1): 116-140.
Hutnan, M., M. Drtil and A. Kalina, 2006. Anaerobic stabilisation of sludge produced during municipal wastewater treatment by electrocoagulation. Journal of Hazardous Materials, 131(1-3): 163-169.
Kalloum, S., H. Bouabdessalem, A. Touzi, A. Iddou and M. Ouali, 2011. Biogas production from the sludge of the municipal wastewater treatment plant of Adrar city (southwest of Algeria). Biomass and Bioenergy, 35(7): 2554-2560.
Lefebvre, O., N. Vasudevan, M. Torrijos, K. Thanasekaran and R. Moletta, 2006. Anaerobic digestion of tannery soak liquor with an aerobic post-treatment. Water Research, 40(7): 1492-1500.
Linke, B., 2006. Kinetic study of thermophilic anaerobic digestion of solid wastes from potato processing. Biomass and Bioenergy, 30(10): 892-896.
Moletta, M., 2005. Characterization of the airborne microbial diversity of biogas. PhD dissertation, Université Montpellier 2.
Monou, M., N. Kythreotou, D. Fatta and S. Smith, 2009. Rapid screening procedure to optimise the anaerobic codigestion of industrial biowastes and agricultural livestock wastes in Cyprus. Waste Management, 29(2): 712-720.
Nordberg, Å., Å. Jarvis, B. Stenberg, B. Mathisen and B. H. Svensson, 2007. Anaerobic digestion of alfalfa silage with recirculation of process liquid. Bioresource Technology, 98(1): 104-111.
Parawira, W., M. Murto, R. Zvauya and B. Mattiasson, 2006. Comparative performance of a UASB reactor and an anaerobic packed-bed reactor when treating potato waste leachate. Renewable Energy, 31(6): 893-903.
Perez, M., R. Rodriguez-Cano, L. Romero and D. Sales, 2006. Anaerobic thermophilic digestion of cutting oil wastewater: Effect of co-substrate. Biochemical Engineering Journal, 29(3): 250-257.
Poh, P. and M. Chong, 2009. Development of anaerobic digestion methods for palm oil mill effluent (POME) treatment. Bioresource Technology, 100(1): 1-9.
Raposo, F., R. Borja, M. Martín, A. Martín, M. De la Rubia and B. Rincón, 2009. Influence of inoculum–substrate ratio on the anaerobic digestion of sunflower oil cake in batch mode: Process stability and kinetic evaluation. Chemical Engineering Journal, 149(1-3): 70-77.
Raposo, F., R. Borja, B. Rincon and A. Jimenez, 2008. Assessment of process control parameters in the biochemical methane potential of sunflower oil cake. Biomass and Bioenergy, 32(12): 1235-1244.
Tomei, M., C. Braguglia and G. Mininni, 2008. Anaerobic degradation kinetics of particulate organic matter in untreated and sonicated sewage sludge: Role of the inoculum. Bioresource Technology, 99(14): 6119-6126.
Wang, J., D. Shen and Y. Xu, 2006. Effect of acidification percentage and volatile organic acids on the anaerobic biological process in simulated landfill bioreactors. Process Biochemistry, 41(7): 1677-1681.
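The yield and removal figures in the abstract above follow from two simple ratios. As an aside, a minimal sketch of that arithmetic; the function names are my own, and the COD-removed mass below is an illustrative value chosen to reproduce the reported ~20.25 Nml/mg yield, not a measurement from the study:

```python
def removal_efficiency(initial: float, final: float) -> float:
    """Percentage removal of a pollutant (COD, BOD, or TS)."""
    return 100.0 * (initial - final) / initial

def biogas_yield(volume_nml: float, cod_removed_mg: float) -> float:
    """Biogas yield in Nml per mg of COD removed."""
    return volume_nml / cod_removed_mg

# 270.25 Nml of total biogas over ~13.35 mg of COD removed gives a
# yield close to the reported 20.25 Nml/mg.
print(round(biogas_yield(270.25, 13.35), 1))
# A drop from an initial concentration of 100 to a final one of 11
# corresponds to 89% removal, the COD figure the abstract reports.
print(removal_efficiency(100.0, 11.0))
```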
APA, Harvard, Vancouver, ISO, and other styles
50

Alpian, Yayan, and Sri Wulan Anggraeni. "PELATIHAN PENGOLAHAN SAMPAH SEBAGAI KARYA SENI APLIKATIF DI SDN KARANGJAYA III KECAMATAN PEDES KARAWANG." Jurnal Pengabdian Masyarakat (JPM-IKP) 1, no. 01 (October 8, 2018). http://dx.doi.org/10.31326/jmp-ikp.v1i01.77.

Full text
Abstract:
Abstract: The purpose of this community service activity was to provide training on waste processing, to increase understanding of the types of waste, and to provide skills training in turning waste into valuable works of art. The target of this activity was the 35 students of SDN Karangjaya III, Pedes District, Karawang Regency. The method used was the lecture method, with a presentation of material on waste and its types followed by discussion, while the waste management problem was addressed by providing training in making works from secondhand goods. The activity was organized as a workshop. After being trained, the participants were guided to apply the training results in order to improve their ability to manage waste into applicative art or appropriate goods. The community service activity ran smoothly and was attended by 35 students. The trainees were enthusiastic about the training materials provided; from the beginning to the end of the event, all participants followed along well. Based on the results of the activity, the level of understanding of the participants can be characterized as follows: 85% of the participants understood the concept of waste processing as applicative artwork. Keywords: Waste Processing, Applied Artwork
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography