
Journal articles on the topic 'Round robin novel'


Consult the top 50 journal articles for your research on the topic 'Round robin novel.'


1

Shafi, Uferah, Munam Shah, Abdul Wahid, Kamran Abbasi, Qaisar Javaid, Muhammad Asghar, and Muhammad Haider. "A Novel Amended Dynamic Round Robin Scheduling Algorithm for Timeshared Systems." International Arab Journal of Information Technology 17, no. 1 (January 1, 2019): 90–98. http://dx.doi.org/10.34028/iajit/17/1/11.

Abstract:
The Central Processing Unit (CPU) is the most significant resource in a computer system, and its scheduling is one of the main functions of an operating system. In timeshared systems, Round Robin (RR) is the most widely used scheduling algorithm. The efficiency of RR is governed by the quantum time: if the quantum is small, the overhead of frequent context switches increases, and if it is large, the algorithm behaves like First Come First Served (FCFS), with a greater risk of starvation. In this paper, a new CPU scheduling algorithm named Amended Dynamic Round Robin (ADRR), based on CPU burst time, is proposed. The primary goal of ADRR is to improve the conventional RR scheduling algorithm through a dynamic quantum time, which is cyclically adjusted based on CPU burst time. We evaluate and compare the performance of ADRR on parameters such as waiting time and turnaround time. Numerical analysis and simulation in MATLAB reveal that ADRR outperforms well-known algorithms such as conventional Round Robin, Improved Round Robin (IRR), Optimum Multilevel Dynamic Round Robin (OMDRR), and Priority Based Round Robin (PRR).
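The abstract does not spell out ADRR's exact quantum-update rule, but the general dynamic-quantum idea it describes can be sketched in a few lines. In this illustrative model, the function name and the choice of the mean remaining burst time as the quantum are assumptions, not the published algorithm:

```python
def dynamic_rr(bursts):
    """Round Robin with a quantum recomputed on every pass.

    bursts: dict mapping process name -> CPU burst time.
    Returns (completion order, number of dispatches).
    Illustrative only: the quantum rule (mean of remaining bursts)
    is an assumption, not the published ADRR rule.
    """
    remaining = dict(bursts)
    order, dispatches = [], 0
    while remaining:
        # Dynamic quantum: mean of the burst times still outstanding.
        quantum = sum(remaining.values()) / len(remaining)
        for p in list(remaining):
            run = min(quantum, remaining[p])
            remaining[p] -= run
            dispatches += 1
            if remaining[p] <= 1e-9:
                del remaining[p]
                order.append(p)
    return order, dispatches
```

A workload-aware quantum lets short jobs finish within one pass through the ready queue, which is the mechanism by which dynamic-quantum variants cut context switches relative to a small fixed quantum.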
2

Rentzsch, Katrin, and Jochen E. Gebauer. "On the Popularity of Agentic and Communal Narcissists: The Tit-for-Tat Hypothesis." Personality and Social Psychology Bulletin 45, no. 9 (February 7, 2019): 1365–77. http://dx.doi.org/10.1177/0146167218824359.

Abstract:
Among well-acquainted people, those high on agentic narcissism are less popular than those low on agentic narcissism. That popularity difference figures prominently in the narcissism literature. But why are agentic narcissists less popular? We propose a novel answer: the tit-for-tat hypothesis. It states that agentic narcissists like other people less than non-narcissists do and that others reciprocate by liking agentic narcissists less in return. We also examine whether the tit-for-tat hypothesis generalizes to communal narcissism. A large round-robin study (N = 474) assessed agentic and communal narcissism (Wave 1) and included two round-robin waves (Waves 2-3). The round-robin waves assessed participants' liking for all round-robin group members (2,488 informant reports). The tit-for-tat hypothesis applied to agentic narcissists. It also applied to communal narcissists, albeit in a different way: compared with non-narcissists, communal narcissists liked other people more and, in return, those others liked communal narcissists more. Our results elaborate on and qualify the thriving literature on narcissists' popularity.
3

Alhaidari, Fahd, and Taghreed Zayed Balharith. "Enhanced Round-Robin Algorithm in the Cloud Computing Environment for Optimal Task Scheduling." Computers 10, no. 5 (May 9, 2021): 63. http://dx.doi.org/10.3390/computers10050063.

Abstract:
Recently, there has been significant growth in the popularity of cloud computing systems. One of the main issues in building cloud computing systems is task scheduling, which plays a critical role in achieving high performance and outstanding throughput by drawing the greatest benefit from the resources. Enhancing task scheduling algorithms therefore enhances QoS and makes cloud computing systems more sustainable. This paper introduces a novel technique called the dynamic round-robin heuristic algorithm (DRRHA), which builds on the round-robin algorithm and tunes its time quantum dynamically based on the mean of the time quantum. Moreover, the remaining burst time of a task is applied as a factor in deciding whether the task continues executing during the current round. Experimental results obtained with the CloudSim Plus tool show that the DRRHA significantly outperforms several studied algorithms, including the IRRVQ, dynamic time slice round-robin, improved RR, and SRDQ algorithms, in terms of average waiting time, turnaround time, and response time.
4

Akram, Junaid, Arsalan Tahir, Hafiz Suliman Munawar, Awais Akram, Abbas Z. Kouzani, and M. A. Parvez Mahmud. "Cloud- and Fog-Integrated Smart Grid Model for Efficient Resource Utilisation." Sensors 21, no. 23 (November 25, 2021): 7846. http://dx.doi.org/10.3390/s21237846.

Abstract:
The smart grid (SG) is a contemporary electrical network that enhances the network's performance, reliability, stability, and energy efficiency. The integration of cloud and fog computing with the SG can increase its efficiency: combining the SG with cloud computing enhances resource allocation, and to minimise the burden on the cloud and optimise resource allocation further, fog computing is integrated with cloud computing. Fog has three essential functionalities: location awareness, low latency, and mobility. In this study, we offer a cloud- and fog-based architecture for information management. By allocating virtual machines (VMs) using a load-balancing mechanism, fog computing makes the system more efficient. We propose a novel approach, named BPSOSA, based on binary particle swarm optimisation with inertia weight adjusted using simulated annealing. Inertia weight is an important factor in BPSOSA that adjusts the size of the search space for finding the optimal solution. BPSOSA is compared against the round robin, odds algorithm, and ant colony optimisation techniques. In terms of response time, BPSOSA outperforms round robin, the odds algorithm, and ant colony optimisation by 53.99 ms, 82.08 ms, and 81.58 ms, respectively; in terms of processing time, by 52.94 ms, 81.20 ms, and 80.56 ms, respectively. Compared to BPSOSA, ant colony optimisation has slightly better cost efficiency; however, the difference is insignificant.
5

Nestler, Steffen, Kevin J. Grimm, and Felix D. Schönbrodt. "The Social Consequences and Mechanisms of Personality: How to Analyse Longitudinal Data from Individual, Dyadic, Round–Robin and Network Designs." European Journal of Personality 29, no. 2 (March 2015): 272–95. http://dx.doi.org/10.1002/per.1997.

Abstract:
There is a growing interest among personality psychologists in the processes underlying the social consequences of personality. To adequately tackle this issue, complex designs and sophisticated mathematical models must be employed. In this article, we describe established and novel statistical approaches to examine social consequences of personality for individual, dyadic and group (round–robin and network) data. Our overview includes response surface analysis (RSA), autoregressive path models and latent growth curve models for individual data; actor–partner interdependence models and dyadic RSAs for dyadic data; and social relations and social network analysis for round–robin and network data. Altogether, our goal is to provide an overview of various analytical approaches, the situations in which each can be employed and a first impression about how to interpret their results. Three demo data sets and scripts show how to implement the approaches in R. Copyright © 2015 European Association of Personality Psychology
6

Li, Jianyong, Jianchun Li, Yanhong Liu, Ye Li, Jie Zhang, and Daoying Huang. "Round Robin Tournament Scheduling with Arbitrary Competitors: A Novel Divide and Conquer Approach." Advanced Science Letters 11, no. 1 (May 30, 2012): 408–11. http://dx.doi.org/10.1166/asl.2012.3020.

7

Olofintuyi, Sunday Samuel, Temidayo Oluwatosin Omotehinwa, and Joshua Segun Owotogbe. "A SURVEY OF VARIANTS OF ROUND ROBIN CPU SCHEDULING ALGORITHMS." FUDMA JOURNAL OF SCIENCES 4, no. 4 (June 15, 2021): 526–46. http://dx.doi.org/10.33003/fjs-2020-0404-513.

Abstract:
Quite a number of scheduling algorithms have been implemented in the past, including First Come First Served (FCFS), Shortest Job First (SJF), Priority, and Round Robin (RR). RR is often preferred to the others because of the impartiality of its quantum-time allocation. A big challenge remains, however, in choosing the quantum time: when it is too large, RR degenerates to FCFS, and when it is too short, the number of context switches increases. This paper therefore provides a descriptive review of the algorithms implemented over the past 10 years that vary the quantum time to optimize CPU utilization. The review opens further research areas, serves as a reference source, and articulates the algorithms used in previous years, so it can guide future work. It further suggests novel hybridizations and ensembles of two or more techniques to improve CPU performance by decreasing the number of context switches, turnaround time, waiting time, and response time, and thereby increasing throughput and CPU utilization.
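The quantum-time trade-off this survey centres on is easy to demonstrate: with a small fixed quantum the number of dispatches (and hence context switches) balloons, while a quantum larger than every burst reduces RR to FCFS. A minimal sketch, with burst values made up for illustration:

```python
from collections import deque

def rr_context_switches(bursts, quantum):
    """Simulate plain Round Robin with a fixed quantum.

    bursts: list of CPU burst times, indexed by process id.
    Returns (context switches, completion order), counting every
    dispatch after the first as a switch (a common simplification).
    """
    queue = deque(range(len(bursts)))
    remaining = list(bursts)
    switches = -1  # the first dispatch is not a switch
    finish_order = []
    while queue:
        pid = queue.popleft()
        switches += 1
        remaining[pid] -= min(quantum, remaining[pid])
        if remaining[pid] > 0:
            queue.append(pid)  # not finished: back to the tail
        else:
            finish_order.append(pid)
    return switches, finish_order
```

With bursts `[10, 5, 8]`, a quantum of 2 yields 11 switches, while a quantum of 100 yields 2 switches and the FCFS completion order `[0, 1, 2]`.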
8

Papaphilippou, Philippos, Jiuxi Meng, Nadeen Gebara, and Wayne Luk. "Hipernetch: High-Performance FPGA Network Switch." ACM Transactions on Reconfigurable Technology and Systems 15, no. 1 (March 31, 2022): 1–31. http://dx.doi.org/10.1145/3477054.

Abstract:
We present Hipernetch, a novel FPGA-based design for performing high-bandwidth network switching. FPGAs have recently become more popular in data centers due to their promising capabilities for a wide range of applications. With the recent surge in transceiver bandwidth, they could further benefit the implementation and refinement of network switches used in data centers. Hipernetch replaces the crossbar with a “combined parallel round-robin arbiter”. Unlike a crossbar, the combined parallel round-robin arbiter is easy to pipeline, and does not require centralised iterative scheduling algorithms that try to fit too many steps in a single or a few FPGA cycles. The result is a network switch implementation on FPGAs operating at a high frequency and with a low port-to-port latency. Our proposed Hipernetch architecture additionally provides a competitive switching performance approaching output-queued crossbar switches. Our implemented Hipernetch designs exhibit a throughput that exceeds 100 Gbps per port for switches of up to 16 ports, reaching an aggregate throughput of around 1.7 Tbps.
9

Khalid, Faryal, Peter Davies, Peter Halswell, Nicolas Lacotte, Philipp R. Thies, and Lars Johanning. "Evaluating Mooring Line Test Procedures through the Application of a Round Robin Test Approach." Journal of Marine Science and Engineering 8, no. 6 (June 13, 2020): 436. http://dx.doi.org/10.3390/jmse8060436.

Abstract:
Innovation in materials and test protocols, as well as physical and numerical investigations, is required to address the technical challenges arising due to the novel application of components from conventional industries to the marine renewable energy (MRE) industry. Synthetic fibre ropes, widely used for offshore station-keeping, have potential application in the MRE industry to reduce peak mooring line loads. This paper presents the results of a physical characterisation study of a novel hybrid polyester-polyolefin rope for MRE mooring applications through a round robin testing (RRT) approach at two test facilities. The RRT was performed using standard guidelines for offshore mooring lines and the results are verified through the numerical modelling of the rope tensile behaviour. The physical testing provides quantifiable margins for the strength and stiffness properties of the hybrid rope, increases confidence in the test protocols and assesses facility-specific influences on test outcomes. The results indicate that the adopted guidance is suitable for rope testing in mooring applications and there is good agreement between stiffness characterisation at both facilities. Additionally, the numerical model provides a satisfactory prediction of the rope tensile behaviour and it can be used for further parametric studies.
10

Tahir, M. A., and A. Bouridane. "Novel Round-Robin Tabu Search Algorithm for Prostate Cancer Classification and Diagnosis Using Multispectral Imagery." IEEE Transactions on Information Technology in Biomedicine 10, no. 4 (October 2006): 782–93. http://dx.doi.org/10.1109/titb.2006.879596.

11

Zhao, Wentao, Ping Dong, Min Guo, Yuyang Zhang, and Xuehong Chen. "BSS: A Burst Error-Correction Scheme of Multipath Transmission for Mobile Fog Computing." Wireless Communications and Mobile Computing 2020 (June 30, 2020): 1–10. http://dx.doi.org/10.1155/2020/8846545.

Abstract:
In the scenario of mobile fog computing (MFC), communication between vehicles and the fog layer, called vehicle-to-fog (V2F) communication, needs to use bandwidth resources as fully as possible with low delay and high tolerance for errors. Meeting these demands raises important technical challenges in combining network coding (NC) with multipath transmission to construct high-quality V2F communication for cloud-aware MFC. Most NC schemes exhibit poor reliability under the burst errors that often occur in high-speed movement scenarios; this can be improved by interleaving. However, most traditional interleaving schemes for multipath transmission are based on round robin (RR) or weighted round robin (WRR), which in practice can waste a lot of bandwidth resources. To solve these problems, this paper proposes a novel multipath transmission scheme for cloud-aware MFC called the Bidirectional Selection Scheduling (BSS) scheme. While still realizing interleaving, BSS can be used in conjunction with many path scheduling algorithms based on Earliest Delivery Path First (EDPF), so it makes better use of bandwidth resources. As a result, BSS achieves high reliability and bandwidth utilization in harsh scenarios and can meet the high-quality transmission requirements of cloud-aware MFC.
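For context on the baseline schemes the authors improve upon, classic weighted round robin (WRR) dispatching over multiple paths can be sketched as follows (path names and weights are illustrative; this is not the proposed BSS scheme):

```python
def wrr_schedule(paths, n_packets):
    """Assign packets to paths by weighted round robin.

    paths: dict mapping path name -> integer weight; in each cycle
    a path receives as many consecutive packets as its weight.
    Returns the list of path assignments for n_packets packets.
    """
    assignment = []
    while len(assignment) < n_packets:
        for path, weight in paths.items():
            for _ in range(weight):
                if len(assignment) == n_packets:
                    break
                assignment.append(path)
    return assignment
```

Because the weights are fixed, WRR keeps sending packets down a path even when it is momentarily slow, which is the bandwidth waste the abstract alludes to; EDPF-style schedulers instead pick the path with the earliest predicted delivery time.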
12

Horn, W., M. Richter, M. Nohr, O. Wilke, and O. Jann. "Application of a novel reference material in an international round robin test on material emissions testing." Indoor Air 28, no. 1 (September 11, 2017): 181–87. http://dx.doi.org/10.1111/ina.12421.

13

Ul Sabha, Saqib. "A Novel And Efficient Round Robin Algorithm With Intelligent Time Slice And Shortest Remaining Time First." Materials Today: Proceedings 5, no. 5 (2018): 12009–15. http://dx.doi.org/10.1016/j.matpr.2018.02.175.

14

Sougoumar, Yazhinian, and Tamilselvan Sadasivam. "Performance Enhancement of Bidirectional NOC Router With and Without Contention for Reconfigurable Coarse Grained Architecture." Indonesian Journal of Electrical Engineering and Computer Science 11, no. 3 (September 1, 2018): 1068. http://dx.doi.org/10.11591/ijeecs.v11.i3.pp1068-1074.

Abstract:
The Network on Chip (NoC) router plays a vital role in System on Chip (SoC) applications. Routing is difficult to perform inside an SoC because a single Integrated Circuit (IC) contains millions of chips, each consisting of millions of transistors; the NoC router is designed to enable efficient routing on the SoC board. A NoC router consists of Network Interconnects (NI), crossbar switches, arbiters, routing logic, and buffers. The conventional unidirectional router is designed around a priority-based Round Robin Arbiter (RRA), which adds delay in resolving the priority among the various input channels and consumes more area; moreover, if a path fails, it cannot route the data through another output channel. To overcome these problems, a novel bidirectional NoC router with and without contention is proposed, which offers less area and higher speed than the existing unidirectional router. The bidirectional router consists of a round robin arbiter, static RAM, a switch allocator, a virtual channel allocator, and a crossbar switch. It can route data from any input channel to every output channel, avoiding conflicts and path-failure problems: if a path fails, it immediately takes an alternative path through the switch allocator. The proposed routing scheme is applied to a coarse-grained architecture to improve the speed of the interconnection link between two processing elements. Simulation is performed with ModelSim 6.3c and synthesis with Xilinx 10.1.
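As a reference point for the arbitration such routers are built around, the grant logic of a basic round-robin arbiter can be modelled in software as follows (a behavioural sketch of the usual hardware scheme, not the paper's RTL):

```python
class RoundRobinArbiter:
    """Behavioural model of an N-input round-robin arbiter.

    grant() takes a list of request bits and returns the index of
    the granted input (or None), starting the search just after the
    previously granted input so that no requester starves.
    """
    def __init__(self, n_inputs):
        self.n = n_inputs
        self.last = self.n - 1  # so the first search starts at input 0

    def grant(self, requests):
        for offset in range(1, self.n + 1):
            idx = (self.last + offset) % self.n
            if requests[idx]:
                self.last = idx
                return idx
        return None  # no input is requesting
```

In hardware the same policy is usually implemented with a rotating priority mask rather than a loop, which is what makes it cheap to pipeline.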
15

Li, Shujing, Hui Liu, and Linguo Li. "Decoy state quantum-key-distribution by using odd coherent states without monitoring signal disturbance." International Journal of Quantum Information 17, no. 02 (March 2019): 1950012. http://dx.doi.org/10.1142/s0219749919500126.

Abstract:
Recently, a novel quantum-key-distribution (QKD) protocol, called Round-robin-differential-phase-shift (RRDPS) QKD, has been proposed to share a secure key without monitoring the signal disturbance. In this paper, we propose a decoy state RRDPS-QKD protocol with odd coherent states (OCS). We implement a one-intensity decoy state method into the RRDPS-QKD with OCS to estimate the key rate. The results show that both the maximum transmission distance and the key rate of our protocol are significantly improved. Moreover, only one-intensity decoy state is sufficient for the protocol to approach the asymptotic limit with infinite decoy states.
16

Deng, Hongyu, Cheng Wu, and Yiming Wang. "A cognitive gateway-based spectrum sharing method in downlink round robin scheduling of LTE system." Modern Physics Letters B 31, no. 19-21 (July 27, 2017): 1740070. http://dx.doi.org/10.1142/s021798491740070x.

Abstract:
A key issue in LTE is how to allocate the radio spectrum resource efficiently. The traditional Round Robin (RR) scheduling scheme may leave many resource residues when allocating resources: when the number of users in the current transmission time interval (TTI) does not evenly divide the resource block groups (RBGs), and this situation persists for a long time, spectrum utilization is greatly decreased. In this paper, a novel spectrum allocation scheme based on a cognitive gateway (CG) is proposed, in which LTE spectrum utilization and the CG's throughput are greatly increased by allocating the idle resource blocks of the shared TTI in the LTE system to the CG. Our simulation results show that this spectrum resource sharing method improves LTE spectral utilization and increases the CG's throughput as well as network use time.
17

Marjanović, Ivica, Dejan Milić, Jelena Anastasov, and Aleksandra Cvetković. "PHYSICAL LAYER SECURITY OF WIRELESS SENSOR NETWORK BASED ON OPPORTUNISTIC SCHEDULING." Facta Universitatis, Series: Automatic Control and Robotics 19, no. 1 (July 28, 2020): 001. http://dx.doi.org/10.22190/fuacr2001001m.

Abstract:
In this paper, a physical layer security analysis of a wireless sensor network in the presence of an attacker is presented, employing an opportunistic scheduling approach. Both the intended and unintended transmission paths experience Weibull fading. A novel analytical expression for the intercept probability is derived. To emphasize the advantages of the opportunistic scheduling approach, a comparative analysis with round-robin and optimal scheduling schemes is also given. The impact of the number of active sensors and of the fading conditions over the main and wiretap channels on the intercept probability is examined. The accuracy of the theoretical results is confirmed by independent Monte Carlo simulations.
18

Ni, Ye Peng, Xiao Sen Chen, and Jian Bo Liu. "Resolving Unordered Issues for CMT-SCTP in Heterogeneous Wireless Network." Applied Mechanics and Materials 380-384 (August 2013): 2152–56. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.2152.

Abstract:
The Concurrent Multi-path Transfer extension of the Stream Control Transmission Protocol (CMT-SCTP) has great potential to improve the utilization of scarce network bandwidth. Traditional CMT-SCTP adopts a Round Robin (RR) algorithm for packet scheduling, which can degrade SCTP performance by delivering packets out of order. In this paper, we discuss these reordering issues and propose a novel packet scheduling algorithm to improve performance in heterogeneous wireless networks. Our main idea is to predict packet arrival times and build a path selection strategy on those predictions: by measuring round-trip time (RTT) and available bandwidth, we obtain an algorithm that predicts the arrival time of data packets on each path and then selects paths so that data arrives in the right order. The proposed algorithm reduces out-of-order packets and receive-buffer blocking. We evaluate its performance against the RR algorithm and show that it resolves several performance issues.
19

Gilman, Guin, Samuel S. Ogden, Tian Guo, and Robert J. Walls. "Demystifying the Placement Policies of the NVIDIA GPU Thread Block Scheduler for Concurrent Kernels." ACM SIGMETRICS Performance Evaluation Review 48, no. 3 (March 5, 2021): 81–88. http://dx.doi.org/10.1145/3453953.3453972.

Abstract:
In this work, we empirically derive the scheduler's behavior under concurrent workloads for NVIDIA's Pascal, Volta, and Turing microarchitectures. In contrast to past studies that suggest the scheduler uses a round-robin policy to assign thread blocks to streaming multiprocessors (SMs), we instead find that the scheduler chooses the next SM based on the SM's local resource availability. We show how this scheduling policy can lead to significant, and seemingly counter-intuitive, performance degradation; for example, a decrease of one thread per block resulted in a 3.58X increase in execution time for one kernel in our experiments. We hope that our work will be useful for improving the accuracy of GPU simulators and aid in the development of novel scheduling algorithms.
20

Garcia-Walsh, Katerina. "Oscar Wilde’s Misattributions: A Legacy of Gross Indecency." Victorian Popular Fictions Journal 3, no. 2 (December 17, 2021): 188–207. http://dx.doi.org/10.46911/pyiv5690.

Abstract:
Drawing on correspondence and periodical advertising as well as paratextual and bibliographic detail, this paper compares editions of the three most prominent texts falsely associated with Oscar Wilde: The Green Carnation (1894), an intimate satire on Wilde’s relationship with Lord Alfred Douglas actually written by Douglas’ friend Robert Smythe Hichens; “The Priest and the Acolyte” (1894), a paedophilic story written by John Francis Bloxam and presented as evidence against Wilde during his libel trial and then privately reprinted; and the erotic novel Teleny (1893), which is still attributed to Wilde today. His name appeared in tandem with these novels over the course of a century, linking him further with sex and scandal. Two separate editions of Teleny in 1984 and 1986 feature introductions by Winston Leyland and John McRae, respectively justifying Wilde’s authorship and describing the work as likely a round-robin pornographic collaboration between Wilde and his young friends. By recognising and exposing these cases of literary impersonation, we can amend Wilde’s legacy.
21

Hoshino, R., and K. Kawarabayashi. "Generating Approximate Solutions to the TTP using a Linear Distance Relaxation." Journal of Artificial Intelligence Research 45 (October 23, 2012): 257–86. http://dx.doi.org/10.1613/jair.3713.

Abstract:
In some domestic professional sports leagues, the home stadiums are located in cities connected by a common train line running in one direction. For these instances, we can incorporate this geographical information to determine optimal or nearly-optimal solutions to the n-team Traveling Tournament Problem (TTP), an NP-hard sports scheduling problem whose solution is a double round-robin tournament schedule that minimizes the sum total of distances traveled by all n teams. We introduce the Linear Distance Traveling Tournament Problem (LD-TTP), and solve it for n=4 and n=6, generating the complete set of possible solutions through elementary combinatorial techniques. For larger n, we propose a novel "expander construction" that generates an approximate solution to the LD-TTP. For n congruent to 4 modulo 6, we show that our expander construction produces a feasible double round-robin tournament schedule whose total distance is guaranteed to be no worse than 4/3 times the optimal solution, regardless of where the n teams are located. This 4/3-approximation for the LD-TTP is stronger than the currently best-known ratio of 5/3 + epsilon for the general TTP. We conclude the paper by applying this linear distance relaxation to general (non-linear) n-team TTP instances, where we develop fast approximate solutions by simply "assuming" the n teams lie on a straight line and solving the modified problem. We show that this technique surprisingly generates the distance-optimal tournament on all benchmark sets on 6 teams, as well as close-to-optimal schedules for larger n, even when the teams are located around a circle or positioned in three-dimensional space.
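For reference, the single round-robin schedules that underlie the TTP are conventionally generated with the classic circle (polygon) method, which fixes one team and rotates the rest; a minimal sketch:

```python
def circle_method(n_teams):
    """Generate a single round-robin schedule via the circle method.

    n_teams must be even; returns n_teams - 1 rounds, each a list of
    (team_a, team_b) pairings in which every team plays exactly once.
    """
    assert n_teams % 2 == 0, "pad with a dummy 'bye' team if n is odd"
    teams = list(range(n_teams))
    half = n_teams // 2
    rounds = []
    for _ in range(n_teams - 1):
        rounds.append(list(zip(teams[:half], reversed(teams[half:]))))
        # Fix team 0 and rotate all other teams one position.
        teams = [teams[0], teams[-1]] + teams[1:-1]
    return rounds
```

Mirroring the rounds with home and away swapped gives the double round-robin tournament that the TTP then optimizes for travel distance.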
22

Anand, Nirmala, Somashekhar Pujar, and Sakshi Rao. "A heutagogical interactive tutorial involving Fishbowl with Fish Battle and Round Robin Brainstorming: A novel syndicate metacognitive learning strategy." Medical Journal Armed Forces India 77 (February 2021): S73—S78. http://dx.doi.org/10.1016/j.mjafi.2020.12.003.

23

Wang, Xiangbin, Guocheng Zhang, Yushan Sun, Lei Wan, and Jian Cao. "Research on autonomous underwater vehicle wall following based on reinforcement learning and multi-sonar weighted round robin mode." International Journal of Advanced Robotic Systems 17, no. 3 (May 1, 2020): 172988142092531. http://dx.doi.org/10.1177/1729881420925311.

Abstract:
When an autonomous underwater vehicle (AUV) follows a wall, a common problem is interference between the sonars mounted on the vehicle. This article proposes a novel weighted-polling work mode (also called a "weighted round robin mode") that can independently identify the environment, dynamically establish an environmental model, and switch the operating frequency of the sonars. The dynamic weighted polling mode solves the sonar-interference problem, and dynamically switching the sonar operating frequencies improves the efficiency of wall following. Through a velocity-based interpolation algorithm, the data of ranging sonars operating at different frequencies are time-registered, solving the multi-sonar asynchrony problem, and the system outputs at the frequency of the high-frequency sonar. With a reinforcement learning algorithm, the AUV can follow the wall at a set distance according to the ranges obtained from the polling mode. Finally, a tank test verified the effectiveness of the algorithm.
24

Bashar, Abul, and Smys S. "Physical Layer Protection Against Sensor Eavesdropper Channels in Wireless Sensor Networks." June 2021 3, no. 2 (June 3, 2021): 59–67. http://dx.doi.org/10.36548/jsws.2021.2.001.

Abstract:
This paper presents an analysis of Wireless Sensor Network (WSN) security issues that arise from eavesdropping. The sensor-to-sink and sensor-to-eavesdropper channels are exposed to generalized K-fading. Within a physical layer security framework, we use cumulative distribution functions, optimal sensor selection, and a round robin scheduling scheme to decrease the interception probability and provide secure connections between nodes. A novel analytical methodology with simple analytical expressions is presented for the interception probability; moreover, the diversity orders of the scheduling schemes and asymptotic closed-form expressions are evaluated. Numerical results show the crucial effect of the shadowing and fading parameters of the wiretap and main links, the selected schemes, and the network size on WSN security. The analysis is validated by Monte Carlo simulation.
25

Schlembach, Florian, Marcello Passaro, Graham D. Quartly, Andrey Kurekin, Francesco Nencioli, Guillaume Dodet, Jean-François Piollé, et al. "Round Robin Assessment of Radar Altimeter Low Resolution Mode and Delay-Doppler Retracking Algorithms for Significant Wave Height." Remote Sensing 12, no. 8 (April 16, 2020): 1254. http://dx.doi.org/10.3390/rs12081254.

Abstract:
Radar altimeters have been measuring ocean significant wave height for more than three decades, with their data used to record the severity of storms, the mixing of surface waters and the potential threats to offshore structures and low-lying land, and to improve operational wave forecasting. Understanding climate change and long-term planning for enhanced storm and flooding hazards are imposing more stringent requirements on the robustness, precision, and accuracy of the estimates than have hitherto been needed. Taking advantage of novel retracking algorithms, particularly developed for the coastal zone, the present work aims at establishing an objective baseline processing chain for wave height retrieval that can be adapted to all satellite missions. In order to determine the best performing retracking algorithm for both Low Resolution Mode and Delay-Doppler altimetry, an objective assessment is conducted in the framework of the European Space Agency Sea State Climate Change Initiative project. All algorithms process the same Level-1 input dataset covering a time-period of up to two years. As a reference for validation, an ERA5-based hindcast wave model as well as an in-situ buoy dataset from the Copernicus Marine Environment Monitoring Service In Situ Thematic Centre database are used. Five different metrics are evaluated: percentage and types of outliers, level of measurement noise, wave spectral variability, comparison against wave models, and comparison against in-situ data. The metrics are evaluated as a function of the distance to the nearest coast and the sea state. The results of the assessment show that all novel retracking algorithms perform better in the majority of the metrics than the baseline algorithms currently used for operational generation of the products. Nevertheless, the performance of the retrackers differs strongly depending on the coastal proximity and the sea state. Some retrackers show high correlations with the wave models and in-situ data but significantly under- or overestimate large-scale spectral variability. We propose a weighting scheme to select the most suitable retrackers for the Sea State Climate Change Initiative programme.
APA, Harvard, Vancouver, ISO, and other styles
26

Provis, John L., and Frank Winnefeld. "Outcomes of the round robin tests of RILEM TC 247-DTA on the durability of alkali-activated concrete." MATEC Web of Conferences 199 (2018): 02024. http://dx.doi.org/10.1051/matecconf/201819902024.

Full text
Abstract:
Alkali-activated cements, including ‘geopolymer’ materials, are now reaching commercial uptake in various parts of the world, providing the opportunity to produce concretes of good performance and with reduced environmental footprint compared to established technologies. The development of performance-based specifications for alkali-activated cements and concretes is ongoing in several jurisdictions. However, the technical rigour, and thus practical value, of a performance-based approach to specification of novel cements and concretes will inevitably depend on the availability of appropriate, reliable testing methods, particularly regarding key aspects of durability where degradation mechanisms may be complex and depend on the chemistry and microstructure of the binder. This paper will briefly discuss the activities of RILEM Technical Committee 247-DTA in working to validate durability testing standards for alkali-activated materials, bringing scientific insight into the development of appropriate specifications for these materials.
APA, Harvard, Vancouver, ISO, and other styles
27

Xue, Hai, Kyung Kim, and Hee Youn. "Dynamic Load Balancing of Software-Defined Networking Based on Genetic-Ant Colony Optimization." Sensors 19, no. 2 (January 14, 2019): 311. http://dx.doi.org/10.3390/s19020311.

Full text
Abstract:
Load Balancing (LB) is one of the most important tasks required to maximize network performance, scalability and robustness. Nowadays, with the emergence of Software-Defined Networking (SDN), LB for SDN has become a very important issue. SDN decouples the control plane from the data forwarding plane to implement centralized control of the whole network. LB assigns the network traffic to the resources in such a way that no one resource is overloaded and therefore the overall performance is maximized. The Ant Colony Optimization (ACO) algorithm has been recognized to be effective for LB of SDN among several existing optimization algorithms. Convergence latency and the ability to find an optimal solution are the key criteria for ACO. In this paper, a novel dynamic LB scheme that integrates the genetic algorithm (GA) with ACO to further enhance the performance of SDN is proposed. It capitalizes on the fast global search of GA and the efficient optimal-solution search of ACO. Computer simulation results show that the proposed scheme substantially improves on the Round Robin and ACO algorithms in terms of the rate of finding the optimal path, round trip time, and packet loss rate.
APA, Harvard, Vancouver, ISO, and other styles
28

S, Pradeep. "A Novel HWRR-SJF Scheduling Algorithm for Optimal Performance Improvement in LTE System." International Journal of Engineering Education 1, no. 1 (June 15, 2019): 1–8. http://dx.doi.org/10.14710/ijee.1.1.1-8.

Full text
Abstract:
Currently, the revolution in high-speed broadband networks is driven by the endless demand for high data rates and mobility. To meet this demand, the 3rd Generation Partnership Project (3GPP) established Long Term Evolution (LTE). LTE has since been extended with an improved radio interface named LTE-Advanced (LTE-A), a promising technology for providing broadband mobile Internet access. However, providing better Quality of Service (QoS) to customers is the main issue in LTE-A. To address it, packet scheduling, one of the most significant system functions, must be used effectively, since it determines throughput performance. Existing schemes do not address the reduced throughput experienced by users with a poor Channel Quality Indicator (CQI). In this paper, a Hybrid Weighted Round Robin with Shortest Job First (HWRR-SJF) scheduling technique is proposed to enhance throughput and fairness in LTE systems for stationary and mobile users. The proposed scheduler orders users according to different criteria such as fairness and CQI. HWRR-SJF scheduling produces increased throughput for various SNR values, simulated with Pedestrian and Vehicular mobility models. The proposed method also uses a 4G-LTE filter or Digital Dividend (DD) in order to align the incoming signal. The digital dividend is used to remove white spaces, which refer to frequencies assigned to a broadcasting service but not used locally. The proposed model is very effective for users in terms of performance metrics such as packet loss, throughput, packet delay, spectral efficiency and fairness, as verified through MATLAB simulations.
APA, Harvard, Vancouver, ISO, and other styles
29

Sahkhar, Lizia, Bunil Kumar Balabantaray, and Satyendra Singh Yadav. "Efficient Cloudlet Allocation to Virtual Machine to Impact Cloud System Performance." International Journal of Information System Modeling and Design 13, no. 6 (November 2022): 1–21. http://dx.doi.org/10.4018/ijismd.297630.

Full text
Abstract:
Performance is an essential characteristic of any cloud computing system. It can be enhanced through parallel computing, scheduling and load balancing. This work evaluates the connection between response time (RT) and virtual machine (VM) CPU utilization when cloudlets are allocated from the datacenter broker to VMs. To examine RT and VM CPU utilization, sets of 100 and 500 heterogeneous cloudlets are analyzed under hybridized provisioning, scheduling and allocation algorithms using the CloudSim simulator. These include space shared (SS) and time shared (TS) provisioning policies, shortest job first (SJF), first come first served (FCFS), round robin (RR) and a novel length-wise allocation (LwA) algorithm. The experimental analysis shows that RT is lowest, at 40.665 secs, when SJF is combined with RR allocation, and VM CPU utilization is lowest, at 12.48, when SJF is combined with the LwA policy, across all combinations of SS and TS provisioning policies.
APA, Harvard, Vancouver, ISO, and other styles
30

Liang, Peidong, Habte Tadesse Likassa, and Chentao Zhang. "New Robust Regression Method for Outliers and Heavy Sparse Noise Detection via Affine Transformation for Head Pose Estimation and Image Reconstruction in Highly Complex and Correlated Data: Applications in Signal Processing." Mathematical Problems in Engineering 2022 (February 18, 2022): 1–14. http://dx.doi.org/10.1155/2022/2054546.

Full text
Abstract:
In this work, we propose a novel method for head pose estimation and face recovery, particularly to counter the potential impact of noise in signal processing, yielding an efficient and effective model that is more resilient to such disturbances by adding affine transformation to low-rank robust subspace regression. Consequently, corrupted images can be correctly recovered by affine transformations, rendering better regression outcomes. The search for optimal parameters is cast as a constrained convex optimization problem. The alternating direction method of multipliers (ADMM) approach is then adopted, and a new set of update equations is established to refine the optimization parameters and the affine transformations iteratively, in a round-robin manner. The convergence of these update equations is scrutinized as well. Experimental simulations reveal that the proposed method outperforms state-of-the-art works for head pose estimation and face recovery on several public databases.
APA, Harvard, Vancouver, ISO, and other styles
31

Jia, Jia, and Dejun Mu. "Low-Energy-Orientated Resource Scheduling in Cloud Computing by Particle Swarm Optimization." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 36, no. 2 (April 2018): 339–44. http://dx.doi.org/10.1051/jnwpu/20183620339.

Full text
Abstract:
In order to reduce the energy cost of cloud computing, this paper presents a novel energy-orientated resource scheduling method based on particle swarm optimization. The energy cost model of the cloud computing environment is studied first. The optimization of energy cost is then treated as a multiobjective optimization problem, which generates a Pareto optimal set. To solve this multiobjective optimization problem, particle swarm optimization is employed. The state of a particle consists of both the allocation plan for servers and the frequency plans on servers. Each particle in the algorithm obtains its Pareto local optimum. After assembling the local optima, the algorithm generates the Pareto global optimum for one server plan. The final solution to the problem is the optimal one among all server plans. Experimental results show the good performance of the proposed method: compared with the widely used Round Robin scheduling method, it requires only 45.5% of the dynamic energy cost.
APA, Harvard, Vancouver, ISO, and other styles
32

Taha, Mohammed Qasim, Zaid Hussien Ali, and Abdullah Khalid Ahmed. "Two-level scheduling scheme for integrated 4G-WLAN network." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 3 (June 1, 2020): 2633. http://dx.doi.org/10.11591/ijece.v10i3.pp2633-2643.

Full text
Abstract:
In this paper, a novel scheduling scheme for the integrated Fourth Generation (4G)-Wireless Local Area Network (WLAN) is proposed to ensure that end-to-end traffic transactions are provisioned seamlessly. The scheduling scheme is divided into two stages: in stage one, traffic is separated into Actual Time Traffic (ATT) and Non-Actual-Time Traffic (NATT), while in stage two, a complex queuing strategy is performed. In stage one, Class-Based Queuing (CBQ) and Deficit Round Robin (DRR) are used for NATT and ATT applications, respectively, to separate and forward traffic according to source requirements. In stage two, Control Priority Queuing (CPQ) is used to assign each class the appropriate priority level. The performance of the integrated network was evaluated according to several metrics, such as end-to-end delay, jitter, packet loss, and network throughput. Results demonstrate major improvements for ATT services with only minor degradation of NATT applications after implementing the new scheduling scheme.
APA, Harvard, Vancouver, ISO, and other styles
33

Liang, Peidong, Habte Tadesse Likassa, Chentao Zhang, and Jielong Guo. "New Robust PCA for Outliers and Heavy Sparse Noises’ Detection via Affine Transformation, the L ∗ , w and L 2,1 Norms, and Spatial Weight Matrix in High-Dimensional Images: From the Perspective of Signal Processing." International Journal of Mathematics and Mathematical Sciences 2021 (September 28, 2021): 1–9. http://dx.doi.org/10.1155/2021/3047712.

Full text
Abstract:
In this paper, we propose a novel robust algorithm for image recovery via affine transformations, the weighted nuclear, L ∗ , w , and the L 2,1 norms. The new method considers the spatial weight matrix to account for correlated samples in the data, the L 2,1 norm to tackle the dilemma of extreme values in high-dimensional images, and the newly added L ∗ , w norm to alleviate the potential effects of outliers and heavy sparse noises, enabling the new approach to be more resilient to outliers and large variations in high-dimensional images in signal processing. The determination of the parameters is involved, and the affine transformations are cast as a convex optimization problem. To mitigate the computational complexity, the alternating iteratively reweighted direction method of multipliers (ADMM) is utilized to derive a new set of recursive equations to update the optimization variables and the affine transformations iteratively, in a round-robin manner. The new algorithm is superior to state-of-the-art works in terms of accuracy on various public databases.
APA, Harvard, Vancouver, ISO, and other styles
34

Likassa, Habte Tadesse, Wen Xian, and Xuan Tang. "New Robust Regularized Shrinkage Regression for High-Dimensional Image Recovery and Alignment via Affine Transformation and Tikhonov Regularization." International Journal of Mathematics and Mathematical Sciences 2020 (November 6, 2020): 1–10. http://dx.doi.org/10.1155/2020/1286909.

Full text
Abstract:
In this work, a new robust regularized shrinkage regression method is proposed to recover and align high-dimensional images via affine transformation and Tikhonov regularization. To be more resilient to occlusions, illumination changes, outliers, and heavy sparse noises, the proposed approach incorporates the novel ideas of affine transformations and Tikhonov regularization into high-dimensional images. Highly corrupted, distorted, or misaligned images can be adjusted through the use of affine transformations and the Tikhonov regularization term to ensure a trustworthy image decomposition. These ideas are essential, especially in pruning out the potential impact of annoying effects in high-dimensional images. Finding the optimal variables over a set of affine transformations and the Tikhonov regularization term is first cast as a convex optimization problem. Afterward, a fast alternating direction method of multipliers (ADMM) algorithm is applied, and new equations are established to update the parameters involved and the affine transformations iteratively, in a round-robin manner. Moreover, the convergence of these update equations is scrutinized, and the proposed method has lower computation time compared to state-of-the-art works. Conducted simulations show that the new robust method surpasses the baselines for image alignment and recovery on several public datasets.
APA, Harvard, Vancouver, ISO, and other styles
35

Joung, Jinoo. "Regulating Scheduler (RSC): A Novel Solution for IEEE 802.1 Time Sensitive Network (TSN)." Electronics 8, no. 2 (February 6, 2019): 189. http://dx.doi.org/10.3390/electronics8020189.

Full text
Abstract:
Emerging applications such as industrial automation, in-vehicle, professional audio-video, and wide area electrical utility networks require strict bounds on the end-to-end network delay. Solutions so far to such a requirement are either impractical or ineffective. Flow-based schedulers suggested in the traditional integrated services (IntServ) framework are O(N) or O(log N), where N is the number of flows in the scheduler, which can grow to tens of thousands in a core router. Due to such complexity, class-based schedulers are adopted in real deployments. Class-based systems, however, cannot provide bounded delays in networks with cycles, since the maximum burst grows infinitely along a cyclic path. Attaching a regulator in front of a scheduler to limit the maximum burst is considered a viable solution. International standards, such as IEEE 802.1 time sensitive network (TSN) and IETF deterministic network (DetNet), are adopting this approach. The regulator in TSN and DetNet, however, requires flow state information and therefore contradicts the simplicity of class-based schedulers. This paper suggests non-work-conserving fair schedulers, called ‘regulating schedulers’ (RSC), which function as a regulator and a scheduler at the same time. A deficit round-robin (DRR) based RSC, called nw-DRR, is devised and proved to be both a fair scheduler and a regulator. Despite its lower complexity, the input-port-based nw-DRR is shown to perform better than the current TSN approach and to bound the end-to-end delay within a few milliseconds in realistic network scenarios.
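The deficit round-robin mechanism underlying nw-DRR can be illustrated with a minimal sketch of classic DRR (the Shreedhar–Varghese scheme, not the paper's regulating variant): each queue accrues a byte quantum per round and may dequeue packets while its deficit counter lasts. The queue contents, quanta, and round count below are illustrative only.

```python
from collections import deque

def drr_schedule(queues, quanta, rounds):
    """Classic deficit round-robin: each non-empty queue accrues its
    quantum of byte credit per round and sends head-of-line packets
    while the accumulated deficit covers the packet size."""
    deficits = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0           # idle queues keep no credit
                continue
            deficits[i] += quanta[i]
            while q and q[0] <= deficits[i]:
                size = q.popleft()
                deficits[i] -= size
                sent.append((i, size))    # (queue index, bytes sent)
    return sent

# Two flows, packet sizes in bytes, quantum 500 bytes per round each.
flows = [deque([300, 300, 300]), deque([700, 200])]
print(drr_schedule(flows, [500, 500], rounds=2))
```

Note how the 700-byte packet of flow 1 must wait for a second round of credit, which is exactly the burst-limiting behaviour a regulator exploits.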
APA, Harvard, Vancouver, ISO, and other styles
36

Kadhim, Abrar Saad, and Mehdi Ebady Manaa. "Hybrid load-balancing algorithm for distributed fog computing in internet of things environment." Bulletin of Electrical Engineering and Informatics 11, no. 6 (December 1, 2022): 3462–70. http://dx.doi.org/10.11591/eei.v11i6.4127.

Full text
Abstract:
Fog computing is a novel idea created by Cisco that provides the same capabilities as cloud computing but close to objects to improve performance, such as by minimizing latency and reaction time. Packet failure can happen on a single fog server across a large number of messages from internet of things (IoT) sensors due to several variables, including inadequate bandwidth and server queue capacity. In this paper, a fog-to-server architecture based on the IoT is proposed to solve the problem of packet loss in fog and servers using hybrid load balancing and a distributed environment. The proposed methodology is based on hybrid load balancing with least connection and weighted round robin algorithms combined together in fog nodes to take into consideration the load and time to distribute requests to the active servers. The results show the proposed system improved network evaluation parameters such as total response time of 131.48 ms, total packet loss rate of 15.670%, average total channel idle of 99.55%, total channel utilization of 77.44%, average file transfer protocol (FTP) file transfer speed (256 KB to 15 MB files) of 260.77 KB/sec, and average time (256 KB to 15 MB) of 19.27 sec.
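The combination of least-connection and weighted round robin described above can be approximated by a weighted least-connection dispatcher; the sketch below assumes (this is not stated in the abstract) that each fog server advertises a static weight and a live connection count, with ties broken by the lower server index.

```python
def pick_server(servers):
    """Weighted least-connection: choose the server with the lowest
    active-connection count per unit weight; lower index breaks ties."""
    return min(range(len(servers)),
               key=lambda i: (servers[i]["conns"] / servers[i]["weight"], i))

# Server 0 is three times as capable as server 1.
servers = [{"weight": 3, "conns": 0}, {"weight": 1, "conns": 0}]
order = []
for _ in range(4):
    i = pick_server(servers)
    servers[i]["conns"] += 1   # request stays connected to the server
    order.append(i)
print(order)
```

With long-lived connections the heavier server absorbs proportionally more requests, which is the load-aware behaviour a plain round-robin rotation lacks.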
APA, Harvard, Vancouver, ISO, and other styles
37

Noble, Donald R., Michael O’Shea, Frances Judge, Eider Robles, Rodrigo Martinez, Faryal Khalid, Philipp R. Thies, et al. "Standardising Marine Renewable Energy Testing: Gap Analysis and Recommendations for Development of Standards." Journal of Marine Science and Engineering 9, no. 9 (September 6, 2021): 971. http://dx.doi.org/10.3390/jmse9090971.

Full text
Abstract:
Marine renewable energy (MRE) is still an emerging technology. As such, there is still a lack of mature standards and guidance for the development and testing of these devices. The sector covers a wide range of disciplines, so there is a need for more comprehensive guidance to cover these. This paper builds on a study undertaken in the MaRINET2 project to summarise recommendations and guidance for testing MRE devices and components, by reviewing the recently published guidance. Perceived gaps in the guidance are then discussed, expanding on the previous study. Results from an industry survey are also used to help quantify and validate these gaps. The main themes identified can be summarised as: the development progression from concept to commercialisation, including more complex environmental conditions in testing, accurately modelling and quantifying the power generated, including grid integration, plus modelling and testing of novel moorings and foundation solutions. A pathway to a standardised approach to MRE testing is presented, building on recommendations learnt from the MaRINET2 round-robin testing, showing how these recommendations are being incorporated into the guidance and ultimately feeding into the development of international standards for the marine renewable energy sector.
APA, Harvard, Vancouver, ISO, and other styles
38

Vihol, Ronak, Hiren Patel, and Nimisha Patel. "Workload Consolidation using Task Scheduling Strategy Based on Genetic Algorithm in Cloud Computing." Oriental journal of computer science and technology 10, no. 1 (February 16, 2017): 60–65. http://dx.doi.org/10.13005/ojcst/10.01.08.

Full text
Abstract:
Offering “Computing as a utility” on a pay-per-use plan, Cloud computing has emerged as a technology of ease and flexibility for thousands of users over the last few years. Distribution of dynamic workload among available servers and efficient utilization of existing resources in the datacenter are major concerns in Cloud computing. The load balancing issue needs to take into consideration the utilization of servers: the resultant utilization should not exceed preset upper limits, to avoid service level agreement (SLA) violations, and should not fall beneath stipulated lower limits, to avoid keeping barely used servers active. Scheduling of workload is regarded as an optimization problem that considers many varying criteria, such as the dynamic environment, the priority of incoming applications and their deadlines, to improve resource utilization and the overall performance of Cloud computing. In this work, a novel Genetic Algorithm (GA) based load balancing mechanism is proposed. Though not done in this work, we aim in future to compare the performance of the proposed algorithm with existing mechanisms such as first come first served (FCFS), Round Robin (RR) and other search algorithms through simulations.
APA, Harvard, Vancouver, ISO, and other styles
39

AL-SAFAR, AHMED. "Hybrid CPU Scheduling algorithm SJF-RR In Static Set of Processes." Journal of Al-Rafidain University College For Sciences ( Print ISSN: 1681-6870 ,Online ISSN: 2790-2293 ), no. 1 (October 20, 2021): 36–60. http://dx.doi.org/10.55562/jrucs.v29i1.377.

Full text
Abstract:
The Round Robin (RR) algorithm is widely used in modern operating systems (OSs), as its periodic quantum (occurring at regular intervals) gives better responsiveness, and it has desirable features such as low scheduling overhead: scheduling n processes in a ready queue takes constant time, O(1). However, RR also has undesirable features: low throughput, long average turnaround and waiting times, and n context switches for n processes. Shortest Job First (SJF), on the other hand, is not practical by itself in time-sharing OSs due to its poor response time, and its scheduling overhead for n processes in a ready queue is O(n); its good features are the best average turnaround and waiting times. Considering a static set of n processes, the desirable features of a CPU scheduler, maximizing CPU utilization and response time while minimizing waiting and turnaround times, are obtained by combining the core properties of the SJF algorithm with the best features of the RR algorithm to produce a new, original algorithm called "Hybrid CPU Scheduling algorithm SJF-RR in Static Set of Processes", proposed in this research. The proposed algorithm is implemented through an innovative optimal equation that adapts the time quantum for each process in each round, making the quantum periodic but occurring at irregular intervals. That is, while applying the proposed algorithm, mathematical calculations take place to derive a particular time quantum for each process. Once a criterion has been selected for comparison, deterministic modeling with the same input numbers proves that the proposed algorithm is the best.
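Since the paper's quantum equation is not reproduced in the abstract, the hybrid can only be sketched under assumptions: below, processes are first ordered by burst length (the SJF ingredient) and then served round-robin, with each slice capped by a base quantum standing in as a placeholder for the paper's adaptive per-process formula.

```python
def hybrid_sjf_rr(bursts, base_quantum=4):
    """Sketch of an SJF-RR hybrid: SJF ordering, round-robin service.
    Returns the completion time of each process id."""
    # Order by burst length first (SJF), then serve in round-robin passes.
    ready = sorted(([pid, burst] for pid, burst in enumerate(bursts)),
                   key=lambda p: p[1])
    clock, completion = 0, {}
    while ready:
        for proc in list(ready):           # one round-robin pass
            pid, left = proc
            run = min(base_quantum, left)  # placeholder quantum rule
            clock += run
            proc[1] -= run
            if proc[1] == 0:
                completion[pid] = clock
                ready.remove(proc)
    return completion

# Bursts of 6, 3 and 8 time units; the shortest job finishes first.
print(hybrid_sjf_rr([6, 3, 8]))
```

Because the shortest job runs first and fits in one slice, it completes with no extra context switches, which is the waiting-time benefit the hybrid targets.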
APA, Harvard, Vancouver, ISO, and other styles
40

Park, Jong-Hyeok, Dong-Joo Park, Tae-Sun Chung, and Sang-Won Lee. "A Crash Recovery Scheme for a Hybrid Mapping FTL in NAND Flash Storage Devices." Electronics 10, no. 3 (February 1, 2021): 327. http://dx.doi.org/10.3390/electronics10030327.

Full text
Abstract:
An FTL (flash translation layer), which most flash storage devices are equipped with, needs to guarantee the consistency of modified metadata against a sudden power failure. This crash recovery scheme significantly affects the write performance of a flash storage device during normal operation, as well as its reliability and recovery performance; it is therefore desirable to make the crash recovery scheme efficient. Despite the practical importance of a crash recovery scheme in an FTL, few works deal with the crash recovery issue in FTLs in a comprehensive manner. This study proposes a novel crash recovery scheme called FastCheck for a hybrid mapping FTL called Fully Associative Sector Translation (FAST). FastCheck can efficiently secure newly generated address-mapping information using periodic checkpoints, and at the same time leverages the characteristic of the FAST FTL that log blocks in the log area are used in a round-robin way. Thus, it provides two major advantages over existing FTL recovery schemes: low logging overhead during normal FTL operation, and fast recovery time in environments where the log provisioning rate is relatively high, e.g., over 20%, and the flash memory capacity is very large, e.g., 32 GB or 64 GB.
APA, Harvard, Vancouver, ISO, and other styles
41

Annaheim, Simon, Li-chu Wang, Agnieszka Psikuta, Matthew Patrick Morrissey, Martin Alois Camenzind, and René Michel Rossi. "A new method to assess the influence of textiles properties on human thermophysiology. Part I." International Journal of Clothing Science and Technology 27, no. 2 (April 20, 2015): 272–82. http://dx.doi.org/10.1108/ijcst-02-2014-0020.

Full text
Abstract:
Purpose – The purpose of this paper is to determine the validity and inter-/intra-laboratory repeatability of the first part of a novel, three-phase experimental procedure using a sweating Torso device. Design/methodology/approach – Results from a method comparison study (comparison with the industry-standard sweating guarded hotplate method) and an inter-laboratory comparison study are presented. Findings – A high correlation was observed for thermal resistance in the method comparison study (r=0.97, p<0.01) as well as in the inter-laboratory comparison study (r=0.99, p<0.01). Research limitations/implications – The authors conclude that the first phase of the standardised procedure for the sweating Torso provides reliable data for the determination of the dry thermal resistance of single and multi-layer textiles, and is therefore suitable as standard method to be used by different laboratories with this type of device. Further work is required to validate the applicability of the method for textiles with high thermal resistance. Originality/value – This study provides the first “round-robin” data for measuring thermal resistance using a Torso device. In future publications the authors will provide similar data examining the repeatability of measurements that quantify combined heat and mass transfer.
APA, Harvard, Vancouver, ISO, and other styles
42

Memon, Kamran, Khalid Mohammadani, Noor Ain, Arshad Shaikh, Sibghat Ullah, Qi Zhang, Bhagwan Das, Rahat Ullah, Feng Tian, and Xiangjun Xin. "Demand Forecasting DBA Algorithm for Reducing Packet Delay with Efficient Bandwidth Allocation in XG-PON." Electronics 8, no. 2 (January 31, 2019): 147. http://dx.doi.org/10.3390/electronics8020147.

Full text
Abstract:
In a typical 10G Passive Optical Network (XG-PON), the propagation delay between the Optical Network Unit (ONU) and the Optical Line Terminal (OLT) is about 0.3 ms. With a frame size of 125 μs, this amounts to three frames of data in the upstream and three frames of data in the downstream. Assuming no processing delays, the grant for any bandwidth request reaches the ONU after six frames in this request-grant cycle. Often, during this six-frame delay, the queue situation changes drastically, as much more data arrives in the queue. As a result, the delayed queued data loses its significance due to its real-time nature. Unfortunately, almost all dynamic bandwidth allocation (DBA) algorithms follow this request-grant cycle and hence fall short in performance. This paper introduces a novel approach to bandwidth allocation, called Demand Forecasting DBA (DF-DBA), which predicts an ONU's future demands by statistically modelling its demand patterns and tends to fulfil the predicted demands just in time, resulting in reduced delay. Simulation results indicate that the proposed technique outperforms previous DBAs employing the request-grant cycle, such as GigaPON access network (GIANT) and round robin (RR), in terms of throughput and packet delivery ratio (PDR). Circular buffers are introduced into the statistical predictions, which produce the least delay for the novel DF-DBA. This paper hence opens up a new horizon of research in which researchers may devise better statistical models to brew better and better results for passive optical networks.
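The circular-buffer forecasting idea can be sketched with a toy moving-average predictor. The paper's actual statistical model is not specified in the abstract, so using the plain mean over a fixed-size ring of past ONU requests is purely an assumption for illustration.

```python
class DemandForecaster:
    """Toy demand predictor in the spirit of DF-DBA: a fixed-size
    circular buffer of past ONU bandwidth requests, with the forecast
    taken as the buffer mean (an assumed, simplistic model)."""

    def __init__(self, size=4):
        self.buf = [0.0] * size
        self.pos = 0        # next slot to overwrite
        self.count = 0      # observations seen, capped at buffer size

    def observe(self, demand):
        self.buf[self.pos] = demand
        self.pos = (self.pos + 1) % len(self.buf)
        self.count = min(self.count + 1, len(self.buf))

    def forecast(self):
        return sum(self.buf) / self.count if self.count else 0.0

f = DemandForecaster(size=3)
for d in [100, 120, 140, 160]:   # the oldest value, 100, falls out
    f.observe(d)
print(f.forecast())
```

The OLT would grant bandwidth based on this forecast instead of waiting for the explicit request, cutting the six-frame request-grant latency the abstract describes.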
APA, Harvard, Vancouver, ISO, and other styles
43

Pekař, Libor, Radek Matušů, Jiří Andrla, and Martina Litschmannová. "Review of Kalah Game Research and the Proposition of a Novel Heuristic–Deterministic Algorithm Compared to Tree-Search Solutions and Human Decision-Making." Informatics 7, no. 3 (September 14, 2020): 34. http://dx.doi.org/10.3390/informatics7030034.

Full text
Abstract:
The Kalah game represents the most popular version of probably the oldest board game ever, the Mancala game. From this viewpoint, the art of playing Kalah can contribute to cultural heritage. This paper primarily focuses on a review of Kalah history and a survey of research made so far on solving and analyzing the Kalah game (and some other related Mancala games). This review concludes that even if strong in-depth tree-search solutions for some types of the game have already been published, it is still reasonable to develop less time-consuming and computationally demanding playing algorithms and strategies. Therefore, the paper also presents an original heuristic algorithm based on particular deterministic strategies arising from an analysis of the game rules. Standard and modified mini–max tree-search algorithms are introduced as well. A simple C++ application with the Qt framework is developed to perform the algorithm verification and comparative experiments. Two sets of benchmark tests are made: first, a tournament in which a mid-experienced amateur human player competes with the three algorithms; then, a round-robin tournament of all the algorithms. It can be deduced that the proposed heuristic algorithm has success comparable to the human player and to low-depth tree-search solutions. Moreover, multiple-case experiments proved that the opening move has a decisive impact on winning or losing. Namely, if the computer plays first, the human opponent cannot beat it. Contrariwise, if it plays second using the heuristic algorithm, it nearly always loses.
APA, Harvard, Vancouver, ISO, and other styles
44

Reyes-Gil, Karla R., Josh Whaley, Ryan Nishimoto, and Nancy Yang. "Development of Transport Properties Characterization Capabilities for Thermoelectric Materials and Modules." MRS Proceedings 1774 (2015): 7–12. http://dx.doi.org/10.1557/opl.2015.578.

Full text
Abstract:
Thermoelectric (TE) generators have very important applications, such as emerging automotive waste heat recovery and cooling applications. However, reliable transport-property characterization techniques are needed in order to scale up module production and thermoelectric generator design. DOE round-robin testing found that literature values for the figure of merit (ZT) are sometimes not reproducible, in part due to the lack of standardization of transport-property measurements. At Sandia National Laboratories (SNL), we have been optimizing transport-property measurement techniques for TE materials and modules. We have been using commercial and custom-built instruments to analyze the performance of TE materials and modules. We developed a reliable procedure to measure the thermal conductivity, Seebeck coefficient and resistivity of TE materials to calculate ZT as a function of temperature. We use NIST standards to validate our procedures and measure multiple samples of each specific material to establish consistency. Using these capabilities, we studied the transport properties of Bi2Te3-based alloys thermally aged for up to 2 years. In parallel with analytical and microscopy studies, we correlated transport-property changes with chemical changes. We have also developed a resistance mapping setup to measure, in a non-destructive way, the contact resistance of Au contacts on TE materials and of TE modules as a whole. The development of novel yet reliable characterization techniques has been fundamental to better understanding TE materials as a function of aging time, temperature and environmental conditions.
45

Jayaram, Asokan, and Sanjoy Deb. "EA-MAC: A QoS Aware Emergency Adaptive MAC Protocol for Intelligent Scheduling of Packets in Smart Emergency Monitoring Applications." Journal of Circuits, Systems and Computers 29, no. 13 (February 28, 2020): 2050205. http://dx.doi.org/10.1142/s0218126620502059.

Abstract:
Wireless sensor networks (WSNs) have evolved rapidly in recent years and their applications are increasing day by day; one such application is Smart Emergency Monitoring Systems (SMESs), which are envisioned for deployment in both urban and rural areas. Implementing a WSN architecture in smart monitoring systems requires an intelligent scheduling mechanism that efficiently handles both high traffic loads and emergency traffic without sacrificing the energy efficiency of the network. However, traditional scheduling algorithms such as First Come First Served (FCFS), Round Robin, and Shortest Job First (SJF) cannot meet the high-traffic-load requirements of SMESs. To address these shortcomings, this paper presents the Emergency Adaptive Medium Access Control protocol (EA-MAC), a fuzzy-priority-scheduling-based, Quality-of-Service (QoS)-aware medium access control (MAC) protocol for hierarchical WSNs. The EA-MAC protocol employs fuzzy logic to schedule sensor nodes under both normal and emergency traffic loads without data congestion or packet loss, while maintaining the QoS that is critical in SMES applications. Moreover, a novel rank-based clustering mechanism in the EA-MAC protocol prolongs the network lifetime by minimizing the distance between the Cluster Head (CH) and the Base Station (BS). Both analytical and simulation models demonstrate the superiority of the EA-MAC protocol in terms of energy consumption, transmission delay and data throughput when compared with existing Time Division Multiple Access (TDMA)-based MAC protocols such as the LEACH protocol and the Cluster Head Election Mechanism Based On Fuzzy Logic (CHEF) protocol.
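The general idea of fuzzy priority scheduling can be illustrated with a toy scoring function: each node's inputs (here, traffic urgency and queue load) pass through membership functions and are blended into a single priority, so an emergency node outranks a normal one. This is only a sketch of the technique in general; the membership shapes, weights and names below are invented and are not taken from the EA-MAC paper:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership on [a, c] with its peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_priority(urgency, queue_load):
    """Blend 'high urgency' and 'heavy load' memberships into one score.

    Both inputs are normalized to [0, 1]; a higher score means the node
    should be scheduled sooner. Weights are illustrative assumptions.
    """
    high_urgency = triangular(urgency, 0.3, 1.0, 1.7)    # peaks at urgency = 1.0
    heavy_load = triangular(queue_load, 0.3, 1.0, 1.7)
    return 0.7 * high_urgency + 0.3 * heavy_load

# An emergency node (urgency 1.0) outranks a normal node (urgency 0.2):
emergency = fuzzy_priority(1.0, 0.5)
normal = fuzzy_priority(0.2, 0.5)
```

A scheduler would then serve nodes in descending order of this score, which is how a fuzzy priority layer can sit on top of an otherwise TDMA-style MAC.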
46

Resma, KS, GS Sharvani, and Ramasubbareddy Somula. "Optimization of cloud load balancing using fitness function and duopoly theory." International Journal of Intelligent Computing and Cybernetics 14, no. 2 (February 18, 2021): 198–217. http://dx.doi.org/10.1108/ijicc-11-2020-0176.

Abstract:
Purpose: The current industrial scenario is largely dependent on cloud computing paradigms. On-demand services provided by a cloud data centre are paid for per use; hence, it is very important to make maximum use of the allocated resources. Resource utilization is highly dependent on how resources are allocated to incoming requests, which are assigned to the physical machines present in the data centre. Tasks must be allocated to these physical machines in such a way that no physical machine is underutilized or overloaded. To ensure this, optimal load balancing is very important. Design/methodology/approach: The paper proposes an algorithm that uses fitness functions and duopoly game theory to allocate tasks to the physical machines that can handle the resource requirements of the incoming tasks. The major focus of the proposed work is to optimize load balancing in a data centre; under optimal balancing, no physical machine is either overloaded or underutilized, resulting in efficient utilization of resources. Findings: The performance of the proposed algorithm is compared with existing load balancing algorithms such as round robin (RR), ant colony optimization (ACO) and artificial bee colony (ABC) with respect to the selected parameters: response time, virtual machine migrations, host shutdowns and energy consumption. All four parameters gave a positive result when the algorithm was simulated. Originality/value: The contribution of this paper is to the domain of cloud load balancing. The paper proposes a novel approach to optimize the cloud load balancing process. The results show that response time, virtual machine migrations, host shutdowns and energy consumption are reduced in comparison with the existing algorithms selected for the study. The proposed algorithm, based on the duopoly function and fitness function, delivers optimized performance compared with the algorithms analysed.
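Fitness-function-driven task placement of the kind described here can be sketched as a best-fit search over feasible hosts: infeasible hosts score negatively, and among feasible ones, tighter fits score higher so that no machine is left heavily underutilized. The fitness definition below is a generic illustration, not the paper's duopoly-based formulation, and all names are invented:

```python
def fitness(host_free_cpu, host_free_mem, task_cpu, task_mem):
    """Score a candidate placement: infeasible -> -1, else tighter fit -> higher."""
    if task_cpu > host_free_cpu or task_mem > host_free_mem:
        return -1.0  # host cannot accommodate the task
    slack = (host_free_cpu - task_cpu) + (host_free_mem - task_mem)
    return 1.0 / (1.0 + slack)

def allocate(hosts, task):
    """Place `task` on the feasible host with the highest fitness; None if none fits."""
    scored = [(fitness(h["cpu"], h["mem"], task["cpu"], task["mem"]), i)
              for i, h in enumerate(hosts)]
    best_score, best_i = max(scored)
    if best_score < 0:
        return None
    hosts[best_i]["cpu"] -= task["cpu"]   # reserve the chosen host's capacity
    hosts[best_i]["mem"] -= task["mem"]
    return best_i

# Two hosts with spare capacity; the tighter-fitting host 1 wins the task.
hosts = [{"cpu": 4, "mem": 8}, {"cpu": 2, "mem": 4}]
chosen = allocate(hosts, {"cpu": 2, "mem": 4})
```

Round robin, by contrast, would simply rotate through the hosts regardless of fit, which is exactly the behaviour such fitness-based schemes aim to improve on.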
47

Coulter, Robert W. S., Shannon Mitchell, Kelly Prangley, Seth Smallwood, Leyna Bonanno, Elizabeth N. Foster, Abby Wilson, Elizabeth Miller, and Carla D. Chugani. "Generating Intervention Concepts for Reducing Adolescent Relationship Abuse Inequities Among Sexual and Gender Minority Youth: Protocol for a Web-Based, Longitudinal, Human-Centered Design Study." JMIR Research Protocols 10, no. 4 (April 12, 2021): e26554. http://dx.doi.org/10.2196/26554.

Abstract:
Background: Sexual and gender minority youth (SGMY; eg, lesbian, gay, bisexual, and transgender youth) are at greater risk than their cisgender heterosexual peers for adolescent relationship abuse (ARA; physical, sexual, or psychological abuse in a romantic relationship). However, there is a dearth of efficacious interventions for reducing ARA among SGMY. To address this intervention gap, we designed a novel web-based methodology leveraging the field of human-centered design to generate multiple ARA intervention concepts with SGMY. Objective: This paper aims to describe study procedures for a pilot study to rigorously test the feasibility, acceptability, and appropriateness of using web-based human-centered design methods with SGMY to create novel, stakeholder-driven ARA intervention concepts. Methods: We are conducting a longitudinal, web-based human-centered design study with 45-60 SGMY (aged between 14 and 18 years) recruited via social media from across the United States. Using MURAL (a collaborative, visual web-based workspace) and Zoom (a videoconferencing platform), the SGMY will participate in four group-based sessions (1.5 hours each). In session 1, the SGMY will use rose-thorn-bud to individually document their ideas about healthy and unhealthy relationship characteristics and then use affinity clustering as a group to categorize their self-reported ideas based on similarities and differences. In session 2, the SGMY will use rose-thorn-bud to individually critique a universal evidence-based intervention to reduce ARA and affinity clustering to aggregate their ideas as a group. In session 3, the SGMY will use a creative matrix to generate intervention ideas for reducing ARA among them and force-rank the intervention ideas based on their potential ease of implementation and potential impact using an importance-difficulty matrix.
In session 4, the SGMY will generate and refine intervention concepts (from session 3 ideations) to reduce ARA using round robin (for rapid iteration) and concept poster (for fleshing out ideas more fully). We will use content analyses to document the intervention concepts. In a follow-up survey, the SGMY will complete validated measures about the feasibility, acceptability, and appropriateness of the web-based human-centered design methods (a priori benchmarks for success: means >3.75 on each 5-point scale). Results: This study was funded in February 2020. Data collection began in August 2020 and will be completed by April 2021. Conclusions: Through rigorous testing of the feasibility of our web-based human-centered design methodology, our study may help demonstrate the use of human-centered design methods to engage harder-to-reach stakeholders and actively involve them in the co-creation of relevant interventions. Successful completion of this project also has the potential to catalyze intervention research to address ARA inequities for SGMY. Finally, our approach may be transferable to other populations and health topics, thereby advancing prevention science and health equity. International Registered Report Identifier (IRRID): DERR1-10.2196/26554
48

Thiede, Christian, Lars Bullinger, Jesús M. Hernández-Rivas, Michael Heuser, Claude Preudhomme, Steven Best, Dolors Colomer, et al. "Results of the “Evaluation of NGS in AML-Diagnostics (ELAN)” Study – an Inter-Laboratory Comparison Performed in 10 European Laboratories." Blood 124, no. 21 (December 6, 2014): 2374. http://dx.doi.org/10.1182/blood.v124.21.2374.2374.

Abstract:
Background: The invention of Next Generation Sequencing (NGS) has spurred research into human diseases, especially in the field of malignancy. In acute myeloid leukemia (AML), a plethora of novel alterations have been identified, including mutations in epigenetic regulator genes (e.g. IDH1, IDH2, DNMT3A), genes coding for proteins of the cohesin complex (e.g. SMC1A, SMC3, STAG2) and spliceosome genes (e.g. SF3B1, U2AF1). Although the diagnostic and prognostic implications of many of these alterations are not yet clear, there is increasing evidence that several of them might have major implications for understanding the disease biology or for patient treatment. Thus, there is an increasing need to reliably detect these mutations in large patient groups in clinically relevant time-frames and at an affordable cost. Due to the large number of genes to be screened, amplicon-based NGS represents an attractive detection method. Although several assays integrating different numbers of genes have been reported, it is currently unclear whether they really allow reliable detection of alterations in a reproducible way. Here we report our results from a round-robin comparison of the detection of known AML variants using a highly multiplexed, single-tube assay co-amplifying a total of 568 amplicons covering 54 entire genes or hot-spot gene regions involved in leukemia (TruSight Myeloid sequencing panel; Illumina), with respect to sensitivity, reproducibility and quantitative accuracy. Material and Methods: Ten European laboratories routinely involved in molecular AML diagnostics participated in this study. All groups performed two sequencing runs, each containing 8 samples. These samples were centrally aliquoted and distributed, and the analyses were done in a blinded fashion.
Six out of the 8 samples on each run were derived from a set of 9 samples composed of DNA isolated from the blasts of 18 different newly diagnosed AML patients mixed at a 1:1 ratio, with 50 ng of DNA being used for the library preparation. Three of these 9 samples were analyzed in replicate in separate runs by each group. The remaining two samples were a commercial test DNA containing 10 known single nucleotide variants (SNV) or insertion/deletion (InDel) alterations with defined variant allele frequencies (VAF) between 4 and 25% and DNA derived from the OCI-AML3 AML cell line (mutant for DNMT3A and NPM1). Sequencing was performed on MiSeq NGS systems (Illumina) using 2x151 bp-runs. Sequencing data were analyzed by all laboratories using the VariantStudio software (Illumina), with the threshold for mutation calling set at 3%. Results: Analysis of data quality indicated that 85% of the samples met the predefined acceptance criteria (>=95% amplicons with at least 500 reads/amplicon), the median coverage was 7379 reads/amplicon (range 0-47403 reads). Of the 9 mutations present in the positive control, 7 were called at least once in the two replicates by all labs, two mutations with a VAF of 5% were missed by 1 and 4 participants, respectively. Overall, the VAF calls for this sample showed a high level of accuracy across the participants (median coefficient of variation 5%, range 0-22.5%) as well as excellent intra- and inter-laboratory reproducibility (Fig.1). In total, the 9 primary leukemic samples contained 43 known variants in 19 genes, including all commonly mutated genes in AML, i.e. CEBPA, DNMT3A,RUNX1, NPM1, FLT3, WT1. For these samples, the sensitivity was 95.7%. Based on the entire data set (positive control and leukemic samples), the calculated sensitivity of the assay for known variants with an expected VAF>=5% was 93.3%. The rate of non-calls was slightly higher for InDels (14/179; 7.8%) than for SNVs (25/407; 6.1%; P=.47). 
Two 57-bp long insertions in FLT3 exons 14/15 were not called, which is expected due to the specifications of the assay (max. detectable InDel length <=25 bps). The standard deviation of VAF estimates for the primary leukemic samples was 1.7% with a mean CV of 0.094. Conclusions: This inter-laboratory comparison shows a high sensitivity and impressive quantitative accuracy of NGS-based characterization for known variants down to a minor VAF of 5%. Although additional optimization in individual parameters might be necessary, these initial results clearly indicate that rapid comprehensive molecular characterization of patient samples appears feasible, even in the clinical setting. Figure 1 Figure 1. Disclosures Thiede: AgenDix GmbH: Equity Ownership, Research Funding; Illumina: Research Support, Research Support Other. Bullinger:Illumina: Research Support Other. Hernández-Rivas:Illumina: Research Support Other. Heuser:Illumina: Research Support Other. Preudhomme:Illumina: Research Support Other. Lo Coco:Illumina: Research Support Other. Martinelli:Illumina: Research Support Other. Schuh:Illumina: Research Support Other. Enjuanes:Illumina: Research Support Other. Lea:Illumina: Research Support Other. Schlesinger:Illumina: Employment.
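The quantitative accuracy reported above is summarized by the coefficient of variation (CV) of the variant allele frequency (VAF) calls across laboratories, i.e. the standard deviation of the calls divided by their mean. A minimal sketch of that statistic follows; the four VAF calls are made-up values for a single hypothetical variant, not data from the study:

```python
from statistics import mean, pstdev

def coefficient_of_variation(vaf_calls):
    """CV of variant allele frequency calls: population std dev / mean."""
    return pstdev(vaf_calls) / mean(vaf_calls)

# Hypothetical VAF calls for one variant as reported by four laboratories:
cv = coefficient_of_variation([0.24, 0.25, 0.26, 0.25])  # clusters near 25% VAF
```

A CV around a few percent, as in this toy example, is the kind of inter-laboratory agreement the study describes for well-covered variants.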
49

Sharma, Prem Sagar, Sanjeev Kumar, Madhu Sharma Gaur, and Vinod Jain. "A novel intelligent round robin CPU scheduling algorithm." International Journal of Information Technology, March 9, 2021. http://dx.doi.org/10.1007/s41870-021-00630-0.

50

GIBET TANI, Hicham, and Chaker EL AMRANI. "Smarter Round Robin Scheduling Algorithm for Cloud Computing and Big Data." Journal of Data Mining & Digital Humanities Special Issue on Scientific... (January 2, 2018). http://dx.doi.org/10.46298/jdmdh.3104.

Abstract:
Cloud Computing and Big Data are the upcoming Information Technology (IT) computing models. These groundbreaking paradigms are leading IT to a new set of rules that aims to change the delivery and exploitation model of computing resources, thus creating a novel business market that is growing exponentially and attracting more and more investment from both providers and end users looking to profit from these innovative computing models. In the same context, researchers are racing to develop, test and optimize Cloud Computing and Big Data platforms, and several studies are ongoing to determine and enhance the essential aspects of these computing models, especially compute resource allocation. Processing-power scheduling is crucial in Cloud Computing and Big Data because of the data growth management and delivery design proposed by these new computing models, which requires faster responses from platforms and applications. Hence the importance of developing highly efficient scheduling algorithms that comply with the platform and infrastructure requirements of these computing models.
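The round robin scheduling that this line of work builds on is simple to state: each ready task runs for at most one fixed quantum before yielding to the next. A minimal simulator follows; the burst times and quantum are made up for illustration, and "smarter" variants such as the one this paper proposes typically adapt the quantum instead of fixing it:

```python
def round_robin(bursts, quantum):
    """Simulate classic round robin; return each process's turnaround time.

    `bursts` are CPU burst lengths in time units; `quantum` is the fixed
    time slice. Context-switch overhead is ignored for simplicity.
    """
    remaining = list(bursts)
    turnaround = [0] * len(bursts)
    t = 0
    done = 0
    while done < len(bursts):
        for i, r in enumerate(remaining):
            if r <= 0:
                continue  # process already finished
            run = min(quantum, r)
            t += run
            remaining[i] -= run
            if remaining[i] == 0:
                turnaround[i] = t  # finished at current time
                done += 1
    return turnaround

# Three processes with bursts 5, 3 and 1 under a quantum of 2 time units:
turnaround = round_robin([5, 3, 1], 2)
```

Trying different quantum values on the same burst set makes the classic trade-off visible: a small quantum multiplies context switches, while a very large one degenerates toward first-come-first-served.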