Journal articles on the topic 'Web servers; Client/server computing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Web servers; Client/server computing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Gupta, Meenakshi, and Atul Garg. "A Comparative Analysis of Content Delivery Network and Other Techniques for Web Content Delivery." International Journal of Service Science, Management, Engineering, and Technology 6, no. 4 (October 2015): 43–58. http://dx.doi.org/10.4018/ijssmet.2015100104.

Full text
Abstract:
Web content delivery is based on the client-server model. In this model, all web requests for specific content are serviced by a single web server, as the requested content resides on only one server. Therefore, with increasing reliance on the web, the load on web servers is growing, causing scalability, reliability and performance issues for web service providers. Various techniques have been implemented to handle these issues and improve the quality of service of web content delivery to end users, such as clustering of servers, client-side caching, proxy-server caching, mirroring of servers, multihoming and Content Delivery Networks (CDNs). This paper takes an analytical and comparative look at these approaches. It also compares CDNs with other distributed systems such as grid, cloud and peer-to-peer computing.
APA, Harvard, Vancouver, ISO, and other styles
2

Singh, Harikesh, and Shishir Kumar. "Dispatcher Based Dynamic Load Balancing on Web Server System." International Journal of System Dynamics Applications 1, no. 2 (April 2012): 15–27. http://dx.doi.org/10.4018/ijsda.2012040102.

Full text
Abstract:
Increasing traffic in the network creates congestion when bulk transfers of data occur. Performance evaluation and high availability of servers are important factors in resolving this problem using various cluster-based systems. Several low-cost servers using load-sharing cluster systems are connected to high-speed networks and apply load-balancing techniques between servers. This offers high computing power and high availability. A distributed website server can provide the scalability and flexibility to cope with growing client demands. The efficiency of a replicated web-server system depends on how incoming requests are distributed among the replicas. Distributed web-server architectures schedule client requests among the multiple server nodes in a user-transparent way that affects scalability and availability. The aim of this paper is the development of load-balancing techniques for distributed web-server systems.
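As a minimal sketch of the dispatcher idea summarized above — not the authors' implementation; the replica names and connection bookkeeping are invented for illustration — a dispatcher can route each incoming request to the replica with the fewest active connections:

```python
# Minimal least-connections dispatcher sketch. The replica list and the
# way connections are tracked are illustrative assumptions.
from collections import defaultdict

class Dispatcher:
    def __init__(self, replicas):
        self.replicas = list(replicas)          # backend web servers
        self.active = defaultdict(int)          # replica -> open connections

    def route(self, request_id):
        # pick the replica currently holding the fewest open connections
        target = min(self.replicas, key=lambda r: self.active[r])
        self.active[target] += 1
        return target

    def finished(self, replica):
        self.active[replica] -= 1               # connection closed

d = Dispatcher(["web1", "web2", "web3"])
print(d.route("req-1"))   # -> "web1" (all equal, first wins)
print(d.route("req-2"))   # -> "web2"
```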
APA, Harvard, Vancouver, ISO, and other styles
3

Afriansyah, Muhammad Faizal, Adian Fatchur Rochim, and Eko Didik Widianto. "Rancang Bangun Layanan Cloud Computing Berbasis IaaS Menggunakan Virtualbox." Jurnal Teknologi dan Sistem Komputer 3, no. 1 (January 30, 2015): 87–94. http://dx.doi.org/10.14710/jtsiskom.3.1.2015.87-94.

Full text
Abstract:
Today, technology grows very fast, and many technologies are created that can help users in their activities. A technology needs a server to store both system data and user data; more users mean more servers are needed to store user data. Server rooms become full and need extra space, so building the servers and the server room itself requires a high cost. The purpose of this research is to create IaaS-based server virtualization that is connected to a router, switch, virtual client and administrator with the VirtualBox application. This purpose is achieved by designing an appropriate research methodology. There are five stages of implementation in building the virtualization server in this research: system definition, requirements specification, system configuration, system testing, and system analysis. First, the system-definition phase describes the early identification of the system, the system requirements, and the network topology of the implemented system. The second phase is requirements specification, which determines the hardware and software specifications. The hardware consists of a computer with 8 GB of RAM and an AMD Phenom II X6 processor; the software consists of VirtualBox and operating systems. The third stage is system configuration, which declares the source code of the application on each server, router and switch so that each device performs its function. The final stages are system testing and system analysis, which check that the system is ready for use and works at its best. The results of this research show that IaaS-based server virtualization can be connected to display a web page on all clients through virtual switches and routers on a single computer.
APA, Harvard, Vancouver, ISO, and other styles
4

Antonova, Аlfiia, and Svitlana Bartkova. "An overview of the advantages of cloud computing and online IDE." Automation of technological and business processes 12, no. 3 (November 5, 2020): 47–50. http://dx.doi.org/10.15673/atbp.v12i3.1927.

Full text
Abstract:
The article discusses cloud computing and its impact on the field of software development, and analyzes several developer problems that can be solved using an online IDE. Using cloud computing in the enterprise is not new, and it is not difficult to implement. That is why it is gaining popularity: first, because of the large number of technologies that allow internal processes to be optimized; second, because of the large number of giant companies and small businesses that use these technologies. Cloud computing is very attractive in financial terms, since it allows companies to avoid spending money on building and supporting infrastructure. One also need not worry about risks such as equipment failures that affect the system, weather conditions, and so on; the provider takes on all of these concerns. The evolution of architectural solutions also increases the impact of cloud technology. A service-oriented approach to software development is becoming increasingly popular, and the usual thin client and single monolithic server are seen less and less. Clients are becoming more complex, absorbing part of the business logic; servers are divided into parts, each of which is responsible for a particular part of the subject area and in some cases may not know about the existence of the others. The IDE that developers require for programming also has analogues on the web platform. The online IDE has its advantages for solving certain tasks and has become increasingly popular lately. Users of an online IDE can create, run and customize software using a simple browser. The main goal of this study is to determine the main advantages of cloud technologies in the application-development process, to analyze the online-IDE segment and, based on these data, to identify the main situations that determine their use, predict further development, and identify the principles and technologies used in this area.
APA, Harvard, Vancouver, ISO, and other styles
5

Archana, E., V. Dickson Irudaya Raj, M. Vidhya, and J. S. Umashankar. "Secured information exchange in cloud using cross breed property based encryption." International Journal of Engineering & Technology 7, no. 1.9 (March 1, 2018): 205. http://dx.doi.org/10.14419/ijet.v7i1.9.9823.

Full text
Abstract:
People can access the web anywhere and at any time. Cloud computing is a concept that treats the resources on the Internet as a unified entity, namely the cloud. Data-center operators virtualize resources according to the needs of clients and expose them as storage pools, which clients can use to store files or data objects. Physically, the resources may be stored across multiple servers. Hence, data robustness is a major requirement for such storage systems. In this paper we propose one approach to provide data robustness: replicating the message so that each storage server stores a copy of it. We enhance the secure cloud-storage system by using a threshold proxy re-encryption technique. This encryption scheme supports decentralized erasure codes applied over encrypted messages and data-forwarding operations over encrypted and encoded messages. Our system is highly distributed: each storage server independently encodes and forwards messages, and key servers independently perform partial decryption.
APA, Harvard, Vancouver, ISO, and other styles
6

Gurusaran, M., P. Sivaranjan, K. S. Dinesh Kumar, P. Radha, K. P. S. Thulaa Tharshan, S. N. Satheesh, K. Jayanthan, et al. "Hydrogen Bonds Computing Server (HBCS): an online web server to compute hydrogen-bond interactions and their precision." Journal of Applied Crystallography 49, no. 2 (February 24, 2016): 642–45. http://dx.doi.org/10.1107/s1600576716002041.

Full text
Abstract:
Hydrogen bonds in biological macromolecules play significant structural and functional roles. They are the key contributors to most of the interactions without which no living system exists. In view of this, a web-based computing server, the Hydrogen Bonds Computing Server (HBCS), has been developed to compute hydrogen-bond interactions and their standard deviations for any given macromolecular structure. The computing server is connected to a locally maintained Protein Data Bank (PDB) archive. Thus, the user can calculate the above parameters for any deposited structure, and options have also been provided for the user to upload a structure in PDB format from the client machine. In addition, the server has been interfaced with the molecular viewers Jmol and JSmol to visualize the hydrogen-bond interactions. The proposed server is freely available and accessible via the World Wide Web at http://bioserver1.physics.iisc.ernet.in/hbcs/.
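The abstract does not spell out HBCS's geometric criteria, but a common textbook test for a hydrogen bond is a donor-acceptor heavy-atom distance below roughly 3.5 Å. A toy version of that check, with hypothetical PDB-style coordinates in ångströms:

```python
# Toy distance-based hydrogen-bond check. The 3.5 Å donor-acceptor
# cutoff is a common textbook criterion, not necessarily the one HBCS
# uses; coordinates are hypothetical (x, y, z) tuples in Å.
import math

def distance(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def is_hydrogen_bond(donor, acceptor, cutoff=3.5):
    """True if donor and acceptor heavy atoms are within the cutoff."""
    return distance(donor, acceptor) <= cutoff

donor_n = (12.40, 7.15, 3.02)     # e.g. a backbone N
acceptor_o = (14.11, 8.90, 4.25)  # e.g. a carbonyl O
print(is_hydrogen_bond(donor_n, acceptor_o))  # True: ~2.7 Å apart
```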
APA, Harvard, Vancouver, ISO, and other styles
7

Sharma, Amit. "HULK and DDoS Attacks in Web Applications with Detection Mechanism." International Journal of Emerging Research in Management and Technology 6, no. 6 (June 29, 2018): 192. http://dx.doi.org/10.23956/ijermt.v6i6.268.

Full text
Abstract:
Distributed Denial of Service (DDoS) attacks are significant threats today to web applications and web services. These attacks are moving toward the application layer in order to acquire and waste the maximum number of CPU cycles. By requesting resources from web services in huge amounts using rapid fire of requests, an attacker's automated programs consume all the processing capacity of a single-server application or a distributed-environment application. The phases of the scheme's execution are user-behaviour monitoring and detection. In the first phase, information on user behaviour is gathered, each individual user's trust score is computed, and the entropy of the same user is calculated. HTTP Unbearable Load King (HULK) attacks are also evaluated. Building on the first phase, in the detection phase variation in entropy is observed and malicious users are identified. A rate limiter is also introduced to stop, or scale down, service to the malicious users. This paper introduces a FAÇADE layer for detecting and blocking unauthorized users attacking the system.
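A minimal sketch of the entropy signal described above (the threshold, window contents and client IDs are invented; the paper's trust scoring is more elaborate):

```python
# Compute Shannon entropy over the distribution of requests per client
# and flag a window whose entropy drops sharply (a few clients
# dominating, as in HULK-style floods). Numbers are illustrative.
import math
from collections import Counter

def shannon_entropy(client_ids):
    counts = Counter(client_ids)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

baseline = shannon_entropy(["u1", "u2", "u3", "u4", "u5", "u6"] * 10)
attack = shannon_entropy(["bot"] * 55 + ["u1", "u2", "u3", "u4", "u5"])
print(baseline, attack)                # attack window has far lower entropy
if baseline - attack > 1.0:            # hypothetical detection threshold
    print("possible application-layer flood")
```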
APA, Harvard, Vancouver, ISO, and other styles
8

Kumar, K. S. Dinesh, M. Gurusaran, S. N. Satheesh, P. Radha, S. Pavithra, K. P. S. Thulaa Tharshan, John R. Helliwell, and K. Sekar. "Online_DPI: a web server to calculate the diffraction precision index for a protein structure." Journal of Applied Crystallography 48, no. 3 (April 25, 2015): 939–42. http://dx.doi.org/10.1107/s1600576715006287.

Full text
Abstract:
An online computing server, Online_DPI (where DPI denotes the diffraction precision index), has been created to calculate the 'Cruickshank DPI' value for a given three-dimensional protein or macromolecular structure. It also estimates the atomic coordinate error for all the atoms available in the structure. It is an easy-to-use web server that enables users to visualize the computed values dynamically on the client machine. Users can provide the Protein Data Bank (PDB) identification code or upload the three-dimensional atomic coordinates from the client machine. The computed DPI value for the structure and the atomic coordinate errors for all the atoms are included in the revised PDB file. Further, users can graphically view the atomic coordinate error along with 'temperature factors' (i.e. atomic displacement parameters). In addition, the computing engine is interfaced with an up-to-date local copy of the Protein Data Bank. New entries are updated every week, and thus users can access all the structures available in the Protein Data Bank. The computing engine is freely accessible online at http://cluster.physics.iisc.ernet.in/dpi/.
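For orientation, one widely quoted form of Cruickshank's DPI is sqrt(N_atoms/p) · C^(-1/3) · R · d_min, with p the number of observations minus the number of parameters and C the data completeness. The sketch below encodes that formula with made-up sample numbers; it is not a re-implementation of the Online_DPI server:

```python
# One widely quoted form of Cruickshank's DPI:
#   DPI = sqrt(N_atoms / p) * C**(-1/3) * R * d_min,
# with p = n_observations - n_parameters and C the completeness.
import math

def cruickshank_dpi(n_atoms, n_obs, n_params, completeness, r_factor, d_min):
    p = n_obs - n_params
    if p <= 0:
        raise ValueError("more parameters than observations")
    return math.sqrt(n_atoms / p) * completeness ** (-1 / 3) * r_factor * d_min

# Hypothetical 2.0 Å structure: 2500 atoms, 25000 reflections,
# ~4 parameters per atom, 95% complete data, R = 0.20.
print(round(cruickshank_dpi(2500, 25000, 10000, 0.95, 0.20, 2.0), 3))
```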
APA, Harvard, Vancouver, ISO, and other styles
9

A., Selvakumar, and Gunasekaran G. "A Novel Approach of Load Balancing and Task Scheduling Using Ant Colony Optimization Algorithm." International Journal of Software Innovation 7, no. 2 (April 2019): 9–20. http://dx.doi.org/10.4018/ijsi.2019040102.

Full text
Abstract:
Cloud computing is a model for delivering information technology services in which resources are retrieved from the web through web-based tools and applications, rather than through a direct connection to a server. Clients can set up and boot the required resources and pay only for what they use. Hence, providing a mechanism for efficient resource management and assignment will be an important objective of cloud computing. Load balancing is one of the major concerns in cloud computing; its main purpose is to satisfy the requirements of users by distributing the load evenly among all servers in the cloud to maximize the utilization of resources, increase throughput, provide good response time and reduce energy consumption. To optimize resource allocation and ensure quality of service, this article proposes a novel approach to load balancing based on enhanced ant colony optimization.
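A toy, pheromone-flavoured version of ant-colony server selection (all constants, server names and response times are illustrative; the paper's enhanced ACO is considerably richer):

```python
# Ants probabilistically pick servers by pheromone level; pheromone is
# reinforced when a server responds fast and evaporates otherwise.
import random

servers = {"vm1": 1.0, "vm2": 1.0, "vm3": 1.0}     # server -> pheromone
EVAPORATION, REWARD = 0.9, 0.5

def pick_server():
    total = sum(servers.values())
    r = random.uniform(0, total)
    for name, tau in servers.items():
        r -= tau
        if r <= 0:
            return name
    return name

def update(name, response_time):
    for s in servers:                               # evaporation step
        servers[s] *= EVAPORATION
    servers[name] += REWARD / response_time         # fast replies reinforce

for _ in range(100):
    s = pick_server()
    simulated_rt = {"vm1": 0.1, "vm2": 0.3, "vm3": 0.9}[s]
    update(s, simulated_rt)
print(max(servers, key=servers.get))                # usually "vm1"
```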
APA, Harvard, Vancouver, ISO, and other styles
10

Boldrin, Fabio, Chiara Taddia, and Gianluca Mazzini. "Web Distributed Computing Systems Implementation and Modeling." International Journal of Adaptive, Resilient and Autonomic Systems 1, no. 1 (January 2010): 75–91. http://dx.doi.org/10.4018/jaras.2010071705.

Full text
Abstract:
This article proposes a new approach to distributed computing. The main novelty consists in the exploitation of web browsers as clients, thanks to the availability of JavaScript, AJAX and Flex. The described solution has two main advantages: it is client-free, so no additional programs have to be installed to perform the computation, and it requires low CPU usage, so client-side computation is not invasive for users. The solution is developed using both AJAX and Adobe® Flex® technologies, embedding a pseudo-client into a web page that hosts the computation. While users browse the hosting web page, computation takes place, resolving single sub-problems and sending the solutions to the server-side part of the system. Our client-free solution is an example of a highly resilient and self-administered system that is able to organize the scheduling of the processes and the error management in an autonomic manner. A mathematical model has been developed over this solution. The main goals of the model are to describe and classify different categories of problems on the basis of feasibility, and to find the limits in the dimensioning of the scheduling systems for this approach to be advantageous. The new architecture has been tested through different performance metrics by implementing two examples of distributed computing: the cracking of an RSA cryptosystem through the factorization of the public key, and the correlation index between samples in genetic data sets. Results have shown good feasibility of this approach both in a closed environment and in an Internet environment, in a typical real situation.
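The server-side work-splitting idea can be sketched as follows: the factorization search is cut into sub-ranges that each browser pseudo-client would fetch and test. The modulus and chunk size are toy values, and the real system dispatched work to AJAX/Flex clients rather than running the worker locally:

```python
# Server-side sketch of splitting a trial-division factor search into
# sub-problems. Values below are toy stand-ins, not the paper's setup.
def make_chunks(limit, chunk_size):
    """Yield (start, end) sub-problems covering candidates up to limit."""
    start = 3
    while start <= limit:
        yield (start, min(start + chunk_size - 1, limit))
        start += chunk_size

def worker(n, start, end):
    """What a single pseudo-client would compute for its sub-range."""
    for d in range(start | 1, end + 1, 2):   # odd candidates only
        if n % d == 0:
            return d
    return None

n = 1009 * 2003                               # toy "public key" modulus
for lo, hi in make_chunks(3000, 500):
    factor = worker(n, lo, hi)
    if factor:
        print(f"factor {factor} found in chunk [{lo}, {hi}]")
        break
```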
APA, Harvard, Vancouver, ISO, and other styles
11

Dykstra, Dave, Brian Bockelman, Jakob Blomer, and Laurence Field. "The Open High Throughput Computing Content Delivery Network." EPJ Web of Conferences 214 (2019): 04023. http://dx.doi.org/10.1051/epjconf/201921404023.

Full text
Abstract:
LHC experiments make extensive use of web proxy caches, especially for software distribution via the CernVM File System and for conditions data via the Frontier Distributed Database Caching system. Since many jobs read the same data, cache hit rates are high and hence most of the traffic flows efficiently over Local Area Networks. However, it is not always possible to have local web caches, particularly for opportunistic cases where experiments have little control over site services. The Open High Throughput Computing (HTC) Content Delivery Network (CDN), openhtc.io, aims to address this by using web proxy caches from a commercial CDN provider. Cloudflare provides a simple interface for registering DNS aliases of any web server and does reverse proxy web caching on those aliases. The openhtc.io domain is hosted on Cloudflare's free tier CDN which has no bandwidth limit and makes use of data centers throughout the world, so the average performance for clients is much improved compared to reading from CERN or a Tier 1. The load on WLCG servers is also significantly reduced. WLCG Web Proxy Auto Discovery is used to select local web caches when they are available and otherwise select openhtc.io caching. This paper describes the Open HTC CDN in detail and provides initial results from its use for LHC@Home and USCMS opportunistic computing.
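The proxy-selection behaviour described above can be sketched as "prefer a reachable local squid, otherwise fall back to a CDN alias". The hostnames and TCP probe below are illustrative, not the actual WLCG Web Proxy Auto Discovery logic:

```python
# Prefer a local squid cache when one is reachable; otherwise fall back
# to an openhtc.io CDN alias. Hostnames are placeholders.
import socket

def reachable(host, port=3128, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_proxy(local_squids, cdn_alias="example.openhtc.io"):
    for squid in local_squids:
        if reachable(squid):
            return squid                     # cache hits stay on the LAN
    return cdn_alias                         # opportunistic site: use CDN

print(choose_proxy(["squid.example.site"]))
```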
APA, Harvard, Vancouver, ISO, and other styles
12

Manikanta, K., and K. S. Rajan. "LSIVIEWER 2.0 – A CLIENT-ORIENTED ONLINE VISUALIZATION TOOL FOR GEOSPATIAL VECTOR DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W7 (September 12, 2017): 107–13. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w7-107-2017.

Full text
Abstract:
Geospatial data visualization systems have predominantly been applications that are installed and run in a desktop environment. Over the last decade, with the advent of web technologies and their adoption by the geospatial community, the server-client model for data handling, data rendering and visualization has been the most prevalent approach in Web-GIS. While client devices have become functionally more powerful over recent years, the above model has largely ignored this and remains a server-dominant computing paradigm. In this paper, an attempt has been made to develop and demonstrate LSIViewer – a simple, easy-to-use and robust online geospatial data visualisation system for the user's own data that harnesses the client's capabilities for data rendering and user-interactive styling, with a reduced load on the server. The developed system can support multiple geospatial vector formats and can be integrated with other web-based systems like WMS, WFS, etc. The technology stack used to build this system is Node.js on the server side and HTML5 Canvas and JavaScript on the client side. Various tests run on a range of vector datasets, up to 35 MB, showed that the time taken to render the vector data using LSIViewer is comparable to a desktop GIS application, QGIS, on an identical system.
APA, Harvard, Vancouver, ISO, and other styles
13

Triyanto, Hari, Arif Bijaksana Putra Negara, and Muhammad Azhar Irwansyah. "Analisa Perbandingan Performa Openstack dan Apache Cloudstack dalam Model Cloud Computing Berbasis Infrastructure As a Service." Jurnal Sistem dan Teknologi Informasi (JUSTIN) 8, no. 1 (January 30, 2020): 78. http://dx.doi.org/10.26418/justin.v8i1.31936.

Full text
Abstract:
Efficient use of computing resources can be achieved by virtualizing physical computer machines. In cloud computing, resources such as CPU, memory, storage and network can be viewed as a service, with virtualization at its heart. OpenStack and CloudStack are open-source options for building cloud computing with the IaaS model. This study aims to compare OpenStack and CloudStack in the design of a private cloud, with testing covering the web server, computation, OLTP database and network aspects. The tests use scalability metrics with overhead and linearity test methods. The implementation was carried out on one server, using one router and one laptop as a client. The client runs the tests using tools such as Httperf, Sysbench and Iperf. A number of test loads were applied to each instance according to the prepared test scenarios. The results show that the execution time required to access the web server and for computation on OpenStack instances is lower than on CloudStack instances. In the OLTP database and network tests, the CloudStack instances were superior, with lower OLTP database execution times, higher throughput and lower jitter. Therefore, OpenStack excels for web-based and computational applications, whereas CloudStack excels for applications with heavy database transactions that require a good network.
APA, Harvard, Vancouver, ISO, and other styles
14

Goswami, Veena, Sudhansu Shekhar Patra, and G. B. Mund. "Optimal Management of Cloud Centers with Different Arrival Modes for Cloud Computing Environment." International Journal of Cloud Applications and Computing 2, no. 3 (July 2012): 86–97. http://dx.doi.org/10.4018/ijcac.2012070104.

Full text
Abstract:
Cloud computing is a new computing paradigm in which information and computing services can be accessed from a web browser by clients. Understanding the characteristics of computing service performance has become critical for service applications in cloud computing. For the commercial success of this new computing paradigm, the ability to deliver guaranteed Quality of Service (QoS) is crucial. Based on the service-level agreement, requests are processed in the cloud centers in different modes. This paper analyzes a finite-buffer multi-server queuing system where client requests have two arrival modes. It is assumed that each arrival mode is serviced by one or more virtual machines, and both modes have equal probabilities of receiving service. Various performance measures are obtained, and an optimal cost policy is presented with numerical results. A genetic algorithm is employed to search for the optimal values of the various system parameters.
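A toy discrete-event simulation of the modelled setting — a finite buffer, several virtual machines, and two Poisson arrival modes — can make the loss behaviour concrete. The rates, buffer size and service times below are invented; the paper derives its measures analytically:

```python
# Finite-buffer, multi-server queue with two Poisson arrival modes.
# All numbers are illustrative, not taken from the paper.
import random, heapq

def simulate(lam1, lam2, mu, servers=3, buffer_size=10, horizon=10_000):
    random.seed(1)
    t, busy, queue, lost, served = 0.0, 0, 0, 0, 0
    events = []                                   # (time, kind)
    for lam, kind in ((lam1, "a1"), (lam2, "a2")):
        heapq.heappush(events, (random.expovariate(lam), kind))
    while t < horizon:
        t, kind = heapq.heappop(events)
        if kind in ("a1", "a2"):                  # arrival of either mode
            lam = lam1 if kind == "a1" else lam2
            heapq.heappush(events, (t + random.expovariate(lam), kind))
            if busy < servers:
                busy += 1
                heapq.heappush(events, (t + random.expovariate(mu), "d"))
            elif queue < buffer_size:
                queue += 1
            else:
                lost += 1                         # finite buffer overflows
        else:                                     # departure
            served += 1
            if queue:
                queue -= 1
                heapq.heappush(events, (t + random.expovariate(mu), "d"))
            else:
                busy -= 1
    return served, lost

print(simulate(lam1=1.0, lam2=0.5, mu=0.6))
```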
APA, Harvard, Vancouver, ISO, and other styles
15

Shi, Shuang Yuan, Ju Song Zhang, and Zong Guo Qiu. "Study of the Characteristic and Information Architecture Model of Enterprise Web Application." Advanced Engineering Forum 1 (September 2011): 31–37. http://dx.doi.org/10.4028/www.scientific.net/aef.1.31.

Full text
Abstract:
In this paper, we systematically describe the characteristics of traditional web applications as well as the advantages of Ajax technology; analyze the differences between enterprise web applications and public web applications, and between enterprise web applications and desktop applications; study the characteristics of enterprise web applications and an information-architecture model; and, on the basis of an analysis of enterprise-class frameworks, propose the functional requirements of components and frameworks that support enterprise-class web development, discussing the computing balance between client and server.
APA, Harvard, Vancouver, ISO, and other styles
16

Simic, Dragan, Srecko Ristic, and Slobodan Obradovic. "Measurement of the achieved performance levels of the web applications with distributed relational database." Facta universitatis - series: Electronics and Energetics 20, no. 1 (2007): 31–43. http://dx.doi.org/10.2298/fuee0701031s.

Full text
Abstract:
This paper describes the methods and means used for creating a computer cluster from ordinary PCs. The cluster runs a replicated relational database and two custom web applications used as the database clients. The operating system running all this is Linux 2.4, with Linux Virtual Server (LVS) as the load-balancing solution, MySQL 4.0 as the replicated database (supporting transactions and referential integrity), and Apache 1.3 as the web server. PHP4 is used for web-application development. Additionally, a High Performance Computing (HPC) cluster is implemented using OpenMOSIX. Measurement and comparison of the achieved performance levels is done as the final aim, using two custom applications developed for that purpose, acting as clients of the two deployed web applications. The performance-measurement applications run under Microsoft Windows and were developed using Borland Delphi 7.
APA, Harvard, Vancouver, ISO, and other styles
17

Danilovskiy, K. N., A. R. Dudaev, V. N. Glinskikh, M. N. Nikitenko, and I. A. Moskaev. "Web-Technologies Based Software for Oil and Gas Wells Geosteering." Vestnik NSU. Series: Information Technologies 17, no. 2 (2019): 5–17. http://dx.doi.org/10.25205/1818-7900-2019-17-2-5-17.

Full text
Abstract:
Accuracy of the horizontal well placement in the target reservoir becomes essential for efficient oilfield development. Geosteering of a well with a complex trajectory is performed using real-time geophysical data obtained while drilling. The presented work is devoted to the development of a new software for horizontal oil and gas wells geosteering. Algorithms based on logging data correlation and electromagnetic logging data numerical inversion methods are used for well placement. The developed application is based on web-technologies and has a client-server architecture. To optimize the resource-intensive calculations execution time, high-performance cloud computing is used.
APA, Harvard, Vancouver, ISO, and other styles
18

Mishra, Sudipan, and Xumin Liu. "Optimizing Concurrency Performance of Complex Services in Mobile Environment." International Journal of Web Services Research 11, no. 1 (January 2014): 94–110. http://dx.doi.org/10.4018/ijwsr.2014010105.

Full text
Abstract:
Hosting services on mobile devices has been considered a key solution for domains that have special requirements on portability, timeliness, and flexibility of service deployment. Typical examples include, among many others, military, music, healthcare, gaming, and data sharing. Although the recent boom of mobile computing makes service deployment in mobile environments possible, significant challenges arise due to the limitations of existing mobile hardware/software in managing resource-intensive applications. The situation gets worse when managing complex services that allow concurrent clients and requests. This paper addresses the issues related specifically to improving concurrency control in mobile web servers to support the mobile deployment of complex services. The authors identify key factors that affect how a system responds to a request, including request-related factors, system-resource-related factors, and context. Based on this, the authors propose a dynamic heavy-request classification model (DHRC) to estimate the heaviness of an incoming request using machine-learning methods. Heavy requests, which require relatively large system resources on the mobile server to generate a response, are thereby detected. The authors design a dynamic request management strategy (DRMS), which reduces the number of discarded requests by adding heavy requests to a queue and processing them asynchronously. The proposed solution is implemented on Android-based mobile devices as an extension of the I-Jetty web server. Experimental studies are conducted, and the results indicate the effectiveness of the solution.
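A compressed sketch of the DHRC/DRMS pipeline: score a request's heaviness and, above a threshold, queue it for asynchronous processing instead of discarding it. The scoring features and threshold are invented stand-ins for the learned classifier:

```python
# Heavy requests are deferred to a background queue, not dropped.
import queue, threading, time

heavy_queue = queue.Queue()
THRESHOLD = 0.5                                   # hypothetical cutoff

def heaviness(request):
    # toy stand-in for the learned model: weigh payload size and CPU hint
    return 0.6 * request["size_mb"] / 10 + 0.4 * request["cpu_hint"]

def handle(request):
    if heaviness(request) > THRESHOLD:
        heavy_queue.put(request)                  # deferred, not discarded
        return "202 Accepted (queued)"
    return "200 OK (served inline)"

def background_worker():
    while True:
        req = heavy_queue.get()
        time.sleep(0.01)                          # pretend to do heavy work
        heavy_queue.task_done()

threading.Thread(target=background_worker, daemon=True).start()
print(handle({"size_mb": 1, "cpu_hint": 0.1}))    # light -> inline
print(handle({"size_mb": 9, "cpu_hint": 0.9}))    # heavy -> queued
heavy_queue.join()
```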
APA, Harvard, Vancouver, ISO, and other styles
19

Wang, Yi Fei. "The Web Foreign Language Teaching Research Based on P2P Technology." Applied Mechanics and Materials 519-520 (February 2014): 132–36. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.132.

Full text
Abstract:
The author analyzes the structure and function of traditional foreign-language-teaching website systems and points out the weaknesses of their client/server network computing model. By using a peer-to-peer (P2P) system, we can make full use of every online node's computing power and communication bandwidth, and we can provide better online communication and resource sharing among students and teachers. This paper proposes a foreign-language teaching system based on P2P, which can effectively make up for the inadequacies of traditional teaching websites. We also explore the application of P2P technology in multimedia network teaching.
APA, Harvard, Vancouver, ISO, and other styles
20

Kovalets, Ivan, Svitlana Maistrenko, Alexander Khalchenkov, Olexander Polonsky, Taras Dontsov-Zagreba, Kostyantyn Khurtsylava, and Oleg Udovenko. "ADAPTATION OF THE WEB-SERVICE OF AIR POLLUTION FORECASTING FOR OPERATION WITHIN CLOUD COMPUTING PLATFORM OF THE UKRAINIAN NATIONAL GRID INFRASTRUCTURE." Science and Innovation 17, no. 1 (March 3, 2021): 78–88. http://dx.doi.org/10.15407/scine17.01.078.

Full text
Abstract:
Introduction. Air pollution modeling is a powerful tool that allows developing scientifically justified solutions to reduce the risks posed by atmospheric emissions of pollutants. Problem Statement. Cloud computing infrastructures provide new opportunities for web-based air pollution forecasting systems. However, the implementation of these capabilities requires changes in the architecture of the existing systems. Purpose. The purpose is to adapt the web service of forecasting the atmospheric pollution in Ukraine to operate in the cloud computing platform of the Ukrainian National Grid infrastructure. Materials and Methods. The web client – web server – cloud computing architecture was used. The calculation of the model is performed in the cloud infrastructure, while the client and server parts operate on separate computers. Results. With the developed service, the forecast of air pollution is possible for every point in the territory of Ukraine for more than thirty substances, including chlorine, ammonia, hydrogen sulfide and others. The forecast is performed using the data of the WRF-Ukraine numerical weather prediction system and visualized through a web interface. The capabilities of the developed system were demonstrated by the example of simulation of air pollution in the part of Kyiv affected by the releases from the Energia incineration plant during the pollution episode in September 2019. The total releases of toluene gas from the incineration plant and from the fire at a spontaneous waste landfill located a few km from Kyiv were estimated and analyzed. For the considered period the fire could bring considerable additional amounts of pollutants to the studied region. The confidence interval for the maximum airborne concentration for the considered period is estimated at 0.7 to 2.1 mg·m-3, which is higher than the permissible value (0.6 mg·m-3). Conclusions. The presented system could be used by institutions responsible for response to environmental accidents. Keywords: air pollution, atmospheric dispersion, web-systems, cloud computing.
APA, Harvard, Vancouver, ISO, and other styles
21

Alsobeh, Anas M. R., Aws Abed Al Raheem Magableh, and Emad M. AlSukhni. "Runtime Reusable Weaving Model for Cloud Services Using Aspect-Oriented Programming." International Journal of Web Services Research 15, no. 1 (January 2018): 71–88. http://dx.doi.org/10.4018/ijwsr.2018010104.

Full text
Abstract:
Cloud computing technology has opened an avenue to meet the critical need to securely share distributed resources and web services, and especially those that belong to clients who have sensitive data and applications. However, implementing crosscutting concerns for cloud-based applications is a challenge. This challenge stems from the nature of distributed Web-based technology architecture and infrastructure. One of the key concerns is security logic, which is scattered and tangled across all the cloud service layers. In addition, maintenance and modification of the security aspect is a difficult task. Therefore, cloud services need to be extended by enriching them with features to support adaptation so that these services can become better structured and less complex. Aspect-oriented programming is the right technical solution for this problem as it enables the required separation when implementing security features without the need to change the core code of the server or client in the cloud. Therefore, this article proposes a Runtime Reusable Weaving Model for weaving security-related crosscutting concerns through layers of cloud computing architecture. The proposed model does not require access to the source code of a cloud service and this can make it easier for the client to reuse the needed security-related crosscutting concerns. The proposed model is implemented using aspect orientation techniques to integrate cloud security solutions at the software-as-a-service layer.
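Python has no AspectJ-style weaver, but a decorator can sketch the runtime-weaving idea: a security concern is attached to a service call without touching its source. The token check below is a placeholder, not the article's model:

```python
# A security "advice" is woven around a service function via a decorator,
# leaving the service's own code untouched.
import functools

def weave_security(advice):
    """Wrap (weave) a security advice around any service function."""
    def decorator(service_fn):
        @functools.wraps(service_fn)
        def joined(*args, **kwargs):
            advice(*args, **kwargs)               # before-advice: security
            return service_fn(*args, **kwargs)
        return joined
    return decorator

def require_token(user, *_, **__):
    if not user.get("token"):
        raise PermissionError("missing security token")

@weave_security(require_token)
def fetch_report(user, report_id):
    return f"report {report_id} for {user['name']}"

print(fetch_report({"name": "alice", "token": "abc"}, 7))
```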
APA, Harvard, Vancouver, ISO, and other styles
22

Ravalia, Varun, and Neha Sehrawat. "Vivid analysis of Cloud Computing along with its security issues and challenges." Journal of University of Shanghai for Science and Technology 23, no. 07 (July 8, 2021): 458–63. http://dx.doi.org/10.51201/jusst/21/07113.

Full text
Abstract:
In the modern era, technology is used by everyone. "Cloud" is a collective term for a boundless range of advancements and progress. Cloud computing is a disruptive technology for providing on-demand access to data and applications from anywhere in the world at any time. Cloud computing incorporates various available innovations and technologies such as virtualization, high-bandwidth networks, Web 2.0, browser interfaces, and time-sharing. Cloud computing enables us to share resources like storage, applications, services, and networks without physically owning them. The data is stored in databases on servers, and users/clients need to request access by sending requests to these servers. This paper covers the details of cloud technology, its characteristics, and its models, alongside the challenges and problems faced in cloud computing. The focus here is on a theoretical explanation of the cloud and its models, and on the security problems and confrontations faced during the use of cloud technology.
APA, Harvard, Vancouver, ISO, and other styles
23

Bagrao, Darshan. "A Survey of Emerging Threats in Cloud Security." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 2943–45. http://dx.doi.org/10.22214/ijraset.2021.37909.

Full text
Abstract:
The original aim of the research was to investigate the conceptual dimensions of cloud security threats and vulnerabilities. Cloud computing has changed the whole picture from centralized (client-server, not web-based) to distributed systems, and now we are getting back to virtual centralization (cloud computing). Although potential gains are achieved from cloud computing, the security of the model is still questionable. The cloud computing concept offers dynamically scalable resources and uses the Internet as a communication medium. This paper presents a survey of emerging cloud threats and also discusses existing threat reports and their remediation. The results and analysis show that this work will be helpful in summarizing the main security risks of cloud computing for different organizations. Keywords: threat, vulnerabilities, model security.
APA, Harvard, Vancouver, ISO, and other styles
24

Jin, Wenquan, Rongxu Xu, Sunhwan Lim, Dong-Hwan Park, Chanwon Park, and Dohyeun Kim. "Dynamic Inference Approach Based on Rules Engine in Intelligent Edge Computing for Building Environment Control." Sensors 21, no. 2 (January 18, 2021): 630. http://dx.doi.org/10.3390/s21020630.

Full text
Abstract:
Computation offloading enables intensive computational tasks in edge computing to be spread across multiple computing resources of the server to overcome hardware limitations. Deep learning derives its inference approach from a learning approach over a volume of data, using sufficient computing resources. Deploying domain-specific inference approaches to edge computing provides intelligent services close to the edge of the network. In this paper, we propose intelligent edge computing that provides a dynamic inference approach for building environment control. The dynamic inference approach is provided by a rules engine that is deployed on the edge gateway to select an inference function according to the triggered rule. The edge gateway is deployed at the entry of a network edge and provides comprehensive functions, including device management, device proxy, client service, intelligent service and a rules engine. The functions are provided by microservice provider modules that enable flexibility, extensibility and light weight when offloading domain-specific solutions to the edge gateway. Additionally, the intelligent services can be updated by offloading the microservice provider module with the inference models. Then, using the rules engine, the edge gateway operates an intelligent scenario based on the deployed rule profile by requesting the inference model from the intelligent service provider. The inference models are derived by training the building user data with a deep learning model on the edge server, which provides a high-performance computing resource. The intelligent service provider includes inference models and provides intelligent functions in the edge gateway using a constrained hardware resource based on microservices. Moreover, to bridge the Internet of Things (IoT) device network to the Internet, the gateway provides device management and a proxy to enable device access for web clients.
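A minimal rules-engine sketch of the dynamic-inference idea: a rule profile maps a triggered condition to one of several inference functions deployed on the gateway. Sensor names, thresholds and the inference stubs are all hypothetical:

```python
# First matching rule in the profile selects the inference function.
def comfort_inference(reading):
    return "adjust HVAC setpoint"

def ventilation_inference(reading):
    return "increase ventilation"

RULE_PROFILE = [
    (lambda r: r["co2_ppm"] > 1000, ventilation_inference),
    (lambda r: abs(r["temp_c"] - 22) > 2, comfort_inference),
]

def on_sensor_event(reading):
    for condition, inference in RULE_PROFILE:
        if condition(reading):                    # first matching rule wins
            return inference(reading)
    return "no action"

print(on_sensor_event({"co2_ppm": 1200, "temp_c": 22}))  # ventilation rule
print(on_sensor_event({"co2_ppm": 600, "temp_c": 26}))   # comfort rule
```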
APA, Harvard, Vancouver, ISO, and other styles
25

M. Mousa, Hamdy, and Gamal F. Elhady. "Trust Model Development for Cloud Environment using Fuzzy Mamdani and Simulators." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 13, no. 11 (November 30, 2014): 5142–54. http://dx.doi.org/10.24297/ijct.v13i11.2784.

Full text
Abstract:
Nowadays, cloud computing is an expanding area in research and industry which involves virtualization, distributed computing, the Internet, software, security, web services, etc. A cloud consists of several elements, such as clients, data centers and distributed servers, and it offers fault tolerance, high availability, effectiveness, scalability, flexibility, reduced overhead for users, reduced cost of ownership, on-demand services, etc. Two further factors now come into play: the cost of virtual machines in data centers, and response time. This paper therefore develops a trust model for cloud computing based on fuzzy logic, exploring the coordination between data centers and users to optimize application performance, the cost of virtual machines in data centers, and response time, using Cloud Computing Analyst.
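A tiny Mamdani-style sketch of a fuzzy trust score from two inputs (response time and VM cost); the membership shapes, rules and crude defuzzification are invented for illustration only:

```python
# Two fuzzy rules over triangular-ish memberships, then a crude
# weighted-centroid defuzzification. Everything here is illustrative.
def low(x, lo, hi):                  # 1 at lo, falling to 0 at hi
    return max(0.0, min(1.0, (hi - x) / (hi - lo)))

def high(x, lo, hi):                 # 0 at lo, rising to 1 at hi
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def trust(response_ms, cost_per_hr):
    # Rule 1: IF response is low AND cost is low THEN trust is high
    r1 = min(low(response_ms, 50, 500), low(cost_per_hr, 0.1, 1.0))
    # Rule 2: IF response is high OR cost is high THEN trust is low
    r2 = max(high(response_ms, 50, 500), high(cost_per_hr, 0.1, 1.0))
    # defuzzify over output centroids {low=0.2, high=0.9}
    return (r1 * 0.9 + r2 * 0.2) / max(r1 + r2, 1e-9)

print(round(trust(120, 0.2), 2))     # fast and cheap -> high trust
print(round(trust(450, 0.9), 2))     # slow and costly -> low trust
```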
APA, Harvard, Vancouver, ISO, and other styles
26

Lai, Andy S. Y., and S. Y. Leung. "Mobile Bluetooth-Based Game Development Using Arduino on Android Platform." Applied Mechanics and Materials 427-429 (September 2013): 2192–96. http://dx.doi.org/10.4028/www.scientific.net/amm.427-429.2192.

Full text
Abstract:
This paper describes the use of Arduino with the Android open-source platform, in an experimental way, to develop a remote-control multi-player game. An Android mobile is programmed to be a remote-control device that sends sockets via a Bluetooth network to an Arduino microprocessor embedded in a toy car, controlling movement in all directions along its pathway. We applied the open-source microcontroller Arduino and the Java-based technology Android to develop a multi-player mobile game in distributed network computing and mobile telecommunication, with a strong focus on the emergence of technologies that embrace Android mobiles and Arduino open sources. Our investigation focuses on an extended form of microprocessor network computing which game software developers can use to develop remote-control games for multiple players. We call this study an experimental mobile computing application, in which the Arduino embedded in the toy car can sense color-pattern changes with infrared along its pathway and instantaneously send the data via a Bluetooth piconet to the connected Android mobile device. In turn, the Android mobile device sends the data to the game server via web services on the Internet. Currently, mobile computing feeds information into the game server. However, designing concurrent network broadcasting and a real-time remote-control game is still a daunting task, and much theoretical and practical research remains to be done to catch up with the mobile computing and telecommunication era. In this paper, we present the overall architecture and discuss in detail the implementation steps taken to create the Arduino- and Android-based remote-control context-aware game. We have developed a multi-player game server and prepared the client and server code in mobile computing, providing adaptive routines to handle connection information requests in telecommunication and delivery for speedy throughput and context-triggered actions.
APA, Harvard, Vancouver, ISO, and other styles
27

Landa, M., P. Kavka, L. Strouhal, and J. Cepicky. "BUILDING A COMPLETE FREE AND OPEN SOURCE GIS INFRASTRUCTURE FOR HYDROLOGICAL COMPUTING AND DATA PUBLICATION USING GIS.LAB AND GISQUICK PLATFORMS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W2 (July 5, 2017): 101–5. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w2-101-2017.

Full text
Abstract:
Building a complete free and open-source GIS computing and data publication platform can be a relatively easy task. This paper describes an automated deployment of such a platform using two open-source software projects – GIS.lab and Gisquick. GIS.lab (http://web.gislab.io) is a project for rapid deployment of a complete, centrally managed and horizontally scalable GIS infrastructure in a local area network, data center or cloud. It provides a comprehensive set of free geospatial software seamlessly integrated into one easy-to-use system. A platform for GIS computing (in our case demonstrated on hydrological data processing) requires core components such as a geoprocessing server, a map server, and a computation engine, e.g. GRASS GIS, SAGA, or other similar GIS software. All these components can be rapidly and automatically deployed by the GIS.lab platform. In our demonstrated solution, PyWPS is used for serving WPS processes built on top of the GRASS GIS computation platform. GIS.lab can be easily extended by other components running in Docker containers. This approach is shown with the seamless integration of Gisquick. Gisquick (http://gisquick.org) is an open-source platform for publishing geospatial data in the sense of rapid sharing of QGIS projects on the web. The platform consists of a QGIS plugin, a Django-based server application, QGIS server, and web/mobile clients. This paper shows how to easily deploy a complete open-source GIS infrastructure allowing all required operations: data preparation on the desktop, data sharing, and geospatial computation as a service. It also includes data publication in the sense of OGC Web Services and, importantly, as interactive web mapping applications.
APA, Harvard, Vancouver, ISO, and other styles
28

Aliga, Aliga Paul, Adetokunbo MacGregor John-Otumu, Rebecca E. Imhanhahimi, and Atuegbelo Confidence Akpe. "Cross Site Scripting Attacks in Web-Based Applications." Journal of Advances in Science and Engineering 1, no. 2 (September 15, 2018): 25–35. http://dx.doi.org/10.37121/jase.v1i2.19.

Full text
Abstract:
Web-based applications have become very prevalent due to the ubiquity of web browsers for delivering service-oriented applications on demand to diverse clients over the Internet, and cross-site scripting (XSS) attack is a foremost security risk that has continuously ravaged web applications over the years. This paper critically examines the concept of XSS and some recent approaches for detecting and preventing XSS attacks in terms of architectural framework, algorithm used, solution location, and so on. The techniques were analysed, and the results showed that most of the available detection and prevention solutions to XSS attacks sit on the client end rather than the server end because of the peculiar nature of web-application vulnerabilities, and they also lack support for self-learning in order to detect new XSS attacks. A few researchers cited in this paper incorporated self-learning to detect and prevent XSS attacks in their design architectures using artificial neural networks and soft-computing approaches; a lot of improvement is still needed to effectively and efficiently handle the web-application security menace, as recommended.
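Two of the standard defences the survey discusses can be sketched server-side: escape untrusted output, and (naively) flag script-like input. Real filters are far more involved; the pattern below is illustrative:

```python
# Output escaping is the primary defence; the input pattern is a naive,
# illustrative detector, not a production XSS filter.
import html, re

SUSPICIOUS = re.compile(r"<\s*script|javascript:|on\w+\s*=", re.IGNORECASE)

def looks_like_xss(user_input):
    return bool(SUSPICIOUS.search(user_input))

def render_comment(user_input):
    # context-aware escaping renders injected tags inert
    return f"<p>{html.escape(user_input)}</p>"

payload = '<script>alert("x")</script>'
print(looks_like_xss(payload))        # True
print(render_comment(payload))        # tags emitted as &lt;script&gt;...
```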
APA, Harvard, Vancouver, ISO, and other styles
29

Fox, Geoffrey, Shrideep Pallickara, Marlon Pierce, and Harshawardhan Gadgil. "Building messaging substrates for Web and Grid applications." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 363, no. 1833 (July 18, 2005): 1757–73. http://dx.doi.org/10.1098/rsta.2005.1605.

Full text
Abstract:
Grid application frameworks have increasingly aligned themselves with the developments in Web services. Web services are currently the most popular infrastructure based on the service-oriented architecture (SOA) paradigm. There are three core areas within the SOA framework: (i) a set of capabilities that are remotely accessible, (ii) communications using messages and (iii) metadata pertaining to the aforementioned capabilities. In this paper, we focus on issues related to the messaging substrate hosting these services; we base these discussions on the NaradaBrokering system. We outline strategies to leverage capabilities available within the substrate without the need to make any changes to the service implementations themselves. We also identify the set of services needed to build Grids of Grids. Finally, we discuss another technology, HPSearch, which facilitates the administration of the substrate and the deployment of applications via a scripting interface. These issues have direct relevance to scientific Grid applications, which need to go beyond remote procedure calls in client-server interactions to support integrated distributed applications that couple databases, high-performance computing codes and visualization codes.
APA, Harvard, Vancouver, ISO, and other styles
30

Ayala, Inmaculada, Mercedes Amor, and Lidia Fuentes. "An Energy Efficiency Study of Web-Based Communication in Android Phones." Scientific Programming 2019 (April 4, 2019): 1–19. http://dx.doi.org/10.1155/2019/8235458.

Full text
Abstract:
Currently, mobile devices are the most popular pervasive computing devices, and they are becoming the primary way of accessing the Internet. Battery is a critical resource in such personal computing gadgets, and network communication is one of the primary energy-consuming activities in any mobile app. Indeed, as web-based communication is the most used, explicitly or implicitly, by mobile devices, HTTP-based traffic is the most power-demanding. So, mobile web developers should be aware of how much energy the different web-based communication alternatives demand. The goal of this paper is to measure and compare the energy consumption of three asynchronous HTTP-based methods on mobile devices in different browsers. Our experiments focus on three HTTP-based asynchronous communication models that allow a web server to push data to a client browser through an HTTP/1.1 interaction: Polling, Long Polling, and WebSockets. The resulting measurements are then analysed to gain a more accurate understanding of the impact of the selected method, and of the mobile browser, on the energy consumption of asynchronous HTTP-based communication. The utility of these experiments is to show developers which factors and settings most influence energy consumption when different web-based asynchronous communication methods are used, helping them to choose the most beneficial solution where possible. With this information, mobile web developers should be able to reduce the power consumption of the front end of web applications for mobile devices just by selecting and configuring the best asynchronous method or mobile browser, improving the performance of HTTP-based communication in terms of energy demand.
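For reference, the first two of the three compared methods look roughly like this on the client side; the URL and periods are placeholders. A short-poll client wakes on a fixed period and pays the radio cost even when nothing changed, while a long-poll client holds one request open until the server has data:

```python
# Short polling vs. long polling, sketched with the standard library.
# The endpoint is hypothetical, so the functions are defined but not run.
import time
import urllib.request

URL = "https://example.org/updates"               # hypothetical endpoint

def short_poll(period_s=5, rounds=3):
    for _ in range(rounds):
        with urllib.request.urlopen(URL, timeout=10) as r:
            print("poll:", r.status)              # one wake-up per period
        time.sleep(period_s)

def long_poll(rounds=3):
    for _ in range(rounds):
        # the server is expected to hold this request until data arrives
        with urllib.request.urlopen(URL + "?wait=true", timeout=60) as r:
            print("long poll:", r.status)

# short_poll(); long_poll()  # commented out: placeholder URL
```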
APA, Harvard, Vancouver, ISO, and other styles
31

Svatos, Michal, Alessandro De Salvo, Alastair Dewhurst, Emmanouil Vamvakopoulos, Julio Lozano Bahilo, Nurcan Ozturk, Javier Sanchez, and Dave Dykstra. "Understanding the evolution of conditions data access through Frontier for the ATLAS Experiment." EPJ Web of Conferences 214 (2019): 03020. http://dx.doi.org/10.1051/epjconf/201921403020.

Full text
Abstract:
The ATLAS Distributed Computing system uses the Frontier system to access the Conditions, Trigger, and Geometry database data stored in the Oracle Offline Database at CERN by means of the HTTP protocol. All ATLAS computing sites use Squid web proxies to cache the data, greatly reducing the load on the Frontier servers and the databases. One feature of the Frontier client is that, in the event of failure, it retries with different services. While this allows transient errors and scheduled maintenance to happen transparently, it does open the system up to cascading failures if the load is high enough. Throughout LHC Run 2 there has been an ever-increasing demand on the Frontier service. There have been multiple incidents where parts of the service failed due to high load. A significant improvement in the monitoring of the Frontier service was required. The monitoring was needed both to identify problematic tasks, which could then be killed or throttled, and to identify failing site services, as the consequence of a cascading failure is much higher. This presentation describes the implementation and features of the monitoring system.
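The failover behaviour that makes cascading overload possible can be sketched as "try each configured service in order and fall through on failure". Real Frontier clients add backoff and proxy semantics; the URLs below are placeholders:

```python
# Ordered failover over a list of services; the first success wins.
import urllib.request, urllib.error

SERVICES = [
    "http://squid.local.example:3128/data",       # local cache first
    "http://frontier1.example.org/data",          # then direct servers
    "http://frontier2.example.org/data",
]

def fetch_conditions(path):
    last_error = None
    for base in SERVICES:
        try:
            with urllib.request.urlopen(f"{base}/{path}", timeout=5) as r:
                return r.read()
        except (urllib.error.URLError, OSError) as err:
            last_error = err                      # transient: try the next
    raise RuntimeError(f"all services failed: {last_error}")

# data = fetch_conditions("some/object")  # placeholder path
```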
APA, Harvard, Vancouver, ISO, and other styles
32

Saldaña Barrios, Juan Jose, Luis Mendoza, Edgardo Pitti, and Miguel Vargas. "Ubiquitous and ambient-assisted living eHealth platforms for Down’s syndrome and palliative care in the Republic of Panama: A systematic review." Health Informatics Journal 24, no. 4 (October 21, 2016): 356–67. http://dx.doi.org/10.1177/1460458216671560.

Full text
Abstract:
In this work, the authors present two eHealth platforms that are examples of how health systems are migrating from client-server architecture to the web-based and ubiquitous paradigm. These two platforms were modeled, designed, developed and implemented with positive results. First, using ambient-assisted living and ubiquitous computing, the authors enhance how palliative care is provided to elderly patients and patients with terminal illness, making the work of doctors, nurses and other health actors easier. Second, applying machine-learning methods and a data-centered, ubiquitous repository of patients' results, the authors intend to improve the Down's syndrome risk-estimation process with more accurate predictions based on local women patients' parameters. These two eHealth platforms can improve the quality of life, not only physically but also psychologically, of patients and their families in the country of Panama.
APA, Harvard, Vancouver, ISO, and other styles
33

Grigorieva, M. A., A. A. Alekseev, A. A. Artamonov, T. P. Galkin, D. V. Grin, T. A. Korchuganova, S. V. Padolski, M. A. Titov, and A. A. Klimentov. "Enhancements in Functionality of the Interactive Visual Explorer for ATLAS Computing Metadata." EPJ Web of Conferences 245 (2020): 05032. http://dx.doi.org/10.1051/epjconf/202024505032.

Full text
Abstract:
The development of the Interactive Visual Explorer (InVEx), a visual analytics tool for the computing metadata of the ATLAS experiment at the LHC, includes research into various approaches for data handling on both the server and client sides. InVEx is implemented as a web-based application which aims to enhance the analytical and visualization capabilities of the existing monitoring tools and facilitates the process of data analysis with interactivity and human supervision. The current work focuses on architecture enhancements of the InVEx application. First, we describe the user-manageable data preparation stage for cluster analysis. Then, the Level-of-Detail approach for interactive visual analysis is presented. It starts with low detail, when all data records are grouped (by clustering algorithms or by categories) and aggregated. We provide users with means to look more deeply into these data, incrementally increasing the level of detail. Finally, we demonstrate the development of the data storage backend for InVEx, which is adapted to the Level-of-Detail method to keep all stages of the data derivation sequence.
APA, Harvard, Vancouver, ISO, and other styles
34

Reyes Chirino, Raymari, Isabel Cristina Ramos Nieves, Claudia Jimenez Heredia, Marcos Pedro Ramos Rodríguez, and Alfredo Jimenez González. "APLICACIÓN WEB PARA LA GESTIÓN DE LA INFORMACIÓN EN LA ESCUELA DE CAPACITACIÓN DE LA CONSTRUCCIÓN DE PINAR DEL RÍO, CUBA." UNESUM-Ciencias. Revista Científica Multidisciplinaria. ISSN 2602-8166 2, no. 3 (January 26, 2019): 101–16. http://dx.doi.org/10.47230/unesum-ciencias.v2.n3.2018.107.

Full text
Abstract:
Since the 1960s, the Ministry of Construction in Cuba has been concerned with the training of its workforce, developing important work in both vocational training and career guidance. Currently, the Ministry's Training School in Pinar del Río directs its work towards training in adjoining classrooms and for inmates, generating large volumes of information from a management process that is carried out manually. This situation causes cumbersome searches, loss and deterioration of information, and delays in the delivery of monthly reports, sometimes incurring human error. In view of the above, and considering that information and communication technologies are an excellent tool for managing information, this work was developed with the aim of designing a web application for information management in the adjoining classrooms and for the inmates of the Construction Training School in Pinar del Río, Cuba. Computer systems related to the research were analyzed, as well as client-side and server-side technologies and development tools. A database was built to store the information on the adjoining classrooms and the inmates, and a web application was designed that improves the management of this information. KEYWORDS: Database; Information Technology and Communications; computing; server; client
APA, Harvard, Vancouver, ISO, and other styles
35

Chivukula, Sreerama Prabhu, Rajasekhar Krovvidi, and Aneesh Sreevallabh Chivukula. "Eucalyptus Cloud to Remotely Provision e-Governance Applications." Journal of Computer Networks and Communications 2011 (2011): 1–15. http://dx.doi.org/10.1155/2011/268987.

Full text
Abstract:
Remote rural areas are constrained by the lack of a reliable power supply, essential for setting up advanced IT infrastructure such as servers or storage; therefore, cloud computing comprising an Infrastructure-as-a-Service (IaaS) is well suited to provide such IT infrastructure in remote rural areas. Additional cloud layers of Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) can be added above IaaS. A cluster-based IaaS cloud can be set up using the open-source middleware Eucalyptus in the data centres of NIC. Data centres of the central and state governments can be integrated with State Wide Area Networks and NICNET to form the e-governance grid of India. Web service repositories at the centre, state, and district levels can be built over this national e-governance grid. Using the Globus Toolkit, stateful web services can be achieved with speed and security. Adding the cloud layer over the e-governance grid makes a grid-cloud environment possible through Globus Nimbus. Services can be delivered as web services to heterogeneous client devices. Data mining using Weka4WS and DataMiningGrid can produce meaningful knowledge discovery from data. In this paper, a plan of action is provided for the implementation of the proposed architecture.
APA, Harvard, Vancouver, ISO, and other styles
36

Özbay, Kaan, and Shirsendu Mukherjee. "Web-Based Expert Geographical Information System for Advanced Transportation Management Systems." Transportation Research Record: Journal of the Transportation Research Board 1719, no. 1 (January 2000): 200–208. http://dx.doi.org/10.3141/1719-26.

Full text
Abstract:
The Internet is fast becoming the standard environment for client-server applications that involve multiple users. The proliferation of Internet-based application development tools opens new doors to transportation researchers who work in real-time decision support system development. In the 1990s, one of the most important problems in advanced transportation management systems research was the development of better incident management systems. Although the incident management process has been well studied, the development of real-time decision support systems that can be used by all the involved agencies remains a challenging area of transportation engineering research. Existing incident management systems are developed on various traditional computing platforms, including UNIX and Windows. However, with the advent of the World Wide Web and Internet-based programming tools such as Java, it is possible to develop platform independent decision support tools for the incident management agencies. Web-based support tools offer an invaluable opportunity to develop next-generation online decision support tools for real-time traffic management. The applicability of Web-based tools to the development of online decision support systems for incident management is explored and demonstrated, and a prototype incident management decision support system (DSS) that has most of the capabilities of similar UNIX-based DSS support systems is developed and tested. Briefly described are the implementation and development of a prototype wide-area incident management system using Web-based tools.
APA, Harvard, Vancouver, ISO, and other styles
37

Vivek, V., R. Srinivasan, and R. Elijah Blessing. "Resource provisioning methodology for cloud environment with producer and consumer favorable: an approach of virtual cloud compiler." International Journal of Engineering & Technology 7, no. 2.4 (March 10, 2018): 123. http://dx.doi.org/10.14419/ijet.v7i2.4.13022.

Full text
Abstract:
Cloud computing is a model in which traditional resources such as CPU cycles, storage, and security are delivered over the web. It has the potential to change large parts of the software development cycle, 3D rendering, and the execution of other computationally expensive tasks. A great deal of time is wasted compiling and rendering such tasks on low-powered machines, which directly affects the efficiency of the user working on the project. Extreme computational tasks such as weather forecasting, DNA analysis, and encryption breaking take so long on consumer-grade computing devices that they are realistically impossible to perform. We propose a novel payload-distribution approach for users who want to run their computationally expensive tasks efficiently. We use virtualization of data center resources to perform scheduling. Costs were reduced by up to 32% in a 30-user environment when our approach was used instead of a traditional standalone desktop environment. This was achieved by replacing 30 standalone computers with one powerful server and thin clients, such as Raspberry Pi devices. Time wasted on computational tasks such as rendering and compiling is greatly reduced. We not only improve efficiency, but also ensure the arrangement is favorable to both cloud producer and consumer. With simulations and their outcomes, we validate that our payload-distribution methodology performs well.
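The following sketch illustrates the offloading idea behind this abstract, under the assumption of a single shared job queue: thin clients enqueue expensive commands, and one powerful server drains the queue. It is a toy stand-in for the paper's virtualized data-center scheduling, not its actual implementation.

```python
import queue
import subprocess
import threading

jobs = queue.Queue()   # shared queue between thin clients and the server

def submit(cmd):
    """Thin-client side: offload a command instead of running it locally."""
    jobs.put(cmd)

def server_worker():
    """Server side: drain the queue, running each offloaded job."""
    while True:
        cmd = jobs.get()
        if cmd is None:                       # sentinel: shut the worker down
            break
        subprocess.run(cmd, shell=True, check=False)
        jobs.task_done()

worker = threading.Thread(target=server_worker)
worker.start()
submit("echo compiling project A")            # stand-ins for expensive compiles
submit("echo rendering scene B")
jobs.join()                                   # wait until all jobs have run
jobs.put(None)
worker.join()
```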
APA, Harvard, Vancouver, ISO, and other styles
38

Taghavi, Mona, Ahmed Patel, and Hamed Taghavi. "Design of an Integrated Project Management Information System for Large Scale Public Projects." Journal of Information Technology Research 4, no. 3 (July 2011): 14–28. http://dx.doi.org/10.4018/jitr.2011070102.

Full text
Abstract:
Due to the unprecedented growth of outsourcing of ICT projects by the Iranian government, a critical need exists for the proper execution and monitoring of these projects. In this paper, the authors propose a web-based project management system to improve the efficiency and effectiveness of the management processes and accelerate decision making. Based on the requirements and the information flow between the various units involved in the complete life-cycle of ICT project management, a functional model and system architecture with various underlying structures have been designed. The functional model contains two sub-systems: process management and information service. The proposed system structure is based on a four-layer client-server computing model. As part of a publicly available ICT system, it must be secure against cybercrime. This system can bring efficiency to managing the projects, improve decision making, and improve the overall management process with total accounting and management transparency. The proposed system overcomes the problems associated with a central system and traditional management processes, as is currently the case in Iran.
APA, Harvard, Vancouver, ISO, and other styles
39

CHEN, QIMING, PARVATHI CHUNDI, UMESHWAR DAYAL, and MEICHUN HSU. "DYNAMIC AGENTS." International Journal of Cooperative Information Systems 08, no. 02n03 (June 1999): 195–223. http://dx.doi.org/10.1142/s0218843099000101.

Full text
Abstract:
We claim that a dynamic agent infrastructure can provide a shift from static distributed computing to dynamic distributed computing, and we have developed an infrastructure to realize such a shift. We shall compare this infrastructure with other distributed computing infrastructures such as CORBA and DCOM, and demonstrate its value in highly dynamic system integration, service provisioning and distributed applications such as data mining on the Web. The infrastructure is Java-based, light-weight, and extensible. It differs from other agent platforms and client/server infrastructures in its support of dynamic behavior modification of agents. A dynamic agent is not designed to have a fixed set of predefined functions, but instead, to carry application-specific actions, which can be loaded and modified on the fly. This allows a dynamic agent to adjust its capability to accommodate changes in the environment and requirements, and play different roles across multiple applications. The above features are supported by the light-weight, built-in management facilities of dynamic agents, which can be commonly used by the "carried" application programs to communicate, manage resources and modify their problem-solving capabilities. Therefore, the proposed infrastructure allows application-specific multi-agent systems to be developed easily on top of it, provides "nuts and bolts" for run-time system integration, and supports dynamic service construction, modification and movement. A prototype has been developed at HP Labs and made available to several external research groups.
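A minimal Python sketch of the dynamic-agent concept described above follows: the agent carries no fixed set of predefined functions, and application-specific actions can be loaded or replaced while it runs. This only illustrates the idea; the paper's infrastructure is Java-based and far more complete.

```python
class DynamicAgent:
    """An agent whose behavior is data: actions are loadable at runtime."""

    def __init__(self, name):
        self.name = name
        self.actions = {}                     # no fixed, predefined functions

    def load_action(self, action_name, fn):
        """Load or replace an application-specific action on the fly."""
        self.actions[action_name] = fn

    def perform(self, action_name, *args):
        if action_name not in self.actions:
            raise KeyError(f"{self.name} has no action {action_name!r}")
        return self.actions[action_name](*args)

agent = DynamicAgent("miner")
agent.load_action("greet", lambda who: f"hello {who}")
print(agent.perform("greet", "web"))                    # hello web
agent.load_action("greet", lambda who: f"hi {who}")     # behavior modified live
print(agent.perform("greet", "web"))                    # hi web
```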
APA, Harvard, Vancouver, ISO, and other styles
40

Farion, K., W. Michalowski, D. O’Sullivan, S. Rubin, D. Weiss, and S. Wilk. "Clinical Decision Support System for Point of Care Use." Methods of Information in Medicine 48, no. 04 (2009): 381–90. http://dx.doi.org/10.3414/me0574.

Full text
Abstract:
Summary Objectives: The objective of this research was to design a clinical decision support system (CDSS) that supports heterogeneous clinical decision problems and runs on multiple computing platforms. Meeting this objective required a novel design to create an extendable and easy to maintain clinical CDSS for point of care support. The proposed solution was evaluated in a proof of concept implementation. Methods: Based on our earlier research on the design of a mobile CDSS for emergency triage, we used ontology-driven design to represent essential components of a CDSS. Models of clinical decision problems were derived from the ontology and were processed into executable applications at runtime. This allowed scaling of applications' functionality to the capabilities of computing platforms. A prototype of the system was implemented using the extended client-server architecture and Web services to distribute the functions of the system and to make it operational in limited connectivity conditions. Results: The proposed design provided a common framework that facilitated development of diversified clinical applications running seamlessly on a variety of computing platforms. It was prototyped for two clinical decision problems and settings (triage of acute pain in the emergency department and postoperative management of radical prostatectomy on the hospital ward) and implemented on two computing platforms – desktop and handheld computers. Conclusions: The requirement of CDSS heterogeneity was satisfied with ontology-driven design. Processing of application models described with the help of ontological models allowed a complex system to run on multiple computing platforms with different capabilities. Finally, separation of models and runtime components contributed to improved extensibility and maintainability of the system.
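As a hedged illustration of the model-driven idea in this abstract, the sketch below keeps a decision model as plain data and interprets it with a generic runtime, so the same model could in principle be rendered on platforms with different capabilities. The triage rules and all names here are invented placeholders, not the paper's clinical content.

```python
# Hypothetical decision model, standing in for a model derived from an ontology.
TRIAGE_MODEL = {
    "rules": [                                   # evaluated in order; first match wins
        {"if": lambda p: p["pain_score"] >= 8, "then": "urgent"},
        {"if": lambda p: p["pain_score"] >= 4, "then": "standard"},
        {"if": lambda p: True,                 "then": "non-urgent"},
    ],
}

def run_model(model, patient):
    """Generic runtime: interprets whatever model it is given."""
    for rule in model["rules"]:
        if rule["if"](patient):
            return rule["then"]

print(run_model(TRIAGE_MODEL, {"pain_score": 9}))   # urgent
print(run_model(TRIAGE_MODEL, {"pain_score": 5}))   # standard
```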
APA, Harvard, Vancouver, ISO, and other styles
41

Curdt, Constanze, and Dirk Hoffmeister. "Research data management services for a multidisciplinary, collaborative research project." Program: electronic library and information systems 49, no. 4 (September 1, 2015): 494–512. http://dx.doi.org/10.1108/prog-02-2015-0016.

Full text
Abstract:
Purpose – Research data management (RDM) comprises all processes, which ensure that research data are well-organized, documented, stored, backed up, accessible, and reusable. RDM systems form the technical framework. The purpose of this paper is to present the design and implementation of a RDM system for an interdisciplinary, collaborative, long-term research project with focus on Soil-Vegetation-Atmosphere data. Design/methodology/approach – The presented RDM system is based on a three-tier (client-server) architecture. This includes a file-based data storage, a database-based metadata storage, and a self-designed user-friendly web-interface. The system is designed in cooperation with the local computing centre, where it is also hosted. A self-designed interoperable, project-specific metadata schema ensures the accurate documentation of all data. Findings – A RDM system has to be designed and implemented according to requirements of the project participants. General challenges and problems of RDM should be considered. Thus, a close cooperation with the scientists obtains the acceptance and usage of the system. Originality/value – This paper provides evidence that the implementation of a RDM system in the provided and maintained infrastructure of a computing centre offers many advantages. Consequently, the designed system is independent of the project funding. In addition, access and re-use of all involved project data is ensured. A transferability of the presented approach to another interdisciplinary research project was already successful. Furthermore, the designed metadata schema can be expanded according to changing project requirements.
APA, Harvard, Vancouver, ISO, and other styles
42

Popescu, Radu, Jakob Blomer, and Gerardo Ganis. "Towards a responsive CernVM-FS architecture." EPJ Web of Conferences 214 (2019): 03036. http://dx.doi.org/10.1051/epjconf/201921403036.

Full text
Abstract:
The CernVM File System (CernVM-FS) provides a scalable and reliable software distribution service implemented as a POSIX read-only filesystem in user space (FUSE). It was originally developed at CERN to assist High Energy Physics (HEP) collaborations in deploying software on the worldwide distributed computing infrastructure for data processing applications. Files are stored remotely as content-addressed blocks on standard web servers and are retrieved and cached on-demand through outgoing HTTP connections only. Repository metadata is recorded in SQLite catalogs, which represent implicit Merkle tree encodings of the repository state. For writing, CernVM-FS follows a publish-subscribe pattern with a single source of new content that is propagated to a large number of readers. This paper focuses on the work to move the CernVM-FS architecture in the direction of a responsive data distribution system. A new distributed publication backend allows scaling out large publication tasks across multiple machines, reducing the time to publish. For the faster propagation of new published content, the addition of a notification system allows clients to subscribe to messages about changes in the repository and to request new root catalogs as soon as they become available. These developments make CernVM-FS more responsive and are particularly relevant for use cases where a short propagation delay from repository down to individual clients is important, such as using CernVM-FS as an AFS replacement for distributing software stacks. Additionally, they permit the implementation of more complex workflows, with producer-consumer pipelines, as for example in the ALICE analysis trains system.
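The content-addressed storage scheme described above can be sketched in a few lines of Python: each block is stored under the hash of its own content, so identical content deduplicates naturally and the address doubles as an integrity check. The in-memory dict is a stand-in for CernVM-FS's blocks hosted on standard web servers.

```python
import hashlib

store = {}   # in-memory stand-in for blocks hosted on plain web servers

def put_block(data: bytes) -> str:
    """Store a block under the hash of its content; the hash is its address."""
    addr = hashlib.sha1(data).hexdigest()
    store[addr] = data                      # identical content deduplicates
    return addr

def get_block(addr: str) -> bytes:
    data = store[addr]
    assert hashlib.sha1(data).hexdigest() == addr   # address verifies content
    return data

addr = put_block(b"libfoo.so contents")
assert put_block(b"libfoo.so contents") == addr     # same content, same address
print(addr, get_block(addr))
```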
APA, Harvard, Vancouver, ISO, and other styles
43

Yang, Shu Ren, and Peng Peng. "Application of Geographic Information System Technique in the Design of Gas Pipe Network Plan." Applied Mechanics and Materials 195-196 (August 2012): 933–37. http://dx.doi.org/10.4028/www.scientific.net/amm.195-196.933.

Full text
Abstract:
Geographic Information Systems (GIS) are widely used in different fields, with different backgrounds and models, which are introduced briefly in this article. Based on an analysis of the development background, applications, and related development trends of pipe network systems both at home and abroad, several problems are identified: building a pipe network geographic information system requires a grasp of the key technologies, including component technology, object-oriented design, databases, and pipe network modeling, according to the data characteristics of a gas pipe network, using MS Windows 2000 (Advanced) Server or UNIX as the operating system. The overall design plan of a web-based pipe network information system for the Daqing gas company is put forward, selecting Windows 2000 Professional as the client operating system for running ArcInfo 9.x application programs and using other geographic information system software to build a development platform centered on ESRI's ArcGIS. The system is managed mainly through a database; the implementation of all system functions, the management of data and the corresponding software, the development of the user interface, and access to the database are described. The user interface is designed to be practical and convenient, easy and flexible to operate, and highly visible. The application of GIS technology thus lays a solid foundation for the standardization of enterprises and can effectively promote urban construction and economic development.
APA, Harvard, Vancouver, ISO, and other styles
44

Abraham, Ajith, Sung-Bae Cho, Thomas Hite, and Sang-Yong Han. "Special Issue on Web Services Practices." Journal of Advanced Computational Intelligence and Intelligent Informatics 10, no. 5 (September 20, 2006): 703–4. http://dx.doi.org/10.20965/jaciii.2006.p0703.

Full text
Abstract:
Web services – a new breed of self-contained, self-describing, modular applications published, located, and invoked across the Web – handle functions, from simple requests to complicated business processes. They are defined as network-based application components with a services-oriented architecture (SOA) using standard interface description languages and uniform communication protocols. SOA enables organizations to grasp and respond to changing trends and to adapt their business processes rapidly without major changes to the IT infrastructure. The Inaugural International Conference on Next-Generation Web Services Practices (NWeSP'05) attracted researchers who are also the world's most respected authorities on the semantic Web, Web-based services, and Web applications and services. NWeSP'05 was held in cooperation with the IEEE Computer Society Task Force on Electronic Commerce, the Technical Committee on Internet, and the Technical Committee on Scalable Computing. This special issue presents eight papers focused on different aspects of Web services and their applications. Papers were selected based on fundamental ideas and concepts rather than the thoroughness of techniques employed. Papers are organized as follows: Taher et al. present the first paper, on a Quality of Service Information and Computational framework (QoS-IC) supporting QoS-based service selection for SOA. The framework's functionality is expanded using a QoS constraints model that establishes an association relationship between different QoS properties and is used to govern QoS-based service selection in the underlying algorithm. Using a prototype implementation, the authors demonstrate how QoS constraints improve QoS-based service selection and save consumers valuable time. Due to the complex infrastructure of web applications, response times perceived by clients may be significantly longer than desired. To overcome some of the current problems, Vilas et al., in the second paper, propose a cache-based extension that enhances the current web services architecture, which is mainly based on program-logic or protocol-dependent optimization. In the third paper, Jo and Yoo present authorization for securing XML sources on the Web. One disadvantage of existing access control is that the DOM tree must be loaded into memory while all XML documents are parsed to generate it, so a great deal of memory is consumed in repetitive tree searches to authorize access to all nodes of the DOM tree. The complex authorization evaluation process required thus lowers system performance. Existing access control also fails to consider information structure and semantics sufficiently, due to basic HTML limitations. The authors overcome some of these limitations in the proposed model. In the fourth paper, Jung and Cho propose a novel behavior-network-based method for Web service composition. The behavior network selects services automatically through internal and external links with environmental information from sensors and goals. An optimal service is selected at each step, resulting in a globally optimal service sequence for achieving preset goals. The authors detail experimental results for the proposed model by comparing them with a rule-based system and user tests.
Kong et al. present an efficient method in the fifth paper for merging heterogeneous ontologies – no ontology-building standard currently exists – and the many ontology-building tools available are based on different ontology languages, mostly focusing on how to create, edit, and infer the ontology efficiently. Even ontologies about the same domain differ because ontology experts hold different viewpoints. For these reasons, interoperability between ontologies is very low. The authors propose merging heterogeneous domain ontologies by overcoming some of the above limitations. In the sixth paper, Chen and Che provide a polynomial-time tree pattern query minimization algorithm whose efficiency stems from two key observations: (i) inherent redundant "components" usually exist inside the rudimentary query provided by the user, and (ii) nonredundant nodes may become redundant when constraints such as co-occurrence and required child/descendant are given. They show that the algorithm obtained by first augmenting the input tree pattern using constraints, then applying minimization, invariably finds a unique minimal equivalent to the original query. Chen and Che present a polynomial-time algorithm for tree pattern query (TPQ) minimization without XML constraints in the seventh paper. The two-part algorithm is a dynamic programming strategy for finding all matching subtrees within a TPQ; it consists of one part for subtree recognition and a second for subtree deletion. In the last paper, Bagchi et al. present the mobile distributed virtual memory (MDVM) concept and architecture for cellular networks containing server-groups (SG). They detail a two-round randomized distributed algorithm to elect a unique leader and co-leader of the SG that is free of any assumptions about network topology and buffer space limitations, and is based on dynamically elected coordinators, eliminating single points of failure. As guest editors, we thank all authors featured in this special issue for their contributions and the referees for critically evaluating the papers within the short time allotted. We sincerely believe that readers will share our enjoyment of this special issue and find the information it presents both timely and useful.
APA, Harvard, Vancouver, ISO, and other styles
45

Shahzadi, Hafiza Mahrukh, and Shazia Riaz. "IT UAF CLOUD: A Trusted Storage Architecture for Cloud Computing." Asian Journal of Engineering and Technology 6, no. 6 (December 19, 2018). http://dx.doi.org/10.24203/ajet.v6i6.5572.

Full text
Abstract:
Students and users face the problem of where to keep their records and other important material. We address this problem by developing an online cloud known as the "IT UAF Cloud". When we say cloud, we have to understand that the word 'cloud' is a metaphor for the Internet, so it simply means a form of web-based computing. It is based on a model of shared computing resources rather than local servers and storage devices for handling user applications. The cloud can therefore accomplish more than traditional means of computing could, without requiring physical server and storage systems at your own location. It has enormous potential and offers great advantages to users. Cloud systems fundamentally provide access to large pools of data and computational resources through a variety of interfaces, similar in spirit to existing grid and HPC resource-management and programming systems. Such systems offer a new programming target for scalable application developers and have gained popularity over recent years. However, most cloud computing systems in operation today are proprietary, rely upon infrastructure that is invisible to the research community, or are not explicitly designed to be instrumented and modified by systems researchers. Cloud providers theoretically offer their clients unlimited resources for their applications on an on-demand basis. However, scalability is determined not only by the available resources, but also by how the control and data flow of the application or service is designed and implemented.
APA, Harvard, Vancouver, ISO, and other styles
46

Muresan, Lorena Daiana, Ulrich L. Rohde, and A. M. Silaghi. "The Development of a Touristic Web Application On the Island of São Miguel." Scientific Bulletin of Electrical Engineering Faculty, April 6, 2017. http://dx.doi.org/10.1515/sbeef-2016-0008.

Full text
Abstract:
In computing, a web application is a client-server software application in which the client runs in a web browser. This paper presents the process of developing a web application for the tourists of São Miguel.
APA, Harvard, Vancouver, ISO, and other styles
47

Sundaram, Rajasekar. "Effective Reengineering Computing Model For Ethiopian Health Care Center." JOURNAL OF SCIENCE, COMPUTING AND ENGINEERING RESEARCH, April 2020, 21–24. http://dx.doi.org/10.46379/jscer.2020.010105.

Full text
Abstract:
Conventional information access-control frameworks maintain the selective sharing of composite Personal Health Information Records (PHIRs) in health centres, and operations spanning multiple health centres in the cloud are a major research topic in current IT. A PHIR service permits a patient to create, manage, and control personal health information in one place through the web, which makes the storage, retrieval, and distribution of clinical information more efficient. In particular, each patient retains full control of their medical records and can share health information with a wide range of users, including healthcare providers and family members. Because of the high cost of building and maintaining secure, dedicated servers, many PHIR services are outsourced to third-party providers. In decentralized data centres the emission of CO2 is high and the environment becomes polluted; by making data centres reusable, data can be shared through the PHIRs. In this way high CO2 emissions can be avoided, while flexibility, availability, and compatibility increase in line with Moore's law. The aim is to share up-to-date data from other data centres using reusable resources and an e-Health Care Service in Ethiopian health centres.
APA, Harvard, Vancouver, ISO, and other styles
48

"Load Balancing Technique with an Efficient Method." International Journal of Engineering and Advanced Technology 8, no. 6 (August 30, 2019): 1257–62. http://dx.doi.org/10.35940/ijeat.f8394.088619.

Full text
Abstract:
A huge number of nodes are connected in web computing to offer various types of web services to cloud clients. A limited number of nodes connected to cloud computing may have to execute more than a thousand, or even a million, tasks at the same time, so it is not simple to execute all tasks simultaneously. When some nodes must execute all the tasks, the loads need to be balanced across them; load balancing minimizes completion time and ensures that all tasks are executed in an orderly way. It is not possible to keep an equal number of servers in cloud computing to execute an equal number of tasks: the tasks to be performed will outnumber the connected servers, so a limited set of servers has to perform a great number of tasks. We propose a task-scheduling algorithm in which a few nodes perform the jobs, the jobs outnumber the nodes, and all loads are balanced across the available nodes to make the best use of the quality of service.
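A minimal sketch of such a balancing scheme follows, assuming a simple greedy strategy: assign each task to the currently least-loaded node, longest tasks first. This is a generic heuristic in the spirit of the abstract, not necessarily the paper's exact algorithm.

```python
import heapq

def balance(task_costs, num_nodes):
    """Assign each task to the currently least-loaded node (greedy)."""
    heap = [(0.0, nid) for nid in range(num_nodes)]   # (load, node id)
    heapq.heapify(heap)
    assignment = {nid: [] for nid in range(num_nodes)}
    # longest tasks first tightens the makespan of the greedy scheme
    for task_id, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        load, nid = heapq.heappop(heap)               # least-loaded node
        assignment[nid].append(task_id)
        heapq.heappush(heap, (load + cost, nid))
    return assignment, max(load for load, _ in heap)  # plan and makespan

tasks = [5, 3, 8, 2, 7, 4]                            # task costs
assignment, makespan = balance(tasks, num_nodes=2)
print(assignment, makespan)   # {0: [2, 5, 1], 1: [4, 0, 3]} 15.0
```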
APA, Harvard, Vancouver, ISO, and other styles
49

"Load Balancing of Unbalanced Matrix Problem of Maximum Machines with Min Min Algorithm." International Journal of Recent Technology and Engineering 8, no. 3 (September 30, 2019): 627–29. http://dx.doi.org/10.35940/ijrte.b2419.098319.

Full text
Abstract:
The most eminent cloud-related computing technology is an exclusively web-based approach in which resources are hosted on a cloud, allowing those resources to flourish. Cloud computing is among the most promising technologies offering a standard for large-scale computing: a structure that enables applications to execute on virtualized resources accessed via a network protocol. It provides resources and services in a very elastic manner that can be scaled according to client demand. A limited number of devices can execute only so many tasks at a time, so it is complex to perform every task at once; when several devices execute the tasks, the total load must be balanced, which reduces completion time and ensures that each task executes in a definite way. As noted earlier, it is not feasible to maintain one server for each task: the tasks to be executed by machines in the cloud system exceed the available VMs at any given time, so overloaded servers should be assigned fewer jobs. In this approach, we present a scheduling algorithm for load balancing with minimum execution time and makespan.
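The classic Min-Min heuristic named in the title can be sketched as follows: given an expected-time-to-compute matrix etc[task][machine], repeatedly pick the unscheduled task whose best completion time is smallest and assign it to that machine. This is the textbook formulation; the paper's variant for the unbalanced case may differ in its details.

```python
def min_min(etc):
    """Min-Min: etc[t][m] is the expected time of task t on machine m."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines              # machine ready times
    unscheduled = set(range(n_tasks))
    schedule = {}
    while unscheduled:
        # over all (task, machine) pairs, pick the smallest completion time
        t, m, finish = min(
            ((t, m, ready[m] + etc[t][m])
             for t in unscheduled for m in range(n_machines)),
            key=lambda x: x[2],
        )
        schedule[t] = m
        ready[m] = finish
        unscheduled.remove(t)
    return schedule, max(ready)             # task-to-machine map and makespan

etc = [[4, 6], [3, 8], [7, 2], [5, 5]]      # 4 tasks, 2 machines
print(min_min(etc))                          # ({2: 1, 1: 0, 0: 0, 3: 1}, 7.0)
```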
APA, Harvard, Vancouver, ISO, and other styles
50

"Exploring Data Security Scheme into Cloud Using Encryption Algorithms." International Journal of Recent Technology and Engineering 8, no. 2 (July 30, 2019): 2271–73. http://dx.doi.org/10.35940/ijrte.b2504.078219.

Full text
Abstract:
Cloud computing has emerged as a computing network across the web. Cloud storage permits the accumulation of data in the cloud and provides sharing capabilities among multiple clients. Because of human or hardware malfunction and occasional software errors, cloud data integrity is a concern. Numerous systems have been proposed that allow both data owners and public auditors to audit cloud data integrity without retrieving the entire data set from the cloud servers. A third-party inspector can carry out integrity checking while the identity of the signer on shared data is kept private from them. In this work, we explore auditing the integrity of shared data in the cloud with efficient client revocation while still preserving identity privacy. We also improve an existing method: when any client changes a value in a table, the change is audited and the original value is automatically restored.
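As a simplified sketch of block-level integrity checking in the spirit of this abstract, the owner below tags each data block with an HMAC so that an auditor can spot-check randomly sampled blocks without retrieving the whole data set. Real public-auditing schemes use homomorphic authenticators so the auditor never holds the key; sharing the key here is an assumption made to keep the example short.

```python
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(32)   # shared with the auditor only in this toy version
BLOCK = 4

def tag_blocks(data: bytes):
    """Owner side: split data into blocks and tag each block with an HMAC."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    tags = [hmac.new(KEY, b, hashlib.sha256).digest() for b in blocks]
    return blocks, tags

def audit(blocks, tags, sample):
    """Auditor side: spot-check only the sampled blocks, not the whole file."""
    return all(
        hmac.compare_digest(hmac.new(KEY, blocks[i], hashlib.sha256).digest(),
                            tags[i])
        for i in sample
    )

blocks, tags = tag_blocks(b"cloud data to be audited")
print(audit(blocks, tags, [0, 2]))   # True: sampled blocks are intact
blocks[2] = b"tamp"                  # simulate corruption on the server
print(audit(blocks, tags, [2]))      # False: the auditor detects it
```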
APA, Harvard, Vancouver, ISO, and other styles