
Dissertations / Theses on the topic 'Performance web'


Consult the top 50 dissertations / theses for your research on the topic 'Performance web.'


1

Nadimpalli, Sucheta. "High performance Web servers." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0015/MQ57734.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chiew, Thiam Kian. "Web page performance analysis." Thesis, University of Glasgow, 2009. http://theses.gla.ac.uk/658/.

Abstract:
Computer systems play an increasingly crucial and ubiquitous role in human endeavour by carrying out or facilitating tasks and providing information and services. How much work these systems can accomplish, within a certain amount of time, using a certain amount of resources, characterises the systems' performance, which is a major concern when the systems are planned, designed, implemented, and deployed, and as they evolve. As one of the most popular computer systems, the Web is inevitably scrutinised in terms of performance analysis that deals with its speed, capacity, resource utilisation, and availability. Performance analyses for the Web are normally done from the perspective of the Web servers and the underlying network (the Internet). This research, on the other hand, approaches Web performance analysis from the perspective of Web pages. The performance metric of interest here is response time, which is studied as an attribute of Web pages instead of being considered purely a result of network and server conditions. A framework consisting of measurement, modelling, and monitoring (3Ms) of Web pages, revolving around response time, is adopted to support the performance analysis activity. The measurement module enables Web page response time to be measured and is used to support the modelling module, which in turn provides references for the monitoring module. The monitoring module estimates response time. The three modules are used in the software development lifecycle to ensure that developed Web pages deliver at worst satisfactory response time (within a maximum acceptable time), or preferably much better response time, thereby maximising the efficiency of the pages. The framework proposes a systematic way to understand response time as it relates to specific characteristics of Web pages and explains how individual Web page response time can be examined and improved.
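The response-time-as-a-page-attribute view described above can be sketched in a few lines of JavaScript. This is a hypothetical illustration, not code from the thesis: the helper names are invented, the timing fields follow the W3C Navigation Timing model (in a browser they would come from `performance.timing` or `performance.getEntriesByType('navigation')`; here a plain object stands in), and the 8-second acceptability threshold is an assumed example value.

```javascript
// Sketch: treat response time as a measurable attribute of a page.
// Field names follow the W3C Navigation Timing model; the threshold
// in rateResponseTime is an illustrative assumption.
function responseTime(timing) {
  // Total time from the start of navigation to the load event, in ms.
  return timing.loadEventEnd - timing.navigationStart;
}

function rateResponseTime(ms, maxAcceptableMs = 8000) {
  // "At worst satisfactory" = within the maximum acceptable time.
  return ms <= maxAcceptableMs ? 'satisfactory' : 'unsatisfactory';
}

// Stand-in for a real navigation-timing record captured in a browser.
const t = { navigationStart: 1000, loadEventEnd: 4200 };
console.log(responseTime(t));        // 3200
console.log(rateResponseTime(3200)); // satisfactory
```

A monitoring module in the spirit of the 3Ms framework would collect such measurements per page and compare them against the modelled reference values.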
3

Nadimpalli, Sucheta. "High performance web servers." Dissertation, Carleton University, Department of Systems and Computer Engineering, Ottawa, 2000.

4

Damaraju, Sarita. "Performance measurements of Web services." Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4581.

Abstract:
Thesis (M.S.)--West Virginia University, 2006. Title from document title page. Document formatted into pages; contains v, 42 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 32-34).
5

Cui, Heng. "Analyse et diagnostic des performances du web du point de vue de l'utilisateur." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0017/document.

Abstract:
In recent years, the interest of the research community in the performance of Web browsing has grown steadily. In order to reveal end-user perceived performance of Web browsing, in this thesis we address multiple issues of Web browsing performance from the perspective of the end-user.
The thesis is composed of three parts. The first part introduces our initial platform, which is based on browser-level measurements. We explain measurement metrics that can be easily acquired from the browser, as well as indicators of end-user experience. Then, we use clustering techniques to correlate higher-level performance metrics with lower-level metrics. In the second part, we present our diagnosis tool, called FireLog. We first discuss different possible causes that can prevent a Web page from achieving fast rendering; then, we describe the tool's components and its measurements in detail. Based on the measured metrics, we illustrate our model for performance diagnosis in an automatic fashion. In the last part, we propose a new methodology named Critical Path Method for Web performance analysis. We first explain in detail the Web browser's intrinsic behaviour during page rendering, and then we formalize our methodology.
6

Zhang, Shuai. "Benchmarking Performance of Web Service Operations." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-156425.

Abstract:
Web services are often used for retrieving data from servers providing information of different kinds. A data-providing web service operation returns collections of objects for a given set of arguments without any side effects. In this project a web service benchmark (WSBENCH) is developed to simulate the performance of web service calls. Web service operations are specified as SQL statements. The function generator of WSBENCH converts user-specified SQL queries into functions and automatically generates a web service. WSBENCH can automatically both generate and deploy the web service operations for exported functions. Furthermore, WSBENCH supports controlled experiments, since users can control the characteristics of web service operations such as scalability of data and delay time. The database used in this project is generated by the Berlin Benchmark database generator. A WSBENCH demo is built to demonstrate the functionality. The demo is implemented as a JavaScript program acting as a SOAP client that directly calls WSBENCH services from a web browser. Users can make a web service request by simply providing the web service operation's name and a list of parameter values as input. This makes WSBENCH very simple to use.
7

Nylén, Håkan. "PHP Framework Performance for Web Development." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3874.

Abstract:
[Context] PHP frameworks, such as CakePHP and CodeIgniter, have become popular among developers, since they offer ease of development, save time, and provide ready-made libraries. Considering that more and more websites are built using these frameworks, it is important to know how they impact the performance of the website. Comparing the two top frameworks with each other can shed some light on what performance looks like today on the web with PHP as its base. [Problem] Visitors nowadays have less patience to wait for a website to load. Meanwhile, PHP frameworks have become well known among developers, but knowledge of their performance impact, which determines how much load time can be reduced so that visitors can browse without problems, is missing. Therefore, it is worthwhile to investigate how the performance of PHP frameworks affects, and can improve, the visitor experience. [Contribution] This paper describes one of the first performance experiments on PHP frameworks. It can help people make informed decisions regarding PHP frameworks in the future. The lack of data in this area is also one of the motivations for this paper.
8

Arshinov, Alex. "Building high-performance web-caching servers." Thesis, De Montfort University, 2004. http://hdl.handle.net/2086/13257.

9

Said, Tahirshah Farid. "Comparison between Progressive Web App and Regular Web App." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18384.

Abstract:
In 2015 the term Progressive Web Application (PWA) was coined to describe applications that take advantage of Progressive App features. Some of the essential features are offline support, an app-like interface, and a secure connection. Since then, case studies of PWA implementations have shown optimistic promise for improving web page performance, time spent on site, user engagement, etc. The goal of this report is to analyze some of the effects of PWA. This work investigates browser compatibility of PWA features, and compares and analyzes the performance and memory-consumption effects of PWA features against a regular web app. Results showed that many PWA features are still not supported by some major browsers. The performance benchmark showed that the HTTPS connection required for PWA slows down all of the PWA's performance metrics on a first visit. On a repeat visit, some PWA features such as speed index outperform the regular web app. Memory consumption of the PWA increased to more than 2 times that of the regular web app. The conclusion is that even if some features are not directly supported by browsers, they may still have workaround solutions. A PWA is slower than a regular web app if HTTPS on your web server is not optimized. Different browsers have different memory limitations for PWA caches. You should implement HTTPS and PWA features only if you have HTTP/2 support on your web server; otherwise, performance can decrease.
10

Cui, Heng. "Analyse et diagnostic des performances du web du point de vue de l'utilisateur." Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0017.

Abstract:
In recent years, the interest of the research community in the performance of Web browsing has grown steadily. In order to reveal end-user perceived performance of Web browsing, in this thesis we address multiple issues of Web browsing performance from the perspective of the end-user.
The thesis is composed of three parts. The first part introduces our initial platform, which is based on browser-level measurements. We explain measurement metrics that can be easily acquired from the browser, as well as indicators of end-user experience. Then, we use clustering techniques to correlate higher-level performance metrics with lower-level metrics. In the second part, we present our diagnosis tool, called FireLog. We first discuss different possible causes that can prevent a Web page from achieving fast rendering; then, we describe the tool's components and its measurements in detail. Based on the measured metrics, we illustrate our model for performance diagnosis in an automatic fashion. In the last part, we propose a new methodology named Critical Path Method for Web performance analysis. We first explain in detail the Web browser's intrinsic behaviour during page rendering, and then we formalize our methodology.
11

Krupp, Brian. "Exploration of Dynamic Web Page Partitioning for Increased Web Page Delivery Performance." Cleveland State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=csu1290629377.

12

Datla, Venu. "Measurements based performance analysis of Web services." Morgantown, W. Va. : [West Virginia University Libraries], 2005. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4158.

Abstract:
Thesis (M.S.)--West Virginia University, 2005. Title from document title page. Document formatted into pages; contains v, 47 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 37-41).
13

Krishnamurthy, Diwakar. "Performance characterization of Web-based shopping systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq36890.pdf.

14

Jiang, Min. "Building high performance main memory Web databases." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ61915.pdf.

15

Zarei, Alireza. "Performance improvements in crawling modern Web applications." Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/46072.

Abstract:
Today, a considerable portion of our society relies on Web applications to perform numerous tasks in everyday life, for example transferring money or purchasing flight tickets. To ensure that such pervasive Web applications perform robustly, various tools have been introduced in the software engineering research community and in industry. Web application crawlers are one instance of such tools, used in testing and analysis of Web applications. Software testing, and in particular testing of Web applications, plays an important role in ensuring the quality and reliability of software systems. In this thesis, we aim at optimizing the crawling of modern Web applications in terms of memory and time performance. Modern Web applications are event-driven and have dynamic states, in contrast to classic Web applications. Aiming to improve the crawling process of modern Web applications, we focus on state transition management and scalability of the crawling process. To improve the time performance of the state transition management mechanism, we propose three alternative techniques, revised incrementally. In addition, aiming to increase state coverage, i.e. the number of states crawled in a Web application, we propose an alternative solution that reduces memory consumption for storage and retrieval of dynamic states in Web applications. Moreover, a memory analysis is performed using memory profiling tools to investigate areas for memory performance optimization. The enhancements proposed improve the time performance of state transition management by 253.34%. That is, the time consumption of the default state transition management is 3.53 times that of the proposed solution, which in turn means time consumption is reduced by 71.69%. Moreover, the scalability of the crawling process is improved by 88.16%; that is, the proposed solution covers a considerably greater number of states when crawling Web applications.
Finally, we identified the scalability bottlenecks to be addressed in future work.
16

Das, Somak R. "Evaluation of QUIC on web page performance." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91444.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014. Title as it appears in the MIT commencement exercises program, June 6, 2014: Designing a better transport protocol for the web. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 53-54).
This work presents the first study of a new protocol, QUIC, on Web page performance. Our experiments test the HTTP/1.1, SPDY, and QUIC multiplexing protocols on the Alexa U.S. Top 500 websites, across 100+ network configurations of bandwidth and round-trip time (both static links and cellular networks). To do so, we design and implement QuicShell, a tool for measuring QUIC's Web page performance accurately and reproducibly. Using QuicShell, we evaluate the strengths and weaknesses of QUIC. Due to its design of stream multiplexing over UDP, QUIC outperforms its predecessors over low-bandwidth links and high-delay links by 10-60%. It also helps Web pages with small objects and HTTPS-enabled Web pages. To improve QUIC's performance on cellular networks, we implement the Sprout-EWMA congestion control protocol and find that it improves QUIC's performance by more than 10% on high-delay links.
by Somak R. Das. M. Eng.
17

Peña, Ortiz Raúl. "Accurate workload design for web performance evaluation." Doctoral thesis, Editorial Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/21054.

Abstract:
New Web applications and services, increasingly popular in our daily lives, have completely changed the way users interact with the Web. In less than half a decade, the role played by users has evolved from mere passive consumers of information to active collaborators in the creation of the dynamic content typical of the current Web, and this trend is expected to grow and consolidate over time. This dynamic user behaviour is one of the main keys to defining appropriate workloads for accurately estimating the performance of Web systems. Nevertheless, the intrinsic difficulty of characterising user dynamism and applying it in a workload model means that many research works still employ workloads that are not representative of current Web navigation. This doctoral thesis focuses on characterising and reproducing, for performance evaluation studies, a more realistic type of Web workload, capable of imitating the behaviour of users of the current Web. The state of the art in workload modelling and generation for Web performance studies presents several shortcomings regarding models and software applications that represent the different levels of user dynamism. This fact motivates us to propose a more precise model and to develop a new workload generator based on this new model. Both proposals have been validated against a traditional approach to Web workload generation. To this end, a new experimentation environment capable of reproducing traditional and dynamic Web workloads has been developed, by integrating the proposed generator with a commonly used benchmark.
This doctoral thesis also analyses and evaluates for the first time, to the best of our knowledge, the impact that the use of dynamic workloads has on the metric
Peña Ortiz, R. (2013). Accurate workload design for web performance evaluation [Doctoral thesis]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/21054
18

Steinberg, David A. "Shots: A High-Performance Web Templating Language." Kent State University Honors College / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ksuhonors1387918200.

19

Chen, Huamin. "Web server performance improvement and QoS provisioning /." For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2003. http://uclibs.org/PID/11984.

20

Djärv, Karltorp Johan, and Eric Skoglund. "Performance of Multi-threaded Web Applications using Web Workers in Client-side JavaScript." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19552.

Abstract:
Context - Software applications on the web are more commonly used nowadays than before. As a result, the performance needed to run these applications is increasing. One method to increase performance is writing multi-threaded code using Web Workers in JavaScript. Objectives - We investigate how using Web Workers can increase responsiveness, increase raw computational power, and decrease load time. Additionally, we conduct a survey that targets software developers to find out their opinions about performance in web applications, multi-threading, and more specifically Web Workers. Realization (Method) - We created three experiments that concentrate on the areas mentioned above. The experiments are hosted on a web server inside an isolated Docker container to eliminate external factors as much as possible. To complement the experiments we sent out a survey to collect developers' opinions about Web Workers. The selection criterion for developers was some JavaScript experience. The survey contained questions about their opinions on switching to a multi-threaded workflow on the web: Do they experience performance issues in today's web applications? Could Web Workers be useful in their projects? Results - Responsiveness shifted from freezing the website to perfect responsiveness when using Web Workers. Raw computational power increased by at best 67% when using eight workers with tasks that took between 100 milliseconds and 15 seconds. Above 15 seconds, sixteen workers improved computational power further, by around 3%-9% compared to eight workers. At best, completion time decreased by 74% in Firefox and 72% in Chrome. Using Web Workers to help with load time gave a big improvement but is somewhat restricted to specific use cases. Conclusions - Using Web Workers to increase responsiveness made an immense difference when moving tasks that affect the user's responsiveness to background threads.
Completion time for big computational tasks was shorter in use cases where the workload can be split into separate portions and multiple threads used in parallel to complete the tasks. Load time can be improved with Web Workers by completing some tasks after the page is done loading, instead of waiting for all tasks to complete before loading the page. The survey indicated that many developers have performance in mind and would consider writing code in a multi-threaded way. Knowledge about multi-threading and Web Workers was low. Still, most of the participants believe that Web Workers would be useful in their current and future projects, and are worth the effort to implement.
21

Ashkanasy, Neal M. "Supervisors' responses to subordinate performance /." Online version, 1989. http://bibpurl.oclc.org/web/32903.

22

Khan, Majid, and Muhammad Faisal Amin. "Web Server Performance Evaluation in Cloud Computing and Local Environment." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1965.

Abstract:
Context: Cloud computing is a concept in which a user gets services such as SaaS, PaaS, and IaaS by deploying their data and applications on remote servers. Users pay only for the time the resources are acquired, and do not need to install and upgrade software and hardware. Due to these benefits, organizations are willing to move their data into the cloud and minimize their overhead. Organizations need to confirm that the cloud can replace a traditional platform, software, and hardware in an efficient way and provide robust performance. Web servers play a vital role in providing services and deploying applications, so one might be interested in a web server's performance in the cloud. With this aim, we have compared cloud server performance with a local web server. Objectives: The objective of this study is to investigate cloud performance. For this purpose, we first find the parameters and factors that affect web server performance. Finding the parameters helped us measure the actual performance of a cloud server on specific tasks. These parameters will help users, developers, and IT specialists measure cloud performance based on their requirements and needs. Methods: In order to fulfill the objective of this study, we performed a systematic literature review and an experiment. The systematic literature review was performed by studying articles from electronic sources including the ACM Digital Library, IEEE, and Ei Village (Compendex, Inspec). The snowball method was used to minimize the chance of missing articles and to increase the validity of our findings. In the experiment, two performance parameters (throughput and execution time) are used to measure the performance of the Apache web server in local and cloud environments. Results: In the systematic literature review, we found many factors that affect the performance of a web server in cloud computing.
The most common of them are throughput, response time, execution time, and CPU and other resource utilization. The experimental results revealed that the web server performed better in the local environment than in the cloud environment. However, there are other factors, such as cost overhead, software/hardware configuration, software/hardware upgrades, and time consumption, due to which cloud computing cannot be neglected. Conclusions: The parameters that affect cloud performance are throughput, response time, execution time, CPU utilization, and memory utilization. Increases and decreases in the values of these parameters can affect cloud performance to a great extent. The overall performance of a cloud is not as effective, but there are other reasons for using cloud computing.
23

Steinberg, Jesse. "The web stream customizer architecture improving performance, reliability, and security for wireless web access /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2004. http://wwwlib.umi.com/cr/ucsd/fullcit?p3129955.

24

Jones, Robert M. "Content Aware Request Distribution for High Performance Web Service: A Performance Study." PDXScholar, 2002. https://pdxscholar.library.pdx.edu/open_access_etds/2662.

Abstract:
The World Wide Web is becoming a basic infrastructure for a variety of services, and increases in audience size and client network bandwidth create service demands that are outpacing server capacity. Web clusters are one solution to this need for high-performance, highly available web server systems. We are interested in load distribution techniques, specifically Layer-7 algorithms that are content-aware. Layer-7 algorithms allow distribution control based on the specific content requested, which is advantageous for a system that offers highly heterogeneous services. We examine the performance of the Client Aware Policy (CAP) on a Linux/Apache web cluster consisting of a single web switch that directs requests to a pool of dual-processor SMP nodes. We show that the performance advantage of CAP over simple algorithms such as random and round-robin is as high as 29% on our testbed, which serves a mixture of static and dynamic content. Under heavily loaded conditions, however, the performance decreases to the level of random distribution. In studying SMP versus uniprocessor performance using the same number of processors with CAP distribution, we find that SMP dual-processor nodes under moderate workload levels provide throughput equivalent to the same number of CPUs in a uniprocessor cluster. As workload increases to a heavily loaded state, however, the SMP cluster shows reduced throughput compared to a cluster using uniprocessor nodes. We show that the web cluster's maximum throughput increases linearly with the addition of more nodes to the server pool. We conclude that CAP is advantageous over random or round-robin distribution under certain conditions for highly dynamic workloads, and we suggest some future enhancements that may improve its performance.
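The Layer-7 idea can be illustrated with a toy dispatcher. This is a hypothetical sketch, not the CAP algorithm itself: the pool names and the static/dynamic classification rule are invented for illustration.

```javascript
// Toy Layer-7 (content-aware) web switch: inspect the requested URL
// and route static and dynamic content to specialised back-end pools,
// round-robin within each pool. Pool names and the classification
// rule are illustrative assumptions, not the CAP policy.
const pools = {
  static: ['static-1', 'static-2'],  // images, CSS, plain HTML
  dynamic: ['app-1', 'app-2'],       // CGI/database-backed requests
};
const next = { static: 0, dynamic: 0 };

function dispatch(url) {
  // A Layer-4 switch would pick a node before seeing the URL;
  // a Layer-7 switch defers the choice until the content is known.
  const kind = /\.(php|cgi|jsp)(\?|$)/.test(url) ? 'dynamic' : 'static';
  const pool = pools[kind];
  return pool[next[kind]++ % pool.length];
}

console.log(dispatch('/index.html')); // static-1
console.log(dispatch('/cart.php'));   // app-1
console.log(dispatch('/logo.png'));   // static-2
```

CAP goes further than this static/dynamic split by classifying requests by the resource they stress (CPU-bound, disk-bound, etc.) and balancing each class across the pool, but the deferred, content-based routing decision is the same.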
25

Maharshi, Shivam. "Performance Measurement and Analysis of Transactional Web Archiving." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78371.

Abstract:
Web archiving is necessary to retain the history of the World Wide Web and to study its evolution. It is important for the cultural heritage community, and some organizations are legally obligated to capture and archive Web content. The advent of transactional Web archiving makes the archiving process more efficient, thereby aiding organizations in archiving their Web content. This study measures and analyzes the performance of transactional Web archiving systems. To conduct a detailed analysis, we construct a meaningful design space defined by the system specifications that determine the performance of these systems. SiteStory, a state-of-the-art transactional Web archiving system, and local archiving, an alternative archiving technique, are used in this research. We experimentally evaluate the performance of these systems using the Greek version of Wikipedia deployed on dedicated hardware on a private network. Our benchmarking results show that the local archiving technique uses a Web server's resources more efficiently than SiteStory for one data point in our design space. Better performance than SiteStory in such scenarios makes our archiving solution favorable for transactional archiving. We also show that SiteStory does not impose any significant performance overhead on the Web server for the rest of the data points in our design space.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
26

Ho, Si Meng. "Web visualization for performance evaluation of e-Government." Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2492851.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Regmi, Saroj Sharan, and Suyog Man Singh Adhikari. "Network Performance of HTML5 Web Application in Smartphone." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3369.

Full text
Abstract:
Hypertext Markup Language 5 (HTML5), a new standard for HTML enriched with additional features, is expected to override much of the basic underlying overhead needed by other applications. With the advent of this new extension, the web's basic language is transformed from a simple page-layout language into a rich web application development language. Furthermore, with the release of HTML5, traditional browsing is expected to change accordingly, and potential users will have an alternative to platform- and OS-dependent native applications. This thesis deals with the readiness assessment of HTML5 with regard to different smartphones - Android and Windows. In order to visualize the facts, we analyzed different constraints - DNS lookup time, page loading time, and memory and CPU consumption - associated with two applications, Flash and HTML5, running on the smartphones. Furthermore, the comparative analysis was performed in different network scenarios - Wi-Fi and 3G - and user experience was estimated based on network parameters. From the experiments and observations taken, we found that Android phones provide better support for HTML5 web applications than Windows mobile devices. Also, HTML5 application loading time is limited by the browser rendering time rather than the content loading time from the network, and is also dependent on the hardware configuration of the device used.
APA, Harvard, Vancouver, ISO, and other styles
28

Khan, Mohsin Javed, and Hussan Iftikhar Iftikhar. "Performance Testing and Analysis of Modern Web Technologies." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-11177.

Full text
Abstract:
The thesis is an empirical case study to predict or estimate the performance and variability of contemporary software frameworks used for web application development. The thesis can be divided into three phases. In Phase I, we theoretically explore and analyze PHP, EJB 3.0 and ASP.NET, considering the quality attributes or "ilities" of the mentioned technologies. In Phase II, we develop two identical web applications, i.e. online component webstores (applications to purchase components online), in PHP and ASP.NET. In Phase III, we conduct automated testing to determine and analyze the applications' performance. We developed the web applications in PHP 5.3.0 and Visual Studio 2008 using ASP.NET 3.5 to practically measure and compare the applications' performance. We used SQL Server 2005 with ASP.NET 3.5 and MySQL 5.1.36 with PHP as database servers. The software architecture, CSS, database design and database constraints were kept simple and identical for both applications, i.e. the applications developed in PHP and ASP.NET. This similarity helps establish a realistic comparison of the applications' performance and variability. The applications' performance and variability were measured with the help of automated scripts. These scripts were used to generate thousands of requests on the application servers and to download components simultaneously. More details of the performance testing can be found in chapters 6, 7 and 8.<br>We have gained a lot of knowledge from this thesis and are glad to complete our Software Engineering studies.
APA, Harvard, Vancouver, ISO, and other styles
29

Haig, Andrew, and andrew@panghaig com. "The design & aesthetic performance of web sites." Swinburne University of Technology, 2002. http://adt.lib.swin.edu.au./public/adt-VSWT20060614.113648.

Full text
Abstract:
This thesis investigates the visual aesthetic performance of Web sites. An experiment was conducted in which a Web site, designed with three controlled levels of 'visual enrichment', was evaluated on a number of measures by two subject groups. The measures used represent facets of the Categorical-Motivation model of aesthetics, plus others directly related to the performance of Web sites. The results of the experiment indicate that the drivers of site evaluation were primarily exploratory variables that represent 'novelty', 'interest' and 'fun'. This supports the argument that an important question to consider when designing a Web site is not merely 'can the site's audience use the Web site?', but also 'does the site's audience want to use the Web site?' Visual, audio and interactive appeal are, as the findings show, very important design considerations. This research adds to a body of knowledge that seeks to understand aesthetic phenomena and develops a theoretical framework that will prove useful for the investigation of visual interfaces.
APA, Harvard, Vancouver, ISO, and other styles
30

Adetunji, Israel O. "Sustainable construction : a web-based performance assessment tool." Thesis, Loughborough University, 2006. https://dspace.lboro.ac.uk/2134/2302.

Full text
Abstract:
The quest towards sustainable development, both nationally and globally, puts the construction industry in the foreground as the main consumer of natural resources. The industry has profound economic, social and environmental impacts. Sustainable construction is one of the most important challenges faced by the construction industry today. In the UK, sustainability is being driven and enforced by the government through stringent fiscal policies and regulations, and voluntary initiatives combined with naming-and-shaming strategies. Stakeholders are becoming more aware of the global challenges and are using their power to exert pressure on companies. Increasingly, construction clients are demanding that their business partners submit their corporate sustainability policies with tender packages to demonstrate their performance in dealing with opportunities and risks stemming from the economic, environmental and social aspects of sustainability. However, the lack of understanding of the concept and its practical application has been a recurrent problem. The conceptual confusion (the vagueness, ambiguity and fluidity of the sustainability concept, and the complexity of its myriad challenges), compounded by the myopic attitude of the industry and the lack of a clear-cut, practical framework, is causing frustration in the construction industry. Consequently, a number of sustainability management frameworks have been proposed. There are probably more than one hundred frameworks for sustainable business strategy. However, the majority of these are either complicated to implement or lack a sound theoretical base, effective change management and completeness. These, therefore, do not make the situation any easier. Many are still baffled as to what they should do and how they should go about effecting change. Corporate sustainability in the construction industry is a challenge to many companies.
The industry is still under-performing in each of the key themes of sustainable construction, and this has led to a 'blame culture' where each sector of the industry allocates responsibility for its current failings to others (CIRIA C563, 2001). Such a situation poses a need for a comprehensive, practical and easy-to-use tool that would aid the implementation and management of sustainability at the core of the business process. The tool will complement the existing frameworks by breaking down the strategic and management issues into manageable components. This will enable companies to focus on individual areas and identify the actions needed to facilitate change. The problem is that such a tool is virtually non-existent. The main focal point of this research is the development of a tool to facilitate the implementation, management and integration of sustainability issues at the strategic level and to promote wider uptake of the concept in the construction industry. This requires a thorough understanding of the concepts of sustainable development, sustainable construction and related issues, as well as the drivers, benefits, barriers and enablers for achieving corporate sustainability. It also demands an examination of existing management frameworks and the collation of case studies from early adopters to establish critical factors for the strategic and management issues involved in achieving corporate sustainability. Through diverse research epistemologies (quantitative, qualitative and triangulation methods), the research established four main critical factors and thirty-six sub-critical factors for achieving corporate sustainability. These factors underpinned the development of a web-based prototype software tool (ConPass). This thesis presents the development and evaluation of the ConPass model and the prototype software.
APA, Harvard, Vancouver, ISO, and other styles
31

Janc, Artur Adam. "Network Performance Evaluation within the Web Browser Sandbox." Digital WPI, 2009. https://digitalcommons.wpi.edu/etd-theses/112.

Full text
Abstract:
With the rising popularity of Web-based applications, the Web browser platform is becoming the dominant environment in which users interact with Internet content. We investigate methods of discovering information about network performance characteristics through the use of the Web browser, requiring only minimal user participation (navigating to a Web page). We focus on the analysis of explicit and implicit network operations performed by the browser (JavaScript XMLHTTPRequest and HTML DOM object loading) as well as by the Flash plug-in to evaluate network performance characteristics of a connecting client. We analyze the results of a performance study, focusing on the relative differences and similarities between download, upload and round-trip time results obtained in different browsers. We evaluate the accuracy of browser events indicating incoming data, comparing their timing to information obtained from the network layer. We also discuss alternative applications of the developed techniques, including measuring packet reception variability in a simulated streaming protocol. Our results confirm that browser-based measurements closely correspond to those obtained using standard tools in most scenarios. Our analysis of implicit communication mechanisms suggests that it is possible to make enhancements to existing “speedtest” services by allowing them to reliably determine download throughput and round-trip time to arbitrary Internet hosts. We conclude that browser-based measurement using techniques developed in this work can be an important component of network performance studies.
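The arithmetic behind such browser-based measurements can be sketched as follows. This is an illustrative model, not code from the thesis; the function names and timestamps are hypothetical, standing in for the timing data a browser's XMLHttpRequest or DOM-load events would supply:

```javascript
// RTT estimate: time from request start until the first response byte arrives.
function estimateRttMs(requestStart, firstByte) {
  return firstByte - requestStart;
}

// Throughput estimate: payload size divided by the transfer duration.
function estimateThroughputMbps(bytes, firstByte, lastByte) {
  const seconds = (lastByte - firstByte) / 1000;
  return (bytes * 8) / (seconds * 1e6); // bytes -> bits -> megabits per second
}

// Made-up timestamps in milliseconds since navigation start:
const rtt = estimateRttMs(100, 140);                      // 40 ms
const mbps = estimateThroughputMbps(1000000, 140, 940);   // 1 MB in 0.8 s -> 10 Mbps
console.log(rtt, mbps.toFixed(1));
```

The thesis's point about event-timing accuracy matters here: if the browser delivers the "first byte" event late, the RTT estimate inflates and the throughput estimate deflates by the same offset.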
APA, Harvard, Vancouver, ISO, and other styles
32

Huneiti, Ammar M. "Hypermedia-based performance support systems for the web." Thesis, Cardiff University, 2004. http://orca.cf.ac.uk/55934/.

Full text
Abstract:
The work reported in this thesis is an attempt to apply integrated knowledge-based and adaptive hypermedia technologies in the area of electronic performance support. Moreover, this work is a contribution in the direction of "structured" hypermedia authoring of technical documentation. It tackles the main challenges associated with the systematic development of Web-based technical documentation which include the design, authoring, and implementation, and the creation of supporting CASE tools. The main contribution of this research is a systematic methodology for the development of hypermedia-based Performance Support Systems (PSSs) for the Web which adheres to the main characteristics of advanced PSSs. These characteristics are outlined in a conceptual model that complies with state-of-the-art technologies and current practices in the field of user performance support. First, the thesis suggests a conceptual model for advanced PSSs. These are characterised as mainly consisting of two loosely coupled components that are designed and accessed in a task-based and user-centred manner. The first component is a freely browsed technical documentation of the application domain. The second component is the expert advisor that provides assistance for more specific, complex, and difficult to learn tasks. The integrated technologies utilised in advanced PSSs include Web-based hypermedia and knowledge-based systems. Second, the thesis concentrates on the first component of advanced PSSs i.e. technical documentation. It suggests a usage-based data model for the design of technical documentation. The proposed model abstracts the intended purpose of the documentation, the tasks supported by the documentation, and the functional characteristics of documents. These abstractions are integrated in a usage-based semantic network where rules and valid relationships are identified. 
This design framework can then be used by authors in order to organise, generate, and maintain the technical documentation, i.e. authoring. In addition, this model is also used to support a strategy for the adaptive retrieval of hypermedia documents. Third, the thesis suggests a model-driven hypermedia authoring approach for Web-based technical documentation. This approach utilises the usage-based data model for the design of technical documentation (described above). In addition, it complies with the principled guidelines of structured authoring. Finally, the thesis focuses on "intelligent" PSSs. It promotes the provision of intelligent performance support through the utilisation and integration of technologies used in developing knowledge-based diagnostic Expert Systems (ES) and adaptive hypermedia systems. This integration is implemented through the use of hypermedia, which allows supporting content to be synchronized with the diagnostic ES inference process. The integrated adaptive diagnostic ES supports the user by providing what-to-do and how-to-do types of information tailored (adapted) to the user's knowledge of the subject domain. The special organisation of displays in an HTML-based user interface allows users, while employing the ES for fault diagnosis, to request detailed information about a certain diagnosis procedure, and then return to the ES to continue from where they left off. The solutions proposed in this thesis are demonstrated through the development of a prototype PSS for an all-terrain fork-lift truck. The performance support is provided through (i) a technical manual, (ii) a diagnostic ES for locating and correcting braking system faults, and (iii) an adaptive information retrieval utility.
APA, Harvard, Vancouver, ISO, and other styles
33

Janc, Artur A. "Network performance evaluation within the web browser sandbox." Worcester, Mass. : Worcester Polytechnic Institute, 2009. http://www.wpi.edu/Pubs/ETD/Available/etd-011909-150148/.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.<br>Abstract: With the rising popularity of Web-based applications, the Web browser platform is becoming the dominant environment in which users interact with Internet content. We investigate methods of discovering information about network performance characteristics through the use of the Web browser, requiring only minimal user participation (navigating to a Web page). We focus on the analysis of explicit and implicit network operations performed by the browser (JavaScript XMLHTTPRequest and HTML DOM object loading) as well as by the Flash plug-in to evaluate network performance characteristics of a connecting client. We analyze the results of a performance study, focusing on the relative differences and similarities between download, upload and round-trip time results obtained in different browsers. We evaluate the accuracy of browser events indicating incoming data, comparing their timing to information obtained from the network layer. We also discuss alternative applications of the developed techniques, including measuring packet reception variability in a simulated streaming protocol. Our results confirm that browser-based measurements closely correspond to those obtained using standard tools in most scenarios. Our analysis of implicit communication mechanisms suggests that it is possible to make enhancements to existing "speedtest" services by allowing them to reliably determine download throughput and round-trip time to arbitrary Internet hosts. We conclude that browser-based measurement using techniques developed in this work can be an important component of network performance studies. Includes bibliographical references (leaves 83-85).
APA, Harvard, Vancouver, ISO, and other styles
34

Miehling, Mathew J. "Correlation of affiliate performance against web evaluation metrics." Thesis, Edinburgh Napier University, 2014. http://researchrepository.napier.ac.uk/Output/7250.

Full text
Abstract:
Affiliate advertising is changing the way that people do business online. Retailers are now offering incentives to third-party publishers for advertising goods and services on their behalf in order to capture more of the market. Online advertising spending has already overtaken that of traditional advertising in all other channels in the UK and is slated to do so worldwide as well [1]. In this highly competitive industry, the livelihood of a publisher is intrinsically linked to their web site performance. Understanding the strengths and weaknesses of a web site is fundamental to improving its quality and performance. However, the definition of performance may vary between different business sectors or even different sites in the same sector. In the affiliate advertising industry, the measure of performance is generally linked to the fulfilment of advertising campaign goals, which often equates to the ability to generate revenue or brand awareness for the retailer. This thesis aims to explore the correlation of web site evaluation metrics to the business performance of a company within an affiliate advertising programme. In order to explore this correlation, an automated evaluation framework was built to examine a set of web sites from an active online advertising campaign. A purpose-built web crawler examined over 4,000 sites from the advertising campaign in approximately 260 hours, gathering data to be used in the examination of URL similarity, URL relevance, search engine visibility, broken links, broken images and presence on a blacklist. The gathered data was used to calculate a score for each of the features, which were then combined to create an overall HealthScore for each publisher. The evaluated metrics focus on the categories of domain and content analysis. From the performance data available, it was possible to calculate the business performance for the 234 active publishers using the number of sales and click-throughs they achieved.
When the HealthScores and performance data were compared, the HealthScore was able to predict the publisher's performance with 59% accuracy.
APA, Harvard, Vancouver, ISO, and other styles
35

ANDREOLINI, MAURO. "High performance web server systems: design, testing, evaluation." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2005. http://hdl.handle.net/2108/154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Bigestans, Elof. "Real-time Full Duplex Communication Over the Web : A performance comparison between different web technologies." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-9618.

Full text
Abstract:
As the web browser becomes an increasingly powerful tool for the average web user, with more features and capabilities being developed constantly, the necessity to determine which features perform better than others in the same area becomes more important. This thesis investigates the performance of three separate technologies used to achieve full-duplex real time communication over the web: short polling using Ajax, server-sent events and the WebSocket protocol. An experiment was conducted measuring the performance over three custom-built web applications (one per technology being tested), comparing latency and number of HTTP requests over 100 messages being sent through the application. Additionally, the latency measurements were made over three separate network conditions. The experiment results suggest the WebSocket protocol outperforms both short polling using Ajax and server-sent events by large margins, varying slightly depending on network conditions.
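The HTTP-request-count gap the experiment measures can be sketched with a back-of-envelope model. This is my illustration of why the three techniques diverge, not the thesis's measurement code; all names and numbers are hypothetical:

```javascript
// Short polling: one HTTP request per poll interval for the whole session,
// whether or not a message actually arrived.
function shortPollingRequests(sessionMs, pollIntervalMs) {
  return Math.ceil(sessionMs / pollIntervalMs);
}

// Server-sent events: a single long-lived HTTP request streams every message.
function sseRequests() {
  return 1;
}

// WebSocket: one HTTP upgrade handshake, then frames flow over the open socket.
function webSocketRequests() {
  return 1;
}

// A 60-second session polled every 500 ms costs 120 requests to deliver
// the same messages SSE or WebSocket deliver over one connection.
console.log(shortPollingRequests(60000, 500), sseRequests(), webSocketRequests());
```

The latency result follows the same logic: a polled message waits, on average, half a poll interval before the client even asks for it, while pushed frames leave as soon as they exist.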
APA, Harvard, Vancouver, ISO, and other styles
37

Fankhauser, Thomas. "Web scaling frameworks : building scalable, high-performance, portable and interoperable web services for the Cloud." Thesis, University of the West of Scotland, 2016. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.744767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Atterlönn, Anton, and Benjamin Hedberg. "GUI Performance Metrics Framework : Monitoring performance of web clients to improve user experience." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247940.

Full text
Abstract:
When using graphical user interfaces (GUIs), the main problems that frustrate users are long response times and delays. These problems create a bad impression of the GUI, as well as of the company that created it. When providing a GUI to users it is important to provide intuition, ease of use and simplicity while still delivering good performance. However, some factors that play a major role in the performance aspect are outside the developers' hands, namely the client's internet connection and hardware. Since every client has a different combination of internet connection and hardware, it can be a hassle to satisfy everyone while still providing an intuitive and responsive GUI. The aim of this study is to find a way to monitor the performance of a web GUI, where performance comprises response times and render times, and in doing so to enable the improvement of response times and render times by collecting data that can be analyzed. A framework that monitors the performance of a web GUI was developed as a proof of concept. The framework collects relevant data regarding the performance of the web GUI and stores the data in a database. The stored data can then be manually analyzed by developers to find weak points in the system regarding performance. This is achieved without interfering with the GUI or impacting the user experience negatively.
<br>When graphical user interfaces are used, long response times and delays are experienced as the main problems. These problems are frustrating and give users a negative view of both the graphical interface and the company that created it. It is important that graphical interfaces are intuitive, easy to use and easy to understand while delivering high performance. There are factors affecting these properties that are outside the developers' hands, e.g. the user's internet connection and hardware. Since every user has a different combination of internet connection and hardware, it is difficult to satisfy everyone while still providing an intuitive and responsive interface. The goal of this study is to find a way to monitor the performance of a graphical interface, where the concept of performance covers responsiveness and the speed of graphical rendering, and through this to enable the improvement of response times and rendering times. A framework that monitors the performance of a graphical interface was developed. The framework collects relevant performance data about the graphical interface and stores the data in a database. The stored data can then be manually analyzed by developers to find weaknesses in the system's performance. This is achieved without disturbing the graphical interface and without any negative impact on the user experience.
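The kind of client-side instrumentation such a framework needs can be sketched in a few lines. This is a hypothetical illustration, not the thesis framework's actual API; the names and the injectable clock are my assumptions:

```javascript
// Time a UI action from trigger to completion and return a sample that a
// real framework would buffer and batch-upload to its metrics database.
// `now` is injectable so the function can be tested deterministically.
function timeAction(label, action, now = Date.now) {
  const start = now();
  action();                        // e.g. a handler that updates the DOM
  const elapsedMs = now() - start;
  return { label, elapsedMs };
}

// Deterministic stand-in clock: advances 5 ms per call.
let t = 0;
const fakeClock = () => (t += 5);
const sample = timeAction('open-menu', () => {}, fakeClock);
console.log(sample.label, sample.elapsedMs);  // open-menu 5
```

Keeping the measurement outside the handler itself is what lets the GUI stay untouched, which is the non-interference property the abstract emphasizes.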
APA, Harvard, Vancouver, ISO, and other styles
39

Holgerson, Jason L. "Collaborative online communities for increased MILSATCOM performance." Thesis, Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Sep/09Sep%5FHolgerson.pdf.

Full text
Abstract:
Thesis (M.S. in System Engineering Management)--Naval Postgraduate School, September 2009.<br>Thesis Advisor(s): Osmundson, John. "September 2009." Description based on title screen as viewed on November 9, 2009. Author(s) subject terms: Operational Availability, Net-centric Warfare, Communities, Online, Web 2.0, Sustainment, Military SATCOM, NMT. Includes bibliographical references (p. 79). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
40

Lee, Hsin-Tsang. "IRLbot: design and performance analysis of a large-scale web crawler." Texas A&M University, 2008. http://hdl.handle.net/1969.1/85914.

Full text
Abstract:
This thesis shares our experience in designing web crawlers that scale to billions of pages and models their performance. We show that with the quadratically increasing complexity of verifying URL uniqueness, breadth-first search (BFS) crawl order, and fixed per-host rate-limiting, current crawling algorithms cannot effectively cope with the sheer volume of URLs generated in large crawls, highly-branching spam, legitimate multi-million-page blog sites, and infinite loops created by server-side scripts. We offer a set of techniques for dealing with these issues and test their performance in an implementation we call IRLbot. In our recent experiment that lasted 41 days, IRLbot running on a single server successfully crawled 6.3 billion valid HTML pages (7.6 billion connection requests) and sustained an average download rate of 319 Mb/s (1,789 pages/s). Unlike our prior experiments with algorithms proposed in related work, this version of IRLbot did not experience any bottlenecks and successfully handled content from over 117 million hosts, parsed out 394 billion links, and discovered a subset of the web graph with 41 billion unique nodes.
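The URL-uniqueness problem the abstract highlights is easy to state in code. The sketch below (my illustration, with hypothetical names) shows the naive in-memory approach whose cost is exactly what stops scaling at billions of URLs; the thesis's contribution is replacing this set with structures that spill to disk:

```javascript
// Normalize a discovered URL so trivially different spellings dedupe.
function normalizeUrl(raw) {
  const u = new URL(raw);
  u.hash = '';                      // fragments never reach the server
  u.hostname = u.hostname.toLowerCase();
  return u.toString();
}

// Naive crawl frontier: check every candidate against an in-memory set.
class Frontier {
  constructor() {
    this.seen = new Set();   // at web scale this must move to disk
    this.queue = [];
  }
  add(raw) {
    const url = normalizeUrl(raw);
    if (this.seen.has(url)) return false;  // duplicate, skip
    this.seen.add(url);
    this.queue.push(url);
    return true;
  }
}

const f = new Frontier();
console.log(f.add('http://Example.com/a#top'));  // true: first sighting
console.log(f.add('http://example.com/a'));      // false: same page after normalization
```

With billions of unique URLs the set no longer fits in RAM, and random-access lookups against disk are what make naive uniqueness checking quadratic in practice.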
APA, Harvard, Vancouver, ISO, and other styles
41

Pun, Ka I. "Performance analysis for traffic intensive web-based workflow applications." Thesis, University of Macau, 2009. http://umaclib3.umac.mo/record=b2099649.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Yang, Je-Loon. "Performance Evaluation of Java Web Services: A Developer's Perspective." UNF Digital Commons, 2007. http://digitalcommons.unf.edu/etd/246.

Full text
Abstract:
With the rapid growth of traffic on the internet, further development of the web technology upon which it is based becomes extremely important. Web services are essential to the evolution of Web 2.0. Web services are programs that allow different computer platforms to communicate interactively across the web, without the need for extra data for interfaces and formats, such as webpage structures. Since web services are a future trend for the growth of the internet, the tools used for their development are also important. Although there are many web service frameworks to choose from, developers should choose the framework that best fits their applications, based on performance, time, and effort. For this project, we compared the qualitative and quantitative metrics of four common frameworks. The four frameworks were Apache Axis, JBossWS, Codehaus XFire, and Resin Hessian. After testing, the results were statistically analyzed using the Statistical Analysis System (SAS).
APA, Harvard, Vancouver, ISO, and other styles
43

Rud, Dmytro. "Qualität von Web Services Messung und Sicherung der Performance." Saarbrücken VDM, Müller, 2005. http://deposit.d-nb.de/cgi-bin/dokserv?id=2865521&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Santa-Maria, Luis. "Visual genre conventions and user performance on the web." Thesis, University of Reading, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.515751.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Lind, Daniel. "Performance evaluation of HTTP web servers in embedded systems." Thesis, KTH, Maskinkonstruktion (Inst.), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-146617.

Full text
Abstract:
This Master's thesis was carried out in cooperation with Syntronic AB. The purpose was to determine what was possible in terms of Hypertext Transfer Protocol (HTTP) server performance on selected hardware platforms for embedded systems. The results should be valuable for those who are about to select a hardware platform for an embedded system that will contain an HTTP server, and the evaluation therefore included load limits, performance characteristics and system resource usage. The required data was gathered with performance measurements, and a pre-study was performed to decide on the platforms, functionality and performance parameters to include in the study. Three hardware platforms with different levels of performance - BeagleBoard-xM, STK1000 and Syntronic Midrange - were selected. A simulated web application was used during the tests and a total of five HTTP server software packages were tested. BeagleBoard-xM with BusyBox httpd had the best overall performance when running the test application. It had a high overload point, low connection durations when not overloaded, and a superior overload behavior. However, Midrange with a modified version of a server made by Stefano Oliveri performed better when not overloaded. STK1000 was far behind the other two platforms in terms of performance. The overload behavior and efficiency of system resource usage differed greatly between the servers. The test results also showed that the performance varied significantly between HTTP server software running on the same hardware platform, and generally the software with limited feature sets performed best.
<br>This thesis project was carried out in cooperation with Syntronic AB. The purpose was to investigate what performance could be achieved with Hypertext Transfer Protocol (HTTP) servers on selected hardware platforms for embedded systems. The result should be useful for anyone choosing a hardware platform for an embedded system with an HTTP server, and the evaluation therefore covered behavior under load, load limits, and use of system resources. Performance measurements were used to generate data for analysis, and a pre-study was performed to determine which platforms, functionality and performance parameters to include in the study. Three hardware platforms with different performance levels - BeagleBoard-xM, STK1000 and Syntronic Midrange - were selected. A simulated web application was used during the tests and a total of five HTTP server programs were tested. BeagleBoard-xM with BusyBox httpd had the best overall performance when running the test application. It had a high overload point, short processing times, and superior behavior under overload. However, Midrange with a modified version of a server created by Stefano Oliveri performed better when it was not overloaded. STK1000 performed clearly worse than the other platforms. Behavior under overload and the efficiency of system resource use differed greatly between the servers. The test results also showed large differences between the HTTP server programs run on the same hardware platform, and in general the programs with a limited set of features performed best.
APA, Harvard, Vancouver, ISO, and other styles
46

Ma, Jie. "Measurement and performance analysis of World Wide Web applications." Carleton University dissertation, Engineering, Systems and Computer. Ottawa, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
47

Barford, Paul R. "Modeling, measurement and performance of World Wide Web transactions." Thesis, Boston University, 2001. https://hdl.handle.net/2144/36753.

Full text
Abstract:
Thesis (Ph.D.)--Boston University<br>PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you.<br>The size, diversity and continued growth of the World Wide Web combine to make its understanding difficult even at the most basic levels. The focus of our work is in developing novel methods for measuring and analyzing the Web which lead to a deeper understanding of its performance. We describe a methodology and a distributed infrastructure for taking measurements in both the network and end-hosts. The first unique characteristic of the infrastructure is our ability to generate requests at our Web server which closely imitate actual users. This ability is based on detailed analysis of Web client behavior and the creation of the Scalable URL Request Generator (SURGE) tool. SURGE provides us with the flexibility to test different aspects of Web performance. We demonstrate this flexibility in an evaluation of the 1.0 and 1.1 versions of the Hyper Text Transfer Protocol. The second unique aspect of our approach is that we analyze the details of Web transactions by applying critical path analysis (CPA). CPA enables us to precisely decompose latency in Web transactions into propagation delay, network variation, server delay, client delay and packet loss delays. We present analysis of performance data collected in our infrastructure. Our results show that our methods can expose surprising behavior in Web servers, and can yield considerable insight into the causes of delay variability in Web transactions.<br>2031-01-01
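The latency decomposition that Barford's critical path analysis performs across network and end-hosts is far more precise than anything a single client can do, but the basic idea - splitting one Web transaction into distinct phases - can be illustrated with a raw socket. The local test server and phase boundaries below are illustrative assumptions, not SURGE or CPA itself:

```python
import socket
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"x" * 65536  # large enough to give a measurable transfer phase
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

t0 = time.perf_counter()
sock = socket.create_connection((host, port))           # connect phase
t1 = time.perf_counter()
sock.sendall(b"GET / HTTP/1.0\r\nHost: test\r\n\r\n")
first = sock.recv(1)                                    # time to first byte
t2 = time.perf_counter()
rest = bytearray(first)
while chunk := sock.recv(4096):                         # transfer phase
    rest.extend(chunk)                                  # HTTP/1.0: server closes when done
t3 = time.perf_counter()
sock.close()
server.shutdown()

phases = {"connect": t1 - t0, "first_byte": t2 - t1, "transfer": t3 - t2}
print({k: f"{v * 1000:.2f} ms" for k, v in phases.items()})
```

On a loopback connection all three phases are tiny; over a real network, comparing them is what distinguishes propagation delay from server delay.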
APA, Harvard, Vancouver, ISO, and other styles
48

Beltrán, Querol Vicenç. "Improving web server efficiency on commodity hardware." Doctoral thesis, Universitat Politècnica de Catalunya, 2008. http://hdl.handle.net/10803/6024.

Full text
Abstract:
The unstoppable growth of the World Wide Web requires a huge amount of computational resources that must be used efficiently. Nowadays, commodity hardware is the preferred platform to run web server systems because it is the most cost-effective solution. The work presented in this thesis aims to improve the efficiency of current web server systems, allowing the web servers to make the most of hardware resources. To this end, we first characterize current web server systems and identify the problems that hinder web servers from providing an efficient utilization of resources. From the study of web servers in a wide range of situations and environments, we have identified two main issues that prevent web server systems from efficiently using current hardware resources. The first is the extension of the HTTP protocol to include connection persistence and security, which dramatically impacts the performance and configuration complexity of traditional multi-threaded web servers. The second is the memory-bound or disk-bound nature of some web workloads, which prevents the full utilization of the abundant CPU resources available on current commodity hardware. We propose two novel techniques to overcome the main problems with current web server systems. Firstly, we propose a Hybrid web server architecture which can be easily implemented in any multi-threaded web server to improve CPU utilization and provide better management of client connections. And secondly, we describe a main memory compression technique implemented in the Linux operating system that makes optimum use of current multiprocessor hardware, in order to improve the performance of memory-bound web applications. The thesis is supported by an exhaustive experimental evaluation that proves the effectiveness and feasibility of our proposals for current systems. It is worth noting that the main concepts behind the Hybrid architecture have recently been implemented in popular web servers like Apache, Tomcat and Glassfish.
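The HTTP connection persistence this thesis identifies as a key factor lets many requests share one TCP connection instead of paying the connection-setup cost each time. A minimal stdlib sketch contrasting the two styles against a local keep-alive server (the request count and handler are illustrative assumptions, not the thesis's workload):

```python
import threading
import time
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables keep-alive when Content-Length is set
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address
N = 50

# Style 1: one fresh TCP connection per request (no persistence).
start = time.perf_counter()
for _ in range(N):
    conn = HTTPConnection(host, port)
    conn.request("GET", "/")
    conn.getresponse().read()
    conn.close()
fresh = time.perf_counter() - start

# Style 2: one persistent connection reused for all requests.
start = time.perf_counter()
conn = HTTPConnection(host, port)
for _ in range(N):
    conn.request("GET", "/")
    conn.getresponse().read()
conn.close()
persistent = time.perf_counter() - start
server.shutdown()

print(f"{N} requests - fresh: {fresh * 1000:.1f} ms, "
      f"persistent: {persistent * 1000:.1f} ms")
```

The catch the thesis addresses is server-side: in a traditional multi-threaded server, each idle persistent connection pins a whole thread, which is what motivates the hybrid architecture.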
APA, Harvard, Vancouver, ISO, and other styles
49

Marang, Ah Zau. "Analysis of web performance optimization and its impact on user experience." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231445.

Full text
Abstract:
User experience (UX) is one of the most popular subjects in the industry nowadays and plays a significant role in business success. As the growth of a business depends on its customers, it is essential to emphasize the UX aspects that can help enhance customer satisfaction. It has been claimed that the overall end-user experience is to a great extent influenced by page load time, and that UX is primarily associated with the performance of applications. This paper analyzes the effectiveness of performance optimization techniques and their impact on user experience. Principally, the web performance optimization techniques used in this study were caching data, making fewer HTTP requests, using Web Workers, and prioritizing content. A profiling method, manual logging, was utilized to measure performance improvements. A UX survey consisting of the User Experience Questionnaire (UEQ) and three qualitative questions was conducted for UX testing before and after the performance improvements. Quantitative and qualitative methods were used to analyze the collected data. The implementations and experiments in this study are based on an existing tool, a web-based application. The evaluation results show an improvement of 45% in app load time, but no significant impact on the user experience after the performance optimizations, which suggests that web performance alone does not determine the user experience. Limitations of the performance techniques, and other factors that influence performance, were identified during the study.
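Two of the ingredients above - caching and manual-logging profiling - can be illustrated together in a few lines. The slow function is a stand-in assumption for a network fetch, and the timing helper is only a sketch of the manual-logging idea, not the thesis's instrumentation:

```python
import time
from functools import lru_cache

def log_timing(label, func, *args):
    # Manual logging: record elapsed wall-clock time around a call.
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed * 1000:.2f} ms")
    return result, elapsed

def slow_fetch(key):
    time.sleep(0.05)  # stand-in for an HTTP round trip
    return f"payload-for-{key}"

@lru_cache(maxsize=128)
def cached_fetch(key):
    # First call per key pays the full fetch cost; repeats are served from memory.
    return slow_fetch(key)

_, cold = log_timing("cold (cache miss)", cached_fetch, "report")
_, warm = log_timing("warm (cache hit)", cached_fetch, "report")
```

The cold/warm gap is the raw performance win; the study's point is that such wins do not automatically translate into measurably better UEQ scores.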
APA, Harvard, Vancouver, ISO, and other styles
50

Baycan, Serhat. "Field performance of expansive anchors and piles in rock." Online version, 1996. http://bibpurl.oclc.org/web/24932.

Full text
APA, Harvard, Vancouver, ISO, and other styles