
Dissertations / Theses on the topic 'COMPUTERS / Web / Browsers'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 46 dissertations / theses for your research on the topic 'COMPUTERS / Web / Browsers.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Bianco, Joseph. "Web Information System(WIS): Information Delivery Through Web Browsers." NSUWorks, 2000. http://nsuworks.nova.edu/gscis_etd/412.

Full text
Abstract:
The Web Information System (WIS) is a new type of Web browser capable of retrieving and displaying the physical attributes (retrieval time, age, size) of a digital document. In addition, the WIS can display the status of Hypertext Markup Language (HTML) links using an interface that is easy to use and interpret. The WIS also has the ability to dynamically update HTML links, thereby informing the user about the status of the information. The first generation of World Wide Web browsers allowed for the retrieval and rendering of HTML documents for reading and printing. These browsers also provided basic management of HTML links, which are used to point to often-used information. Unfortunately, HTML links are static in nature -- other than serving as a locator for information, an HTML link provides no other useful data. Because of the elusive characteristics of electronic information, document availability, document size (page length), and absolute age of the information can only be assessed after retrieval. WIS addresses these shortcomings of the Web by using a different approach to delivering digital information within a Web browser. By attributing the physical parameters of printed documentation, such as retrieval time, age, and size, to digital information, the WIS makes using online information easier and more productive than the current method.
APA, Harvard, Vancouver, ISO, and other styles
2

Janc, Artur Adam. "Network Performance Evaluation within the Web Browser Sandbox." Digital WPI, 2009. https://digitalcommons.wpi.edu/etd-theses/112.

Full text
Abstract:
With the rising popularity of Web-based applications, the Web browser platform is becoming the dominant environment in which users interact with Internet content. We investigate methods of discovering information about network performance characteristics through the use of the Web browser, requiring only minimal user participation (navigating to a Web page). We focus on the analysis of explicit and implicit network operations performed by the browser (JavaScript XMLHTTPRequest and HTML DOM object loading) as well as by the Flash plug-in to evaluate network performance characteristics of a connecting client. We analyze the results of a performance study, focusing on the relative differences and similarities between download, upload and round-trip time results obtained in different browsers. We evaluate the accuracy of browser events indicating incoming data, comparing their timing to information obtained from the network layer. We also discuss alternative applications of the developed techniques, including measuring packet reception variability in a simulated streaming protocol. Our results confirm that browser-based measurements closely correspond to those obtained using standard tools in most scenarios. Our analysis of implicit communication mechanisms suggests that it is possible to make enhancements to existing “speedtest” services by allowing them to reliably determine download throughput and round-trip time to arbitrary Internet hosts. We conclude that browser-based measurement using techniques developed in this work can be an important component of network performance studies.
APA, Harvard, Vancouver, ISO, and other styles
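The browser-based measurement idea in the abstract above boils down to timing transfers with the browser's clock and converting bytes and elapsed time into throughput and round-trip estimates. A minimal sketch of that arithmetic (the sample numbers are invented; in a real browser the timestamps would come from XMLHttpRequest progress events or `performance.now()`, as the thesis describes):

```javascript
// Convert a timed transfer into throughput: bits transferred / elapsed seconds.
function throughputMbps(bytes, elapsedMs) {
  if (elapsedMs <= 0) throw new RangeError("elapsed time must be positive");
  return (bytes * 8) / (elapsedMs / 1000) / 1e6; // megabits per second
}

// Round-trip time estimate from a tiny request: for a small payload the
// time to first byte is dominated by network latency, not transfer time.
function rttMs(startMs, firstByteMs) {
  return firstByteMs - startMs;
}

// Hypothetical recorded timestamps standing in for real browser events.
const sample = { bytes: 1000000, startMs: 0, firstByteMs: 42, endMs: 8000 };
console.log(throughputMbps(sample.bytes, sample.endMs - sample.startMs)); // 1
console.log(rttMs(sample.startMs, sample.firstByteMs)); // 42
```

The thesis's contribution is showing that timings gathered this way track network-layer measurements closely in most scenarios; the sketch only shows the conversion step.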
3

Wilson, Jason A. (Jason Aaron). "A Web browser and editor." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38137.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (leaves 60-61).
by Jason A. Wilson.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
4

Janc, Artur A. "Network performance evaluation within the web browser sandbox." Worcester, Mass. : Worcester Polytechnic Institute, 2009. http://www.wpi.edu/Pubs/ETD/Available/etd-011909-150148/.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Abstract: With the rising popularity of Web-based applications, the Web browser platform is becoming the dominant environment in which users interact with Internet content. We investigate methods of discovering information about network performance characteristics through the use of the Web browser, requiring only minimal user participation (navigating to a Web page). We focus on the analysis of explicit and implicit network operations performed by the browser (JavaScript XMLHTTPRequest and HTML DOM object loading) as well as by the Flash plug-in to evaluate network performance characteristics of a connecting client. We analyze the results of a performance study, focusing on the relative differences and similarities between download, upload and round-trip time results obtained in different browsers. We evaluate the accuracy of browser events indicating incoming data, comparing their timing to information obtained from the network layer. We also discuss alternative applications of the developed techniques, including measuring packet reception variability in a simulated streaming protocol. Our results confirm that browser-based measurements closely correspond to those obtained using standard tools in most scenarios. Our analysis of implicit communication mechanisms suggests that it is possible to make enhancements to existing "speedtest" services by allowing them to reliably determine download throughput and round-trip time to arbitrary Internet hosts. We conclude that browser-based measurement using techniques developed in this work can be an important component of network performance studies. Includes bibliographical references (leaves 83-85).
APA, Harvard, Vancouver, ISO, and other styles
5

Sirkin, David Michael. "Collaborative transcoding of Web content for wireless browsers." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86631.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (p. 38-39).
by David Michael Sirkin.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
6

Yu, Chen-Hsiang. "Web page enhancement on desktop and mobile browsers." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/79216.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2013.
"February 2013." Cataloged from PDF version of thesis.
Includes bibliographical references (p. 154-165).
The Web is a convenient platform to deliver information, but reading web pages is not as easy as it was in the 1990s. This thesis focuses on investigating techniques to enhance web pages on desktop and mobile browsers for two specific populations: non-native English readers and mobile users. Three issues are addressed in this thesis: web page readability, web page skimmability and continuous reading support on mobile devices. On today's primarily English-language Web, non-native readers encounter problems even if they have some fluency in English. This thesis focuses on content presentation and proposes a new transformation method, Jenga Format, to enhance web page readability. A user study with 30 non-native users showed that the Jenga transformation not only improved reading comprehension, but also made web page reading easier. On the other hand, readability research has found that average reading times for non-native readers have remained the same or even worsened. This thesis studies this issue and proposes Froggy GX (Generation neXt) to improve reading under time constraints. A user study with 20 non-native users showed that Froggy GX not only enhanced reading comprehension under time constraints, but also provided higher user satisfaction than reading unaided. When using the Web on mobile devices, the reading situation becomes challenging. Even worse, context switches, such as from walking to sitting, static standing, or hands-free situations like driving, occur when reading on the go, a scenario not adequately addressed in previous studies. This thesis investigates this scenario and proposes a new mobile browser, Read4Me, to support continuous reading on a mobile device. A user study with 10 mobile users showed that auto-switching not only resulted in significantly fewer dangerous encounters than visual reading, but also provided the best reading experience.
by Chen-Hsiang Yu.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
7

Klang, Oskar. "Network Test Capability of Modern Web Browsers." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-75226.

Full text
Abstract:
Web browsers are being used for network diagnostics. Users commonly verify their Internet speed by using a website such as Bredbandskollen.se or speedtest.net. These tests often require third-party software, such as Flash or Java applets. This thesis aims at prototyping an application that pushes the boundaries of what the modern web browser is capable of producing regarding network measurements, without any third-party software. The contributions of this thesis are a set of suggested tests that the modern browser should be capable of performing without third-party software. These tests can potentially replace some of a network technician's dedicated test equipment with web-browser-capable devices such as mobile phones or laptops. There exist both TCP and UDP tests that can be combined to verify some Quality of Service (QoS) metrics. The TCP tests can saturate a gigabit connection and are partially compliant with RFC 6349, which means traditional Internet speed test sites can obtain more metrics from a gigabit throughput test than they do today.
APA, Harvard, Vancouver, ISO, and other styles
8

Nilsson, Jesper. "Interactive SysML Diagrams using a Web Browser." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-80078.

Full text
Abstract:
Managing and maintaining a system requires knowledge of its structure, along with the relations and interactions between its components. Systems Modeling Language (SysML) is a language used to describe systems and enables the practice of Model-Based Systems Engineering (MBSE). Having a model of a system is one key to understanding the system and is useful for future management and maintenance. Apart from SysML being an advanced language, the tools that support it are often both advanced and expensive. This work was commissioned to create a different tool: one that is free, web-based, and interactive. The tool not only allows the user to look at the system but also to explore the system's design and the interesting parts of its internal structure. The tool uses a textual input to generate interactive diagrams with the possibility to filter out redundant information. Since it is available in a web browser, one can share the textual input instead of sharing pictures of diagrams. The textual input makes it possible to share a system structure in a new way, as well as to make the system model easier to maintain.
APA, Harvard, Vancouver, ISO, and other styles
9

Lin, Jason. "WebSearch: A configurable parallel multi-search web browser." CSUSB ScholarWorks, 1999. https://scholarworks.lib.csusb.edu/etd-project/1948.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tigulla, Anil Reddy, and Satya Srinivas Kalidasu. "Evaluating Efficiency Quality Attribute in Open Source Web browsers." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2584.

Full text
Abstract:
Context: Nowadays, end users rely on different types of computer applications, such as web browsers and data processing tools like MS Office or Notepad, to do their day-to-day work. In the real-world scenario, the usage of Open Source Software (OSS) products by both industry practitioners and end users is gradually increasing. The success of any OSS product depends on its quality standards. 'Efficiency' is one of the key quality factors that portray the standards of a product, and it is observed that this factor is given little importance during development. Our research context therefore lies in evaluating the efficiency quality attribute of OSS web browsers. Objectives: As discussed, the context of this research lies in evaluating the efficiency of OSS web browsers; the initial objective was to identify the available efficiency measures from the current literature and observe which types of measures are suitable for web browsers. The next objective was to compute values for the identified efficiency measures by considering a set of predefined web browsers from all the categories. We then proposed Efficiency Baseline Criteria (EBC), and based on this criterion and the experiment results obtained, the efficiency of OSS web browsers was evaluated. The main objective of this research is therefore to formulate EBC guidelines, which can later be used by OSS developers to test their web browsers and ensure that all quality standards are strictly adhered to during the development of OSS products. Methods: Initially, a Literature Review (LR) was conducted in order to identify all the related efficiency quality attributes and to observe the sub-attribute functionalities that are useful when measuring efficiency values of web browsers. The methods and procedures discussed in this LR were used as input for identifying efficiency measures relevant to web browsers.
Later, an experiment was performed in order to calculate efficiency values for the CSS & proprietary set of web browsers (i.e. Case A) and OSS web browsers (i.e. Case B) using different tools and procedures. The authors themselves calculated efficiency values for both Case A and Case B web browsers. Based on the results for the Case A web browsers, EBC was proposed, and finally a statistical analysis (the Mann-Whitney U-test) was performed in order to evaluate the hypothesis formulated in the experiment section. Results: From the LR study, it was observed that the efficiency quality attribute is classified into two main categories: Time Behavior and Resource Utilization. Under the category of Time Behavior, a total of 3 attributes were identified (Response Time, Throughput and Turnaround Time). From the results of the LR, we also observed the measuring process for each attribute across different web browsers. Later, an experiment was performed on the two sets of web browsers (Case A and Case B). Based on the LR results, only 3 efficiency attributes (response time, memory utilization and throughput) were identified as suitable for web browsers. These 3 efficiency attributes were further classified into 10 sub-categories, and efficiency values were calculated for both Case A and Case B in all 10 identified scenarios. EBC values were then generated from the Case A results. Finally, hypothesis testing was done by first performing a K-S test, whose results suggested choosing a non-parametric test (the Mann-Whitney U-test). The Mann-Whitney U-test was then performed for all the scenarios; the normalized Z scores were more than 1.96, which suggested rejecting the null hypothesis for all 10 scenarios. The EBC values were also compared with the Case B results, which likewise suggest that the efficiency standards of OSS web browsers are not equivalent to those of the Case A web browsers.
Conclusions: Based on the quantitative results, we conclude that the efficiency standards of OSS web browsers are not equivalent to those of the Case A web browsers, and that efficiency standards are not adhered to during the development process. Hence, OSS developers should focus on implementing efficiency standards during the development stages themselves in order to increase the quality of the end products. The major contribution of this research is the 'Efficiency Baseline Criteria'. The proposed EBC values are useful for OSS developers to test the efficiency standards of their web browsers and also help them to analyze their shortcomings. As a result, appropriate preventive measures can be planned in advance.
APA, Harvard, Vancouver, ISO, and other styles
11

Bailey, Justin M. "Live Video Streaming from Android-Enabled Devices to Web Browsers." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/2995.

Full text
Abstract:
The wide-spread adoption of camera-embedded mobile devices, along with ubiquitous connectivity via WiFi or cellular networks, enables people to visually report live events. Current solutions limit the configurability of such services by allowing video streaming only to fixed servers. In addition, the business models of the companies that provide such (free) services insert visual ads into the streamed videos, leading to unnecessary resource consumption. This thesis proposes an architecture for a real-time video streaming service from an Android mobile device to a server of the user's choice. The real-time video can then be viewed from a web browser. The project builds on open-source code and open protocols to implement a set of software components that successfully stream live video. Experimental evaluations show practical resource consumption and good quality of the streamed video. Furthermore, the architecture is scalable and can support a large number of simultaneous streams with an additional increase in hardware resources.
APA, Harvard, Vancouver, ISO, and other styles
12

Marshall, Jeffrey Barrett. "A World Wide Web browser utilizing three-dimensional space." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/41384.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Nadipelly, Vinaykumar. "Dynamic Scoping for Browser Based Access Control System." TopSCHOLAR®, 2012. http://digitalcommons.wku.edu/theses/1149.

Full text
Abstract:
We have increased the use of web applications to the point of using them for almost everything, making them an essential part of our everyday lives. As a result, enhancing the privacy and security policies of web applications is becoming increasingly essential. The importance and stateless nature of the web infrastructure have made the web a preferred target of attacks. The current web access control system is one reason these attacks succeed. The current web consists of two major components, the browser and the server, where an effective access control system needs to be implemented. In terms of access control, the current web has adopted the inadequate same-origin policy for the browser and the same-session policy for the server. These policies were sufficient for the web of earlier days but have become inadequate for the protection needs of today's web. In order to protect web applications from untrusted content, we provide an enhanced browser-based access control system enabled by dynamic scoping. Our security model for the browser allows the client and trusted web application contents to share a common library while protecting web contents from each other, even as they execute at different trust levels. We have implemented a working model of this enhanced browser-based access control system in Java, under the Lobo browser.
APA, Harvard, Vancouver, ISO, and other styles
14

Wallstersson, Marcus, and Jimmy Zöger. "Peer Assisted Live Video Streaming in Web Browsers using WebRTC." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188180.

Full text
Abstract:
This thesis presents a solution for peer-assisted live video streaming in web browsers. The motivation behind the solution is that content providers, which need to allocate large amounts of server resources and bandwidth to support their services, could benefit from letting their viewers assist in distributing the video. Essential to this is the fact that live video streaming typically has relaxed time constraints, i.e. there is often a buffer of tens of seconds to allow for smooth playback. The peer assistance is done with peer-to-peer connections that are natively supported in WebRTC-enabled web browsers. Peers cooperate by downloading different segments from the server and subsequently sharing them among themselves. For efficient utilization of the network, peers also have a notion of the network topology and choose to cooperate with nearby peers. It is shown that server resources and bandwidth can be reduced by enabling peer assistance for suitable peers.
APA, Harvard, Vancouver, ISO, and other styles
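The cooperation scheme described in the abstract above, where peers fetch different segments from the server and trade the rest among themselves, can be illustrated with a simple round-robin segment assignment. This is only a sketch; the thesis's actual scheduling and topology-aware peer selection are more involved:

```javascript
// Assign video segments to peers round-robin so each peer fetches a
// distinct share from the server and exchanges the rest with its
// neighbours over WebRTC data channels (not shown here).
function assignSegments(segmentIds, peerIds) {
  const plan = new Map(peerIds.map((p) => [p, []]));
  segmentIds.forEach((seg, i) => {
    plan.get(peerIds[i % peerIds.length]).push(seg);
  });
  return plan;
}

// Six segments shared among three hypothetical peers.
const plan = assignSegments([0, 1, 2, 3, 4, 5], ["peerA", "peerB", "peerC"]);
console.log(plan.get("peerA")); // [ 0, 3 ]
console.log(plan.get("peerB")); // [ 1, 4 ]
```

Because each segment is downloaded from the server only once per peer group, the server's bandwidth cost drops roughly in proportion to the group size, which is the saving the abstract reports.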
15

Jakóbisiak, Marta. "Programming the Web : design and implementation of a multidatabase browser." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/41375.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 56-58).
by Marta Jakóbisiak.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
16

Morris, Cameron. "Browser-Based Trust Negotiation." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1238.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Djärv, Karltorp Johan, and Eric Skoglund. "Performance of Multi-threaded Web Applications using Web Workers in Client-side JavaScript." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19552.

Full text
Abstract:
Context - Software applications on the web are more commonly used nowadays than before. As a result, the performance needed to run these applications is increasing. One method to increase performance is writing multi-threaded code using Web Workers in JavaScript. Objectives - We investigate how using Web Workers can increase responsiveness and raw computational power and decrease load time. Additionally, we conduct a survey targeting software developers to find out their opinions about performance in web applications, multi-threading and, more specifically, Web Workers. Realization (Method) - We created three experiments that concentrate on the areas mentioned above. The experiments are hosted on a web server inside an isolated Docker container to eliminate external factors as much as possible. To complement the experiments, we sent out a survey to collect developers' opinions about Web Workers; the criterion for selecting developers was some JavaScript experience. The survey contained questions about their opinions on switching to a multi-threaded workflow on the web: Do they experience performance issues in today's web applications? Could Web Workers be useful in their projects? Results - Responsiveness shifted from freezing the website to perfect responsiveness when using Web Workers. Raw computational power increased by at best 67% when using eight workers on tasks that took between 100 milliseconds and 15 seconds. Over 15 seconds, sixteen workers improved computational power by a further 3% - 9% compared to eight workers. At best, completion time decreased by 74% in Firefox and 72% in Chrome. Using Web Workers to improve load time gave a big improvement but is somewhat restricted to specific use cases. Conclusions - Using Web Workers to increase responsiveness made an immense difference when moving tasks that affect responsiveness to background threads.
Completion time for big computational tasks was shorter in use cases where the workload can be split into separate portions and processed by multiple threads in parallel. Load time can be improved with Web Workers by completing some tasks after the page is done loading, instead of waiting for all tasks to complete before loading the page. The survey indicated that many developers have performance in mind and would consider writing code in a multi-threaded way. Knowledge about multi-threading and Web Workers was low; nevertheless, most of the participants believe that Web Workers would be useful in their current and future projects, and are worth the effort to implement.
APA, Harvard, Vancouver, ISO, and other styles
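The parallel speedups reported in the abstract above come from splitting a large computation into independent portions, one per worker. A sketch of just the partitioning step (in a browser, each chunk would then be sent to a Web Worker via `postMessage`; that part is omitted here):

```javascript
// Split N work items into roughly equal contiguous chunks, one per worker.
// The remainder is spread over the first chunks so sizes differ by at most 1.
function splitWork(totalItems, workerCount) {
  const base = Math.floor(totalItems / workerCount);
  const extra = totalItems % workerCount;
  const chunks = [];
  let start = 0;
  for (let w = 0; w < workerCount; w++) {
    const size = base + (w < extra ? 1 : 0);
    chunks.push({ start, end: start + size }); // half-open range [start, end)
    start += size;
  }
  return chunks;
}

// Ten items over four workers: sizes 3, 3, 2, 2.
console.log(splitWork(10, 4));
```

This kind of even split is what lets eight or sixteen workers finish a divisible task several times faster than a single thread, as the study measures.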
18

DeLespinasse, Alan F. (Alan Fredrick). "Rover Mosaic : e-mail communication for a full-function Web browser." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/35029.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (p. 41-43).
by Alan F. deLespinasse.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
19

Tandon, Seema Amit. "Web Texturizer: Exploring intra web document dependencies." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2539.

Full text
Abstract:
The goal of this project is to create a customized web browser that facilitates skimming documents by offsetting the document with relevant information. The project added learning and information retrieval techniques to the web texturizer to automate the web browsing experience. The script runs on the web texturizer website and allows users to quickly navigate through the web page.
APA, Harvard, Vancouver, ISO, and other styles
20

Sweeney, Michael Engineering & Information Technology Australian Defence Force Academy UNSW. "A presentation service for rapidly building interactive collaborative web applications." Awarded by: University of New South Wales - Australian Defence Force Academy. Engineering & Information Technology, 2009. http://handle.unsw.edu.au/1959.4/43621.

Full text
Abstract:
Web applications have become a large segment of the software development domain but their rapid rise in popularity has far exceeded the support in software engineering. There are many tools and techniques for web application development, but the developer must still learn and use many complex protocols and languages. Products still closely bind data operations, business logic, and the user interface, limiting integration and interoperability. This thesis introduces an innovative new presentation service to help web application developers create better applications faster, and help them build high quality web user interfaces. This service acts as a broker between web browsers and applications, adding value with programming-language independent object-oriented technology. The second innovation is a generic graphics applet (GGA) component for the web browser user interface. This light component provides interactive graphics support for the creation of business charts, maps, diagrams, and other graphical displays in web applications. The combination of a presentation service program (BUS) and the GGA is explored in a series of experiments to evaluate their ability to improve web applications. The experiments provide evidence that the BUS and GGA technology enhances web application designs.
APA, Harvard, Vancouver, ISO, and other styles
21

Gentner, Susan Gayle. "A Browser-Based Collaborative Multimedia Messaging System." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/cs_theses/63.

Full text
Abstract:
Making a communication tool easier for people to operate can have profound and positive effects on its popularity and on the users themselves. This thesis is about making it easier for people to publish web-based documents that have sound, video and text. Readily available software and hardware are employed in an attempt to achieve the goal of providing a software service that enables users to compose audio-video documents with text.
APA, Harvard, Vancouver, ISO, and other styles
22

Sun, Mengmeng. "COBE: A CONJUNCTIVE ONTOLOGY BROWSER AND EXPLORER FOR VISUALIZING SNOMED CT FRAGMENTS." Case Western Reserve University School of Graduate Studies / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=case1436373297.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Nordström, Daniel. "Applicability of modern graphics libraries in web development : How may current graphics APIs that allow GPU-rendered web content be better incorporated for use in modern web application production?" Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-75813.

Full text
Abstract:
This thesis presents an exploration into current web browser technologies for graphics development, and offers a framework-like solution to integrate WebGL-based graphical features into any web application based on those findings. It is built largely on the 2017 investigative graduate work done at Explizit Solutions (an IT firm based in Skellefteå, Sweden), where the goal was to discover how 3D graphics technology in web browsers could be incorporated into and improve the front-end of their booking system services. A refined version of the solution produced in that work is presented, discussed and evaluated in this dissertation along with the investigative work done to produce it.
APA, Harvard, Vancouver, ISO, and other styles
24

Persson, Pontus. "Identifying Early Usage Patterns That Increase User Retention Rates In A Mobile Web Browser." Thesis, Linköpings universitet, Databas och informationsteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-137793.

Full text
Abstract:
One of the major challenges for modern technology companies is user retention management. This work focuses on identifying early usage patterns that signify increased retention rates in a mobile web browser. This is done using a targeted parallel implementation of the association rule mining algorithm FP-Growth. Different item subset selection techniques, including clustering and other statistical methods, have been used in order to reduce the mining time and allow for lower support thresholds. A lot of interesting rules have been mined. The best retention-wise rule implies a retention rate of 99.5%. The majority of the rules analyzed in this work imply a retention rate increase between 150% and 200%.
APA, Harvard, Vancouver, ISO, and other styles
25

Mobarak, Barbara Ann. "The development of a computer literacy curriculum for California charter schools." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2683.

Full text
Abstract:
To develop leaders for the 21st century, schools must be able to prepare students to meet the high academic, technical and workforce challenges. Charter schools are increasingly attempting to meet these challenges by educating students through innovative means and by creating effectual educational programs that are more conducive to the needs of the student. This document provides a computer literacy curriculum, which will facilitate student learning of computer literacy skills.
APA, Harvard, Vancouver, ISO, and other styles
26

Grape, Victor. "Comparing Costs of Browser Automation Test Tools with Manual Testing." Thesis, Linköpings universitet, Programvara och system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-133170.

Full text
Abstract:
Testing is a necessary component of software development, but it is also an expensive one, especially if performed manually. One way to mitigate the cost of testing is to implement test automation, where the test cases are run automatically. For any organisation looking to implement test automation, the most interesting cost is time. Automation takes time to implement, and one of the most obvious benefits of automation is that the automated test execution time is lower than that of manual execution. This thesis contains a literature study covering testing methodology, especially in regards to the domain of web application testing. The literature covered also included three economic models that may be used to compare the costs of automated and manual testing. The models can be used to calculate the time it would take, or the number of necessary executions, for the total cost of test automation to become lower than that of manual testing. The thesis is based on a case study of test automation for the StoredSafe platform, a web application. Three sets of test automation frameworks were used to implement three different test suites, and the test implementation times were collected. The data collected were then used to calculate, using the three economic models, the time it would take for the cost of automated test cases to become equal to that of manual testing. The data showed that the estimated time to reach breakeven for the three frameworks varied between 2½ and at worst 10 years, with an average of 3½ years. The models and data presented in this thesis may be used to estimate the cost of test automation in comparison to manual testing over longer periods of time, but care must be taken to ensure that the data used is correct in regards to one's own organisation, or else the estimate may be faulty.
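The three economic models themselves are not reproduced in the abstract, but the simplest break-even calculation of this general kind, weighing a fixed automation cost against a per-execution saving, might look like the sketch below. All names and numbers are invented for illustration:

```python
import math

def breakeven_runs(manual_minutes, automation_minutes, automated_run_minutes):
    """Smallest number of executions n for which automation is cheaper, i.e.
    automation_minutes + n * automated_run_minutes <= n * manual_minutes.
    Returns None if automation never pays off (no per-run saving)."""
    per_run_saving = manual_minutes - automated_run_minutes
    if per_run_saving <= 0:
        return None
    return math.ceil(automation_minutes / per_run_saving)

# Hypothetical numbers: 30 min to run a suite manually, 600 min to automate it,
# 2 min per automated run. Automation pays off after 22 executions.
print(breakeven_runs(30, 600, 2))
```

Real models of this kind, as the thesis notes, must also account for maintenance cost per automated run, which lowers the per-run saving and pushes the break-even point further out.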
APA, Harvard, Vancouver, ISO, and other styles
27

Kastman, Pål. "Development of a 3D viewer for showing of house models in a web browser : A usability evaluation of navigation techniques." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177362.

Full text
Abstract:
The architectural industry today struggles with how best to show its models to interested buyers, unlike the construction industry, which has the advantage of being able to build physical models of houses that it can then show. This is where BIM comes into the picture. By extracting the graphics from these models and visualising them in a web browser, this study has, by creating a viewer with different navigation techniques, sought to find out which techniques were most efficient for navigating models in the viewer. This was done with the help of user tests, whose results show that when it comes to projections, users were more efficient with perspective projection than orthographic projection; however, user interviews show that users could still find a use for orthographic projection, as it was better for displaying floor plans. The egocentric perspective was more efficient than the allocentric perspective, but most users preferred the egocentric perspective inside the models and the allocentric perspective outside of them. As for clipping objects and using clip planes, it is a closer race, as users completed the task faster with the clip plane but to a greater extent with culling of objects. However, most users wanted to use both techniques at the same time so that they could complement each other.
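The perspective-versus-orthographic distinction that the user tests compare can be illustrated with two toy projection functions. This is a minimal sketch, not the viewer's actual rendering code, and `f` is a stand-in focal length:

```python
def perspective_project(x, y, z, f=1.0):
    """Pinhole perspective: the projected position shrinks with depth z,
    which is what makes distant geometry look smaller."""
    return (f * x / z, f * y / z)

def orthographic_project(x, y, z):
    """Orthographic: depth is simply dropped, so parallel lines stay
    parallel, which is why it suits floor plans."""
    return (x, y)

# The same point at two depths: perspective moves it, orthographic does not.
print(perspective_project(1.0, 1.0, 2.0))   # (0.5, 0.5)
print(perspective_project(1.0, 1.0, 4.0))   # (0.25, 0.25)
print(orthographic_project(1.0, 1.0, 4.0))  # (1.0, 1.0)
```

In a real viewer both are expressed as 4x4 projection matrices handed to the GPU, but the depth-division versus depth-dropping behaviour is exactly the difference the study's participants experienced.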
APA, Harvard, Vancouver, ISO, and other styles
28

Piyasirivej, Pilun. "Using a contingent heuristic approach and eye gaze tracking for the usability evaluation of web sites /." Access via Murdoch University Digital Theses Project, 2004. http://wwwlib.murdoch.edu.au/adt/browse/view/adt-MU20050510.132804.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Harmon, Trev R. "On-Line Electronic Document Collaboration and Annotation." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1589.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Evertsson, Jens. "Är HTML 5 redo för användning? : - Fokus på funktionalitet gällande utvecklingsspråkets nya taggar och attribut." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-16493.

Full text
Abstract:
This work concerns html 5 with respect to using its new tags/attributes already now, even though years such as 2022 or 2014 have been mentioned by some regarding the language's completion. The interest behind the work arose from a few different things: first and foremost during studies in the course "Webbteknisk Introduktion" (taken during the first year of the Webbprogrammerare programme (autumn 2009) at Linnaeus University), where the use of new tags, claims that the language could already be put to use, and also the language's name contributed (since the language encompasses much more than tags/attributes). The newly added tags/attributes have been put through automated tests (via JavaScript) against the latest versions (May 2011 and August 2011) of the browsers Internet Explorer, Opera, Mozilla Firefox, Apple Safari and Google Chrome. The operating systems the browsers ran under were Mac OS X Server 10.6.7 and Windows 7 Professional (x86). The results show that the browsers overall support roughly 80 to 90% of the newly added tags, even though the new attributes are supported unevenly across browsers. The conclusion of the tests is therefore that it becomes a trade-off regarding what can, should, or must be used.
This essay examines whether html 5 and its new tags/attributes can be used right now, even though years such as 2022 or 2014 have been mentioned as dates when the new language will be completely done. The interest behind the essay arose while I was studying a course named "Webbteknisk Introduktion" at Linnaeus University in the autumn of 2009, within the programme "Webbprogrammerare-programmet". The course was mainly about html, but at one point we also got hands-on experience with the new thing, html 5. It was said at that point in the course that html 5 was more or less ready to use, which caught my interest and made me want to find evidence for it. In this work, new tags/attributes have been tested with JavaScript against the latest web browsers (as of May 2011 and August 2011): Internet Explorer, Opera, Mozilla Firefox, Apple Safari and Google Chrome, on top of Mac OS X Server 10.6.7 and Windows 7 Professional (x86). According to the final results, the tested browsers had about 80 to 90 percent support for the new tags, even if support for the attributes varies. The final conclusion of the completed tests is that it is more or less a matter of choosing which parts of the language one wants or needs to use.
APA, Harvard, Vancouver, ISO, and other styles
31

Johansson, Regina. "Surf the roads? : An interview study aiming to investigate truck driver’s needs for a web browser in the truck cab." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-69514.

Full text
Abstract:
Long haul drivers spend a lot of time in their trucks, which consequently serve as both a work place and a second home. The Internet, and communication and information technology, can be used personally by the drivers, lead to major savings for the haulage firms, and provide a high level of service to the customers. This study investigates what needs long haul drivers have for using the Internet in their trucks, and which device would best suit their needs. A questionnaire study was conducted with 35 drivers, and an interview study with 30 drivers. The results show that almost all drivers want to perform work related tasks through the Internet, and several of them also want to use personal applications online. Work tasks online need to be performed during the day, whereas private use of the Internet would mostly take place at night. Several drivers are positive towards an integrated system for using the Internet in the truck, and the study presents a possible concept for such a system and discusses the results in relation to present research and applicable theories.
APA, Harvard, Vancouver, ISO, and other styles
32

Ouvrier, Gustaf. "Characterizing the HTTPS Trust Landscape : - A Passive View from the Edge." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-161814.

Full text
Abstract:
Our society increasingly relies on the Internet for common services like online banking, shopping, and socializing. Many of these services heavily depend on secure end-to-end transactions to transfer personal, financial, or other sensitive information. At the core of ensuring secure transactions are the TLS/SSL protocol and the "trust" relationships between all involved partners. In this thesis we passively monitor the HTTPS traffic between a campus network and the Internet, and characterize the certificate usage and trust relationships in this complex landscape. By comparing our observations against known vulnerabilities and problems, we provide an overview of the actual security that typical Internet users (such as the people on campus) experience. Our measurements cover both mobile and stationary users, consider the involved trust relationships, and provide insights into how the HTTPS protocol is used and the weaknesses observed in practice.
APA, Harvard, Vancouver, ISO, and other styles
33

Strålfors, Annika. "Making test automation sharable: The design of a generic test automation framework for web based applications." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-240981.

Full text
Abstract:
The validation approach for assuring software quality often includes conducting tests. Software testing includes a wide range of methodology depending on the system level and the component under test. Graphical user interface (GUI) testing consists of high level tests that assert that functions and design elements in user interfaces work as expected. The research conducted in this paper focused on GUI testing of web based applications and the movement towards automated testing within the software industry. The question which formed the basis for the study was the following: How should a generic test automation framework be designed in order to allow maintenance between developers and non-developers? The study was conducted at a Swedish consultant company that provides e-commerce web solutions. A work strategy approach for automated testing was identified and an automation framework prototype was produced. The framework was evaluated through a pilot study where testers participated by creating a test suite for a specific GUI testing area. Time estimations were collected, as well as qualitative measurements through a follow-up survey. This paper presents a work strategy proposal for automated tests together with a description of the framework's system design. The results are presented with a subsequent discussion about the benefits and complexity of creating and introducing automated tests within large scale systems. Future work suggestions are also addressed, together with an account of the framework's usefulness for other testing areas besides GUI testing.
APA, Harvard, Vancouver, ISO, and other styles
34

Victors, Jesse. "The Onion Name System: Tor-powered distributed DNS for Tor hidden services." Thesis, Utah State University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1598486.

Full text
Abstract:

Tor hidden services are anonymous servers of unknown location and ownership that can be accessed through any Tor-enabled web browser. They have gained popularity over the years, but still suffer from major usability challenges due to their cryptographically-generated, non-memorable addresses. In response to this difficulty, in this work we introduce the Onion Name System (OnioNS), a privacy-enhanced distributed DNS that allows users to reference a hidden service by a meaningful, globally-unique, verifiable domain name chosen by the hidden service operator. We introduce a new distributed self-healing public ledger and construct OnioNS as an optional backwards-compatible plugin for Tor on top of existing hidden service infrastructure. We simplify our design and threat model by embedding OnioNS within the Tor network and provide mechanisms for authenticated denial-of-existence with minimal networking costs. Our reference implementation demonstrates that OnioNS successfully addresses the major usability issue that has been with Tor hidden services since their introduction in 2002.

APA, Harvard, Vancouver, ISO, and other styles
35

Anderson, James D. "Interactive Visualization of Search Results of Large Document Sets." Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1547048073451373.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Hartley, Andrée Vanda. "Expédition aux terres Australes : a web-based online role-play simulation : the enhancement of language acquisition through social interaction /." Access via Murdoch University Digital Theses Project, 2004. http://wwwlib.murdoch.edu.au/adt/browse/view/adt-MU20050603.151117.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Gastellier-Prevost, Sophie. "Vers une détection des attaques de phishing et pharming côté client." Phd thesis, Institut National des Télécommunications, 2011. http://tel.archives-ouvertes.fr/tel-00699627.

Full text
Abstract:
The development of broadband Internet and the expansion of e-commerce have brought in their wake new attacks that enjoy great success. One of them is particularly sensitive in the collective mind: the one that directly targets Internet users' wallets. Its most widespread and well-known version goes by the name phishing. Mostly carried by spam campaigns, this attack aims to steal confidential information (e.g. login, password, bank card number) from users by impersonating merchant and/or banking sites. Over the years, these attacks have been refined to the point of offering counterfeit websites that visually (apart from the URL visited) perfectly imitate the original sites. Through lack of vigilance, many users then hand over confidential data in full confidence. In the first part of this thesis, among the existing means of protection and detection against these attacks, we focus on a mechanism easily accessible to the user: anti-phishing toolbars, integrated into the web browser. The detection performed by these toolbars relies on blacklists and heuristic tests. Among all the heuristic tests used (whether they concern the URL or the content of the web page), we seek to evaluate their usefulness and effectiveness in identifying and differentiating legitimate sites from phishing sites. This work notably makes it possible to single out the decisive heuristics, while discussing their durability. A second, lesser-known variant of this attack, pharming, can be considered a sophisticated version of phishing.
The goal of the attack remains the same, and the visited website is just as similar to the original, but unlike phishing, the visited URL is this time also completely identical to the original. Carried out through upstream DNS corruption, these attacks have the advantage of requiring no communication effort from the attacker, who merely has to wait for users to visit their usual site. The absence of "visible" signs makes the attack particularly effective and formidable, even for a vigilant user. Admittedly, considerable efforts have been deployed on the network side to address this problem. Nevertheless, the client side remains too exposed and vulnerable. In the second part of this thesis, through the development of two proposals designed to integrate into the client browser, we introduce a detection technique for these attacks that couples an analysis of DNS responses with a comparison of web pages. These two proposals rely on reference elements obtained via an alternative DNS server, their main difference lying in the technique used to retrieve the reference web page. Through two phases of experimentation, we demonstrate the viability of the proposed concept.
APA, Harvard, Vancouver, ISO, and other styles
38

Durbha, Surya Srinivas. "Semantics-enabled framework for knowledge discovery from Earth observation data." Diss., Mississippi State : Mississippi State University, 2006. http://sun.library.msstate.edu/ETD-db/ETD-browse/browse.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Rehn, Michael. "Garbage Collected CRDTs on the Web : Studying the Memory Efficiency of CRDTs in a Web Context." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413299.

Full text
Abstract:
In today's connected society, where it is common to have several connected devices per capita, it is more important than ever that the data you need is omnipresent, i.e. it is available when you need it, no matter where you are. We identify one key technology and platform that could be the future: peer-to-peer communication and the Web. Unfortunately, guaranteeing consistency and availability between users in a peer-to-peer network, where network partitions are bound to happen, can be a challenging problem to solve. To solve these problems, we turned to a promising category of data types called CRDTs, Conflict-free Replicated Data Types. By following the scientific tradition of reproduction, we build upon previous research on a CRDT framework and adjust it to work in a peer-to-peer Web environment, i.e. it runs in a web browser. CRDTs make use of meta-data to ensure consistency, and it is imperative to remove this meta-data once it no longer has any use; if not, memory usage grows unboundedly, making the CRDT impractical for real-world use. There are different garbage collection techniques that can be applied to remove this meta-data. To investigate whether the CRDT framework and the different garbage collection techniques are suitable for the Web, we try to reproduce previous findings by running our implementation through a series of benchmarks. We test whether our implementation works correctly on the Web, as well as comparing the memory efficiency of the different garbage collection techniques. In doing this, we also proved the correctness of one of these techniques. The results from our experiments showed that the CRDT framework was well-adjusted to the Web environment and worked correctly. However, while we could observe behaviour between the different garbage collection techniques similar to previous research, we achieved lower relative memory savings than expected.
An additional insight was that for long-running systems that often reset its shared state, it might be more efficient to not apply any garbage collection technique at all. There is still much work to be done to allow for omnipresent data on the Web, but we believe that this research contains two main takeaways. The first is that the general CRDT framework is well-suited for the Web and that it in practice might be more efficient to choose different garbage collection techniques, depending on your use-case. The second take-away is that by reproducing previous research, we can still advance the current state of the field and generate novel knowledge—indeed, by combining previous ideas in a novel environment, we are now one step closer to a future with omnipresent data.
In today's society we are more connected than ever. Since we now often have more than one connected device per person, it is more important than ever that one's data is available on all of one's devices, no matter where one is. Two technologies that can enable this "omnipresence" of data are the Web, that is, code running in a web browser, together with peer-to-peer communication; but ensuring that distributed data is both available and identical for all devices is hard, especially when a device's Internet connection can drop at any moment. Conflict-free replicated data types (CRDTs) are a promising class of data types that solve exactly these kinds of problems in distributed systems; by using meta-data, CRDTs can keep working even when the Internet connection has been lost, and they are moreover guaranteed to converge to the same final state when the connection is re-established. However, CRDTs suffer from a particular problem: this meta-data takes up a lot of memory even though it has no use after a while. To make the data type more memory-efficient, the meta-data can be cleared away in a process called garbage collection. Our idea was therefore to reproduce earlier research on a CRDT framework and try to adapt it to work on the Web. We also reproduce different garbage-collection methods to investigate whether they, first, work on the Web and, second, are as effective in this new environment as the earlier research indicates. The results from our experiments showed that the CRDT framework and its garbage-collection methods could be adapted to work on the Web. However, we noted somewhat higher relative memory usage than expected, even though the behaviour was largely the same as in the earlier research. A further discovery was that in certain specific cases it can be more efficient not to apply any garbage collection at all.
Although much work remains before CRDTs can be used peer-to-peer on the Web to enable "omnipresent" data, this thesis makes two main points. First, the CRDT framework and its garbage-collection methods can be adapted to the Web, though sometimes it is actually better not to apply any garbage collection at all. Second, it shows the value of reproducing earlier research: not only does the thesis show that earlier CRDT research can be applied in other environments, but new knowledge can also be drawn from such a reproduction.
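The convergence guarantee these abstracts rely on is easiest to see in the textbook CRDT, the grow-only counter: each replica only touches its own slot, and merging takes the element-wise maximum, so merges commute. This is an illustrative sketch, not the thesis's framework:

```python
class GCounter:
    """State-based grow-only counter CRDT: one slot per replica;
    merge takes the element-wise max, so all replicas converge."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}

    def increment(self, n=1):
        # A replica may only advance its own slot.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Element-wise max is commutative, associative, and idempotent,
        # which is what makes merging safe in any order.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

# Two replicas diverge while offline, then merge in either order and agree.
a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 5
```

The meta-data problem the thesis studies shows up even here: the `counts` dictionary grows with the number of replicas ever seen and never shrinks, which is the kind of growth garbage collection is meant to curb.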
APA, Harvard, Vancouver, ISO, and other styles
40

Fredriksson, Stefan. "WebAssembly vs. its predecessors : A comparison of technologies." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97654.

Full text
Abstract:
For many years it has only been HTML, CSS, and JavaScript that have been native to the Web. In December 2019, WebAssembly joined them as the fourth language to run natively on the Web. This thesis compared WebAssembly to the technologies ActiveX, Java applets, Asm.js, and Portable Native Client (PNaCl) in terms of their performance, security, and browser support. The reason why this was an interesting topic to investigate was to determine in what areas WebAssembly is an improvement over previous similar technologies. Another goal was to provide companies that still use older technologies with an indication as to whether or not it is worth upgrading their system with newer technology. To answer the problem, the thesis mainly focused on comparing the performance of the technologies through a controlled experiment. The thesis also aimed at getting a glimpse of the security differences between the technologies by the use of a literature study. The thesis showed that PNaCl was the technology with the best performance. However, WebAssembly had better browser support. Also, PNaCl is deprecated while WebAssembly is heavily supported and could potentially be further optimized.
APA, Harvard, Vancouver, ISO, and other styles
41

Purra, Joel. "Swedes Online: You Are More Tracked Than You Think." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-117075.

Full text
Abstract:
When you are browsing websites, third-party resources record your online habits; such tracking can be considered an invasion of privacy. It was previously unknown how many third-party resources, trackers and tracker companies are present in the different classes of websites chosen: globally popular websites, random samples of .se/.dk/.com/.net domains, and curated lists of websites of public interest in Sweden. The in-browser HTTP/HTTPS traffic was recorded while downloading over 150,000 websites, allowing comparison of HTTPS adoption and third-party tracking within and across the different classes of websites. The data shows that known third-party resources, including known trackers, are present on over 90% of websites in most classes, that third-party hosted content such as video, scripts and fonts makes up a large portion of the known trackers seen on a typical website, and that tracking is just as prevalent on secure as on insecure sites. Observations include that Google is by far the most widespread tracker organization; that the serving of content by known trackers may suggest that trackers are moving towards providing services to the end user to avoid being blocked by privacy tools and ad blockers; and that the small difference in tracking between HTTP and HTTPS connections may suggest that users are given a false sense of privacy when using HTTPS.
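Classifying a recorded request as third-party, as this measurement does, amounts to comparing the base domain of each requested resource with that of the visited page. The sketch below uses a naive last-two-labels heuristic where real measurements would consult the Public Suffix List, and the example URLs are invented:

```python
from urllib.parse import urlparse

def base_domain(host):
    # Naive eTLD+1 approximation: keep the last two DNS labels.
    # (Real measurements use the Public Suffix List, e.g. for .co.uk.)
    return ".".join(host.split(".")[-2:])

def third_party_requests(page_url, request_urls):
    """Return the requests whose base domain differs from the page's."""
    page_base = base_domain(urlparse(page_url).netloc)
    return [u for u in request_urls
            if base_domain(urlparse(u).netloc) != page_base]

reqs = [
    "https://www.example.se/app.js",
    "https://fonts.googleapis.com/css",
    "https://www.google-analytics.com/analytics.js",
]
print(third_party_requests("https://www.example.se/", reqs))
```

Deciding which of those third parties are known trackers is then a lookup against curated blocklists, which is the step the thesis layers on top of this domain comparison.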

Source code, datasets, and a video recording of the presentation are available on the master's thesis website.

APA, Harvard, Vancouver, ISO, and other styles
42

Heinstedt, Elin, and Niklas Johansson. "Requesting Utility in Usability -Perspectives from a large team software engineering project." Thesis, Blekinge Tekniska Högskola, Institutionen för arbetsvetenskap och medieteknik, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4960.

Full text
Abstract:
Many companies invest large amount of money in developing new technology, without knowing how it will be used. To succeed in making these technologies useful it is necessary to understand the context that gives meaning to the artifact. In the case of generic products, especially in new domains, the context is not obvious. This bachelor thesis analyses what Usability Engineering, Participatory Design and Ethnography can contribute to the problem of learning about the context of usage for generic artifacts. Understanding and identifying details of context is considered to be important to achieve usability in software development. The experience is that most recommendations on usability methods concern situations of specific users in a specific context. In order to find important aspects of the real-world use of generic products, we suggest that ethnographic studies can be conducted in contexts where behaviors relevant to the design are thought to be found. The problem of not knowing the context was experienced in usability work practiced in a large software engineering project. The project task was to develop a web browser for Symbian's "Quartz" reference design for handheld devices. Methods used were taken from participatory design and usability engineering.
APA, Harvard, Vancouver, ISO, and other styles
43

Orner, Daniel. "Design and comparison of a hierarchical web browser Back menu /." 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR29598.

Full text
Abstract:
Thesis (M.Sc.)--York University, 2006. Graduate Programme in Higher Education.
Typescript. Includes bibliographical references (leaves 93-96). Also available on the Internet. MODE OF ACCESS via web browser by entering the following URL: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR29598
APA, Harvard, Vancouver, ISO, and other styles
44

Ersson, Kerstin, and Persson Siri. "Peer-to-peer distribution of web content using WebRTC within a web browser." Thesis, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-254725.

Full text
Abstract:
The aim of this project was to investigate whether it is possible to host websites using the BitTorrent protocol, a protocol for distribution of data on the web. This was done using several Node.js modules, small clusters of code written in JavaScript, such as Browserify and a modified version of WebTorrent. In these modules, technologies like WebSockets and WebRTC are implemented. The project resulted in a working WebTorrent module, implemented on the website www.peerweb.io. However, the module still needs optimization concerning the time it takes to set up a WebRTC peer connection. With these modifications, we believe that hosting websites via peer-to-peer networks will be the future of the web.
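The content-verification idea underlying BitTorrent-based hosting is that each fixed-size piece of a file is hashed independently, so a browser can verify chunks received from untrusted peers one at a time. This is a simplified illustration of that idea, not the WebTorrent implementation (which uses SHA-1 piece hashes stored in the torrent's info dictionary):

```python
import hashlib

def piece_hashes(data, piece_length=16):
    """BitTorrent-style content addressing: split data into fixed-size
    pieces and hash each, so peers can verify chunks independently."""
    return [hashlib.sha1(data[i:i + piece_length]).hexdigest()
            for i in range(0, len(data), piece_length)]

# A toy "website" payload: any peer serving a piece can be checked
# against the expected hash before the piece is accepted.
html = b"<html><body>Hello from a peer-hosted page</body></html>"
hashes = piece_hashes(html)
print(len(hashes), hashes[0][:8])
```

Because the hashes are fixed by the publisher, it does not matter which peer a piece comes from; a corrupted or malicious piece simply fails verification and is re-requested.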
APA, Harvard, Vancouver, ISO, and other styles
45

(8119418), Hafiz Muhammad Junaid Khan. "A MACHINE LEARNING BASED WEB SERVICE FOR MALICIOUS URL DETECTION IN A BROWSER." Thesis, 2019.

Find full text
Abstract:
Malicious URLs pose serious cyber-security threats to Internet users. It is critical to detect malicious URLs so that they can be blocked from user access. In the past few years, several techniques have been proposed to differentiate malicious URLs from benign ones with the help of machine learning. Machine learning algorithms learn trends and patterns in a data-set and use them to identify any anomalies. In this work, we attempt to find generic features for detecting malicious URLs by analyzing two publicly available malicious URL data-sets. In order to achieve this task, we identify a list of substantial features that can be used to classify all types of malicious URLs. Then, we select the most significant lexical features by using Chi-Square and ANOVA based statistical tests. The effectiveness of these feature sets is then tested by using a combination of single and ensemble machine learning algorithms. We build a machine learning based real-time malicious URL detection system as a web service to detect malicious URLs in a browser. We implement a Chrome extension that intercepts a browser's URL requests and sends them to the web service for analysis. We also implement the web service itself, which classifies a URL as benign or malicious using the saved ML model. Finally, we evaluate the performance of our web service to test whether the service is scalable.
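A few of the lexical URL features that classifiers of this kind typically draw on can be extracted with the standard library alone. This is an illustrative subset chosen for this sketch, not the thesis's actual feature list, and the example URL is invented:

```python
import re
from urllib.parse import urlparse

def lexical_features(url):
    """Extract a small set of lexical features commonly used in
    malicious-URL classification (illustrative subset only)."""
    host = urlparse(url).netloc
    return {
        "url_length": len(url),
        "num_digits": sum(ch.isdigit() for ch in url),
        "num_special": len(re.findall(r"[@%?=&-]", url)),
        "num_subdomains": max(host.count(".") - 1, 0),
        # Phishing URLs often use a raw IP address instead of a domain.
        "host_is_ip": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host)),
    }

print(lexical_features("http://192.168.0.1/login.php?acc=update"))
```

Feature vectors like these, computed over a labeled data-set, are what Chi-Square and ANOVA tests then rank: each statistic measures how strongly a feature's distribution differs between the benign and malicious classes.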
APA, Harvard, Vancouver, ISO, and other styles
46

"Leveraging Scalable Data Analysis to Proactively Bolster the Anti-Phishing Ecosystem." Doctoral diss., 2020. http://hdl.handle.net/2286/R.I.57029.

Full text
Abstract:
Despite an abundance of defenses that work to protect Internet users from online threats, malicious actors continue deploying relentless large-scale phishing attacks that target these users. Effectively mitigating phishing attacks remains a challenge for the security community due to attackers' ability to evolve and adapt to defenses, the cross-organizational nature of the infrastructure abused for phishing, and discrepancies between theoretical and realistic anti-phishing systems. Although technical countermeasures cannot always compensate for the human weakness exploited by social engineers, maintaining a clear and up-to-date understanding of the motivation behind, and execution of, modern phishing attacks is essential to optimizing such countermeasures. In this dissertation, I analyze the state of the anti-phishing ecosystem and show that phishers use evasion techniques, including cloaking, to bypass anti-phishing mitigations in hopes of maximizing the return-on-investment of their attacks. I develop three novel, scalable data-collection and analysis frameworks to pinpoint the ecosystem vulnerabilities that sophisticated phishing websites exploit. The frameworks, which operate on real-world data and are designed for continuous deployment by anti-phishing organizations, empirically measure the robustness of industry-standard anti-phishing blacklists (PhishFarm and PhishTime) and proactively detect and map phishing attacks prior to launch (Golden Hour). Using these frameworks, I conduct a longitudinal study of blacklist performance and the first large-scale end-to-end analysis of phishing attacks (from spamming through monetization). As a result, I thoroughly characterize modern phishing websites and identify desirable characteristics for enhanced anti-phishing systems, such as more reliable methods for the ecosystem to collectively detect phishing websites and meaningfully share the corresponding intelligence.
In addition, findings from these studies led to actionable security recommendations that were implemented by key organizations within the ecosystem to help improve the security of Internet users worldwide.
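The cloaking the abstract refers to is a server-side evasion technique: a phishing kit serves a benign page to suspected blacklist crawlers and the phishing page to everyone else. The following sketch is not from the dissertation; it is a minimal illustration of such a cloaking rule, with hypothetical crawler signatures, of the kind whose effect on blacklists frameworks like PhishFarm measure:

```python
# Hypothetical crawler signatures a phishing kit might screen for:
CRAWLER_AGENTS = ("googlebot", "bingbot", "phishtank", "safebrowsing")
BLOCKED_NETS = ("66.249.",)  # illustrative crawler IP prefix

def cloaking_decision(user_agent: str, ip: str) -> str:
    """Illustrative server-side cloaking rule: suspected blacklist
    crawlers receive a benign page, likely victims the phishing page."""
    ua = user_agent.lower()
    if any(bot in ua for bot in CRAWLER_AGENTS):
        return "benign_page"
    if any(ip.startswith(net) for net in BLOCKED_NETS):
        return "benign_page"
    return "phishing_page"
```

Because crawlers only ever see the benign response, blacklist-based defenses can systematically under-detect such sites, which is why empirically measuring blacklist robustness against cloaking matters.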
Dissertation/Thesis
Doctoral Dissertation Computer Science 2020
