
Dissertations / Theses on the topic 'Web of Science (WoS)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Web of Science (WoS).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Glisson, William Bradley. "The Web Engineering Security (WES) methodology." Thesis, University of Glasgow, 2008. http://theses.gla.ac.uk/186/.

Full text
Abstract:
The World Wide Web has had a significant impact on basic operational economic components in global, information-rich civilizations. This impact is forcing organizations to provide justification for security from a business case perspective and to focus on security from a web application development environment perspective. This increased focus on security was the basis of a business case discussion and led to the acquisition of empirical evidence gathered from a high-level Web survey and more detailed industry surveys to analyse security in the Web application development environment. Along with this information, a collection of evidence from relevant literature was also gathered. Individual aspects of the data gathered in the previously mentioned activities contributed to the proposal of the Essential Elements (EE) and the Security Criteria for Web Application Development (SCWAD). The Essential Elements present the idea that there are essential, basic organizational elements that need to be identified, defined and addressed before examining security aspects of a Web Engineering Development process. The Security Criteria for Web Application Development identifies criteria that need to be addressed by a secure Web Engineering process. Both the EE and SCWAD are presented in detail along with relevant justification of these two elements to Web Engineering. SCWAD is utilized as a framework to evaluate the security of a representative selection of recognized software engineering processes used in Web Engineering application development. The software engineering processes appraised by SCWAD include: the Waterfall Model, the Unified Software Development Process (USD), the Dynamic Systems Development Method (DSDM) and eXtreme Programming (XP). SCWAD is also used to assess existing security methodologies, which comprise the Orion Strategy; Survivable / Viable IS approaches; the Comprehensive Lightweight Application Security Process (CLASP); and Microsoft's Trustworthy Computing Security Development Lifecycle. The synthesis of information provided by both the EE and SCWAD was used to develop the Web Engineering Security (WES) methodology. WES is a proactive, flexible, process-neutral security methodology with customizable components that is based on empirical evidence and used to explicitly integrate security throughout an organization's chosen application development process. In order to evaluate the practical application of the EE, SCWAD and the WES methodology, two case studies were conducted during the course of this research. The first case study describes the application of both the EE and SCWAD to the Hunterian Museum and Art Gallery's Online Photo Library (HOPL) Internet application project. The second case study presents the commercial implementation of the WES methodology within a Global Fortune 500 financial service sector organization. The assessment of the WES methodology within the organization consisted of an initial survey establishing current security practices, a follow-up survey after changes were implemented and an overall analysis of the security conditions assigned to projects throughout the life of the case study.
APA, Harvard, Vancouver, ISO, and other styles
2

Sistla, Shambhu Maharaj Sastry. "How Far Web Services Tools Support OASIS Message Security Standards?" Thesis, University of Skövde, School of Humanities and Informatics, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-983.

Full text
Abstract:
There is a great deal of interest burgeoning in the intellectual community regarding Web Services and their usage. Many writers have tried to raise awareness of some unconceived threats lurking behind the enticing Web Services. Threats due to Web Services are at an all-time high, giving an alarming knock to the Web Services security community. This led the Organization for the Advancement of Structured Information Standards (OASIS) to make some constraints mandatory in order to standardize message security; these constraints and specifications are presented in a document called WS-Security 2004. This work is an attempt to check the support offered by various currently available Web Services tools. It introduces the reader to Web Services and presents an overview of how far some of the tools have come in making the Web Services environment safe, secure and robust enough to meet the current day's requirements. A quantitative approach was taken to investigate the support offered by servers such as BEA, Apache Axis, etc. The conclusions drawn show that most of the tools meet the imposed standards, but a lot more is expected from the web community and these tools if the vision of safe and secure Web Services is to be realized.
APA, Harvard, Vancouver, ISO, and other styles
3

Bianco, Joseph. "Web Information System(WIS): Information Delivery Through Web Browsers." NSUWorks, 2000. http://nsuworks.nova.edu/gscis_etd/412.

Full text
Abstract:
The Web Information System (WIS) is a new type of Web browser capable of retrieving and displaying the physical attributes (retrieval time, age, size) of a digital document. In addition, the WIS can display the status of Hypertext Markup Language (HTML) links using an interface that is easy to use and interpret. The WIS also has the ability to dynamically update HTML links, thereby informing the user about the status of the information. The first generation of World Wide Web browsers allowed for the retrieval and rendering of HTML documents for reading and printing. These browsers also provided basic management of HTML links, which are used to point to often-used information. Unfortunately, HTML links are static in nature: other than acting as a locator for information, an HTML link provides no other useful data. Because of the elusive characteristics of electronic information, document availability, document size (page length), and the absolute age of the information can only be assessed after retrieval. WIS addresses these shortcomings of the Web by using a different approach to delivering digital information within a Web browser. By attributing the physical parameters of printed documentation, such as retrieval time, age, and size, to digital information, the WIS makes using online information easier and more productive than the current method.
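The abstract's idea of attributing physical parameters (retrieval time, age, size) to a link can be illustrated with a short Python sketch. The URL, field names, and use of the requests library are illustrative assumptions and are not details of the WIS implementation.

```python
# Minimal sketch: gather the "physical attributes" of a web document at fetch time.
import time
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

import requests

def link_attributes(url: str) -> dict:
    start = time.monotonic()
    response = requests.get(url, timeout=10)
    elapsed = time.monotonic() - start

    age_days = None
    last_modified = response.headers.get("Last-Modified")
    if last_modified:
        modified_at = parsedate_to_datetime(last_modified)
        age_days = (datetime.now(timezone.utc) - modified_at).days

    return {
        "status": response.status_code,       # availability
        "retrieval_seconds": round(elapsed, 2),
        "size_bytes": len(response.content),  # rough "page length"
        "age_days": age_days,                 # absolute age, when the server reports it
    }

if __name__ == "__main__":
    print(link_attributes("https://example.org/"))
```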
APA, Harvard, Vancouver, ISO, and other styles
4

Conocimiento, Dirección de Gestión del. "Web of Science." Clarivate Analytics, 2004. http://hdl.handle.net/10757/655404.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gaedke, Martin. "Doctoral Dissertations in Web Engineering and Web Science." Universitätsverlag Chemnitz, 2014. https://monarch.qucosa.de/id/qucosa%3A20124.

Full text
Abstract:
Scientific series containing the dissertations of the Professorship of Distributed and Self-Organizing Systems.
APA, Harvard, Vancouver, ISO, and other styles
6

Luis, Arroyo Rubén, and Quispe Carlos Chávez. "Web of Science - Kopernio." Universidad Peruana de Ciencias Aplicadas (UPC), 2019. http://hdl.handle.net/10757/625909.

Full text
Abstract:
A guide to Kopernio, a tool that works through Web of Science and allows researchers to legally download the articles to which the university is subscribed. It also retrieves full-text files available in Google Scholar, PubMed and other platforms.
APA, Harvard, Vancouver, ISO, and other styles
7

Youn, Choonhan Fox Geoffrey C. "Web services based architecture in computational Web portals." Related Electronic Resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2003. http://wwwlib.umi.com/cr/syr/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Tetlow, Philip David. "Investigations into Web science and the concept of Web life." Thesis, University of Sunderland, 2009. http://sure.sunderland.ac.uk/3555/.

Full text
Abstract:
Our increasing ability to construct large and complex computer and information systems suggests that the classical manner in which such systems are understood and architected is inappropriate for the open and unstructured manner in which they are often used. With the appearance of mathematically complex and, more importantly, high scale, non-deterministic systems, such as the World Wide Web, there is a need to understand, construct and maintain systems in a world where their assembly and use may not be precisely predicted. In Addition, few have thus far attempted to study such Web-scale systems holistically so as to understand the implications of non-programmable characteristics, like emergence and evolution – a matter of particular relevance in the new field of Web Science. This collection of prior published works and their associated commentary hence brings together a number of themes focused on Web Science and its broader application in systems and software engineering. It primarily rests on materials presented in the book The Web’s Awake, first published in April 2007.
APA, Harvard, Vancouver, ISO, and other styles
9

McElhiney, Patrick R. "Scalable Web Service Development with Amazon Web Services." Thesis, University of New Hampshire, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10931435.

Full text
Abstract:
The objective of this thesis was to explore the topic of scalable web development, and it answered the question, "How do you scale a website to handle more traffic at peak times without wasting resources?" This is important research to any web company that has issues with rising costs as demand for their website increases. It would be wise for every online business to be prepared for more web traffic, before it occurs, without spending the budget of a multi-million user web company in low traffic periods. The last thing you want is an error as your customer base starts to arrive, giving them a bad experience for their first impressions, which would result in lost revenue.

Scalable software development architectures, including microservices, big data, and Kubernetes were studied, in addition to similar web service companies including Facebook, Twitter, and Match.com. A scalable architecture was designed for a social media web service, MeAndYou, using the big data configuration with a shared Aurora database, which was configured using an auto-scaling group attached to a load balancer in Amazon Web Services (AWS). It was tested using a custom threaded Selenium-based Python script that applied simulated user load to the servers. As the load was applied, AWS added more Elastic Compute Cloud (EC2) instances running a virtual disk image of the web server. After the load was removed, the instances were terminated automatically by AWS to save costs.

Countless steps were taken to make the web service bigger and more scalable than it originally was, before testing, including adding more fields to user profiles, adding more search types, and separating the layers of code into different Hypertext Preprocessor (PHP) files in the front-end. A version control system was configured on the servers using GitHub and rsync. The systems architecture designed suggests the Match Engine should use a stream processing message queue, which would allow the system to factor searches one at a time as they are created, with horizontal scaling capabilities, rather than grabbing the entire database and storing it in memory. The backend Match Engine was also tested for accuracy using Structured Query Language (SQL) injection, which determined how the match algorithm should be improved in the future.
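As a rough illustration of the kind of threaded, Selenium-based load script the abstract describes, the sketch below starts several headless browser workers against a placeholder URL. The target address, thread count, and request count are invented; the original MeAndYou script is not reproduced in this record.

```python
# Hypothetical sketch of a threaded browser-based load generator.
import threading
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

TARGET = "https://www.example.com/"   # stand-in for the web front end behind the load balancer
THREADS = 4
REQUESTS_PER_THREAD = 10

def simulate_user(worker_id: int) -> None:
    options = Options()
    options.add_argument("--headless=new")
    driver = webdriver.Chrome(options=options)
    try:
        for i in range(REQUESTS_PER_THREAD):
            driver.get(TARGET)    # each page load exercises the load balancer
            print(f"worker {worker_id}: request {i + 1} -> {driver.title!r}")
    finally:
        driver.quit()

threads = [threading.Thread(target=simulate_user, args=(n,)) for n in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```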
APA, Harvard, Vancouver, ISO, and other styles
10

Nyirenda, Mayumbo. "Universal web application server." Master's thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/6427.

Full text
Abstract:
Includes bibliographical references (leaves 113-116). The growth of the World Wide Web has in large part been made possible by the technologies that power it. These are the Web servers and Web browsers that many take for granted. Each has evolved to support the changing needs of users of the WWW, from simple static text to highly interactive and dynamic multimedia. The Web servers, in particular, have evolved into a spectrum of different technologies to support what are now known as Web applications. These are usually installed and accessed through a Web server. Security is a problem in Web server environments and therefore the Web servers are usually run as an un-privileged user. Performance is another problem, as some of these technologies require re-initialization of the execution environment with every subsequent request. These security and performance shortcomings have been dealt with by numerous Web application technologies.
APA, Harvard, Vancouver, ISO, and other styles
11

Xia, Ning. "HealthyLifeHRA: Web Application." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1403521982.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Pasila, N. (Niklas). "Malliperustainen ohjelmistokehitys web-ympäristössä." Bachelor's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201603021257.

Full text
Abstract:
The purpose of this bachelor's thesis was to present model-driven software development as an alternative for designing and developing web applications. Material for the thesis was gathered from the ACM Digital Library and IEEE Xplore — IEEE/IEE Electronic Library full-text databases, as well as through manual searching. Designing, developing and maintaining today's complex web-based applications is difficult, because instead of a widely accepted theoretical foundation or model, only a varying collection of tools is in use. A model-driven development approach provides a clear foundation for the design, development and maintenance phases of software. The main goals listed for its adoption are improving productivity through the reuse of standardized models, simplifying the design process by exploiting recurring design patterns, and promoting communication between developers. The effects of adoption on the development process included improved software quality and better reusability, as well as developer efficiency and satisfaction. As the level of abstraction rises, complexity decreases and the software becomes less sensitive to change. With adoption, attention shifts more towards the early stages of the software development process. Careful design of the system is a prerequisite for realizing all the benefits of model-driven software development. The architectural process must be more precise and more structured, which means that the role of the software architect becomes more demanding.
APA, Harvard, Vancouver, ISO, and other styles
13

Liu, Ying. "Enhancing security for XML Web services." Thesis, University of Ottawa (Canada), 2007. http://hdl.handle.net/10393/27531.

Full text
Abstract:
The XML-based interoperable characteristics make enhancing security for XML Web Services a lot different from that of the traditional network-based applications. SSL VPN gateways are usually used to provide security for traditional network-based applications, but not for Web Services. This thesis presents an integrated security solution for securing both traditional network-based applications and Web Services. The integrated security solution includes a VPN framework and a Web Services framework. Considering that we have already had an SSL VPN gateway developed by our lab, we take it as the motherboard of the solution and the VPN server of the gateway as the security functional part of the VPN framework. As the highlight of this thesis project, a Web Services security component, also the security functional part of the Web Services framework, has been developed, implemented and integrated with the SSL VPN gateway to get the integrated security solution. The problem of applying ECC over binary fields for XML security, SOAP message security and Web Services security to make the Web Services security component share the same set of ECC keys with the VPN server on the gateway has been studied. Tools for reading ECC keys and certificates from the BUL's key files have been developed. Methods to adopt the ECC key files to ensure security of Web Services have also been developed. To the best of our knowledge, there is no previous work on adopting ECC keys over binary fields for Web Services security. The integrated security solution we present in this thesis is the prototype of a network device that has functions of three gateways: a VPN gateway, a Web Services security gateway and a Web Services gateway.
APA, Harvard, Vancouver, ISO, and other styles
14

Jin, Yuan. "Bridging the ontological gap between semantic web and the RESTful web services." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=97115.

Full text
Abstract:
Data are produced in large quantities and in various forms around the globe every day. Researchers advance their research depending on the availability of necessary data and their discovery. As people's demand to manage the data grows, however, three problems appear to hinder attempts to effectively leverage the data. One is the semantic heterogeneity found in linking different data sources. Database designers create data with different semantics; even data within the same domain may differ in meaning. If users want to acquire all the obtainable information, they have to write different queries with different semantics. One solution to such a problem is the use of ontology. An ontology is defined as a specification for the concepts of an agent (or a community of agents) and the relationships between them (Gruber 1995). Concepts and relationships between concepts are extracted from the data to form a knowledge network. Other parties wishing to connect their data to the knowledge network can share, enrich and distribute the vocabulary of the ontology. Users can also write queries against the ontology in any RDF query language (Brickley 2004). The use of ontology is part of the Web 3.0 effort to provide a semantic-sensitive global knowledge network. A second problem concerns new ways to access data resources with ontology information. People used to build application-specific user interfaces to databases, which were offline. Now many choose to expose data through Web Services. Web services are a system to provide HTTP-based remote request calling services that are described in a machine-readable format (Haas and Brown 2004). They usually provide application (or web) programming interfaces to manage data. The problem is that Web Services were born in a world of applications relying on conventional ways to connect to data sources. For example, D2RQ (Bizer and Seaborne 2004) translates queries against an ontology into SQL queries, and it depends on JDBC to read from relational databases. Now the interfaces for these data sources are going to change. The Semantic Web world faces the challenge of losing data sources. If Web Services are going to spread over the Internet one day, this lack of connection would hold me back from applying the ontology to connect to heterogeneous data sources. A third problem (or constraint) is working within the specific project domain. I embed this work within a humanities cyberinfrastructure that integrates Chinese biographical, historical and geographical data. The data sources come in various forms: local and remote relational databases and RESTful Web Services. Working with both legacy databases and the new web application interfaces narrowed down my choice of solutions. Commercial products provide ways to "ontologicalize" Web Services. I argue that they are heavyweight (e.g. unnecessary components bound with the product) and cost-prohibitive for small-scale projects like ours. Several mature open-source solutions built around relational databases provide no or very limited access to Web Services. For example, there is no obvious way in D2RQ to bring Web Services into its system, while OpenLink Virtuoso answers calls for SOAP but cannot manage data from RESTful Web Services. I propose to build a connection between ontologies and Web Services. I devise metadata to represent non-RDF Web Services in the ontology, and I revise the code and create new data structures in D2RQ to support ontology queries over data from RESTful Web Services.
APA, Harvard, Vancouver, ISO, and other styles
15

Zhu, Yonghan 1971. "Web services : framework and technologies." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=80907.

Full text
Abstract:
Immediately after its appearance in 1999, Web Services became the hottest topic in the information technology industry. Web Services was primarily fostered by the exponentially growing demand for highly efficient Business-to-Business (B2B) solutions. Web Services is a highly modularized application-level framework. Its fundamental idea is to enable Web applications to interact with each other regardless of the platforms, languages or network infrastructure used. This understanding has been accepted and shared by current Web Services vendors, users, professionals and standards organizations. Meanwhile, Web Services concepts, architecture, components, working models and direction are still under debate. This thesis provides an introduction to these topics based on thorough research across existing materials. The first part of this thesis clarifies the architecture and basic concepts of Web Services. The second part provides a complete introduction and analysis of the two groups of technologies stated above, and shows how these technologies work together. The third part of this thesis presents a discussion of the advantages and potential applications of Web Services. It is based on the perspectives of a variety of technical experts and solution designers. (Abstract shortened by UMI.)
APA, Harvard, Vancouver, ISO, and other styles
16

ALHARTHI, KHALID AYED B. "AN ARABIC SEMANTIC WEB MODEL." Kent State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=kent1367064711.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Conocimiento, Dirección de Gestión del. "Guía de acceso para Web of Science." Clarivate Analytics, 2021. http://hdl.handle.net/10757/655404.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Li, Fu Min. "Collecting web data for social science research." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3953492.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Saari, S. (Sampsa). "Mobiililaitteiden käytettävyyden haasteet web-suunnittelussa." Master's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201811093005.

Full text
Abstract:
The topic of this study is the usability challenges of mobile devices in web design. The topic is important because the share of Internet use on mobile devices has grown enormously in a short time. The study is intended to examine which usability challenges are most common for mobile devices on the web and how they differ from the challenges of traditional desktop computers. The study also investigates by what means usability could be improved on websites. The study first seeks to identify the most common mobile usability challenges on the web from the literature. The result of the research carried out in the empirical section corresponded to the findings in the literature. The study shows that mobile devices face the same web usability challenges as desktop computers. The biggest difference is that there are many differences between mobile devices and desktop computers; for example, mobile devices have slower Internet connections and smaller screens. Because of this, following usability rules on mobile devices is even more important. This study found several ways to improve the usability of websites on mobile devices. As a result of the empirical research, methods and practices are presented by which the usability of mobile sites can be improved. The results showed that the accessibility and performance of the site improved considerably when the techniques found in the study were applied. Because every web design project is unique, with different content, audiences and goals, it is difficult to create complete guidelines for all sites to achieve better usability. The results found in this study are a set of best practices and recommendations for building websites for mobile devices.
APA, Harvard, Vancouver, ISO, and other styles
20

Williamson, Victor Lamont. "Goal-oriented Web search." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61247.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 57-58). We have designed and implemented a Goal-oriented Web application to search videos, images, and news by querying YouTube, Truveo, Google and Yahoo search services. The Planner module decomposes functionality into Goals and Techniques. Goals declare searches for specific types of content and Techniques query the various Web services. We choose which Web service has the best rating at runtime and return the winning results. Users weight their preferred Web services and declare a repository of their own Techniques to upload and execute. by Victor Lamont Williamson. M.Eng.
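The Goal/Technique split described above can be sketched as follows; the class names, ratings, and stubbed search functions are assumptions made for illustration rather than the actual Planner module.

```python
# Illustrative sketch: a Goal asks for a type of content, each Technique wraps one
# web service, and the best-rated Technique is chosen at run time.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Technique:
    name: str
    rating: float                        # e.g. a user-assigned weight
    search: Callable[[str], List[str]]   # stub for a web-service query

def run_goal(query: str, techniques: List[Technique]) -> List[str]:
    best = max(techniques, key=lambda t: t.rating)   # pick the winning service
    return best.search(query)

video_techniques = [
    Technique("youtube", 0.9, lambda q: [f"youtube result for {q}"]),
    Technique("truveo", 0.6, lambda q: [f"truveo result for {q}"]),
]
print(run_goal("robot soccer", video_techniques))
```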
APA, Harvard, Vancouver, ISO, and other styles
21

Bota, Horatiu S. "Composite web search." Thesis, University of Glasgow, 2018. http://theses.gla.ac.uk/38925/.

Full text
Abstract:
The figure above shows Google’s results page for the query “taylor swift”, captured in March 2016. Assembled around the long-established list of search results is content extracted from various source — news items and tweets merged within the results ranking, images, songs and social media profiles displayed to the right of the ranking, in an interface element that is known as an entity card. Indeed, the entire page seems more like an assembly of content extracted from various sources, rather than just a ranked list of blue links. Search engine result pages have become increasingly diverse over the past few years, with most commercial web search providers responding to user queries with different types of results, merged within a unified page. The primary reason for this diversity on the results page is that the web itself has become more diverse, given the ease with which creating and hosting different types of content on the web is possible today. This thesis investigates the aggregation of web search results retrieved from various document sources (e.g., images, tweets, Wiki pages) within information “objects” to be integrated in the results page assembled in response to user queries. We use the terms “composite objects” or “composite results” to refer to such objects, and throughout this thesis use the terminology of Composite Web Search (e.g., result composition) to distinguish our approach from other methods of aggregating diverse content within a unified results page (e.g., Aggregated Search). In our definition, the aspects that differentiate composite information objects from aggregated search blocks are that composite objects (i) contain results from multiple sources of information, (ii) are specific to a common topic or facet of a topic rather than a grouping of results of the same type, and (iii) are not a uniform ranking of results ordered only by their topical relevance to a query. The most widely used type of composite result in web search today is the entity card. Entity cards have become extremely popular over the past few years, with some informal studies suggesting that entity cards are now shown on the majority of result pages generated by Google. As composite results are used more and more by commercial search engines to address information needs directly on the results page, understanding the properties of such objects and their influence on searchers is an essential aspect of modern web search science. The work presented throughout this thesis attempts the task of studying composite objects by exploring users’ perspectives on accessing and aggregating diverse content manually, by analysing the effect composite objects have on search behaviour and perceived workload, and by investigating different approaches to constructing such objects from diverse results. Overall, our experimental findings suggest that items which play a central role within composite objects are decisive in determining their usefulness, and that the overall properties of composite objects (i.e., relevance, diversity and coherence) play a combined role in mediating object usefulness.
APA, Harvard, Vancouver, ISO, and other styles
22

Oh, Sangyoon. "Web service architecture for mobile computing." [Bloomington, Ind.] : Indiana University, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3229598.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2006. "Title from dissertation home page (viewed July 11, 2007)." Source: Dissertation Abstracts International, Volume: 67-08, Section: B, page: 4529. Adviser: Geoffrey C. Fox.
APA, Harvard, Vancouver, ISO, and other styles
23

Tierney, Matthew Ryan. "Rethinking information privacy for the web." Thesis, New York University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3602740.

Full text
Abstract:
Hanni M. Fakhoury, staff attorney with the Electronic Frontier Foundation, has argued against Supreme Court Justice Samuel Alito's opinion that society should accept a decline in personal privacy with modern technology: "Technology doesn't involve an 'inevitable' tradeoff [of increased convenience] with privacy. The only inevitability must be the demand that privacy be a value built into our technology" [40]. Our position resonates with Mr. Fakhoury's call to rethink information privacy for the web. In this thesis, we present three artifacts that address the balance between usability, efficiency, and privacy as we rethink information privacy for the web.

In the first part of this thesis, we propose the design, implementation and evaluation of Cryptagram, a system designed to enhance online photo privacy. Cryptagram enables users to convert photos into encrypted images, which the users upload to Online Social Networks (OSNs). Users directly manage access control to those photos via shared keys that are independent of OSNs or other third parties. OSNs apply standard image transformations (JPEG compression) to all uploaded images, so Cryptagram provides image encoding and encryption protocols that are tolerant to these transformations. Cryptagram guarantees that a recipient with the right credentials can completely retrieve the original image from the transformed version of the uploaded encrypted image, while the OSN cannot infer the original image. Cryptagram's browser extension integrates seamlessly with preexisting OSNs, including Facebook and Google+, and currently has over 400 active users.

In the second part of this thesis, we introduce the design of Lockbox, a system designed to provide end-to-end private file-sharing with the convenience of Google Drive or Dropbox. Lockbox uniquely combines two important design points: (1) a federated system for detecting and recovering from server equivocation and (2) a hybrid cryptosystem over delta-encoded data to balance storage and bandwidth costs with efficiency for syncing end-user data. To facilitate appropriate use of public keys in the hybrid cryptosystem, we integrate a service that we call KeyNet, which is a web service designed to leverage existing authentication media (e.g., OAuth, verified email addresses) to improve the usability of public key cryptography.

In the third part of this thesis, we propose a new system, Compass, which realizes the philosophical privacy framework of contextual integrity (CI) as a full OSN design; we believe CI better captures users' privacy expectations in OSNs. In Compass, three properties hold: (a) users are associated with roles in specific contexts; (b) every piece of information posted by a user is associated with a specific context; (c) norms defined on roles and attributes of posts in a context govern how information is shared across users within that context. Given the definition of a context and its corresponding norm set, we describe the design of a compiler that converts the human-readable norm definitions to generate appropriate information flow verification logic, including: (a) a compact binary decision diagram for the norm set; and (b) access control code that evaluates how a new post to a context will flow. We have implemented a prototype that shows how the philosophical framework of contextual integrity can be realized in practice to achieve strong privacy guarantees with limited additional verification overhead.
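A toy sketch of the norm-based flow check described for Compass is given below; the context, roles, and example norm are invented for illustration, and Compass itself compiles norms into binary decision diagrams and access control code rather than evaluating Python predicates.

```python
# Toy contextual-integrity-style check: users hold roles within a context, every
# post carries a context and attributes, and the context's norm decides whether
# the post may flow to a given reader.
from dataclasses import dataclass, field
from typing import Callable, Dict, Set

@dataclass
class Post:
    author: str
    context: str                  # e.g. "book_club" (invented example context)
    attributes: Set[str] = field(default_factory=set)

# role assignments per context: context -> user -> role
ROLES: Dict[str, Dict[str, str]] = {
    "book_club": {"alice": "organizer", "bob": "member", "carol": "member"},
}

# a norm is a predicate over (reader role, post attributes)
Norm = Callable[[str, Set[str]], bool]

NORMS: Dict[str, Norm] = {
    # example norm: drafts flow only to organizers, everything else to any member
    "book_club": lambda role, attrs: ("draft" not in attrs) or role == "organizer",
}

def may_flow(post: Post, reader: str) -> bool:
    role = ROLES.get(post.context, {}).get(reader)
    if role is None:              # reader has no role in this context
        return False
    return NORMS[post.context](role, post.attributes)

post = Post(author="alice", context="book_club", attributes={"draft"})
print(may_flow(post, "bob"), may_flow(post, "alice"))   # False True
```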
APA, Harvard, Vancouver, ISO, and other styles
24

Chiang, Cho-Yu. "On building dynamic web caching hierarchies /." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488199501403111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Benson, Edward 1983. "Reducing authoring complexity on the web with a relational layer for web content." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/93056.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014. Cataloged from PDF version of thesis. Includes bibliographical references (pages 183-191). When we browse the web, we experience rich designs and data interactivity. But our efforts to create such content are often hampered by the great engineering effort required. As a result, novices are largely limited to authoring read-only content from within content management systems, and experts must rely on complex software toolchains like Ruby on Rails to manage software complexity. These application-level strategies have enabled tremendous creative output, but they only alleviate, rather than eliminate, core sources of web authoring complexity. This work shows that adding a declarative, relational layer to the web stack reduces the complexity of authoring and reusing web content by providing a way to reason about how structures on the web fit together and what should happen when they change. For static content and design, I demonstrate new and more usable authoring and deployment techniques. For dynamic content, I demonstrate how a relational layer can transform HTML and spreadsheets into read-write-compute web applications without any procedural programming at all. User studies show that HTML novices can learn to apply these techniques in only a few minutes, increasing their creative capacity beyond read-only rich text. And professionals can use this approach to drive a development process based on full-fidelity design mockups rather than code fragments. by Edward Oscar Benson. Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
26

Wu, Le-Shin. "Adaptive peer networks for distributed Web search." [Bloomington, Ind.] : Indiana University, 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3380141.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2009. Title from PDF t.p. (viewed on Jul 20, 2010). Source: Dissertation Abstracts International, Volume: 70-12, Section: B, page: 7684. Adviser: Filippo Menczer.
APA, Harvard, Vancouver, ISO, and other styles
27

Spiller, Simone. "Distance education on the world-wide web." Thesis, McGill University, 1997. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20871.

Full text
Abstract:
The Internet's growing acceptance as a key medium to educate people is the main focus of this thesis. Although distance education has been a popular learning method since the end of the 19th century, never before has technology made it so easy to disseminate knowledge by linking different media formats, e.g. text, sound and video. In fact, this huge library accessed through a universal interface is one of the key contributions that the Internet and the World-Wide Web have brought to the learning process. The main goal of this thesis is to encourage instructors to create and deliver courses using the Internet and, most of all, to show that the process can be simple and effective. In order to support this study, four major Course Management tools are presented and analyzed: Pathway by Solis-Macromedia, LearningSpace by Lotus, WebCT by The University of British Columbia, and Virtual-U by Simon Fraser University. As a result of this thesis, a Grades Application was developed using the Internet protocol. This application is an uncomplicated, yet effective solution for using the Web to manage, calculate, and view students' marks. With the open architecture of the Web and standard programming languages such as JavaScript and Perl, the system will execute on most computers available in universities around the world.
APA, Harvard, Vancouver, ISO, and other styles
28

Malan, Petronella Elizabeth. "Adolessente leefstylpatrone : 'n opname in geslekteerde hoërskole van die Wes-Kaap Onderwysdepartement." Thesis, Stellenbosch : Stellenbosch University, 2011. http://hdl.handle.net/10019.1/18102.

Full text
Abstract:
Thesis (M Sport Sc (Sport Science))--Stellenbosch University, 2011. ENGLISH ABSTRACT: Adolescence is the period between childhood and adulthood. This phase starts between the ages of 11 and 13 years and ends between 17 and 21 years. Adolescence was seen as a phase of development, growth and excellent health in the past, but that is not the case in the 21st century. The health of adolescents is being influenced by technology such as computers and televisions, crime, poor eating habits, the absence of Physical Education at schools, urbanization, overpopulation and less available space for children to play. These aspects lead to a sedentary lifestyle which may impact their health in the form of hypokinetic diseases. The primary aim of this study was to determine the lifestyle patterns of adolescents in selected Western Cape high schools. The secondary aims of this study were to determine the lifestyle patterns of different ethnic groups and of boys and girls, and to compare these lifestyle patterns with those of adolescents 10 years ago. In this study, two questionnaires were used for data collection: a questionnaire for the adolescents and one for the Life Orientation teachers. The high schools (N=30) were randomly selected to partake in the study. From each school, learners (N=60) were also randomly selected to partake in the study. The 60 learners consisted of boys [n=15] and girls [n=15] in Grade 9 and boys [n=15] and girls [n=15] in Grade 11, between the ages of 15 and 17 years. One Life Orientation teacher was also randomly selected from each school. Data from the two questionnaires were coded in computer format and statistically analysed with the computer program Stasoft Statistica Version 10. From the results of the study it can be concluded that neither White nor Coloured adolescents found school sport important, nor did they partake in sporting activities on a regular basis. Adolescent boys, on the other hand, were found to be much more active than adolescent girls. Adolescent girls preferred sedentary activities like listening to music and reading books. Both White and Coloured adolescents, and boys and girls, considered their health to be excellent despite the fact that research showed the opposite to be true. White adolescents also found socialising more important than Coloured adolescents did, while Coloured adolescents found household chores more important. Boys and Coloured adolescents attended self-defence classes on a regular basis. This study is a follow-up to one conducted by Van Deventer in 1999. It serves as a basis for further research, and it is recommended that a new study should be conducted every 10 years to determine changes in the lifestyle patterns of adolescents so that these can be addressed. Because of the low response rate of Life Orientation teachers, further research is recommended to determine and address the current status of Life Orientation in schools, the attitudes of teachers and learners toward Life Orientation, the education and training of Life Orientation teachers, apparatus and facility needs, and the time allocated to the movement component of Life Orientation. Further research is also recommended because of the insufficient feedback received from Black learners; it is important to determine their lifestyle patterns as well so that recommendations in this regard can be made.
APA, Harvard, Vancouver, ISO, and other styles
29

Chopra, Rashi. "Real estate web application." Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/957.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Plachouras, Vasileios. "Selective web information retrieval." Thesis, University of Glasgow, 2006. http://theses.gla.ac.uk/1945/.

Full text
Abstract:
This thesis proposes selective Web information retrieval, a framework formulated in terms of statistical decision theory, with the aim to apply an appropriate retrieval approach on a per-query basis. The main component of the framework is a decision mechanism that selects an appropriate retrieval approach on a per-query basis. The selection of a particular retrieval approach is based on the outcome of an experiment, which is performed before the final ranking of the retrieved documents. The experiment is a process that extracts features from a sample of the set of retrieved documents. This thesis investigates three broad types of experiments. The first one counts the occurrences of query terms in the retrieved documents, indicating the extent to which the query topic is covered in the document collection. The second type of experiments considers information from the distribution of retrieved documents in larger aggregates of related Web documents, such as whole Web sites, or directories within Web sites. The third type of experiments estimates the usefulness of the hyperlink structure among a sample of the set of retrieved Web documents. The proposed experiments are evaluated in the context of both informational and navigational search tasks with an optimal Bayesian decision mechanism, where it is assumed that relevance information exists. This thesis further investigates the implications of applying selective Web information retrieval in an operational setting, where the tuning of a decision mechanism is based on limited existing relevance information and the information retrieval system’s input is a stream of queries related to mixed informational and navigational search tasks. First, the experiments are evaluated using different training and testing query sets, as well as a mixture of different types of queries. Second, query sampling is introduced, in order to approximate the queries that a retrieval system receives, and to tune an ad-hoc decision mechanism with a broad set of automatically sampled queries.
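The first type of experiment (counting query-term occurrences in a sample of retrieved documents) can be sketched roughly as follows. The threshold and approach names are invented for the sketch; the thesis itself uses a Bayesian decision mechanism rather than this simple cutoff.

```python
# Minimal illustrative decision mechanism: sample some retrieved documents, count
# query-term occurrences, and pick a retrieval approach accordingly.
from typing import List

def choose_approach(query: str, sampled_docs: List[str], threshold: float = 2.0) -> str:
    terms = query.lower().split()
    total = sum(doc.lower().count(term) for doc in sampled_docs for term in terms)
    coverage = total / max(len(sampled_docs), 1)   # mean query-term occurrences per doc
    # High coverage suggests the topic is well represented in the collection, so a
    # content-only ranking may suffice; otherwise fall back to link evidence as well.
    return "content_only" if coverage >= threshold else "content_plus_links"

docs = ["web information retrieval on the web", "cooking recipes", "retrieval models"]
print(choose_approach("information retrieval", docs))
```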
APA, Harvard, Vancouver, ISO, and other styles
31

Ebenezer, Catherine. "Health informatics on the Web." Free Pint Ltd, 2002. http://hdl.handle.net/10150/106500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Mengistu, Dawit Bezu. "Social Science Studies and Experiments with Web Applications." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78122.

Full text
Abstract:
This thesis explores a web-based method to do studies in cultural evolution. Cumulative cultural evolution (CCE) is defined as social learning that allows for the accumulation of changes over time where successful modifications are maintained until additional change is introduced. In the past few decades, many interdisciplinary studies were conducted on cultural evolution. However, until recently most of those studies were limited to lab experiments. This thesis aims to address the limitations of the experimental methods by replicating a lab-based experiment online. A web-based application was developed and used for replicating an experiment on conformity by Solomon Asch [1951]. The developed application engages participants in an optical illusion test within different groups of social influence. The major finding of the study reveals that conformity increases on trials with higher social influence. In addition, it was also found that when the task becomes more difficult, the subject's conformity increases. These findings were also reported in the original experiment. The results of the study showed that lab-based experiments in cultural evolution studies can be replicated over the web with quantitatively similar results.
APA, Harvard, Vancouver, ISO, and other styles
33

Wu, Jichuan. "Web-based e-mail client for computer science." CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2462.

Full text
Abstract:
The project is a web e-mail application that provides a web page interface for all CSCI faculty, staff and students to handle their e-mail. The application is written in JSP, Java Servlets, JavaScript and custom JSP tag libraries. Regular e-mail capabilities have been enhanced with a feature that allows users to store and manage messages by day (store to daily folders, view in daily folders, and append notes for that day).
APA, Harvard, Vancouver, ISO, and other styles
34

Soini, J. (Jaakko). "Web-sovellusten käytettävyyden arviointi heuristiikkojen avulla." Bachelor's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201612103257.

Full text
Abstract:
Heuristic evaluation is a method used to assess the usability of software. Its goal is to reveal potential usability problems in the software. In heuristic evaluation, experts compare the software or its prototype against generally known principles, i.e. heuristics. This literature review presents well-known heuristics and their application specifically to web applications, what dedicated heuristics exist for evaluating the usability of web applications, what differences and similarities they have compared with traditional heuristics, and how these heuristics have evolved from the traditional ones.
APA, Harvard, Vancouver, ISO, and other styles
35

McLearn, Greg. "Autonomous Cooperating Web Crawlers." Thesis, University of Waterloo, 2002. http://hdl.handle.net/10012/1080.

Full text
Abstract:
A web crawler provides an automated way to discover web events: creation, deletion, or updates of web pages. Competition among web crawlers results in redundant crawling, wasted resources, and less-than-timely discovery of such events. This thesis presents a cooperative sharing crawler algorithm and sharing protocol. Without resorting to altruistic practices, competing (yet cooperative) web crawlers can mutually share discovered web events with one another to maintain a more accurate representation of the web than is currently achieved by traditional polling crawlers. The choice to share or merge is entirely up to an individual crawler: sharing is the act of allowing a crawler M to access another crawler's web-event data (call this crawler S), and merging occurs when crawler M requests web-event data from crawler S. Crawlers can choose to share with competing crawlers if it can help reduce contention between peers for resources associated with the act of crawling. Crawlers can choose to merge from competing peers if it helps them to maintain a more accurate representation of the web at less cost than directly polling web pages. Crawlers can control how often they choose to merge through the use of a parameter ρ, which dictates the percentage of time spent either polling or merging with a peer. Depending on certain conditions, pathological behaviour can arise if polling or merging is the only form of data collection. Simulations of communities of simple cooperating web crawlers successfully show that a combination of polling and merging (0 < ρ < 1) can allow an individual member of the cooperating community a higher degree of accuracy in their representation of the web as compared to a traditional polling crawler. Furthermore, if web crawlers are allowed to evaluate their own performance, they can dynamically switch between periods of polling and merging to still perform better than traditional crawlers. The mutual performance gain increases as more crawlers are added to the community.
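A toy simulation of the poll-versus-merge choice governed by ρ might look like the following; the event model, peer view, and numbers are invented purely to illustrate the parameter, not to reproduce the thesis's simulations.

```python
# Toy poll-vs-merge step: with probability rho the crawler polls a page directly,
# otherwise it merges web-event data from a peer crawler.
import random

def crawl_step(rho: float, own_view: dict, peer_view: dict, live_web: dict) -> None:
    if random.random() < rho:
        url = random.choice(list(live_web))       # poll: observe one page directly
        own_view[url] = live_web[url]
    else:
        own_view.update(peer_view)                # merge: copy the peer's web events

def accuracy(view: dict, live_web: dict) -> float:
    return sum(view.get(u) == v for u, v in live_web.items()) / len(live_web)

random.seed(0)
live_web = {f"page{i}": i for i in range(100)}    # "version" of each page
own, peer = {}, {f"page{i}": i for i in range(0, 100, 2)}   # peer already knows half
for _ in range(50):
    crawl_step(rho=0.5, own_view=own, peer_view=peer, live_web=live_web)
print(f"coverage after 50 steps: {accuracy(own, live_web):.2f}")
```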
APA, Harvard, Vancouver, ISO, and other styles
36

Youn, Hoony C. (Hoony Chong). "Web based client/server computing." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/40571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Shih, Lawrence Kai 1974. "Machine learning on Web documents." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28331.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (leaves 111-115). The Web is a tremendous source of information: so tremendous that it becomes difficult for human beings to select meaningful information without support. We discuss tools that help people deal with web information, by, for example, blocking advertisements, recommending interesting news, and automatically sorting and compiling documents. We adapt and create machine learning algorithms for use with the Web's distinctive structures: large-scale, noisy, varied data with potentially rich, human-oriented features. We adapt two standard classification algorithms, the slow but powerful support vector machine and the fast but inaccurate Naive Bayes, to make them more effective for the Web. The support vector machine, which cannot currently handle the large amount of Web data potentially available, is sped up by "bundling" the classifier inputs to reduce the input size. The Naive Bayes classifier is improved through a series of three techniques aimed at fixing some of the severe, inaccurate assumptions Naive Bayes makes. Classification can also be improved by exploiting the Web's rich, human-oriented structure, including the visual layout of links on a page and the URL of a document. These "tree-shaped features" are placed in a Bayesian mutation model and learning is accomplished with a fast, online learning algorithm for the model. These new methods are applied to a personalized news recommendation tool, "the Daily You." The results of a 176 person user-study of news preferences indicate that the new Web-centric techniques out-perform classifiers that use traditional text algorithms and features. We also show that our methods produce an automated ad-blocker that performs as well as a hand-coded commercial ad-blocker. By Lawrence Kai Shih. Ph.D.
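The "bundling" idea mentioned above reduces the number of training inputs handed to the expensive classifier by combining several same-class examples into one. The sketch below shows one plausible version of that reduction step; the grouping rule and the bag-of-words representation are assumptions, and the thesis's actual bundling procedure may differ.

    import numpy as np

    def bundle(X, y, bundle_size=4, rng=None):
        """Combine groups of same-class feature vectors into single summed vectors,
        shrinking the training set handed to a slow classifier such as an SVM."""
        rng = rng or np.random.default_rng(0)
        Xb, yb = [], []
        for label in np.unique(y):
            idx = rng.permutation(np.where(y == label)[0])
            for start in range(0, len(idx), bundle_size):
                group = idx[start:start + bundle_size]
                Xb.append(X[group].sum(axis=0))   # bag-of-words counts add naturally
                yb.append(label)
        return np.array(Xb), np.array(yb)

    # Toy bag-of-words data: 8 documents, 5 vocabulary terms, 2 classes.
    X = np.random.default_rng(1).integers(0, 3, size=(8, 5))
    y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    Xb, yb = bundle(X, y, bundle_size=2)
    print(Xb.shape, yb)   # 4 bundled examples instead of 8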
APA, Harvard, Vancouver, ISO, and other styles
38

Webber, Matthew J. (Matthew James). "A Stateful Web Augmentation Toolkit." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/61248.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 52-54). This thesis introduces the Stateful Web Augmentation Toolkit (SWAT), a toolkit that gives users control over the presentation and functionality of web content. SWAT extends Chickenfoot, a Firefox browser scripting environment that offers a variety of automation and manipulation capabilities. SWAT allows programmers to identify data records in database-backed web sites. Records are nodes of data corresponding to rows in the database backend. Programmers can append additional functionality to those nodes, and the resulting code can be bundled up and installed by users without technical expertise. SWAT consists of three modules: a Site Profile module that identifies data records, a Tweak module that defines the look and behavior of an interactive widget, and a Storage module that persists the widget state across pages and browser sessions. Default implementations are provided for each module, and these implementations adhere to an API that encompasses all communication between modules. A programmer can extend or replace any module to improve a system built with SWAT. With SWAT, end users can customize sites far beyond where their content providers stopped, and can add functionality that logically connects different data sources, changes how and where data is stored, and redefines how they interact with the web. By Matthew J. Webber. M.Eng.
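The three-module split described above is essentially an interface contract between record identification, widget behaviour, and persistence. A minimal, hypothetical sketch of that contract follows, written in Python for brevity rather than the JavaScript/Chickenfoot environment SWAT actually targets; all class and method names are assumptions, not the toolkit's real API.

    class SiteProfile:
        """Identifies data records (nodes corresponding to database rows) on a page."""
        def records(self, page):
            # A real profile would match DOM patterns; here a page is a list of dicts.
            return [node for node in page if "record_id" in node]

    class Tweak:
        """Defines the look and behaviour of the widget attached to each record."""
        def render(self, record, state):
            starred = "*" if state.get("starred") else " "
            return f"[{starred}] {record['title']}"

    class Storage:
        """Persists widget state across pages and browser sessions."""
        def __init__(self):
            self._state = {}
        def load(self, record_id):
            return self._state.get(record_id, {})
        def save(self, record_id, state):
            self._state[record_id] = state

    page = [{"record_id": 7, "title": "First item"}, {"text": "not a record"}]
    profile, tweak, store = SiteProfile(), Tweak(), Storage()
    store.save(7, {"starred": True})
    for rec in profile.records(page):
        print(tweak.render(rec, store.load(rec["record_id"])))

Because each module only talks to the others through this narrow interface, any one of them can be swapped out, which is the extensibility property the abstract emphasises.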
APA, Harvard, Vancouver, ISO, and other styles
39

Benson, Edward 1983. "A data aware web architecture." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/60156.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Includes bibliographical references (p. 86-89). This thesis describes a client-server toolkit called Sync Kit that demonstrates how client-side database storage can improve the performance of data intensive websites. Sync Kit is designed to make use of the embedded relational database defined in the upcoming HTML5 standard to offload some data storage and processing from a web server onto the web browsers to which it serves content. Sync Kit provides various strategies for synchronizing relational database tables between the browser and the web server, along with a client-side template library so that portions of web applications may be executed client-side. Unlike prior work in this area, Sync Kit persists both templates and data in the browser across web sessions, increasing the number of concurrent connections a server can handle by up to a factor of four versus that of a traditional server-only web stack and a factor of three versus a recent template caching approach. By Edward Benson. S.M.
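One simple synchronization strategy such a toolkit can use is to ship only rows the client has not yet seen, keyed by a monotonically increasing column, so repeat visits transfer a small delta instead of the whole table. The sketch below is a simplified illustration of that idea; the schema, endpoint name, and "max id" strategy are assumptions, and Sync Kit itself defines several strategies against the browser's embedded SQL database.

    # Server-side table (would live in a relational database).
    server_rows = [
        {"id": 1, "headline": "Old story"},
        {"id": 2, "headline": "Newer story"},
        {"id": 3, "headline": "Breaking story"},
    ]

    def sync_endpoint(since_id):
        """Return only rows newer than what the client already holds."""
        return [row for row in server_rows if row["id"] > since_id]

    # Client-side mirror (would live in the browser's embedded database).
    client_rows = [{"id": 1, "headline": "Old story"}]

    def client_sync():
        since = max((row["id"] for row in client_rows), default=0)
        client_rows.extend(sync_endpoint(since))   # fetch the delta, not the full table

    client_sync()
    print([row["id"] for row in client_rows])      # [1, 2, 3]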
APA, Harvard, Vancouver, ISO, and other styles
40

Wilson, Jason A. (Jason Aaron). "A Web browser and editor." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38137.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (leaves 60-61). By Jason A. Wilson. M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
41

Fitzgerald, Michael J. M. Eng Massachusetts Institute of Technology. "CopyStyler : Web design by example." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/46032.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 67-68). This thesis describes the design and implementation of CopyStyler, a tool that enables novice web users to style their own web pages by emulating the style of existing pages on the Web. The tool is implemented as a Firefox browser extension created using Chickenfoot. The tool lays the user's page and the page they wish to emulate side by side, and the user uses a custom selection process to define which page styles to copy and which page elements to change. The tool was evaluated on a variety of web pages and structures, as well as with a variety of users. This thesis also proposes additions to CopyStyler that could enhance its ability in other areas of style copying. By Michael J. Fitzgerald. M.Eng.
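At its core, copying a style means reading the selected CSS properties from an element on the reference page and applying them to the corresponding element on the user's page. The toy function below illustrates that merge; the dictionary representation and property names are assumptions for illustration, whereas the real tool operates on live DOM elements inside Firefox.

    def copy_style(source_style, target_style, properties):
        """Overwrite the chosen CSS properties on the target with the source's values."""
        merged = dict(target_style)
        for prop in properties:
            if prop in source_style:
                merged[prop] = source_style[prop]
        return merged

    source = {"font-family": "Georgia, serif", "color": "#222", "line-height": "1.6"}
    target = {"font-family": "Arial, sans-serif", "color": "#000", "margin": "8px"}

    print(copy_style(source, target, ["font-family", "line-height"]))
    # {'font-family': 'Georgia, serif', 'color': '#000', 'margin': '8px', 'line-height': '1.6'}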
APA, Harvard, Vancouver, ISO, and other styles
42

Xu, Rui. "WebEvo: Taming Web Application Evolution via Semantic Change Detection." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1595242401982817.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ouahid, Hicham. "Data extraction from the Web using XML." Thesis, University of Ottawa (Canada), 2001. http://hdl.handle.net/10393/9260.

Full text
Abstract:
This thesis presents a mechanism based on eXtensible Markup Language (XML) to extract data from HTML-based Web pages and populate relational databases. This task is performed by a system called the XML-based Web Agent (XWA). The data extraction is done in three phases. First, the Web pages are converted to well-formed XML documents to facilitate their processing. Second, the data is extracted from the well-formed XML documents and formatted into valid XML documents. Finally, the valid XML documents are mapped into tables to be stored in a relational database. To extract specific data from the Web, the XWA requires information about the Web pages from which to extract the data, the location of the data within the Web pages, and how the extracted data should be formatted. This information is stored in Web Site Ontologies, which are built using a language called the Web Ontology Description Language (WONDEL). WONDEL is based on XML and the XML Pointer Language. It was defined as part of this work to allow users to specify the data they want and let the XWA work offline to extract it and store it in a database. This has the advantage of saving users the time spent waiting for Web pages to download and of taking advantage of the powerful query mechanisms offered by database management systems.
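The three phases described above (tidy the HTML into well-formed XML, extract the wanted fields, map them into relational tables) can be illustrated end to end on a tiny example. The markup, field names, and table layout below are assumptions chosen for illustration only; they are not taken from the thesis.

    import sqlite3
    import xml.etree.ElementTree as ET

    # Phase 1 output: the page has already been converted to well-formed XML.
    well_formed = """
    <page>
      <item><name>Keyboard</name><price>25</price></item>
      <item><name>Monitor</name><price>180</price></item>
    </page>
    """

    # Phase 2: extract the data of interest into (name, price) records.
    root = ET.fromstring(well_formed)
    records = [(item.findtext("name"), float(item.findtext("price")))
               for item in root.findall("item")]

    # Phase 3: map the extracted records into a relational table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE product (name TEXT, price REAL)")
    conn.executemany("INSERT INTO product VALUES (?, ?)", records)
    print(conn.execute("SELECT * FROM product WHERE price > 100").fetchall())

Once the data sits in a relational table, the "powerful query mechanism" advantage from the abstract is exactly the last line: ordinary SQL over data that originally lived in HTML.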
APA, Harvard, Vancouver, ISO, and other styles
44

Zayour, Iyad. "Information retrieval over the World Wide Web." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq22023.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Van, Rooyen Reinhardt. "A transaction assurance framework for web services." Master's thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/10911.

Full text
Abstract:
Includes bibliographical references (p. 121-124). Trust assurances for customers of online transactions are an important, but not well implemented, concept for the growth of confidence in electronic transactions. In an online world where customers do not personally know the companies they seek to do business with, there is real risk involved in providing an unknown service with personal information and payment details. The risks faced by a customer are compounded when multiple services are involved in a single transaction. This dissertation provides mechanisms that can be used to reduce the risks faced by a client involved in online transactions by giving him/her access to information about the services involved and letting him/her control or prescribe how the transaction uses those services. The dissertation uses electronic transactions legislation to ground a trust assurance protocol and minimize the assumptions that have to be made. By basing the protocol on legislation, no information that is not already required by law is used in the protocol. A trust assurance protocol is presented so that the client can establish which services are involved in a transaction and begin to determine whether or not he/she is willing to conduct business with those services. A trust model that calculates an assurance measure for services is developed so that the client can automatically establish a measure of trust for a service based on the external perceptions of the service and his/her own personal experience. A simulation environment was created and used to monitor the services involved in a transaction, to evaluate the trust assurance protocol and to gain experience with the trust calculation that the client computes. Vocabularies that simplify and standardize descriptions of personal information, business types and the legal structure imposed on Web services offering goods or services online are presented to reduce the ambiguity involved in gathering information from different online sources. The vocabularies also provide a cornerstone of the trust assurance protocol by providing information that is necessary to compute the trust value of a Web service. Results of the trust assurance protocol are evaluated against the qualitative requirements of providing assurances to clients, and confirm that the protocol is feasible to deploy in terms of the overhead placed on a transaction. This dissertation finds that a trust assurance protocol is necessary to provide the client with information that he/she legally has access to, and that the trust model can provide a calculable measure of trust that the client can use to compare Web services.
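The trust model sketched in the abstract blends an externally reported reputation with the client's own experience of a service. The toy calculation below shows one plausible form of such a blend; the weighting scheme, score ranges, and function name are assumptions, and the dissertation defines its own calculation.

    def trust_score(external_ratings, own_outcomes, weight_own=0.6):
        """Blend third-party ratings (0..1) with the client's own transaction
        outcomes (True = satisfactory) into a single assurance measure."""
        external = sum(external_ratings) / len(external_ratings) if external_ratings else 0.5
        if own_outcomes:
            personal = sum(own_outcomes) / len(own_outcomes)
            return weight_own * personal + (1 - weight_own) * external
        return external   # no personal history yet: rely on external perception alone

    # A service with good external ratings but one bad personal experience.
    print(round(trust_score([0.9, 0.8, 0.95], [True, False]), 3))   # about 0.653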
APA, Harvard, Vancouver, ISO, and other styles
46

Walters, Lourens O. "A web browsing workload model for simulation." Master's thesis, University of Cape Town, 2004. http://hdl.handle.net/11427/6369.

Full text
Abstract:
Bibliography: p. 163-167. The simulation of packet switched networks depends on accurate web workload models as input for network models. We derived a workload model for traffic generated by an individual browsing the web by studying packet traces of web traffic collected on a campus network.
APA, Harvard, Vancouver, ISO, and other styles
47

Steinberg, David A. "Shots: A High-Performance Web Templating Language." Kent State University Honors College / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ksuhonors1387918200.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Farrar, Scott O. "An ontology for linguistics on the Semantic Web." Diss., The University of Arizona, 2003. http://hdl.handle.net/10150/289879.

Full text
Abstract:
The current research presents an ontology for linguistics useful for an implementation on the Semantic Web. By adhering to this model, it is shown that data of the kind routinely collected by field linguists may be represented so as to facilitate automatic analysis and semantic search. The literature concerning typological databases, knowledge engineering, and the Semantic Web is reviewed. It is argued that the time is right for the integration of these three areas of research. Linguistic knowledge is discussed in the overall context of common-sense knowledge representation. A three-layer approach to meaning is assumed, one that includes conceptual, semantic, and linguistic levels of knowledge. In particular, the level of semantics is shown to be crucial for a notional account of grammatical categories such as tense, aspect, and case. The level of semantics is viewed as an encoding of common-sense reality. To develop the ontology, an upper model based on the Suggested Upper Merged Ontology (SUMO) is adopted, though elements from other ontologies are utilized as well. A brief comparison of available upper models is presented. It is argued that any ontology for linguistics should provide an account of at least (1) linguistic expressions, (2) mental linguistic units, (3) linguistic categories, and (4) discrete semantic units. The concepts and relations concerning these four domains are motivated as part of the ontology. Finally, an implementation for the Semantic Web is given by discussing the various data constructs necessary for markup (interlinear text, lexicons, paradigms, grammatical descriptions). It is argued that a characterization of the data constructs should not be included in the general ontology, but should be left up to the individual data provider to implement in XML Schema. A search scenario for linguistic data is discussed. It is shown that an ontology for linguistics provides the machinery for pure semantic search, that is, an advanced search framework whereby the user may use linguistic concepts, not just simple strings, as the search query.
APA, Harvard, Vancouver, ISO, and other styles
49

Kamarudin, Muhammad Hilmi. "An intrusion detection scheme for identifying known and unknown web attacks (I-WEB)." Thesis, University of Warwick, 2018. http://wrap.warwick.ac.uk/103911/.

Full text
Abstract:
A large number of utilised features can increase the system's computational effort when processing large volumes of network traffic. In reality, it is pointless to use all features, considering that redundant or irrelevant features would deteriorate the detection performance. Meanwhile, statistical approaches are extensively practised in the Anomaly Based Detection System (ABDS) environment. These statistical techniques do not require any prior knowledge of attack traffic; this advantage has therefore attracted many researchers to the method. Nevertheless, the performance is still unsatisfactory since it produces high false detection rates. In recent years, the demand for data mining (DM) techniques in the field of anomaly detection has significantly increased. Even though this approach can distinguish normal and attack behaviour effectively, the performance (true positive, true negative, false positive and false negative) is still not achieving the expected improvement rate. Moreover, the need to re-initiate the whole learning procedure, despite the attack traffic having previously been detected, seems to contribute to the poor system performance. This study aims to improve the detection of normal and abnormal traffic by determining the prominent features and recognising the outlier data points more precisely. To achieve this objective, the study proposes a novel Intrusion Detection Scheme for Identifying Known and Unknown Web Attacks (I-WEB) which combines various strategies and methods. The proposed I-WEB is divided into three phases, namely pre-processing, anomaly detection and post-processing. In the pre-processing phase, the strengths of both filter and wrapper procedures are combined to select the optimal set of features. In the filter, Correlation-based Feature Selection (CFS) is proposed, whereas the Random Forest (RF) classifier is chosen to evaluate feature subsets in the wrapper procedure. In the anomaly detection phase, statistical analysis is used to formulate a normal profile and calculate a traffic normality score for every traffic instance. The threshold measurement is defined using Euclidean Distance (ED) alongside the Chebyshev Inequality Theorem (CIT), with the aim of improving the attack recognition rate by eliminating the set of outlier data points accurately. To improve attack identification and reduce the misclassification rates of traffic first detected by statistical analysis, ensemble learning, particularly a boosting classifier, is proposed. This method uses LogitBoost as the meta-classifier and RF as the base-classifier. Furthermore, verified attack traffic detected by ensemble learning is extracted and computed as signatures before being stored in the signature library for future identification. This helps to reduce detection time, since similar traffic behaviour will not have to be re-processed in the future.
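The Chebyshev-based thresholding mentioned above can be made concrete: for any distribution, at most 1/k² of the traffic normality scores lie more than k standard deviations from the mean, so a score beyond k·σ is treated as an outlier. The sketch below illustrates that rule under assumed scores and an assumed k; the scheme's actual distance measure and tuning are described in the thesis.

    import statistics

    def chebyshev_outliers(scores, k=2.0):
        """Flag scores more than k standard deviations from the mean; by Chebyshev's
        inequality at most 1/k**2 of any distribution can fall out there."""
        mu = statistics.mean(scores)
        sigma = statistics.pstdev(scores)
        threshold = k * sigma
        return [s for s in scores if abs(s - mu) > threshold]

    normality_scores = [0.98, 1.02, 1.00, 0.97, 1.01, 0.99, 1.03, 0.35]  # one anomaly
    print(chebyshev_outliers(normality_scores, k=2.0))   # [0.35]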
APA, Harvard, Vancouver, ISO, and other styles
50

Wijayarathne, Dayal Buddika. "Shallow Groundwater Modeling of the Historical Irwin Wet Prairie in the Oak Openings of Northwest Ohio." Bowling Green State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1435749359.

Full text
APA, Harvard, Vancouver, ISO, and other styles
