
Dissertations / Theses on the topic 'Download data'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 22 dissertations / theses for your research on the topic 'Download data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Yingyong. "Maximizing data download capabilities for future constellation space missions." College Park, Md. : University of Maryland, 2004. http://hdl.handle.net/1903/1730.

Full text
Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2004.
Thesis research directed by: Dept. of Electrical and Computer Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
2

Ieong, Sze-Chung Ricci. "Dispute resolution against copyright infringement through internet download?" City University of Hong Kong, 2007. http://libweb.cityu.edu.hk/cgi-bin/ezdb/dissert.pl?ma-slw-b21844173a.pdf.

Full text
Abstract:
Thesis (M.A.)--City University of Hong Kong, 2007.
"Master of Arts in arbitration and dispute resolution dissertation, City University of Hong Kong" Title from PDF t.p. (viewed on May 22, 2007) Includes bibliographical references.
3

Chowdhury, H. (Helal). "Data download on the move in visible light communications: design and analysis." Doctoral thesis, Oulun yliopisto, 2016. http://urn.fi/urn:isbn:9789526213620.

Full text
Abstract:
In visible light communication (VLC), light emitting diodes (LEDs) are used as transmitters, the air is the transmission medium, and photodiodes are used as receivers. This is often referred to as light fidelity (Li-Fi). In this thesis, we provide a methodology for evaluating the performance of VLC hotspot networks in data-download-on-the-move scenarios by using throughput-distance relationship models. In this context, we first study the properties of optical transceiver elements, noise sources, the characterization and modelling of artificial light interference, and different link topologies, and then introduce the throughput-distance relationship model. Secondly, an analytical throughput-distance relationship is developed for evaluating the performance of VLC hotspot networks in an indoor environment under both day and night conditions. Simulation results reveal that background noise has a significant impact on the performance of VLC hotspots. As expected, in both indoor and outdoor environments the VLC hotspot performs better at night than during the day. The performance of VLC hotspot networks is also quantified in terms of the received file size at different bit error rate requirements and velocities of the mobile user. Thirdly, we study the performance of a hybrid (radio-optical) WLAN-VLC hotspot and compare it with the stand-alone VLC-only and WLAN-only hotspot cases. Here, we also consider data-download-on-the-move scenarios in an indoor environment for single-user as well as multi-user cases. In this hybrid WLAN-VLC hotspot, both the WLAN and the VLC are characterized by their throughput and communication range. Simulations have been performed to evaluate the performance of such a network for data downloading on the move, taking into account performance metrics such as file size, average connectivity and system throughput. Simulation results reveal that the considered hybrid WLAN-VLC always performs better than a stand-alone VLC-only or WLAN-only hotspot in both single- and multi-user cases. Finally, this thesis analyses the feasibility and potential benefits of using hybrid radio-optical wireless systems. In this respect, cooperative communication using optical relays is also introduced in order to increase the coverage and the energy efficiency of battery-operated devices. The potential benefits are identified as service connectivity and energy efficiency of battery-operated devices in an indoor environment. Simulation results reveal that user connectivity and energy efficiency depend on user density, the coverage range ratio between single-hop and multi-hop, relay probabilities and the mobility of the user.
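To make the "received file size" metric above concrete, the short Python sketch below estimates how much data a mobile user could download while passing a single VLC hotspot, by integrating an assumed throughput-distance curve along the user's path. The logistic curve, its parameters and the hotspot geometry are illustrative assumptions, not values taken from the thesis.

    import numpy as np

    def throughput_mbps(distance_m, r_max=100.0, d_half=2.0, steepness=2.0):
        # Assumed logistic throughput-distance model (illustrative only):
        # peak rate r_max near the hotspot, falling off around d_half metres.
        return r_max / (1.0 + np.exp(steepness * (distance_m - d_half)))

    def received_file_size_mb(velocity_mps, lateral_offset_m=1.0,
                              pass_length_m=20.0, dt=0.01):
        # The user moves along a straight line past the hotspot; integrating the
        # instantaneous throughput over the pass gives the received file size.
        t_end = pass_length_m / velocity_mps
        t = np.arange(0.0, t_end, dt)
        x = -pass_length_m / 2.0 + velocity_mps * t   # position along the path
        d = np.sqrt(x**2 + lateral_offset_m**2)       # distance to the hotspot
        rate = throughput_mbps(d)                     # Mbit/s at each instant
        return np.trapz(rate, t) / 8.0                # megabytes received

    for v in (1.0, 2.0, 5.0):   # walking to cycling speeds, m/s
        print(f"{v:.0f} m/s -> {received_file_size_mb(v):.1f} MB")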
4

Opitz, Andrea. "Stereo plastic calibration, simulation and data analysis." [S.l.] : [s.n.], 2007. http://www.zb.unibe.ch/download/eldiss/07opitz_a.pdf.

Full text
5

Gerber, Manuel. "Predictive site detection and reconstruction: a data-driven approach to the detection, analysis, reconstruction and excavation of ancient Near Eastern monumental architecture." Bern : Selbstverl, 2003. http://www.zb.unibe.ch/download/eldiss/03gerber_m.pdf.

Full text
6

Hopkins, Ashley R. "Privacy Within Photo-Sharing and Gaming Applications: Motivation and Opportunity and the Decision to Download." Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1556821782704244.

Full text
7

Canali, Davide. "Plusieurs axes d'analyse de sites web compromis et malicieux." Thesis, Paris, ENST, 2014. http://www.theses.fr/2014ENST0009/document.

Full text
Abstract:
The incredible growth of the World Wide Web has allowed society to create new jobs and marketplaces, as well as new ways of sharing information and money. Unfortunately, however, the web also attracts miscreants who see it as a means of making money by abusing services and other people's property. In this dissertation, we perform a multidimensional analysis of attacks involving malicious or compromised websites, observing that, while web attacks can be very complex in nature, they generally involve four main actors. These are the attackers, the vulnerable websites hosted on the premises of hosting providers, the web users who end up being victims of attacks, and the security companies who scan the Internet trying to block malicious or compromised websites. In particular, we first analyze web attacks from a hosting provider's point of view, showing that, while simple and free security measures should make it possible to detect simple signs of compromise on customers' websites, most hosting providers fail to do so. Second, we switch our point of view to the attackers, studying their modus operandi and their goals in a distributed experiment involving the collection of attacks performed against hundreds of vulnerable websites. Third, we observe the behavior of victims of web attacks, based on the analysis of their browsing habits. This allows us to understand whether it would be feasible to build risk profiles for web users, similarly to what insurance companies do. Finally, we adopt the point of view of security companies and focus on finding an efficient solution for detecting web attacks that spread on compromised websites and infect thousands of web users every day.
8

Henneberger, Sabine. "Entwicklung einer Analysemethode für Institutional Repositories unter Verwendung von Nutzungsdaten." Doctoral thesis, Humboldt-Universität zu Berlin, Philosophische Fakultät I, 2011. http://dx.doi.org/10.18452/16399.

Full text
Abstract:
With the spread of internet usage over the past decades, access characteristics of electronic scientific publications, especially the number of document downloads, are of increasing interest to the authors, publishers, technical providers and users of such publications. These download data of publications are usually obtained from the protocols of the IT systems of the provider. A data set is then created by filtering all accesses and subsequent summarizing over a certain time unit. Download data are the subject of scientific investigations, in which the concept of the Citation Impact is applied to the rate of use of a publication and the so-called Download Impact is formed. Special attention is paid to the relation between Citation Impact and Download Impact. In the case of Open Access publications, two types of access need to be distinguished. Human access and machine access are both captured and a reliable distinction is not possible yet. As a result, the data obtained for single publications are unreliable and subject to strong fluctuations. Nevertheless, they contain valuable information that can be made useful with the help of mathematical statistics. Analyzed with nonparametric methods, download data give information about the visibility of electronic publications on the Internet. These methods form the core of NoRA (Non-parametric Repository Analysis). With the help of NoRA, the operators of Open Access Repositories are able to analyze the download data of their electronic publications, to identify and correct deficiencies of visibility and to increase the quality of their online platform. The analytical method NoRA was successfully applied to data from Institutional Repositories of four universities. In each case, groups of publications were identified that differed significantly in their usage. Similarities in the results reveal factors that influence the usage data, which have not been taken into account previously. The presented results imply further applications of NoRA but also raise doubts about the value of download data of single publications.
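As a minimal sketch of the kind of nonparametric comparison an approach like NoRA rests on, the Python snippet below groups download counts of publications (here by a hypothetical collection attribute) and tests whether the groups differ significantly, without assuming any distribution. The Kruskal-Wallis test and the numbers are illustrative assumptions; the thesis does not spell out its exact test battery here.

    from scipy.stats import kruskal

    # Hypothetical monthly download counts for publications in three repository
    # collections; real NoRA input would come from filtered server logs.
    downloads = {
        "articles":      [120, 95, 210, 80, 160, 75],
        "dissertations": [40, 55, 35, 60, 48, 52],
        "proceedings":   [15, 22, 9, 30, 18, 12],
    }

    # Nonparametric test: do the usage distributions of the groups differ?
    statistic, p_value = kruskal(*downloads.values())
    print(f"H = {statistic:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Groups differ significantly in usage -> investigate visibility deficits.")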
9

Jílek, Radim. "Služba pro ověření spolehlivosti a pečlivosti českých advokátů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363772.

Full text
Abstract:
This thesis deals with the design and implementation of an Internet service that makes it possible to objectively assess and verify the reliability and diligence of Czech lawyers based on publicly available data from several courts. The aim of the thesis is to create this service and put it into operation. The results of the work are the programs that carry out the partial steps needed to realize this goal.
10

Schäppi, Eggler Rodolpha. "Laufbahnberatung up to date - Ein modernes Internet-Tool ergänzt traditionelle bewährte Methoden /." Zürich, 2004. http://www.zhaw.ch/fileadmin/user_upload/psychologie/Downloads/Bibliothek/Arbeiten/D/d1829.pdf.

Full text
11

Skogsberg, Peter. "Quantitative indicators of a successful mobile application." Thesis, KTH, Radio Systems Laboratory (RS Lab), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-123976.

Full text
Abstract:
The smartphone industry has grown immensely in recent years. The two leading platforms, Google Android and Apple iOS, each feature marketplaces offering hundreds of thousands of software applications, or apps. The vast selection has facilitated a maturing industry, with new business and revenue models emerging. As an app developer, basic statistics and data for one's apps are available via the marketplace, but also via third-party data sources. This report examines how mobile software is evaluated and rated quantitatively by both end users and developers, and which metrics are relevant in this context. A selection of freely available third-party data sources and app monitoring tools is discussed, followed by an introduction of several relevant statistical methods and data mining techniques. The main objective of this thesis project is to investigate whether findings from app statistics can provide an understanding of how to design more successful apps that attract more downloads and/or more revenue. After the theoretical background, a practical implementation is discussed, in the form of an in-house application statistics web platform. This was developed together with the app developer company The Mobile Life, who also provided access to app data for 16 of their published iOS and Android apps. The implementation utilizes automated download and import from online data sources, and provides a web based graphical user interface to display this data using tables and charts. Using mathematical software, a number of statistical methods have been applied to the collected dataset. Analysis findings include different categories (clusters) of apps, the existence of correlations between metrics such as an app’s market ranking and the number of downloads, a long-tailed distribution of keywords used in app reviews, regression analysis models for the distribution of downloads, and an experimental application of Pareto’s 80-20 rule which was found relevant to the gathered dataset. Recommendations to the app company include embedding session tracking libraries such as Google Analytics into future apps. This would allow collection of in-depth metrics such as session length and user retention, which would enable more interesting pattern discovery.
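To illustrate two of the analyses mentioned above, here is a hedged Python sketch of a rank-download correlation and of a check of Pareto's 80-20 rule on a download distribution. The figures are invented placeholders, not data from the 16 apps studied in the thesis.

    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical per-app data: store ranking (1 = best) and download counts.
    ranking   = np.array([1, 3, 2, 8, 5, 13, 21, 34, 55, 89])
    downloads = np.array([50000, 31000, 42000, 9000, 15000, 4000,
                          2500, 1200, 600, 300])

    # Correlation between market ranking and downloads (rank-based, so robust
    # to the heavy-tailed download distribution).
    rho, p = spearmanr(ranking, downloads)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

    # Pareto 80-20 check: what share of downloads comes from the top 20% of apps?
    sorted_dl = np.sort(downloads)[::-1]
    top20 = sorted_dl[: max(1, len(sorted_dl) // 5)]
    print(f"Top 20% of apps account for {top20.sum() / sorted_dl.sum():.0%} of downloads")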
12

Srinivaasan, Gayathri. "Malicious Entity Categorization using Graph modelling." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-202980.

Full text
Abstract:
Today, malware authors not only write malicious software but also employ obfuscation, polymorphism, packing and endless such evasive techniques to escape detection by Anti-Virus Products (AVP). Besides the individual behavior of malware, the relations that exist among them play an important role for improving malware detection. This work aims to enable malware analysts at F-Secure Labs to explore various such relationships between malicious URLs and file samples in addition to their individual behavior and activity. The current detection methods at F-Secure Labs analyze unknown URLs and file samples independently without taking into account the correlations that might exist between them. Such traditional classification methods perform well but are not efficient at identifying complex multi-stage malware that hide their activity. The interactions between malware may include any type of network activity, dropping, downloading, etc. For instance, an unknown downloader that connects to a malicious website which in turn drops a malicious payload, should indeed be blacklisted. Such analysis can help block the malware infection at its source and also comprehend the whole infection chain. The outcome of this proof-of-concept study is a system that detects new malware using graph modelling to infer their relationship to known malware as part of the malware classification services at F-Secure.
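A toy illustration of the graph idea described above: known-malicious URLs and file samples are nodes, observed interactions (connections, drops, downloads) are edges, and an unknown sample inherits suspicion from the malicious nodes it is linked to. The library, edge labels and scoring rule are assumptions made for illustration, not F-Secure's actual classifier.

    import networkx as nx

    g = nx.DiGraph()
    # Observed relations (hypothetical): downloader -> website -> dropped payload.
    g.add_edge("unknown_downloader.exe", "http://evil.example/landing", relation="connects")
    g.add_edge("http://evil.example/landing", "payload.dll", relation="drops")
    g.add_edge("payload.dll", "http://c2.example/beacon", relation="connects")

    known_malicious = {"payload.dll", "http://c2.example/beacon"}

    def suspicion(node):
        # Naive score: fraction of the node's graph neighbourhood (both directions)
        # that is already known to be malicious.
        related = set(nx.descendants(g, node)) | set(nx.ancestors(g, node))
        return len(related & known_malicious) / len(related) if related else 0.0

    for node in g.nodes:
        if node not in known_malicious:
            print(f"{node}: suspicion {suspicion(node):.2f}")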
13

von, Wenckstern Michael. "Web applications using the Google Web Toolkit." Master's thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2013. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-115009.

Full text
Abstract:
This diploma thesis describes how to create or convert traditional Java programs to desktop-like rich internet applications with the Google Web Toolkit. The Google Web Toolkit is an open source development environment which translates Java code to browser- and device-independent HTML and JavaScript. Most parts of the GWT framework, including the Java-to-JavaScript compiler, as well as important website security issues, will be introduced. The famous Agricola board game will be implemented in the Model-View-Presenter pattern to show that complex user interfaces can be created with the Google Web Toolkit. The Google Web Toolkit framework will be compared with the JavaServer Faces framework to find out which toolkit is the right one for the next web project.
14

Lin, Szu-Hung. "Statistical Multi-source Data Download Strategies for Mobile P2P Streaming." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/88033701596459256251.

Full text
Abstract:
Master's thesis
National Pingtung University of Science and Technology
Department of Management Information Systems
101
Provisioning of video streaming services in mobile P2P networks is challenged by many transmission factors, such as a high data loss rate, burst loss, unstable connectivity, and link and peer host heterogeneity. This thesis addresses these transport issues by proposing two novel schemes, statistical download (SD) and blockless interleaving (BI), which work cooperatively with UEP-FEC channel coding. SD improves the success rate of data download by exploiting multi-source download according to an RTT-based probability model. BI applies different interleaving policies to the data downloaded from the respective source peers, according to the associated channel loss process. Simulation results show that even when the system presents high volatility and heterogeneity, as its size grows large SD can improve the on-time data rate with moderate traffic overhead. Meanwhile, BI can improve the error recovery performance of UEP-FEC by eliminating the packet loss correlation. As a result, video continuity is significantly improved.
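As a rough illustration of RTT-based probabilistic source selection of the kind SD performs, the sketch below weights candidate source peers by the inverse of their measured RTT and draws the source of each data block from that distribution. The weighting function and peer values are assumptions; the thesis defines its own probability model.

    import random

    # Hypothetical RTT measurements (ms) to candidate source peers.
    peer_rtt = {"peerA": 40.0, "peerB": 120.0, "peerC": 250.0}

    # Inverse-RTT weights: faster peers are chosen proportionally more often,
    # but slow peers keep a nonzero share, which spreads the load.
    weights = {p: 1.0 / rtt for p, rtt in peer_rtt.items()}
    total = sum(weights.values())
    probs = {p: w / total for p, w in weights.items()}

    def pick_source():
        return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    # Assign 1000 blocks to sources and show the resulting share per peer.
    counts = {p: 0 for p in probs}
    for _ in range(1000):
        counts[pick_source()] += 1
    print(probs)
    print(counts)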
15

Lin, Hau-Chin. "A Multiple Parallel Download Scheme with Boundary Bandwidth Consideration in Data Grids." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/71250195476154897462.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Computer Science and Information Engineering
94
Advances in computer technology have improved the quality of our daily life. However, the computing speed and storage capacity of individual computers still cannot satisfy users' requirements. The concept of the Data Grid, which consists of many computers, was introduced for this reason and makes the available computing and storage capacity virtually unlimited. A characteristic of the Data Grid is that a file is distributed to many computers in order to decrease users' download time. Thus, the parallel download method, which allows a user to download different parts of a file from various computers simultaneously, is adopted to reduce the download time further. As far as we know, many single parallel download methods have been proposed to speed up downloads, but the case of multiple parallel download jobs in one system has not been considered. A single parallel download can decrease its own download time, but, like a robber, it increases the download time of other users compared with the traditional download manner. If all jobs in the grid system use parallel download, resources are wasted. In this thesis, we propose a Boundary Bandwidth parallel download scheme (BB-parallel download) that limits the upload bandwidth used at each server and avoids bandwidth waste in the entire system. Furthermore, it ensures that each parallel download job does not affect the others and achieves the best download time in an environment with multiple parallel download jobs. We test the effects of file size, server count, and block size on the parallel download time. The results show that the proposed BB-parallel download scheme outperforms the static and dynamic parallel download schemes.
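The following sketch mimics the boundary-bandwidth idea in a simplified form: a download job may draw on each server only up to a fixed cap, so one parallel job cannot monopolise a server's upload bandwidth. The cap, the bandwidth figures and the proportional allocation rule are illustrative assumptions, not the scheme's actual formulas.

    def allocate_blocks(total_blocks, server_bandwidth_mbps, boundary_mbps):
        """Split a file's blocks across servers in proportion to the bandwidth
        each server may devote to this job, capped at the boundary bandwidth."""
        usable = {s: min(bw, boundary_mbps) for s, bw in server_bandwidth_mbps.items()}
        capacity = sum(usable.values())
        shares = {s: int(round(total_blocks * bw / capacity)) for s, bw in usable.items()}
        # Fix rounding so the shares add up to the requested block count.
        drift = total_blocks - sum(shares.values())
        busiest = max(shares, key=shares.get)
        shares[busiest] += drift
        return shares

    # Hypothetical replica servers and their free upload bandwidth (Mbit/s).
    servers = {"replica1": 80.0, "replica2": 20.0, "replica3": 50.0}
    print(allocate_blocks(total_blocks=120, server_bandwidth_mbps=servers,
                          boundary_mbps=40.0))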
16

Hsi, Shih-Chun. "An Efficient and Bandwidth Sensitive Parallel Download Scheme in Data Grids." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/bu7s9d.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Computer Science and Information Engineering
95
Modern scientific applications such as astrophysics, astronomy, aerography, and biology require a large amount of storage space because of their large-scale datasets. A Data Grid collects distributed storage resources, such as hard disk space, across heterogeneous networks to meet such requirements. In a data grid environment, a data replication service that copies replicas to appropriate storage systems increases the reliability of data access. By means of these replicas, parallel download creates multiple connections from the client side to the replica servers to improve the performance of the data transfer. When a large dataset is being transferred, the dynamic network status and the differing states of the replica servers make the transfer time hard to predict. If the available bandwidth of some replica servers drops during the download, the completion time increases. Therefore, to adapt to bandwidth variation and make data transfer more efficient, a parallel download scheme called EA (Efficient and Adaptive) parallel download is proposed in this thesis. The scheme re-evaluates all of the replica servers during the download and replaces decaying selected servers with better backup servers. According to our experiments in the Unigrid environment, EA parallel download decreases the completion time by 1.63% to 13.45% in the natural Unigrid environment and by 6.28% to 30.56% in a choreographed Unigrid environment when compared to the Recursive Co-Allocation scheme. This shows that the proposed scheme adapts to the dynamic environment well and decreases the total download time effectively.
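A compact sketch of the re-evaluation loop that the EA scheme describes: while blocks remain, the client periodically re-measures the replica servers and swaps a decaying selected server for the best backup. The probe function, threshold and data structures are placeholders; the thesis specifies its own evaluation criteria.

    import random

    def measure_bandwidth(server):
        # Placeholder probe; a real client would time a small transfer or use
        # monitoring data to estimate each server's current bandwidth.
        return random.uniform(5.0, 100.0)

    def ea_download(selected, backups, blocks, decay_threshold=20.0):
        remaining = list(blocks)
        while remaining:
            # Re-evaluate all servers before handing out the next batch of blocks.
            rates = {s: measure_bandwidth(s) for s in selected + backups}
            for i, s in enumerate(selected):
                if rates[s] < decay_threshold and backups:
                    best_backup = max(backups, key=rates.get)
                    if rates[best_backup] > rates[s]:
                        backups.remove(best_backup)
                        backups.append(s)
                        selected[i] = best_backup   # swap in the better server
            batch, remaining = remaining[:len(selected)], remaining[len(selected):]
            for server, block in zip(selected, batch):
                pass  # fetch `block` from `server` here
        return selected

    print(ea_download(["r1", "r2"], ["r3", "r4"], blocks=range(40)))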
17

Nachappa, Thimmiah Gudiyangada. "Accessing meteorological data in INSPIRE." Master's thesis, 2009. http://hdl.handle.net/10362/8255.

Full text
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
In the information age, information is of vital importance to the economic and social development of a country. Meteorological data is multidimensional, continually evolving, and highly spatial and temporal in nature. It is of great importance to a wide range of stakeholders, including national agencies, private weather services, defense, transportation, aviation, national infrastructures, financial institutions and the general public. Members of the WMO (World Meteorological Organization) hold vast amounts of data. However, this data is stored in many different formats based on various conceptual models (e.g. BUFR, GRIB, NetCDF, HDF). INSPIRE is a European Union initiative to create interoperability between spatial datasets among various communities. The main goal of this project is to suggest the most appropriate INSPIRE Download Service for accessing meteorological data. This project uses BUFR data and tries to access it through the Climate Science Modeling Language (CSML), which is a data model and software framework for accessing meteorological data, and to retrieve it through standard geospatial web services. Based on the testing, a suitable INSPIRE Download Service will be suggested. This helps to bridge the gaps between the geospatial and meteorological communities and policy makers.
18

Wu, Yun-daw. "E-Book Client Embedded Development: Customized Download Service and Data Storage Management Design." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/25874908612596993635.

Full text
Abstract:
Master's thesis
National Central University
Graduate Institute of Computer Science and Information Engineering
98
Recently, there has been much discussion of transforming traditional books into e-books. People are interested in e-books because hundreds of them can be stored in a small electronic device, from smartphones to tablet devices with a 9.7-inch screen. We can carry such a device around the world and read in environments where reading paper books would be difficult. According to the report [13], the weight of an e-book reader is generally no more than 2 pounds. At the same time, computers and the Internet have become more and more popular, so we can obtain and deliver e-book files conveniently. We can connect to the Internet through 3G or Wi-Fi to browse an e-town website and buy e-books on it, which saves the time spent buying books in a bookstore. This thesis discusses the implementation of an e-book client on a smartphone, especially the customized download service and the data storage management design. The first problem is how to use system memory efficiently; we have to balance memory usage against execution speed. The second problem is that the whole application is blocked while it is downloading e-books. Our solution to the first problem is to put data that the e-book client does not currently need into a database file and retrieve it when the client needs it. The solution to the second problem is to take advantage of an Android Service running in another process, so that the e-book client is not blocked by download jobs. This thesis also proposes a data storage management design for future extension, which can determine whether an application may access the data belonging to our e-book client.
19

Ho, Guan-Lin. "An Empirical Investigation of the Factors that Influence Individuals to Download Medical Data from My Health Bank." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/64nxgt.

Full text
Abstract:
Master's thesis
I-Shou University
Department of Healthcare Administration
105
Background and purpose: My Health Bank, issued by the Ministry of Health and Welfare of Taiwan, aims to help people understand medical information more clearly and to let them obtain their medical records and medication knowledge conveniently. However, the system has rarely been applied for and its usage has remained low. Why people do not want to apply for My Health Bank is therefore an issue that should be understood and resolved. If people encounter difficulties or problems, it is necessary to help them overcome these barriers in order to increase the application rate of My Health Bank. The aim of this study is to understand the influencing factors and enable people to make use of My Health Bank. Materials and methods: Based on Pender's health promotion model, the research model includes seven factors: perceived benefits, perceived barriers, self-efficacy, perceived health status, health value, cues to action, and social influence. The study used a survey methodology; questionnaires were used to collect responses from individuals over 20 years old who visited a large regional hospital in southern Taiwan. Results and conclusion: In total, 258 questionnaires were distributed and 221 responses were collected, a response rate of 86%. Discriminant analysis showed that perceived barriers, perceived self-efficacy, and perceived health status significantly predict the use of My Health Bank. Based on the results, it is inferred that individuals may not apply for My Health Bank because of the requirement for card readers, the household number from the household register, and computer hardware and software. On the other hand, those who have applied for My Health Bank are in better health. This study suggests that the government should lower the barriers to applying for My Health Bank. Further, the "opt-out" approach proposed for Australia's e-health system could also be adopted to reduce these obstacles. Finally, increasing the number of application counters for My Health Bank in hospitals is also suggested.
20

Saade, Gabriel. "Mobile data in handcuffs : how limited mobile data affect people’s behavior on mobile data usage and megabyte allocation in different locations." Master's thesis, 2020. http://hdl.handle.net/10400.14/31273.

Full text
Abstract:
In the era of mobile data, there have been many changes in human behavior around data consumption. Having abundant or "unlimited" data is becoming more and more common. However, what would happen to people's behavior if they were placed under extreme data limitations? This study shows the behavioral outcomes of the survey participants when exposed to limitations on data consumption. Some of the applications most often chosen as important when travelling abroad and when staying home were mobility applications, communication, social media, and restaurant search downloads. There are many different types of research on mobile data, but very few highlight the importance of behavioral change and data prioritization. Under the umbrella of mobile data, many different businesses are being disrupted by the mass interconnectivity of people and differentiated networks. Using real-life data, this study shows that people do prioritize their applications when exposed to certain data limitations in different scenarios. The outcomes of the survey show that businesses could be affected by mobile data and by the information that participants pull through it. Exposure to limited data further increased this disruption, as people kept only the applications they considered necessary, most of which were related to specific industries such as tourism and gastronomy. The study shows that people in general prefer to allocate data to downloading information (a mean of 70.2 megabytes) rather than uploading it (a mean of 29.8 megabytes), with a certain pattern in application prioritization.
21

Chen, Yu-Fen. "Vehicular Proximity Service (V-ProSe) of Sharing the Downloaded Geo-based Landscape Data Using the Credit-based Cluster Scheme for Vehicular Social Network (VSN)." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/n98b28.

Full text
22

Cameron-Pesant, Sarah. "La webométrie en sciences sociales et humaines : analyse des données d’usage de la plateforme Érudit." Thèse, 2016. http://hdl.handle.net/1866/19549.

Full text
Abstract:
This study explores the usage of open access (OA) and delayed OA journals in the social sciences and humanities hosted by the journal platform Érudit. Relying on Érudit’s download data, the goals of the study are: 1) to describe the usage of scholarly articles, 2) to examine download patterns of national and international users, and 3) to analyze the effect of OA policies on journal download rates. The study is based on an analysis of 39,437,659 downloads, which were extracted from 999,367,190 HTTP requests stored in Érudit’s log files between 2010 and 2015. The results show that the majority of users came from Quebec, France and other French-speaking countries, and that most users access articles through Google. Download patterns varied between countries: although articles were most frequently accessed during working hours, US users were more active in the evening, at night and during weekends than Canadian and French users. The study also demonstrates a clear OA advantage, as freely available articles were downloaded more frequently than delayed OA articles affected by an embargo, and downloads per article increased substantially after embargos ended. This effect was less pronounced for Canadian users, who often have access to Érudit journals via institutional subscriptions and are thus not affected by the embargo periods. The results show the positive effect of OA on knowledge dissemination in Canada as well as internationally, and emphasize the importance of national journals in the social sciences and humanities.
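For a concrete flavour of how download counts are derived from web server logs in studies like this one, here is a minimal Python sketch that filters successful PDF requests out of combined-format log lines, discards known crawlers, and counts downloads per article path. The log format, URL pattern and robot list are assumptions on my part, not Érudit's actual processing pipeline.

    import re
    from collections import Counter

    # Simplified Apache "combined" log lines (hypothetical examples).
    log_lines = [
        '1.2.3.4 - - [12/Mar/2014:10:01:32 +0000] "GET /revue/ttr/2012/v25/n1/ar01.pdf HTTP/1.1" 200 84321 "-" "Mozilla/5.0"',
        '5.6.7.8 - - [12/Mar/2014:10:02:10 +0000] "GET /robots.txt HTTP/1.1" 200 150 "-" "Googlebot/2.1"',
        '9.8.7.6 - - [12/Mar/2014:10:05:44 +0000] "GET /revue/ttr/2012/v25/n1/ar01.pdf HTTP/1.1" 200 84321 "-" "Mozilla/5.0"',
    ]

    pdf_request = re.compile(r'"GET (?P<path>\S+\.pdf) HTTP/[\d.]+" 200 ')
    robots = ("Googlebot", "bingbot", "Baiduspider")

    downloads = Counter()
    for line in log_lines:
        if any(bot in line for bot in robots):
            continue                       # drop known crawler traffic
        match = pdf_request.search(line)
        if match:
            downloads[match.group("path")] += 1

    print(downloads.most_common())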
