
Dissertations / Theses on the topic 'Multiple Clouds'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Multiple Clouds.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Xiaojun. "Multiple Scattering from Bubble Clouds." Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_theses/36.

Full text
Abstract:
Multiple scattering effects from bubble clouds are investigated in this study. A high-performance, general-purpose numerical tool for multiple-scattering calculations is developed and applied in three computational scenarios. The total scattering cross section of a bubble cloud is investigated; numerical results indicate that the resonant frequency of the bubble cloud is much lower than that of a single bubble. The variation of the resonant frequency under multiple scattering is also studied: it is found that the resonant frequency decreases as the number of bubbles increases, or as the void fraction of the bubble cloud decreases. Phase distributions of bubble oscillations in various multiple-scattering scenarios are presented. It is found that, at resonance, the bubbles synchronize to the same phase, which is indicative of the lowest mode of collective oscillation; at wave localization, half of the bubbles oscillate at phase 0 while the other half oscillate at phase π. An intuitive interpretation of this behavior is given.
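The single-bubble resonance against which this abstract compares the cloud's collective resonance is usually the classical Minnaert frequency. A minimal illustrative sketch (the formula is standard; the air/water parameter values below are assumptions, not numbers from the thesis):

```python
import math

def minnaert_frequency(radius_m, p0=101325.0, gamma=1.4, rho=998.0):
    """Resonance frequency (Hz) of a single gas bubble in liquid
    (Minnaert): f0 = sqrt(3 * gamma * p0 / rho) / (2 * pi * a)."""
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

# A 1 mm radius air bubble in water at 1 atm resonates near 3.3 kHz.
f0 = minnaert_frequency(1e-3)
```

A bubble cloud, as the abstract notes, resonates well below this single-bubble value because the bubbles oscillate collectively.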
APA, Harvard, Vancouver, ISO, and other styles
2

Hedlund, Tobias. "Registration of multiple ToF camera point clouds." Thesis, Umeå University, Department of Physics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-34952.

Full text
Abstract:
Buildings, maps, objects, and the like can be modeled by hand on a computer or reconstructed in 3D from data captured by different kinds of cameras or laser scanners; this thesis concerns the latter. Recent improvements in Time-of-Flight (ToF) cameras have opened up a number of interesting new research areas, and registration of several ToF camera point clouds is one of them.

A literature study summarizes the research done in the area over the last two decades. The most popular method for registering point clouds, the Iterative Closest Point (ICP) algorithm, has been studied. In addition, an error-relaxation algorithm was implemented to minimize the accumulated error of sequential pairwise ICP.

A few real-world test scenarios and one scenario with synthetic data were constructed, and these data sets were registered with varying outcomes. The camera poses obtained from sequential ICP were improved by loop closing and error relaxation.

The results illustrate the importance of good initial guesses for the relative transformations in obtaining a correct model, and show the strengths and weaknesses of sequential ICP and the utilized error-relaxation method.
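The point-to-point ICP algorithm this abstract builds on can be sketched in a few lines: alternate nearest-neighbour matching with a closed-form (SVD-based) rigid-transform solve. A minimal sketch, not the thesis implementation; the synthetic demo data are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Point-to-point ICP: repeatedly match each source point to its
    nearest target point, then solve the best rigid transform (Kabsch)."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)                  # nearest target per source point
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)     # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Demo: recover a small known rigid motion on a synthetic cloud.
rng = np.random.default_rng(0)
target = rng.random((200, 3))
th = 0.05
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
source = target @ Rz.T + np.array([0.01, 0.02, 0.0])
R_est, t_est = icp(source, target)
aligned = source @ R_est.T + t_est
```

As the thesis stresses, convergence like this depends on a good initial guess; with a large initial misalignment the nearest-neighbour matches are wrong and ICP falls into a local minimum.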
APA, Harvard, Vancouver, ISO, and other styles
3

De Souza Bento Da Silva, Pedro Paulo. "On the mapping of distributed applications onto multiple Clouds." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEN089/document.

Full text
Abstract:
The Cloud has become a very popular platform for deploying distributed applications; today, virtually any credit card holder can have access to Cloud services. There are many ways of offering Cloud services to customers. In this thesis we focus on Infrastructure as a Service (IaaS), a model that typically offers virtualized computing resources to customers in the form of virtual machines (VMs). Thanks to its attractive pay-as-you-use cost model, it is easier for customers, especially small and medium companies, to outsource hosting infrastructures and benefit from savings on upfront investments and maintenance costs. Customers also gain access to features such as scalability, availability, and reliability, which were previously almost exclusive to large companies. To deploy a distributed application, a Cloud customer must first consider the mapping between her application (or its parts) and the target infrastructure. She needs to take cost, resource, and communication constraints into consideration to select the most suitable set of VMs from private and public Cloud providers. However, defining such a mapping manually may be challenging in large-scale or time-constrained scenarios, since the number of possible configurations explodes. Furthermore, when automating this process, scalability issues must be taken into account, given that this mapping problem is a generalization of the graph homomorphism problem, which is NP-complete.

In this thesis we address the problem of computing initial and reconfiguration placements for distributed applications over possibly multiple Clouds. Our objective is to minimize renting and migration costs while satisfying the applications' resource and communication constraints. Using an incremental approach, we split the problem into three parts and propose efficient heuristics that can compute good-quality placements very quickly for both small and large scenarios. These heuristics are based on graph-partitioning and vector-packing techniques and have been extensively evaluated against state-of-the-art approaches such as MIP solvers and meta-heuristics. We show through simulations that the proposed heuristics compute, in a few seconds, solutions that would take other approaches many hours or days.
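The vector-packing side of the heuristics mentioned in this abstract can be illustrated with a first-fit-decreasing sketch that packs (cpu, ram) demand vectors into identical VMs. The item sizes and capacity below are made-up numbers for illustration, not values from the thesis:

```python
def first_fit_decreasing(items, capacity):
    """Vector bin packing, first-fit-decreasing flavour: sort demand
    vectors by their largest normalised dimension, then place each into
    the first open bin (VM) with enough remaining capacity."""
    dims = range(len(capacity))
    order = sorted(items,
                   key=lambda d: max(d[i] / capacity[i] for i in dims),
                   reverse=True)
    bins = []          # remaining capacity per open bin
    assignment = []    # (item, bin index)
    for item in order:
        for b, remaining in enumerate(bins):
            if all(item[i] <= remaining[i] for i in dims):
                bins[b] = tuple(remaining[i] - item[i] for i in dims)
                assignment.append((item, b))
                break
        else:                                   # no open bin fits: open a new VM
            bins.append(tuple(capacity[i] - item[i] for i in dims))
            assignment.append((item, len(bins) - 1))
    return bins, assignment

items = [(2, 4), (1, 1), (3, 2), (2, 2)]        # (cpu, ram) demands
bins, assignment = first_fit_decreasing(items, capacity=(4, 4))
```

The thesis combines packing of this kind with graph partitioning so that heavily communicating components land in the same VM or Cloud.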
APA, Harvard, Vancouver, ISO, and other styles
4

Wieman, Sharon A. "Multiple channel satellite analysis of cirrus." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA238054.

Full text
Abstract:
Thesis (M.S. in Meteorology), Naval Postgraduate School, June 1990. Thesis advisor: Carlyle H. Wash; second reader: Philip A. Durkee. Description based on title screen as viewed on October 15, 2009. DTIC identifiers: Cirrus Clouds, Satellite Meteorology, Theses, Split Window Techniques. Author's subject terms: Meteorology, Satellite Remote Sensing, Cirrus, Split-Window Technique. Includes bibliographical references (pp. 47-48). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
5

Pfeifroth, Uwe Anton [Verfasser], Bodo [Akademischer Betreuer] [Gutachter] Ahrens, and Andreas [Gutachter] Fink. "The diurnal cycle of clouds and precipitation : an evaluation of multiple data sources / Uwe Anton Pfeifroth. Betreuer: Bodo Ahrens. Gutachter: Bodo Ahrens ; Andreas Fink." Frankfurt am Main : Universitätsbibliothek Johann Christian Senckenberg, 2016. http://d-nb.info/1112601627/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pfeifroth, Uwe [Verfasser], Bodo [Akademischer Betreuer] [Gutachter] Ahrens, and Andreas [Gutachter] Fink. "The diurnal cycle of clouds and precipitation : an evaluation of multiple data sources / Uwe Anton Pfeifroth. Betreuer: Bodo Ahrens. Gutachter: Bodo Ahrens ; Andreas Fink." Frankfurt am Main : Universitätsbibliothek Johann Christian Senckenberg, 2016. http://nbn-resolving.de/urn:nbn:de:hebis:30:3-414318.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Schmidt, Jörg. "Dual-field-of-view Raman lidar measurements of cloud microphysical properties." Doctoral thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-150408.

Full text
Abstract:
In this work, a novel lidar technique was implemented in a powerful lidar system, and the resulting setup was used to investigate aerosol-cloud interactions in liquid-water clouds over Leipzig. The measurement method is based on the detection of light that has been multiply scattered in the forward direction by cloud droplets and inelastically backscattered by nitrogen molecules, using two fields of view of different size. A forward-iteration algorithm uses the acquired information to derive profiles of cloud microphysical properties: the extinction coefficient, the effective droplet radius, the liquid water content, and the droplet number concentration can be determined. The technique also allows the cloud-base height to be located precisely, and the lidar can additionally retrieve aerosol properties. The quality of the measurement setup was verified and an error analysis was performed; among other checks, the liquid water content derived from a cloud measurement was confirmed with a microwave radiometer. Case studies demonstrated the potential of the technique and showed the importance of profile information on cloud properties for studying aerosol-cloud interactions. A Doppler wind lidar was further used to illustrate the influence of the vertical wind velocity on cloud properties and thus on aerosol-cloud interactions. Twenty-nine cloud measurements were used for a statistical analysis of aerosol-cloud interactions, allowing their dependence on cloud penetration depth to be examined for the first time; the interactions were found to be confined to the lowest 70 m of the clouds.

Considerably stronger aerosol-cloud interactions were found in cloud regions dominated by updrafts. The strength of the aerosol-cloud interactions was quantified with ACI_N values, which describe the relationship between the droplet number concentration and the aerosol extinction coefficient, distinguishing between the investigation of the underlying microphysical processes and their relevance for the cloud albedo and thus the radiative forcing of the clouds. For the former objective an ACI_N value of 0.80 +/- 0.40 was obtained; for the latter, 0.13 +/- 0.07.
APA, Harvard, Vancouver, ISO, and other styles
8

Kjellén, Kevin. "Point Cloud Registration in Augmented Reality using the Microsoft HoloLens." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148901.

Full text
Abstract:
When a Time-of-Flight (ToF) depth camera is used to monitor a region of interest, it has to be mounted correctly and have information regarding its position. Manual configuration currently requires managing captured 3D ToF data in a 2D environment, which limits the user and may give rise to errors due to misinterpretation of the data. This thesis investigates whether a real-time 3D reconstruction mesh from a Microsoft HoloLens can be used as a target for point cloud registration using the ToF data, thus configuring the camera autonomously. Three registration algorithms, Fast Global Registration (FGR), Joint Registration of Multiple Point Clouds (JR-MPC), and prerejective RANSAC, were evaluated for this purpose. It was concluded that accurate registration is possible despite the use of different sensors, and that it can be done within a reasonable time compared with the inherent time needed for 3D reconstruction on the HoloLens. All algorithms could solve the problem, but FGR provided the most satisfying results, though it required several constraints on the data.
APA, Harvard, Vancouver, ISO, and other styles
9

Delgado, Donate Eduardo Juan. "Multiple star formation in molecular cloud cores." Thesis, University of Cambridge, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.615675.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sellami, Rami. "Supporting multiple data stores based applications in cloud environments." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLL002/document.

Full text
Abstract:
The production of huge amounts of data and the emergence of Cloud computing have introduced new requirements for data management. Many applications need to interact with several heterogeneous data stores depending on the type of data they have to manage: traditional data types, documents, graph data from social networks, simple key-value data, etc. Interacting with heterogeneous data models via different APIs imposes challenging tasks on the developers of multiple-data-store applications. Programmers have to be familiar with different APIs, and the execution of complex queries over heterogeneous data models cannot currently be achieved declaratively, as it can with single-data-store applications, and therefore requires extra implementation effort. Moreover, developers need to master the complex processes of Cloud discovery and of application deployment and execution. In this manuscript, we propose an integrated set of models, algorithms, and tools aiming to ease the developer's task of developing, deploying, and migrating multiple-data-store applications in Cloud environments. Our approach focuses on three points.

First, we provide a unified data model, used by application developers to interact with heterogeneous relational and NoSQL data stores and enriched by a set of refinement rules; based on it, we define our query algebra. Developers express queries using the OPEN-PaaS-DataBase API (ODBAPI), a unique REST API that allows programmers to write their application code independently of the target data stores. Second, we propose virtual data stores, which act as mediators and interact with the integrated data stores wrapped by ODBAPI. This run-time component supports the execution of single and complex queries over heterogeneous data stores; it implements a cost model to execute queries optimally and a dynamic-programming-based algorithm to generate an optimal query execution plan. Finally, we present a declarative approach that lightens the burden of the tedious and non-standard tasks of (1) discovering relevant Cloud environments and (2) deploying applications on them, letting developers simply focus on specifying their storage and computing requirements. A prototype of the proposed solution has been developed and applied to use cases from the OpenPaaS project. We also performed various experiments to test the efficiency and accuracy of our proposals.
APA, Harvard, Vancouver, ISO, and other styles
11

Brown, Joseph Nagy 1959. "HPSIM4A: Simulating multiple clocks and functional registers." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/292038.

Full text
Abstract:
Universal AHPL, a hardware description language, is supported by a function-level simulator. The simulation is driven by a database generated by STAGE1 of the Three-Stage Hardware Compiler, together with the output of the COMSEC Processor, which gives the user control over the simulation and the printed results. This paper describes the design and use of the function-level simulator HPSIM4A, a refined extension of its predecessor, HPSIM4. HPSIM4A is a thoroughly tested and debugged version of HPSIM4 with additional features that exploit more of Universal AHPL's descriptive capabilities. In particular, the multiple clock, the specific driving clock, and the User-Defined Functional Register capabilities can now be simulated. Additionally, provision has been made to simulate both positive and negative edge-triggered flip-flops and User-Defined Combinational Logic Units with minimal programming effort.
APA, Harvard, Vancouver, ISO, and other styles
12

Bhattacharjee, Tirtha Pratim. "A dynamic middleware to integrate multiple cloud infrastructures with remote applications." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/71290.

Full text
Abstract:
In an era with a compelling need for greater computational power, the aggregation of software system components is becoming more challenging and diverse. New-generation scientific applications have grown into hubs of complex, intense computation performed on huge, exponentially growing data sets. With the development of parallel algorithms, the design of multi-user web applications, and frequent changes in software architecture, research institutes and organizations face a growing challenge. Network science is an interesting field posing extreme computational demands to sustain complex large-scale networks: static and dynamic network analyses have to be performed through algorithms implementing complex graph theory, statistical mechanics, data mining, and visualization. At the same time, high-performance computing infrastructures are taking on multiple forms and expanding in an unprecedented way. It is now essential for software solutions to migrate to scalable platforms and integrate cloud-enabled data-center clusters for higher computational needs. With the aggressive adoption of cloud infrastructures and resource-intensive web applications, there is a pressing need for a dynamic middleware to bridge the gap and effectively coordinate the integrated system; such a heterogeneous environment calls for a transparent, portable, and flexible solution stack. In this project, we propose the adoption of a Virtual Machine aware Portable Batch System Cluster (VM-aware PBS Cluster), a self-initiating and self-regulating cluster of virtual machines (VMs) capable of operating and scaling on any cloud infrastructure. This is a unique but simple solution that lets large-scale software migrate to cloud infrastructures while keeping most of the application stack intact. We have also designed and implemented the Cloud Integrator Framework, a dynamic cloud-aware middleware for the proposed VM-aware PBS cluster.

This framework regulates job distribution across an aggregate of VMs and optimizes resource consumption through on-demand VM initialization and termination. The model was integrated into CINET, a network science application, enabling it to mediate large-scale network analysis and simulation tasks across varied cloud platforms such as OpenStack and Amazon EC2 for its computation requirements.

Master of Science
APA, Harvard, Vancouver, ISO, and other styles
13

Haywood, Dana. "The Relationship between Nonprofit Organizations and Cloud Adoption Concerns." ScholarWorks, 2017. https://scholarworks.waldenu.edu/dissertations/4372.

Full text
Abstract:
Many leaders of nonprofit organizations (NPOs) in the United States do not have plans to adopt cloud computing, but the factors accounting for their decisions are not known. This correlational study used the extended unified theory of acceptance and use of technology (UTAUT2) to examine whether performance expectancy, effort expectancy, social influence, facilitating conditions, hedonic motivation, price value, and habit can predict the behavioral intention (BI) and use behavior (UB) of NPO information technology (IT) managers towards adopting cloud computing within the Phoenix metropolitan area of Arizona. An existing UTAUT2 survey instrument was used with a sample of IT managers (N = 106) from NPOs. A multiple regression analysis confirmed a positive, statistically significant relationship between the predictors and the dependent variables BI and UB. The first model significantly predicted BI, F(7, 99) = 54.239, p ≤ .001, R² = .795; performance expectancy (β = .295, p = .004), social influence (β = .148, p = .033), facilitating conditions (β = .246, p = .007), and habit (β = .245, p = .002) were statistically significant predictors of BI at the .05 level. The second model significantly predicted UB, F(3, 103) = 37.845, p ≤ .001, R² = .527; habit (β = .430, p = .001) was a statistically significant predictor of UB at the .05 level. Using the study results, NPO IT managers may be able to develop strategies to improve the adoption of cloud computing within their organizations. The implication for positive social change is that NPO leaders may be able to improve their IT infrastructure and services for those in need, while also reducing their organizations' carbon footprint through the use of shared data centers for processing.
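The multiple-regression-with-R² analysis this study reports can be sketched with ordinary least squares on synthetic data (the data below are randomly generated stand-ins, not the study's survey responses; only the shape, 7 predictors and N = 106, mirrors the abstract):

```python
import numpy as np

def ols_r2(X, y):
    """Ordinary least squares with an intercept; returns (coefficients, R^2)."""
    A = np.column_stack([np.ones(len(X)), X])        # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return beta, 1.0 - ss_res / ss_tot               # R^2 = 1 - SSres/SStot

rng = np.random.default_rng(1)
X = rng.normal(size=(106, 7))                        # 7 UTAUT2-style predictors
true_b = np.array([.3, .1, .05, .25, .0, .02, .25])  # made-up effect sizes
y = X @ true_b + rng.normal(scale=.5, size=106)      # noisy behavioral intention
beta, r2 = ols_r2(X, y)
```

In the actual study the predictors are survey scales and significance is assessed per coefficient; this sketch only shows how the overall fit statistic is computed.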
APA, Harvard, Vancouver, ISO, and other styles
14

Kini, Rohit Ravindranath. "Sensor Position Optimization for Multiple LiDARs in Autonomous Vehicles." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-289597.

Full text
Abstract:
The 3D ranging sensor LiDAR is used extensively in the autonomous vehicle industry, but the LiDAR placement problem has not been studied extensively. This thesis proposes a framework in an open-source autonomous driving simulator (CARLA) that aims to solve the LiDAR placement problem based on the tasks LiDAR performs in most autonomous vehicles. The placement problem is solved by improving the point cloud density around the vehicle, which is calculated using LiDAR Occupancy Boards (LOB). Introducing LiDAR occupancy as an objective function, a genetic algorithm is used to optimize the placement, and the method can be extended to the multiple-LiDAR placement problem. Additionally, for multiple LiDARs, a LiDAR scan registration algorithm (NDT) can be used to find a better match with respect to the first (reference) LiDAR. Multiple experiments are carried out in simulation with different vehicles (a truck and a car), different LiDAR sensors (Velodyne 16- and 32-channel), and varying regions of interest (ROI) to test the scalability and technical robustness of the framework. Finally, the framework is validated by comparing the current and proposed LiDAR positions on the truck.
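The genetic-algorithm search over sensor positions described in this abstract can be sketched as a toy real-valued GA. Everything below is illustrative: the objective is a made-up stand-in for a LiDAR-occupancy score, and the mount-height/offset optimum (1.8 m, 0 m) is an assumption, not a result from the thesis:

```python
import random

def genetic_search(fitness, bounds, pop_size=30, generations=60, seed=0):
    """Toy real-valued GA: tournament selection, midpoint crossover,
    Gaussian mutation, and elitism. `bounds` is a (lo, hi) pair per gene."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(*bounds[i]) for i in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        new_pop = scored[:2]                                   # keep the two elites
        while len(new_pop) < pop_size:
            a, b = (max(rng.sample(scored, 3), key=fitness)    # tournament of 3
                    for _ in range(2))
            child = [(x + y) / 2 for x, y in zip(a, b)]        # midpoint crossover
            child = [min(max(g + rng.gauss(0, 0.1), bounds[i][0]), bounds[i][1])
                     for i, g in enumerate(child)]             # clamped mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Hypothetical occupancy objective peaking at mount height 1.8 m, offset 0 m.
best = genetic_search(lambda g: -((g[0] - 1.8) ** 2 + g[1] ** 2),
                      bounds=[(0.0, 3.0), (-1.0, 1.0)])
```

In the thesis the fitness evaluation is far more expensive (a CARLA simulation producing a point cloud scored by LOB occupancy), which is exactly why a derivative-free search such as a GA is attractive.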
APA, Harvard, Vancouver, ISO, and other styles
15

Algarni, Abdullah Fayez H. "A machine learning framework for optimising file distribution across multiple cloud storage services." Thesis, University of York, 2017. http://etheses.whiterose.ac.uk/17981/.

Full text
Abstract:
Storing data using a single cloud storage service may lead to several potential problems for the data owner, including service continuity, availability, performance, security, and the risk of vendor lock-in. A promising solution is to distribute the data across multiple cloud storage services, similarly to the manner in which data are distributed across multiple physical disk drives to achieve fault tolerance and improve performance. However, the distinguishing characteristics of different cloud providers, in terms of pricing schemes and service performance, make optimising cost and performance across many cloud storage services at once a challenge. This research proposes a framework for automatically tuning the data distribution policies across multiple cloud storage services from the client side, based on file access patterns. The aim is to optimise both the average cost per gigabyte and the average service performance (mainly latency) across multiple cloud storage services. To achieve this, two machine learning algorithms were used: (1) supervised learning to predict file access patterns, and (2) reinforcement learning to learn the ideal file distribution parameters. The framework was tested in an emulator reproducing a real multiple-cloud storage setting (such as Google Cloud Storage, Amazon S3, Microsoft Azure Storage, and RackSpace Cloud Files) in terms of service performance and cost, as well as in various configurations of several cloud storage services. The results showed that the multiple-cloud approach achieved an improvement of about 42% in cost and 76% in performance. These findings indicate that storing data in multiple clouds is superior to the commonly used uniform file distribution and to a heuristic distribution method.
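The reinforcement-learning component this abstract mentions can be illustrated, in a much-simplified form, as a bandit problem: repeatedly pick a storage service, observe its cost-plus-latency penalty, and learn which one is best. The provider names and reward numbers below are hypothetical; the thesis learns full distribution parameters, not just a single best provider:

```python
import random

def epsilon_greedy(providers, reward_fn, rounds=2000, eps=0.1, seed=0):
    """Toy epsilon-greedy learner: keep a running mean reward per storage
    provider, usually exploit the best one, sometimes explore at random."""
    rng = random.Random(seed)
    counts = {p: 0 for p in providers}
    means = {p: 0.0 for p in providers}
    for _ in range(rounds):
        if rng.random() < eps:
            p = rng.choice(providers)                  # explore
        else:
            p = max(providers, key=lambda q: means[q]) # exploit best estimate
        r = reward_fn(p, rng)
        counts[p] += 1
        means[p] += (r - means[p]) / counts[p]         # incremental mean update
    return max(providers, key=lambda q: means[q])

# Hypothetical reward: negative (cost + latency) per request, with noise.
true_score = {"providerA": -1.0, "providerB": -0.6, "providerC": -1.4}
best = epsilon_greedy(list(true_score),
                      lambda p, rng: true_score[p] + rng.gauss(0, 0.1))
```

A real policy would condition the choice on the predicted access pattern of each file, which is where the supervised-learning step feeds in.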
APA, Harvard, Vancouver, ISO, and other styles
16

Nogherotto, Rita. "A numerical framework for multiple phase cloud microphysics in regional and global atmospheric models." Doctoral thesis, Università degli studi di Trieste, 2015. http://hdl.handle.net/10077/11140.

Full text
Abstract:
The Regional Climate Model RegCM4 (Giorgi et al., 2012) treats non-convective clouds and precipitation following the Sub-grid Explicit moisture (SUBEX) parameterization (Pal et al., 2000). This scheme includes a simple representation of raindrop formation and solves precipitation diagnostically: rain forms when the cloud water content exceeds the autoconversion threshold, which is an increasing function of temperature and assumes different values over land and over the ocean, to account for the difference in the number of cloud condensation nuclei between continental and oceanic regions. The SUBEX scheme does not account for the presence of cloud ice, and the ice fraction is diagnosed as a function of temperature in the radiation scheme. Given the increasing emphasis on cloud representation in the climate community and the forthcoming increase in resolution due to the inclusion, in the near future, of a non-hydrostatic compressible core, a treatment of ice microphysics and a prognostic representation of precipitation are required in RegCM4. This thesis presents the new parameterization for stratiform cloud microphysics and precipitation implemented in RegCM4. The approach is based on an implicit numerical framework developed and implemented in the ECMWF operational forecasting model (Tiedtke, 1993). The new parameterization solves five prognostic equations, for the water vapour, liquid water, rain, ice and snow mixing ratios. It allows a proper treatment of mixed-phase clouds and a more physically realistic representation of precipitation, which is no longer an instantaneous response to the microphysical processes occurring in clouds and is subject to horizontal advection. A first discussion of the results contains an evaluation of the vertical distributions of the main microphysical quantities, such as the liquid and ice water mixing ratios and their relative fractions.
It also presents a series of sensitivity tests to understand how the moisture and radiation quantities respond to variations in the microphysical parameters used in the scheme, such as the fall speeds of the falling categories, the autoconversion scheme and the evaporation coefficient. Cloud properties are then evaluated through the implementation in RegCM4 of the new COSP cloud evaluation tool (Bodas-Salcedo et al., 2011), developed by the Cloud Feedback Model Intercomparison Project (CFMIP), which facilitates the comparison of simulated clouds with observations from passive and active remote sensing by diagnosing from model outputs the quantities that would be observed from satellites if they were flying above an atmosphere similar to that predicted by the model. Different hypotheses are presented to explain RegCM4 biases in representing different types of clouds over the tropical band, and new perspectives for future investigations designed to answer the open questions are outlined.
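The five prognostic equations share a common structure; a schematic sketch follows. The notation and the grouping of source terms are assumptions in the spirit of Tiedtke-type schemes, not the thesis's exact formulation.

```latex
% Generic prognostic equation for the mixing ratio q_x of species
% x in {v (vapour), l (liquid), r (rain), i (ice), s (snow)}:
\frac{\partial q_x}{\partial t}
  = A(q_x)   % resolved transport (advection)
  + S_x      % net microphysical sources/sinks (condensation,
             % autoconversion, evaporation, melting, ...)
  + \frac{1}{\rho}\frac{\partial}{\partial z}\bigl(\rho\, V_x\, q_x\bigr)
             % sedimentation flux with fall speed V_x
```

The fall speed $V_x$ is nonzero only for the falling categories, which is why precipitation here is no longer an instantaneous response but a prognostic quantity subject to transport.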
APA, Harvard, Vancouver, ISO, and other styles
17

Schröder, Marc. "Multiple scattering and absorption of solar radiation in the presence of three-dimensional cloud fields." [S.l. : s.n.], 2004. http://www.diss.fu-berlin.de/2004/237/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Mellas, Michael John. "Constructing Multiple Realities on Stage: Conceiving a Magical Realist Production of Jose Rivera's Cloud Tectonics." Miami University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=miami1218129542.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Mellas, Michael John. "Constructing multiple realities on stage conceiving a magical realist production of José Rivera's Cloud tectonics /." Oxford, Ohio : Miami University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=miami1218129542.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Rehman, Haroon, Asha Chepkorir Segie, Kanishka Chakraborty, and Devapiran Jaishankar. "Bone Marrow Wars: Attack of the Clones." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/asrf/2020/presentations/33.

Full text
Abstract:
Multiple myeloma is characterized by the malignant proliferation of clonal plasma cells producing monoclonal paraproteins, leading to multi-organ damage. On the other hand, monoclonal B-cell lymphocytosis (MBCL) is characterized by the malignant proliferation of clonal B-lymphocytes, with the potential to develop into chronic lymphocytic leukemia (CLL) or small lymphocytic lymphoma (SLL). CLL/SLL can result in visceromegaly, anemia, thrombocytopenia, fevers, night sweats and unintentional weight loss. Literature review demonstrates that these two malignant clonal bone marrow disorders are most frequently seen independently in patients; however, we report one rare diagnostic challenge in which both clonal disorders were identified concurrently in a single patient. A 64-year-old man initially presented with worsening back pain. Thoracic spine x-ray revealed a T11 compression fracture, confirmed by magnetic resonance imaging. Complete blood count revealed a white blood cell count of 7.3 K/uL with 54% lymphocyte predominance, and peripheral smear demonstrated a population of small lymphocytes with round nuclei and an atypical chromatin pattern suggestive of CLL/MBCL. Flow cytometry revealed a monoclonal B-cell CD5-positive, CD23-positive, CD10-negative population with an absolute count of 1.6 K/uL. Due to the instability and pain associated with the spinal fracture, the patient had kyphoplasty performed, and intraoperative bone biopsies were taken from both the T11 and T12 vertebrae. Interestingly, each bone biopsy revealed involvement by both a kappa light chain-restricted plasma cell neoplasm, ranging from 15% to 30% cellularity, and a CD5-positive B-cell lymphocyte population. This suggested two concurrent but pathologically distinct processes: plasma cell myeloma and a separate B-cell lymphoproliferative disorder with immunophenotypic features suggestive of CLL/MBCL.
Bone marrow biopsy was performed for definitive evaluation and confirmed multiple myeloma, with 15-20% kappa-restricted plasma cells identified, and also confirmed concurrent MBCL, with CD5- and CD23-positive, kappa-restricted B-cells identified on bone marrow flow cytometry. Adding an additional layer of complexity, bone marrow molecular genetics revealed the presence of a MYD88 mutation, raising concern for possible lymphoplasmacytic lymphoma (LPL). However, secondary pathologic review ruled out LPL, as the immunophenotypic pattern of the clonal B-cells was not consistent with that of LPL; although the MYD88 mutation is predominantly seen in LPL, it has also been seen in a small percentage of CLL/SLL cases and has exceedingly rarely been described in MM as well. Serum protein electrophoresis with immunofixation, serum quantitative immunoglobulins and serum quantitative free light chain assay revealed findings consistent with IgG kappa multiple myeloma, and systemic CT imaging was negative for any lymphadenopathy, confirming MBCL. The patient was started on first-line multiple myeloma systemic therapy for transplant-eligible patients and has demonstrated an excellent response to treatment thus far. This patient case serves to demonstrate the importance of maintaining a broad differential when approaching hematological problems; it also underlines the necessity of a complete diagnostic evaluation to identify rare clinical conundrums such as our patient's, allowing for proper and timely treatment. While we use “Occam’s razor” to explain multiple problems with a single unifying diagnosis, the rare possibility of divergent diagnoses must always be entertained.
APA, Harvard, Vancouver, ISO, and other styles
21

Masset, Benjamin, and Ismail Sekkat. "Implementation of Customer Relationship Management in the Cloud : The example of SMEs through a multiple case study analysis." Thesis, Högskolan i Halmstad, Sektionen för ekonomi och teknik (SET), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-15913.

Full text
Abstract:
Purpose: The aim of this thesis is to build a practical guide to a clear understanding of the implementation process of Customer Relationship Management in the cloud within Small and Medium-sized Enterprises (SMEs). It also describes the concepts of Customer Relationship Management, cloud computing and CRM in the cloud, especially as related to SMEs, in order to provide the insight needed to implement this paradigm successfully.   Scientific method: The research lies in the interpretative field of inquiry. Abduction is used to combine empirical data with theoretical studies in order to try to investigate patterns that could give an understanding of the phenomena studied. A descriptive research approach using a multiple-case study design is used.   Theoretical frame of references: The first part of the theoretical frame of references explores existing theories, leading to CRM and cloud computing. The second part explores different means of analysing our research problem.   Empirical method: The chosen approach is qualitative. Interviews were conducted for data collection. Documents have been gathered and analysed to support the interview guides. We also gathered a previous practical guide from Salesforce in order to compare our results.   Analysis: Analysing the hosted CRM implementations of three SMEs using Salesforce, it describes the key facts that have to be taken into account to implement the Salesforce CRM solution.   Conclusion: The findings show how three companies can be analysed to draw conclusions about the implementation process. Based on interviews, theories and documents from the hosted CRM provider, suggestions are given to avoid problems concerning the implementation in SMEs.
APA, Harvard, Vancouver, ISO, and other styles
22

Kusnadi, Ardy Daniel, and Fredrik Einarsson. "Marketing Strategy for Software as a Service Companies within the Logistics Vertical Software Niche : A multiple case study." Thesis, Blekinge Tekniska Högskola, Institutionen för industriell ekonomi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20152.

Full text
Abstract:
Background: Utilizing the Software as a Service (SaaS) business model is a distinct trend for marketing software via the Internet. It allows software suppliers to expand their market globally and to extend their offering to customers by simplifying software procurement and ownership. The trend has been ongoing for some time in horizontal software niches and is now intensifying in vertical niches. Logistics is one such vertical software niche. Objectives: This thesis aims to investigate the marketing strategies used by companies applying a Software as a Service business model within the logistics niche. The purpose of this thesis is to deepen the knowledge about how to market a vertical Software as a Service solution within the logistics domain. Methodology: An explorative research method in the form of a multiple case study is used. Three companies are sampled using a theoretical sampling approach. SaaS ideally requires little personal contact, and the marketing materials are integrated in the published SaaS on the respective companies’ web pages. Data publicly available on the Internet is collected and used to investigate the marketing strategies. Findings: The identified marketing strategies are categorized according to an eight-element model used in earlier studies. The eight elements are product, price, place, promotion, people, process, productivity & quality, and physical environment. The categorization helps to guide data collection and data analysis. The last element, physical environment, is confirmed to be irrelevant here, since the required physical material is chosen and decided by the customers themselves. Conclusions: The marketing strategies within this niche are largely consistent with earlier findings. One new finding is that the sample companies each choose the SaaS strong point that best suits their offered solution and emphasize it in their marketing strategies; here these are easiness, scalability and flexibility. Some main deviations nevertheless exist. The sample companies do not provide easily available trial accounts; they instead offer manned online demonstrations. The market is also not found to be as global as the business model enables, because the products/services are too dependent on integrations with local-market software solutions. Recommendations for future research: A similar study with a larger sample may strengthen the findings. Performing interviews in addition to online data collection may yield more information about post-customer-contact marketing strategies as well as the reasons behind the selected strategies.
APA, Harvard, Vancouver, ISO, and other styles
23

Kari, Tim, and Wesley Kleinreesink. "Internet-of-Things and cloud computing adoption in manufacturing among small to medium sized enterprises in Sweden : A multiple case study on current IoT and cloud computing technology adoption within Swedish SMEs." Thesis, Linnéuniversitetet, Institutionen för ekonomistyrning och logistik (ELO), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-95952.

Full text
Abstract:
Title: Internet of Things and cloud computing adoption within manufacturing among small to medium sized enterprises in Sweden. Authors: Tim Kari and Wesley Kleinreesink. Background & problem discussion: Industries in Europe are facing economic challenges related to global societal and technological developments. The adoption of Industry 4.0 technologies such as IoT and cloud computing within manufacturing can be a solution to these challenges. SMEs form the backbone of the Swedish economy, accounting for a large share of the employment and added value within the country, which makes them important in this context. Little is known about the maturity levels of IoT and cloud computing and the challenges encountered during the adoption of these technologies by Swedish SMEs. Purpose: The purpose of this thesis is to investigate the maturity levels of IoT and cloud computing adoption and the associated adoption challenges, by looking at Swedish SMEs in the manufacturing industry that are adopting, or are interested in adopting, IoT and cloud computing technologies within their manufacturing. The thesis addresses the maturity levels and adoption challenges found among the cases and provides more insight into the context in which these occur; these insights can then be used by practitioners to address maturity levels, and contribute to the current literature on maturity levels and adoption challenges. Method: Following an exploratory research strategy, qualitative data has been gathered both in the form of a literature review and a multiple case study through semi-structured interviews. The data has then been analyzed through a conceptual analysis, a cross-case synthesis and pattern matching. Findings & conclusion: The findings indicate that maturity levels vary highly between categories and cases, with only a few examples reaching higher (integrated) levels of maturity. The adoption challenges found were mainly centered on organizational and human challenges, as opposed to technical ones, indicating that further focus needs to be put on organizational change management. Furthermore, an apparent lack of knowledge among the case companies may explain both the narrow and simple implementations of IoT and cloud computing and the lack of drivers for further adoption. The implication is that managers need to focus more on change management and on more comprehensive implementation plans. They also need to consider the need for digitalization and to pursue it in an efficient and useful manner. Further research is needed on these topics; possible avenues include a focus on smaller companies, a study with a larger sample size, or a focus on industries with higher production volumes, since the ones presented in this study were all relatively low-volume.
APA, Harvard, Vancouver, ISO, and other styles
24

Gardner, Richard Scott. "Clonal Diversity of Quaking Aspen (Populus Tremuloides): How Multiple Clones May Add to the Resilience and Persistence of this Forest Type." DigitalCommons@USU, 2013. https://digitalcommons.usu.edu/etd/1729.

Full text
Abstract:
Conservation and restoration of quaking aspen in the western United States requires an understanding of how and when aspen clones became established, how clones adapt to environmental challenges, and how individual clones interact within stands. I used molecular tools to identify individual clones in a natural population of aspen in southern Utah and detected high and low levels of clonal diversity within stands. Stands with high clonal diversity were located in areas with a more frequent fire history, indicating that fires may have prepared sites for seed germination and establishment over time. Conversely, areas of low clonal diversity corresponded to areas with less frequent fire. The same molecular tools were then used to investigate clonal interactions/succession over relatively recent time. For this portion of the study I sampled small, medium, and large aspen ramets (stems) at 25 subplots within spatially separated one-hectare plots, and mapped the clonal identities. I found that approximately 25% of the clones appeared to be spreading into adjacent clones, while 75% of the clones had a stationary pattern. In the final portion of the study, I again used molecular tools to identify aspen clones and investigated tradeoffs between growth and defense chemistry in mature, naturally-occurring trees. Growth was estimated using a ten-year basal area increment, and the percent dry weight of salicortin, tremulacin, and condensed tannins was measured in the same trees. Overall I discovered evidence for a tradeoff between growth and salicortin/tremulacin, and a marginally significant but positive relationship between growth and condensed tannins.
APA, Harvard, Vancouver, ISO, and other styles
25

Lachat, Elise. "Relevé et consolidation de nuages de points issus de multiples capteurs pour la numérisation 3D du patrimoine." Thesis, Strasbourg, 2019. http://www.theses.fr/2019STRAD012/document.

Full text
Abstract:
Three-dimensional digitization of built heritage is involved in a wide range of applications (documentation, visualization, etc.) and can take advantage of the diversity of available measurement techniques. In order to improve the completeness as well as the quality of the deliverables, more and more digitization projects rely on the combination of data coming from different sensors. To this end, knowledge of sensor performance, along with the quality of the measurements produced, is desirable. Different solutions can then be investigated to integrate heterogeneous point clouds within a single project, from their registration to the modeling steps. A global approach for the simultaneous registration of multiple point clouds is proposed in this work, with individual weights introduced for each dataset. Moreover, robust estimators are introduced in the registration framework in order to deal with potential outliers or measurement noise among the data.
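The role of the robust estimators can be illustrated with a minimal iteratively reweighted least-squares (IRLS) sketch using Huber weights. This is a toy stand-in: it estimates only a 3-D translation between already-matched points, whereas the thesis registers full poses of multiple heterogeneous clouds; the threshold values are assumptions.

```python
import numpy as np

def huber_weights(residuals, k=1.345):
    """Huber weight function: 1 inside the threshold, k/|r| outside."""
    r = np.abs(residuals)
    w = np.ones_like(r)
    mask = r > k
    w[mask] = k / r[mask]
    return w

def robust_translation(src, dst, n_iter=20, k=0.5):
    """Estimate the translation aligning src to dst by IRLS.

    Outlying correspondences get small Huber weights, so they barely
    influence the weighted least-squares update.
    """
    t = np.zeros(3)
    for _ in range(n_iter):
        res = dst - (src + t)                        # per-point residual vectors
        norms = np.linalg.norm(res, axis=1)          # residual magnitudes
        w = huber_weights(norms, k)
        t = t + (w[:, None] * res).sum(0) / w.sum()  # weighted LS update
    return t
```

With a plain (unweighted) mean, a handful of gross correspondence errors would shift the estimate noticeably; the Huber weighting keeps the estimated translation close to the true one.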
APA, Harvard, Vancouver, ISO, and other styles
26

Tibayrenc, Michel. "La variabilité isoenzymatique de Trypanosoma cruzi, agent de la maladie de Chagas : signification génétique, taxonomique et épidémiologique." Paris 11, 1986. http://www.theses.fr/1986PA112310.

Full text
Abstract:
Genetic interpretation of the zymograms of 523 stocks of Trypanosoma cruzi, isolated from varied hosts and ecosystems over a vast geographic area (from the United States to southern Brazil), reveals strong genetic variability (only one monomorphic locus out of 15) and suggests that the parasite genome has a diploid structure. The data provide no indication of Mendelian sexuality, although there are numerous opportunities for genetic exchange between extremely different genotypes. The population structure of T. cruzi appears multiclonal and complex. The natural clones (or zymodemes) recorded are numerous (43 distinct clones were distinguished among 121 stocks studied at 15 loci) and tend to spread over the full range of possible genotypes, in a non-hierarchical structure: it is impossible to group them into a small number of clearly delimited subdivisions that could represent natural taxa. The available data suggest that the genetic variability of T. cruzi reflects the long separate evolution of multiple clones. We propose the hypothesis that this long-term clonal evolution could explain the biological and medical variability of the agent of Chagas disease.
APA, Harvard, Vancouver, ISO, and other styles
27

Clofent-Sanchez, Gisèle. "Recherche du stade ontogénique de la cellule souche dans le myélome multiple : étude par clonage en dilution limite et par réarrangements des gènes d'immunoglobuline." Montpellier 2, 1989. http://www.theses.fr/1989MON20080.

Full text
Abstract:
Multiple myeloma is a B-cell neoplasm with a plasma-cell tumour phenotype, located mainly in the bone marrow. The tumour clone secretes a monoclonal immunoglobulin, defined by its idiotypic determinants. Work in recent years reporting an expansion of idiotypic and tumour B lymphocytes among the total mononuclear population of peripheral blood or bone marrow remains controversial. We studied the B-cell compartment of myeloma by limiting-dilution cloning and Epstein-Barr virus infection, in order to better characterize the tumour clone and its regulation. We defined a tumour compartment, resistant to the Epstein-Barr virus and unable to emerge in vitro, as well as a normal compartment, possibly oriented towards regulation of the idiotypic network. We did not detect tumour B lymphocytes in the peripheral blood of patients by immunoglobulin gene rearrangement analysis. This study, extended to the proliferation sites of the tumour, supports the post-switch stage as the target of malignant transformation. The study of the VH family confirms the hypothesis of a plasmacytic or preplasmacytic tumour stem cell. The idiotypic B cells described in the literature can be regarded as polyclonal lymphocytes carrying idiotypic reactivities that cross-react with the tumour cells. Their presence is interpreted as the expression of a network regulating the tumour.
APA, Harvard, Vancouver, ISO, and other styles
28

Nerur, Radhakrishnan Ganapathy Subramaniam. "Effects of IT Infrastructure services on business process implementation-Focus on small and medium enterprises in emerging markets." Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-20450.

Full text
Abstract:
An organization’s information technology (IT) infrastructure capability is increasingly recognized as critical to business effectiveness and efficiency. IT infrastructure services are particularly important for organizations looking to deploy business processes in developing markets. There has also been interest from many small and medium-sized organizations, whose core business is not IT, in outsourcing and managing these services through third-party service providers. However, there is a need to help these organizations deploy the right infrastructure services in order to enable easier implementation or re-engineering of business processes. There has been little research focusing on the patterns of IT infrastructure capabilities in small and medium-sized organizations in developing markets. The research aims for comprehensive coverage by analyzing the requirements in developing markets and proposing a selection model for organizations to choose an IT service provider if they decide to outsource infrastructure services. The effect of IT infrastructure services on business process implementation is presented, with an emphasis on boundary-crossing services. Using an empirical case study, the research analyses a firm in developing markets and compares it against four strategically similar organizations from different industries. Data collection was primarily qualitative, ably supported by secondary data. The requirements in developing markets reflect those in mature markets. Pricing is seen to play a major role in the selection of service providers, with service security not much of a priority for the organizations. The boundary-crossing services effectively enable information sharing and control; these services are the drivers in simplifying business process implementation. The findings have implications for both business and technical managers with regard to planning the long-term IT strategy and developing appropriate infrastructure according to process needs. Program: Magisterutbildning i informatik
APA, Harvard, Vancouver, ISO, and other styles
29

Le, Trung-Dung. "Gestion de masses de données dans une fédération de nuages informatiques." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S101.

Full text
Abstract:
Les fédérations de nuages informatiques peuvent être considérées comme une avancée majeure dans l’informatique en nuage, en particulier dans le domaine médical. En effet, le partage de données médicales améliorerait la qualité des soins. La fédération de ressources permettrait d'accéder à toutes les informations, même sur une personne mobile, avec des données hospitalières distribuées sur plusieurs sites. En outre, cela permettrait d’envisager de plus grands volumes de données sur plus de patients et ainsi de fournir des statistiques plus fines. Les données médicales sont généralement conformes à la norme DICOM (Digital Imaging and Communications in Medicine). Les fichiers DICOM peuvent être stockés sur différentes plates-formes, telles qu’Amazon, Microsoft, Google Cloud, etc. La gestion des fichiers, y compris le partage et le traitement, sur ces plates-formes, suit un modèle de paiement à l’utilisation, selon des modèles de prix distincts et en s’appuyant sur divers systèmes de gestion de données (systèmes de gestion de données relationnelles ou SGBD ou systèmes NoSQL). En outre, les données DICOM peuvent être structurées en lignes ou colonnes ou selon une approche hybride (ligne-colonne). En conséquence, la gestion des données médicales dans des fédérations de nuages soulève des problèmes d’optimisation multi-objectifs (MOOP - Multi-Objective Optimization Problems) pour (1) le traitement des requêtes et (2) le stockage des données, selon les préférences des utilisateurs, telles que le temps de réponse, le coût monétaire, la qualités, etc. Ces problèmes sont complexes à traiter en raison de la variabilité de l’environnement (liée à la virtualisation, aux communications à grande échelle, etc.). Pour résoudre ces problèmes, nous proposons MIDAS (MedIcal system on clouD federAtionS), un système médical sur les fédérations de groupes. 
Cloud federations can be seen as a major advance in cloud computing, in particular in the medical domain. Indeed, sharing medical data would improve healthcare. Federating resources makes it possible to access any information, even about a mobile patient whose hospital data are distributed over several sites. Besides, it enables us to consider larger volumes of data on more patients and thus provide finer statistics. Medical data usually conform to the Digital Imaging and Communications in Medicine (DICOM) standard. DICOM files can be stored on different platforms, such as Amazon, Microsoft, Google Cloud, etc. The management of the files on such platforms, including sharing and processing, follows the pay-as-you-go model, according to distinct pricing schemes and relying on various systems (relational database management systems, or DBMSs, and NoSQL systems). In addition, DICOM data can be structured following traditional (row or column) or hybrid (row-column) data layouts. As a consequence, medical data management in cloud federations raises Multi-Objective Optimization Problems (MOOPs) for (1) query processing and (2) data storage, according to user preferences related to various measures, such as response time, monetary cost, quality, etc. These problems are hard to address because of the heterogeneous database engines, the variability (due to virtualization, large-scale communications, etc.) and the high computational complexity of a cloud federation. To solve them, we propose a MedIcal system on clouD federAtionS (MIDAS). First, MIDAS extends IReS, an open source platform for complex analytics workflows executed over multi-engine environments, to solve MOOPs over heterogeneous database engines. Second, we propose an algorithm for estimating cost values in a cloud environment, called Dynamic REgression AlgorithM (DREAM). This approach adapts to the variability of the cloud environment by changing the size of the training and testing data, avoiding the use of expired information about the systems. Third, a Non-dominated Sorting Genetic Algorithm based on Grid partitioning (NSGA-G) is proposed to solve MOOPs whose candidate space is large. NSGA-G aims to find an approximate optimal solution while improving the quality of the Pareto front. In addition to query processing, we propose to use NSGA-G to find an approximate optimal solution for the DICOM data configuration. We provide experimental evaluations to validate DREAM and NSGA-G with various test problems and datasets. DREAM is compared with other machine learning algorithms in providing accurate cost estimates. The quality of NSGA-G is compared to that of other NSGAs on many problems in the MOEA framework. A DICOM dataset is also used with NSGA-G to find optimal solutions. Experimental results show the good quality of our solutions in estimating and optimizing multi-objective problems in a cloud federation.
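The sliding-window regression idea attributed to DREAM above can be illustrated with a small sketch (our own simplification under assumed details, not the thesis code): fit a linear cost model on only the most recent observations, and shrink the training window when the newest observation is poorly predicted, so that stale measurements of a changing cloud environment are discarded.

```python
# Hypothetical sketch of a dynamic-window regression cost estimator.
# All names (fit_cost_model, dream_estimate) and the halving heuristic
# are illustrative assumptions, not DREAM as specified in the thesis.
import numpy as np

def fit_cost_model(sizes, costs):
    """Least-squares fit of cost = a * size + b."""
    a, b = np.polyfit(sizes, costs, 1)
    return a, b

def dream_estimate(history, new_size, window=8, err_tol=0.2):
    """history: list of (data_size, observed_cost) pairs, newest last."""
    sizes = np.array([h[0] for h in history[-window:]], dtype=float)
    costs = np.array([h[1] for h in history[-window:]], dtype=float)
    a, b = fit_cost_model(sizes, costs)
    # If the model explains the newest observation poorly, the environment
    # likely changed: retrain on a smaller, fresher window.
    pred_last = a * sizes[-1] + b
    if abs(pred_last - costs[-1]) > err_tol * max(costs[-1], 1e-9) and window > 4:
        return dream_estimate(history, new_size, window // 2, err_tol)
    return a * new_size + b
```

On stable workloads the full window is used; a sudden shift in the cost curve triggers retraining on recent points only, which is the "avoid expired information" behaviour described above.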
APA, Harvard, Vancouver, ISO, and other styles
30

Sousa, Jeovane Vicente de. "Computação em nuvem no contexto das smart grids: uma aplicação para auxílio à localização de faltas em sistemas de distribuição." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-30102018-100504/.

Full text
Abstract:
Cloud computing has been envisioned as the main technology capable of integrating and managing the many systems involved in a Smart Grid. Accordingly, this research aims to develop a cloud computing infrastructure to store and manipulate smart distribution system data. By analyzing the infrastructure of the main applications using cloud computing for smart distribution systems, an extensible architecture with essential services was proposed to host smart distribution system services and applications. Based on this proposal, a cloud computing platform was developed using open source tools. On this infrastructure, a new application was implemented that reduces multiple estimation in fault location for radial distribution systems, using data mining techniques over smart meter data, helping to identify the faulty branch. An optimized version of the data mining tool DAMICORE (Data Mining of Code Repositories) was implemented as an extension of the proposed architecture's basic services. The new cloud application was tested using hundreds of fault simulations along a test feeder, and was able to reduce the line extensions subject to multiple estimation by more than 80% in the simulated fault cases. The results show that the proposed cloud computing architecture and infrastructure enable new smart distribution system applications, contributing to the development of smart grids and to the diffusion of cloud computing in this context. As an additional contribution, the cloud application developed will help reduce multiple estimation in fault location for distribution systems.
APA, Harvard, Vancouver, ISO, and other styles
31

Padilha, Alan Schreiner. "Emprego de dados laser scanner terrestre e de sensores embarcados em veículos aéreos não tripulados para a extração de variáveis dendrométricas." Universidade do Estado de Santa Catarina, 2017. http://tede.udesc.br/handle/handle/2338.

Full text
Abstract:
This work aims to extract dendrometric variables such as total tree height (h), diameter at breast height (DBH), volume (V) and stem diameter at regular height intervals directly from point clouds derived from terrestrial laser scanning (TLS), from sensors mounted on an Unmanned Aerial Vehicle (UAV), and from their integration. The study area is a mixed plantation of Pinus spp. and Eucalyptus spp. covering approximately 4,200 m². TLS data were collected in the field using the multiple-scan method, and the UAV survey was flown at a maximum altitude of 120 meters. All data were referenced to the Brazilian Geodetic System through field observations. For validation, reference data were collected using traditional techniques and equipment. Pre-processing and processing were carried out with the Scene, CloudCompare and Photoscan/Agisoft software packages, and the dendrometric variables were extracted with Python and DetecTree. Tree detection from TLS data achieved an accuracy of 98.98%, whereas individual tree detection using only the orthoimage did not yield good results. Compared with the field truth, the diameters obtained at 1.30 m (DBH) and at 3.3 m height were statistically equivalent at the 5% significance level; however, the methodology used in this study to extract total height was not statistically equivalent at the 5% significance level.
APA, Harvard, Vancouver, ISO, and other styles
32

Idoudi, Hassan. "Dynamic Population Evacuation." Electronic Thesis or Diss., Université Gustave Eiffel, 2024. http://www.theses.fr/2024UEFL2010.

Full text
Abstract:
This thesis aims to improve the planning and execution of population evacuations by integrating advanced simulation techniques within urban road networks. Although much research has been conducted using analytical methods, this thesis addresses specific gaps, especially in destination and route choices. It further introduces vehicular communication into evacuation planning, providing a more adaptive and realistic approach to various evacuation scenarios. The work underscores simulation as a pivotal tool, enabling dynamic modelling of population evacuations. It links shelter allocation and traffic assignment to replicate the movement patterns of individuals across transport networks. Moreover, this study emphasizes the significant role of vehicular communication technology in amplifying the efficiency of evacuation planning and execution. It highlights the importance of real-time coordination and adaptive management in ever-changing conditions. By exploring multiple scenarios, we show that online management, paired with vehicular communication technology, can enhance the efficiency of evacuation processes. This is especially true when integrated with well-structured Vehicular Ad-hoc Network (VANET) architectures. The research also suggests that various VANET architectures can influence the reliability of vehicular communication in emergencies, offering critical insights for designing vehicular networks suited to emergency evacuations. Furthermore, this thesis introduces dynamic modelling of hazard propagation, facilitating a more detailed and adaptive approach to evacuation simulations. By incorporating dynamic risk factors, the potential for more effective solutions in evacuation planning and real-time operational management is revealed, especially in rapidly changing conditions.
APA, Harvard, Vancouver, ISO, and other styles
33

Maheu, Bruno. "Généralisation de la théorie de Lorenz-Mie et applications." Rouen, 1987. http://www.theses.fr/1987ROUES025.

Full text
Abstract:
A theory of the scattering of a Gaussian beam by a homogeneous, isotropic spherical scatterer is developed. The results open onto applications in optical particle sizing. A four-flux model is then presented to describe the multiple scattering of an electromagnetic wave by a dense cloud of scatterers.
APA, Harvard, Vancouver, ISO, and other styles
34

HU, TING-LUN, and 胡庭綸. "Fusion of Multiple Point Clouds." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/ca9h39.

Full text
Abstract:
Master's thesis<br>National Yunlin University of Science and Technology<br>Department of Computer Science and Information Engineering<br>106<br>Recently, 3D machine vision has been widely used in autonomous driving systems, industrial inspection and object reconstruction. While a 2D image lacks information about object surfaces, a 3D image can represent the surfaces and shapes of objects. In building modelling, the interior and shape of historical buildings can be captured as point clouds for reconstruction in case of damage or relocation. Point clouds are usually captured by LIDAR, an expensive instrument whose cost depends on its layer resolution; in other words, a high-resolution point cloud costs much more. In order to obtain a high-resolution point cloud with a lower-priced LIDAR, an alignment method for point clouds is proposed in this thesis. The proposed method is based on the Iterative Closest Point (ICP) algorithm: ICP is used to find the transformation matrix between two point clouds, the new point cloud is transformed into the coordinate system of the previous one, and the two point clouds are then merged. As a result, a point cloud of better resolution is obtained. In simulations, the alignment error of the proposed system is less than 50 cm in a 6 m x 6 m room. Furthermore, the alignment method combined with local features achieves better alignment results.
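The find-transform-then-merge pipeline described in this abstract can be sketched in a few lines (an illustrative minimal ICP with assumed details, not the thesis implementation): iterate nearest-neighbour matching with a Kabsch/SVD rigid fit, then stack the aligned cloud onto the reference.

```python
# Minimal ICP sketch: nearest-neighbour correspondences + SVD rigid fit.
# Function names and the brute-force neighbour search are illustrative choices.
import numpy as np

def best_rigid_transform(src, dst):
    """Rigid (R, t) minimizing ||R @ src_i + t - dst_i|| via the Kabsch/SVD method."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp_merge(new_cloud, ref_cloud, iters=20):
    """Align new_cloud to ref_cloud, then fuse them into one denser cloud."""
    src = new_cloud.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in the reference for every source point
        d = ((src[:, None, :] - ref_cloud[None, :, :]) ** 2).sum(-1)
        matched = ref_cloud[d.argmin(axis=1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
    return np.vstack([ref_cloud, src])
```

Given two scans of the same scene that differ by a small rigid motion, the loop converges to the transform between them, which is what allows a cheap low-resolution LIDAR to build up a denser cloud over several scans.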
APA, Harvard, Vancouver, ISO, and other styles
35

Hsin-HangYang and 楊欣翰. "Developing a Mobile Application Proxy for Multiple Clouds." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/00591499522641651188.

Full text
Abstract:
Master's thesis<br>National Cheng Kung University<br>Department of Engineering Science (in-service master's program)<br>102<br>In recent years, cloud hosting services have provided a good platform for mobile application providers, lowering the cost of hardware maintenance and management. End users can utilize cloud resources whenever a good network environment is available. Nevertheless, current cloud services and mobile applications in Taiwan generally lack locality; it can therefore take much time to use these services through overseas connections. In addition, most current mobile applications are hosted on a single cloud, resulting in a higher probability of service congestion or service interruption. In this work, we aim at alleviating the impact of service congestion and interruption by using multiple clouds, and at improving user experience with a proxy architecture. Empirical studies show that we can reduce the poor user experience caused by service congestion or interruption, and lower the risk of single-cloud hosting.
APA, Harvard, Vancouver, ISO, and other styles
36

Lu, Jianxu. "Simulation of Lidar Return Signals Associated with Water Clouds." 2009. http://hdl.handle.net/1969.1/ETD-TAMU-2009-08-7138.

Full text
Abstract:
We revisited an empirical relationship between the integrated volume depolarization ratio, δ, and the effective multiple scattering factor, η, on the basis of Monte Carlo simulations of spaceborne lidar backscatter associated with homogeneous water clouds. The relationship is found to be sensitive to the extinction coefficient and to the particle size. The layer-integrated attenuated backscatter is also obtained. Comparisons between the simulations and statistically derived relationships of the layer-integrated depolarization ratio, δ, and the layer-integrated attenuated backscatter, γ′, based on measurements by the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite show that a cloud with a large effective size or a large extinction coefficient has a relatively large integrated backscatter, and a cloud with a small effective size or a large extinction coefficient has a large integrated volume depolarization ratio. The present results also show that optically thin water clouds may not obey the empirical relationship derived by Y. X. Hu and co-authors.
APA, Harvard, Vancouver, ISO, and other styles
37

Akbulut, Yagmur. "Autonomous Resource Allocation in Clouds: A Comprehensive Analysis of Single Synthesizing Criterion and Outranking Based Multiple Criteria Decision Analysis Methods." Thesis, 2014. http://hdl.handle.net/1828/5579.

Full text
Abstract:
Cloud computing is an emerging trend where clients are billed for services on a pay-per-use basis. Service level agreements define the formal negotiations between the clients and the service providers on common metrics such as processing power, memory and bandwidth. In the case of service level agreement violations, the service provider is penalised. From the service provider's point of view, providing cloud services efficiently within the negotiated metrics is an important problem. Particularly in large-scale data center settings, manual administration of resource allocation is not a feasible option. Service providers aim to maximize resource utilization in the data center as well as to avoid service level agreement violations. On the other hand, from the client's point of view, the cloud must continuously ensure enough resources for the changing workloads of hosted application environments and services. Therefore, an autonomous cloud manager capable of dynamically allocating resources to satisfy both the client's and the service provider's requirements emerges as a necessity. In this thesis, we focus on autonomous resource allocation in cloud computing environments. A distributed resource consolidation manager for clouds, called IMPROMPTU, was introduced in our previous studies. IMPROMPTU adopts a threshold-based reactive design where each unique physical machine is coupled with an autonomous node agent that manages resource consolidation independently from the rest of the autonomous node agents. In our previous studies, IMPROMPTU demonstrated the viability of Multiple Criteria Decision Analysis (MCDA) to provide resource consolidation management that simultaneously achieves lower numbers of reconfiguration events and service level agreement violations under the management of three well-known outranking-based methods called PROMETHEE II, ELECTRE III and PAMSSEM II. The interesting question of whether more efficient single synthesizing criterion and outranking based MCDA methods exist was left open for research. This thesis addresses these limitations by analysing the capabilities of IMPROMPTU using a comprehensive set of single synthesizing criterion and outranking based MCDA methods in the context of dynamic resource allocation. The performances of PROMETHEE II, ELECTRE III, PAMSSEM II, REGIME, ORESTE, QUALIFLEX, AHP and SMART are investigated through in-depth analysis of simulation results. Most importantly, the question of what denotes the properties of good MCDA methods for this problem domain is answered.
APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Wen-Teh, and 王文德. "A Virtual Cloud Storage System over Multiple Cloud Storage Providers." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/46191320315547499872.

Full text
Abstract:
Master's thesis<br>National Taiwan University<br>Graduate Institute of Electrical Engineering<br>104<br>With the growth of cloud services, cloud storage has already become part of our life. Cloud services provide users with cheap cloud storage to manage and back up data efficiently. As cloud services become commonplace, security issues follow if we still depend on a single cloud storage provider. We therefore design an algorithm to simulate RAID (Redundant Array of Independent Disks) over cloud storage. Papak is a cloud storage system that maintains data privacy without dedicated encryption by integrating this RAID simulation. We simulate a real RAID storage structure and pattern in which data is split and uploaded to multiple cloud storage providers; this method increases data privacy and removes the dependence on, and transmission bottleneck of, a single cloud storage provider. Besides, the Papak system tolerates a cloud storage provider going offline temporarily while keeping data integrity, and supports remote data rebuilding if necessary. We implement and evaluate our algorithm, simulate multiple situations, and compare it with other algorithms. The results show that our algorithm achieves better performance when processing large files or when numerous cloud storage providers are connected.
APA, Harvard, Vancouver, ISO, and other styles
39

Jiang, Hejhan, and 江和展. "Scheduling Multiple Workflows on HPC Cloud." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/58721443346465938298.

Full text
Abstract:
Master's thesis<br>National Taichung University of Education<br>Department of Computer Science<br>99<br>Cloud computing has become popular in recent years. It provides several kinds of services for various users. High Performance Computing (HPC) cloud has recently become one of the most promising cloud services; it provides on-demand high-performance computing for compute-intensive scientific and engineering applications. Many large-scale applications are constructed as workflows due to large amounts of interrelated computation and communication. Most previous research focuses on single workflow scheduling. Since a cloud has to serve many users simultaneously, how to schedule multiple workflows efficiently becomes an important issue in HPC cloud environments. Traditionally, list scheduling and clustering are the two most important workflow scheduling strategies. In this thesis, we propose a hybrid approach for multi-workflow scheduling which takes advantage of both list scheduling and clustering. For task allocation, we developed a distributed gap search scheme which outperforms existing approaches. The proposed approaches have been evaluated with a series of simulation experiments and compared to existing methods in the literature. The results indicate that our hybrid approach outperforms typical list scheduling and clustering methods significantly in terms of average makespan, with up to 12% performance improvement.
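The classic list-scheduling strategy the abstract mentions can be sketched as follows (a generic upward-rank scheduler with earliest-finish-time placement, not the thesis's hybrid algorithm or gap-search scheme): rank tasks by their longest path to the workflow exit, then greedily place each on the processor that finishes it earliest.

```python
# Generic list scheduler for a workflow DAG; all names are illustrative.
# w: task -> computation cost; succs: task -> list of successor tasks.
def upward_rank(task, succs, w, memo):
    """Length of the longest cost path from `task` to an exit task."""
    if task not in memo:
        memo[task] = w[task] + max(
            (upward_rank(s, succs, w, memo) for s in succs.get(task, [])), default=0.0)
    return memo[task]

def list_schedule(w, succs, n_procs):
    memo = {}
    order = sorted(w, key=lambda t: -upward_rank(t, succs, w, memo))
    preds = {t: [] for t in w}                     # invert the edge lists
    for t, ss in succs.items():
        for s in ss:
            preds[s].append(t)
    finish, proc_free = {}, [0.0] * n_procs
    for t in order:                                # highest rank first
        ready = max((finish[p] for p in preds[t]), default=0.0)
        p = min(range(n_procs), key=lambda i: max(proc_free[i], ready))
        start = max(proc_free[p], ready)
        finish[t] = start + w[t]
        proc_free[p] = finish[t]
    return max(finish.values())                    # makespan
```

For a diamond DAG a→{b,c}→d with costs 2, 3, 1, 2 on two processors, the schedule runs b and c in parallel and the makespan equals the critical path a→b→d of length 7.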
APA, Harvard, Vancouver, ISO, and other styles
40

Tasi, Meng-Ting, and 蔡孟廷. "Multiple Replica Provable Data Possession Mechanisms in Cloud Computing." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/38442621516959654359.

Full text
Abstract:
Master's thesis<br>Southern Taiwan University of Science and Technology<br>Department of Information Management<br>100<br>Recently, cloud data storage services have matured. This new model of data storage, however, brings many new information security problems and challenges. For example, providers of cloud storage services may, for their own benefit, hide faults that occur while storing data and not inform data owners. They may deliberately remove rarely accessed data in order to save storage space and cost. They may also deceive data owners into believing that multiple replicas of the data have been stored when, in fact, only one replica is stored. Therefore, the main objective of our research is to verify the integrity and availability of stored data in a cloud computing environment. The thesis first focuses on the security of static data stored on cloud storage devices and designs secure and efficient provable data possession protocols, which do not restrict the number of data possession verifications and allow verifying multiple servers at the same time. Furthermore, we consider dynamic data processing and also design dynamic provable data possession protocols for the cloud computing environment, extending the applicability of provable data possession protocols.
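The underlying auditing idea can be illustrated with a deliberately naive spot-checking sketch (our own toy, far simpler than the thesis protocols, which avoid transferring whole blocks): the owner keeps small per-block digests before outsourcing, then challenges the server on randomly chosen blocks.

```python
# Naive integrity spot-check; function names are illustrative assumptions.
# Real provable data possession schemes use homomorphic tags so the server
# answers with a short proof instead of returning the challenged blocks.
import hashlib
import secrets

def fingerprint_blocks(blocks):
    """The owner records only small per-block digests, not the data itself."""
    return [hashlib.sha256(b).digest() for b in blocks]

def audit(server_blocks, digests, idxs=None, sample=3):
    """Challenge the server on blocks `idxs` (random if not given); it passes
    only if the returned bytes hash to what the owner recorded."""
    if idxs is None:
        idxs = [secrets.randbelow(len(digests)) for _ in range(sample)]
    return all(hashlib.sha256(server_blocks[i]).digest() == digests[i] for i in idxs)
```

Random sampling keeps each audit cheap and lets the owner repeat it indefinitely, but a small sample can miss tampering with some probability, one of the trade-offs full provable data possession protocols are designed to improve on.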
APA, Harvard, Vancouver, ISO, and other styles
41

Shen, Tang-Ming, and 沈堂名. "Applying Multiple Attribute Decision Methods to IaaS Cloud Service Selection." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/53850125953115860969.

Full text
Abstract:
Master's thesis<br>National Kaohsiung University of Applied Sciences<br>Department of Information Management (in-service master's program)<br>102<br>Cloud computing is an emerging business model. Many cloud computing related issues were discussed in recent editions of Gartner's annual report on top 10 strategic technology trends. Its advantages are gradually being recognized and adopted by enterprises, and the resulting business opportunities have made the cloud computing industry develop dramatically. It is an unavoidable trend for business organizations to adjust their information infrastructure: more and more organizations transfer their information systems from capital assets into variable costs by buying leased services. As they adopt more cloud infrastructure services, enterprises outsource parts of their existing IT systems and limit the expansion of their IT centers. It is a critical issue for enterprises to evaluate their current IT resources and choose, from the various combinations proposed by competing cloud service providers, the one that meets their requirements. This research applies the evaluation architecture of the Simple Multi-Attribute Ranking Technique to evaluate proposals provided by cloud service suppliers. It also applies monitoring tools and weight-ranking techniques to give IT staff an automatic judgment method for migrating information systems to cloud services and selecting the optimal, cost-effective solution that meets system requirements.
APA, Harvard, Vancouver, ISO, and other styles
42

Chang, Chuang Hung, and 莊鴻璋. "The Integration and Development Research of Multiple Cloud Application Services." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/y45cmz.

Full text
Abstract:
Master's thesis<br>National Kaohsiung University of Applied Sciences<br>Department of Computer Science and Information Engineering<br>104<br>People with a little experience may think that finding a computer, installing web server software, and connecting it to the network with a fixed IP is enough to run a web server. This is true, but only for hobby or experimental sites; the stable operation of a corporate website involves many details that non-professionals cannot take into account. At least one server host is needed, which easily costs tens of thousands; even an inexpensive personal computer costs around ten thousand. Some people may use a NAS to act as a server, but NAS CPU/RAM performance is poor; we do not recommend it for an official website once daily visitors exceed 2,000 or once a CMS or other resource-hungry software is installed, because the NAS bottleneck quickly emerges. In addition, electrical systems, firewalls, and air-conditioning or water-cooling circulation systems must be maintained; these are facilities that a professional machine room has and that individuals cannot prepare on their own. The host must run 24 hours a day, 365 days a year, stay continuously online, and have a backup plan in case of failure: high-value information, such as membership lists or order databases, causes big problems if damaged beyond repair. Without a firewall, the host is more likely to be hacked, whereas a professional machine room has personnel access control so that unauthorized persons cannot easily reach the hosts. Beyond these costs, labor time must also be considered: if the time of skilled IT staff is spent managing hosts, it cannot be used for more productive work. Content management systems (CMS) generally require a LAMP environment, which demands basic knowledge of Linux together with Mail, FTP, HTTP, DNS and other settings; if these are not configured well, a site cannot be built or some functions will not work. Over the last two or three years, several different cloud platforms have been used, such as AWS (Amazon Web Services), Google Cloud Platform, Alibaba Cloud and, through recent cooperation with the domestic Foxconn (Snow Mountain cloud), to construct the integration and development of multi-cloud application services.
APA, Harvard, Vancouver, ISO, and other styles
43

Dong-Yang, Tang, and 唐東暘. "Multiple QoS-aware Selection of Cloud Service based on Genetic Algorithm." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/17992895391910145238.

Full text
Abstract:
Master's thesis<br>Tunghai University<br>Department of Computer Science and Information Engineering<br>101<br>In recent years, cloud computing has provided services based on a service-oriented architecture, in which cloud applications can be implemented by service composition. An important challenge is how to select a service for each task involved in a composite service such that the overall QoS of the composite service is optimal, including customer-focused attributes such as response time, cost, reliability and availability. QoS-based service selection is a combinatorial problem and a kind of NP-hard problem. This study investigated how to optimize the selection problem and proposes a genetic algorithm-based method that considers many QoS attributes in the gene encoding. Through the evolution process, the genes converge to more suitable service selections that ensure service quality.
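The encoding the abstract implies can be sketched with a toy genetic algorithm (our own illustrative assumptions, not the thesis code): a chromosome picks one candidate service per task, and fitness aggregates normalized QoS attributes, here just response time and cost, both lower-is-better.

```python
# Toy GA for QoS-aware service selection; names, weights, and operators
# (elitism, one-point crossover, point mutation) are illustrative choices.
import random

def fitness(chromo, candidates):
    # candidates[task][service] = (response_time, cost); lower is better,
    # so fitness is the negated weighted sum.
    time = sum(candidates[t][g][0] for t, g in enumerate(chromo))
    cost = sum(candidates[t][g][1] for t, g in enumerate(chromo))
    return -(0.5 * time + 0.5 * cost)

def ga_select(candidates, pop=20, gens=40, seed=1):
    rng = random.Random(seed)
    n = len(candidates)
    popu = [[rng.randrange(len(candidates[t])) for t in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda c: fitness(c, candidates), reverse=True)
        elite = popu[: pop // 2]                   # keep the best half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)              # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # occasional mutation
                t = rng.randrange(n)
                child[t] = rng.randrange(len(candidates[t]))
            children.append(child)
        popu = elite + children
    return max(popu, key=lambda c: fitness(c, candidates))
```

With m candidate services per task and n tasks the search space has m^n points, which is why the abstract treats exhaustive selection as intractable and turns to evolutionary search.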
APA, Harvard, Vancouver, ISO, and other styles
44

Chang, Wei-Chun, and 張位群. "Real-time 3D Rendering Based on Multiple Cameras and Point Cloud." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/01223749333352243814.

Full text
Abstract:
Master's thesis<br>National Central University<br>Department of Computer Science and Information Engineering<br>102<br>3D model construction techniques have become very popular in recent decades. The movie "Avatar" is a milestone of this research: its surroundings, characters and alien creatures are so lifelike that viewers are impressed and shocked by the impact of virtual reality. The video games World of Warcraft and Diablo are other successful instances of 3D modelling. Kinect is a 3D sensor produced by Microsoft, widely used in many research fields such as computer vision and computer graphics. With Kinect, 3D data can be acquired efficiently; in other words, the distances between the Kinect and all positions in its field of view can be captured reliably. Using these data, Kinect is able to divide the image into foreground (human) and background quickly and easily extract the geometry of the human body. This thesis therefore integrates traditional model building techniques with existing research on Kinect, and proposes a real-time 3D point cloud display method using multiple Kinects.
APA, Harvard, Vancouver, ISO, and other styles
45

ZHAO, BO-XU, and 趙伯勗. "Multiple moving object detection and tracking method using point cloud segmentation." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/q65u84.

Full text
Abstract:
Master's thesis<br>National Yunlin University of Science and Technology<br>Department of Computer Science and Information Engineering<br>106<br>In this thesis, a moving object detection and tracking method using point cloud segmentation is proposed for multiple targets. LIDAR systems are widely used in autonomous systems. In an ego-motion system, identifying moving objects from scene point clouds obtained by a mobile LIDAR is an interesting research topic. The proposed method can detect moving objects within a moving scene, and the information about moving objects, e.g., relative velocity, can be used for collision avoidance in a driverless vehicle. The proposed approach consists of five steps: (1) point cloud capturing, (2) ground point removal, (3) segmentation, (4) foreground and background detection, and (5) moving object tracking. First, the 3D point cloud scene is retrieved by a LIDAR mounted on the ego-motion system. Then, in order to reduce the computational complexity, ground points are removed by a ground detection algorithm. In the third step, the remaining points are grouped and segmented by a voxel grouping method to eliminate noise points and form objects. The velocities of objects are computed with respect to the ego-motion system to distinguish the foreground (moving objects) from the background (static objects). Finally, a Kalman filter is used to track moving objects and predict their positions, which can be used for collision avoidance.
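The tracking step can be illustrated with a constant-velocity Kalman filter for a single object (an assumed simplification of the tracker described above, not the thesis code): feed in the centroid measurements of a segmented cluster and predict where it will be next.

```python
# Illustrative 2D constant-velocity Kalman filter; the state is
# (x, y, vx, vy) and only (x, y) is measured. Names and noise levels
# are illustrative assumptions.
import numpy as np

def kalman_track(measurements, dt=0.1, q=1e-3, r=1e-2):
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)           # constant-velocity motion model
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)           # we only observe position
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4)
    Q, R = q * np.eye(4), r * np.eye(2)
    for z in measurements[1:]:
        x, P = F @ x, F @ P @ F.T + Q                      # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
        x = x + K @ (np.asarray(z, float) - H @ x)         # update
        P = (np.eye(4) - K @ H) @ P
    return F @ x             # expected next state: (x, y, vx, vy) one step ahead
```

The one-step-ahead state returned at the end is the "expected position" the abstract mentions, which a collision-avoidance module can compare against the planned ego trajectory.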
APA, Harvard, Vancouver, ISO, and other styles
46

Baptista, Arménio Ferreira. "Digital management of multiple advertising displays." Master's thesis, 2018. http://hdl.handle.net/10773/25886.

Full text
Abstract:
The technological boom of the last decade has impacted the retail sector in several ways. Captivating customers through smart advertising, engaging them in the retail process, and enhancing their experience has long been a desideratum in this industry, and recent technology makes unprecedented approaches to these goals possible. In this thesis, we present a solution based on a series of autonomous stations (either static, such as monitors, or mobile, such as autonomous robots) that can be used for any type of multimedia advertising across one or multiple entities. The core of the presented solution is a web server that stores the uploaded contents and exposes a web dashboard for their management. Registered users can manage and distribute the contents through the connected terminals (or agents), the only requirement being a network connection between the server and the agents. We present results of deploying this system at several research events that took place in the local academic environment, where it was used to automate the dissemination and advertising of local research works. Ultimately, we expect to extend its use to the retail sector in an attempt to impact modern advertising.<br>Mestrado em Engenharia de Computadores e Telemática (Master's in Computer and Telematics Engineering)
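The server-dashboard-agents architecture the abstract describes can be sketched as a small in-memory registry. This is a hypothetical illustration, not the thesis's implementation: the class and method names (`ContentServer`, `distribute`, the agent names) are invented for the example, and a real deployment would use a web framework and persistent storage.

```python
# Minimal sketch of the content-management core: a server-side registry that
# stores uploaded contents and assigns them to registered display agents.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    playlist: list = field(default_factory=list)

class ContentServer:
    def __init__(self):
        self.contents = {}   # content id -> payload (e.g. a media file path)
        self.agents = {}     # agent name -> Agent

    def register_agent(self, name):
        self.agents[name] = Agent(name)

    def upload(self, content_id, payload):
        self.contents[content_id] = payload

    def distribute(self, content_id, agent_names):
        # A dashboard user assigns a content item to a subset of terminals.
        for name in agent_names:
            self.agents[name].playlist.append(self.contents[content_id])

server = ContentServer()
server.register_agent("lobby-monitor")
server.register_agent("robot-1")
server.upload("poster42", "research_poster.mp4")
server.distribute("poster42", ["lobby-monitor"])
print(server.agents["lobby-monitor"].playlist)  # ['research_poster.mp4']
```

The only coupling between server and agents is the shared registry, mirroring the abstract's point that a network connection is the sole requirement.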
APA, Harvard, Vancouver, ISO, and other styles
47

HUANG, WEI-CHIANG, and 黃韋強. "Apply Multiple Gas Sensor to Air Quality Monitoring System with Cloud Applications." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/01363625717489682603.

Full text
Abstract:
Master's thesis<br>Chung Hua University<br>Department of Electrical Engineering<br>2016 (academic year 105)<br>With the rapid progress of science and technology, environmental pollution has become increasingly serious. Air pollution is the most pervasive form: people are constantly exposed to it yet often cannot protect themselves in time, which can cause considerable harm. We therefore integrate an MQ-9 (carbon monoxide) sensor, an MQ-135 (smoke and harmful gas) sensor, and a DHT11 (humidity and temperature) sensor to detect abnormal conditions and raise alarms, so that users can assess air quality, take protective measures, and reduce the hazards caused by air pollution. In this study, we develop sensing circuits to detect humidity, temperature, carbon monoxide, combustible gas, and smoke, and use an embedded microcontroller system (an Arduino UNO R3 development board) to integrate their outputs. When the sensing data are abnormal, the system immediately issues a warning signal and transmits it to personal cloud space through a WiFi module. Combined with a smartphone app, this lets users monitor air quality remotely. This thesis proposes an "Apply Multiple Gas Sensor to Air Quality Monitoring System with Cloud Applications" for smart living: by combining the air quality monitoring system with a smartphone, all sensing data can be sent to personal cloud space over the Internet, so that wherever the Internet is available, people can check air quality at any time and from anywhere.
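The alarm path the abstract describes, comparing each sensor's reading against a limit and warning on abnormal values, can be sketched in a few lines. The thresholds below are illustrative placeholders, not the thesis's calibrated values, and the sensor names are invented for the example.

```python
# Hedged sketch of threshold-based air-quality alerting: return the sensors
# whose readings exceed their (assumed) limits so a warning can be issued.
THRESHOLDS = {"co_ppm": 35, "smoke_ppm": 100, "temp_c": 45}

def check_readings(readings):
    """Return the list of sensor names whose readings exceed their thresholds."""
    return [name for name, value in readings.items()
            if value > THRESHOLDS.get(name, float("inf"))]

alerts = check_readings({"co_ppm": 50, "smoke_ppm": 20, "temp_c": 22})
print(alerts)  # ['co_ppm']
```

On the actual device this check would run on the Arduino and trigger the WiFi upload; here it only demonstrates the decision logic.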
APA, Harvard, Vancouver, ISO, and other styles
48

Chen, Jun-Yi, and 陳俊毅. "Study on Software Defined Networking for Cloud System across Multiple Network Domains." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/p9p8wh.

Full text
Abstract:
Master's thesis<br>National Kaohsiung University of Applied Sciences<br>Department of Computer Science and Information Engineering<br>2013 (academic year 102)<br>Cloud computing has emerged as a promising paradigm for cost-efficient and pervasive service delivery across data communication networks. While the computing and distributed-systems issues of cloud technologies have been extensively investigated, less attention has been devoted to the unique set of networking challenges in a cloud system. In this thesis, we propose a platform to dynamically build and manage virtual networks across multiple data centers. The specific contributions are the following. First, we propose a novel mechanism called VirtualTransits to transparently extend a virtual network across one or more data centers. Second, we present an integrated system that incorporates several important data-center networking schemes into a coherent platform enabling the dynamic configuration and management of virtual networks both intra-cloud and inter-cloud. Third, we provide performance measurements from an implementation on real production networks.
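The idea of extending one virtual network across several data centers can be modeled, at its simplest, as a controller-side table mapping each virtual network to its member hosts per data center, from which the inter-datacenter links needed to span the network follow. The internals of VirtualTransits are not given in the abstract; this sketch with invented names only illustrates that mapping.

```python
# Hedged sketch: derive which data-center pairs need a transit link so that a
# virtual network's members in different data centers appear on one network.
from itertools import combinations

class VirtualNetworkTable:
    def __init__(self):
        self.members = {}   # vnet id -> {datacenter: set of hosts}

    def add_host(self, vnet, datacenter, host):
        self.members.setdefault(vnet, {}).setdefault(datacenter, set()).add(host)

    def transit_links(self, vnet):
        # A transit link is needed between every pair of data centers
        # that both hold members of this virtual network.
        dcs = sorted(self.members.get(vnet, {}))
        return list(combinations(dcs, 2))

table = VirtualNetworkTable()
table.add_host("vnet-a", "dc1", "vm1")
table.add_host("vnet-a", "dc2", "vm2")
table.add_host("vnet-a", "dc3", "vm3")
print(table.transit_links("vnet-a"))
```

A real SDN controller would program these links as tunnels or flow rules; the table only captures which spans must exist.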
APA, Harvard, Vancouver, ISO, and other styles
49

Lee, Huichen, and 李慧貞. "Enhancing Data Availability and Reliability of Cloud Computing through Multiple Hadoop Clusters." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/52244745882445906249.

Full text
Abstract:
Master's thesis<br>Fu Jen Catholic University<br>Master's Program, Department of Computer Science and Information Engineering<br>2013 (academic year 102)<br>Cloud computing has been widely used in different areas recently. Hadoop is a very popular platform adopted in cloud computing. It provides a framework that enables the distributed processing of large data sets across clusters of computers using simple programming models. The Hadoop Distributed File System (HDFS) is the default file system in Hadoop. The HDFS architecture consists of two parts, NameNode and DataNode. A typical HDFS cluster consists of one name node (the NameNode) and multiple data nodes (DataNodes). The NameNode maintains the metadata of the entire namespace, while the DataNodes store all data blocks in the cluster. To achieve reliability and availability, HDFS by default keeps three copies of each file block among different DataNodes. However, keeping multiple file block replicas alone cannot assure the high data reliability and availability one would like HDFS to have, because a single NameNode failure can bring down the entire HDFS service. Without access to the metadata managed by the NameNode, there is no way to access files in a Hadoop cluster, no matter how many block replicas HDFS holds. Even though the use of a BackupNode or an AvatarNode has improved the reliability of the NameNode, one still cannot exclude the possibility of losing NameNode service in unexpected accidents such as fires or earthquakes. As a result, it is imperative to keep important files duplicated in multiple Hadoop clusters to achieve high data availability and reliability. Currently, distcp is the only tool for copying files from one (source) Hadoop cluster to another (remote destination) Hadoop cluster. For files with remote duplicates, whenever their contents grow through data appending, the remote duplicates must be synchronized to maintain data consistency. distcp not only requires users to execute it manually; it also always transfers the entire contents of a file from the source cluster to the destination cluster, which can waste a great amount of time and network bandwidth. To overcome this problem, we designed and implemented an efficient scheme in HDFS that conducts real-time synchronization of files duplicated among different Hadoop clusters, achieving high data availability and reliability. Consequently, users have a better chance of accessing their important files from one of the synchronized Hadoop clusters in case of a cloud failure. Our experimental results show that, compared with distcp, our method reduces the required time by up to 99.20%. Compared with the original data appending with no remote duplication, our scheme increases the appending time by between 13.74% and 50.50%, depending on the size of the data appended, while high data reliability and availability are achieved.
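The efficiency argument against a full re-copy can be illustrated with a toy delta-sync: if the destination replica reports its current length, only the suffix appended since the last sync needs to cross the network. This is a simplified sketch of the general idea, not the thesis's HDFS implementation, which operates on HDFS blocks rather than byte strings.

```python
# Illustrative sketch: bring a lagging replica up to date by transferring only
# the newly appended bytes, instead of the whole file as distcp would.
def sync_append(source: bytes, dest: bytearray) -> int:
    """Append to dest the bytes of source past dest's current length.

    Returns the number of bytes that had to be transferred.
    """
    offset = len(dest)        # destination replica's current length
    delta = source[offset:]   # only the newly appended suffix
    dest.extend(delta)
    return len(delta)

primary = b"block-1block-2block-3"
replica = bytearray(b"block-1block-2")   # remote copy lags by one append
sent = sync_append(primary, replica)
print(sent, bytes(replica) == primary)   # 7 True
```

Here 7 bytes are sent instead of 21, and the saving grows with file size, which is consistent with the large time reductions the abstract reports over distcp.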
APA, Harvard, Vancouver, ISO, and other styles
50

Yo, Chun-Wei, and 游鈞為. "Design and Implementation of Cloud-based Multiple Video Streaming Recording Service Platform." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/03403887359335718898.

Full text
Abstract:
Master's thesis<br>National Tsing Hua University<br>Department of Computer Science<br>2010 (academic year 99)<br>With the flourishing development of the Internet, alongside advances in bandwidth technology and wireless communication applications, the concept of the cloud has gradually taken shape, and the maturity of cloud technology has enabled innovation in new service models. Meanwhile, multimedia audio and video streaming has long been a major class of Internet applications, and multimedia services have evolved through different models along with these changes. Traditional multimedia Internet services mostly adopt a client-server model, which has two clear drawbacks compared with cloud technology. First, the load on the server cannot elastically match actual usage; providers cannot control the cost of the computing capacity they operate and face either sudden overload or wasted computing power. Second, the infrastructure cannot be expanded effectively as the service grows, so whenever demand rises sharply the existing hardware and software become insufficient and the whole system must be rebuilt. Therefore, in this thesis we build a system platform on an IaaS architecture that supports dynamic resource allocation. The platform targets the recording of today's Internet multimedia streaming services, serves user requests in real time, and provides a complete application programming interface module. Drawing on cloud computing architecture, it improves on the traditional client-server model and successfully integrates cloud technology with the existing system architecture. For the cloud resource allocation problem, we propose an improved dynamic allocation algorithm as a new system-control module, and we also propose a mechanism to solve the temporal synchronization problem among multiple audio/video streams. In addition, a series of measurements shows that the platform can greatly reduce the computation load, lower the bandwidth burden, and improve overall performance. We also offer suggestions for improvement and future directions to make the platform more complete.
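The kind of elastic, demand-driven allocation the abstract contrasts with the fixed client-server model can be sketched as a simple scaling policy: size the pool of recording instances to the current number of active streams, within capacity bounds. The parameters and function name below are illustrative assumptions, not the thesis's improved algorithm.

```python
# Hedged sketch of demand-driven instance allocation on an IaaS platform:
# enough recording instances for current streams, bounded by min/max capacity.
import math

def instances_needed(active_streams, streams_per_instance=4,
                     min_instances=1, max_instances=10):
    """Return how many recording instances to run for the given demand."""
    needed = math.ceil(active_streams / streams_per_instance)
    return max(min_instances, min(needed, max_instances))

print(instances_needed(0))    # 1  (keep one warm instance)
print(instances_needed(9))    # 3
print(instances_needed(100))  # 10 (capped at available capacity)
```

The point of such a policy is exactly the elasticity the abstract calls for: capacity tracks demand instead of being provisioned once for the worst case.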
APA, Harvard, Vancouver, ISO, and other styles