Dissertations / Theses on the topic 'BIM Cloud'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'BIM Cloud.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Magda, Jakub. "Využití laserového skenování v informačním modelování budov." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2020. http://www.nusl.cz/ntk/nusl-414311.
Alreshidi, Eissa. "Towards facilitating team collaboration during construction project via the development of cloud-based BIM governance solution." Thesis, Cardiff University, 2015. http://orca.cf.ac.uk/88955/.
Longo, Rosario Alessandro. "Dalla generazione di modelli 3D densi mediante TLS e fotogrammetria alla modellazione BIM." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13284/.
Staufčík, Jakub. "Využití laserového skenování v informačním modelování budov." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2019. http://www.nusl.cz/ntk/nusl-400177.
Taher, Abdo, and Benyamin Ulger. "Tillämpning av BIM i ett byggnadsprojekt : Centrum för idrott och kultur i Knivsta." Thesis, Mälardalens högskola, Akademin för ekonomi, samhälle och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55198.
Purpose: The purpose of this study is to describe how the CIK project was executed. Furthermore, it is investigated how the CIK project could have been designed with BIM as a working method and what changes that would have entailed for the project.

Method: This study consisted of a literature study of BIM and a case study of the CIK project. The case study included an object description, and documents and interviews were analyzed. In addition, remodeling and observations of the CIK project were performed with BIM tools.

Results: The results initially show how traditional working methods were applied in the CIK project. The initial lack of requirements created ambiguities. The architects used traditional sketching methods for the design. Calculation and scheduling were handled separately from the 3D model. The 3D models in the CIK project were used for visualizations and coordination; however, the contracted documents for the various deliveries were traditional 2D drawings. The architects produced over 300 drawings and over 100 different doors that were presented on separate drawings. At the end of production, selected PDF documents were re-stamped in consultation with the customers for the administration phase. Furthermore, the results show how the CIK project could have been carried out with a BIM approach. Initially, a BIM manual is created by a BIM strategist to specify the requirements. During design, parametric and generative design are used to find different solutions that meet the requirements. The calculation and schedule must be linked to the BIM model. All information management takes place in cloud services such as BIMeye. Through StreamBIM, the production stage then retrieves all the necessary information from the BIM model. Additional detailed drawings should be linked to the objects in StreamBIM. During production, the BIM model is updated before delivery to the customers.

Conclusions: The conclusion that can be drawn from this study is that information management is an important aspect to address during the construction process. For the CIK project, BIM would mean a completely new way of managing and centralizing information by linking data to the objects in the model. The point of BIM projects is to constantly turn to the model, or to the database linked to the model, to retrieve the necessary information. The result is therefore a release from the information islands and the duplication of work that arise in a traditional design process.
Thomson, C. P. H. "From point cloud to building information model : capturing and processing survey data towards automation for high quality 3D models to aid a BIM process." Thesis, University College London (University of London), 2016. http://discovery.ucl.ac.uk/1485847/.
Eliasson, Oscar, and Adam Söderberg. "Projektering av dörrmiljöer - metoder och informationshantering." Thesis, Tekniska Högskolan, Jönköping University, JTH, Byggnadsteknik och belysningsvetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-50368.
Purpose: Construction projects have become more and more complex, with increasing demands on quality, environment, and sustainability. With this development, BIM models have become a major part of construction projects. The requirements and complexity create large amounts of information in the projects, which makes the models heavy and hard to work with. One subprocess that contributes to the complexity and amount of information is the design of door environments. Because the models are growing, solutions are being designed to facilitate information management linked to the models. One such solution is databases that are linked to the model. The study therefore intends to investigate whether a database service linked to a model can make the design of door environments more efficient.

Method: To achieve the goal and answer the study's questions, a qualitative approach was used. A literature study was done to gather facts for the problem description and theoretical framework. The empirical material is based on six semi-structured interviews as well as an observation and a test of BIMeye. Based on the empirical material, the questions and the selected theories, an analysis was carried out.

Findings: The result shows that the design of door environments depends on all requirements being identified at the beginning of a project, and that knowledgeable project members are required during door design in order to identify the functions of the complex door environments. To give a clearer presentation of the doors and their functions, the respondents preferred door environment drawings where each door is presented separately. Problems with coordinating information, and with information that disappears, can be solved by gathering the information in a database so that it is stored in one place. There it can be made available for processing by everyone involved in the project. The study shows that cloud-based databases linked to BIM models can streamline information management during design work, as such a database becomes the source of information for several different tools.

Implications: Door environments will remain complex and contain a large number of functions. To facilitate and streamline the work process, the cloud service BIMeye or a similar service can be used. Such a service contributes to more secure information management and reduces the number of omissions during review. When transitioning to a database-based way of working, it is recommended, based on the study's results, that the employees receive training and that a standardized workflow be developed.

Limitations: The work was limited to studying the design process for door environments. It is therefore uncertain whether the results of the study can be applied to other subprocesses during the design phase. The study was also delimited to BIMeye, which may make the result inapplicable to other similar cloud services. Furthermore, it cannot be ruled out that the result does not apply to ArchiCad and Simplebim, because the study focused on Revit and its connection to BIMeye.
Martinini, Elena. "Building Information Modeling: analisi e utilizzo." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8272/.
Penk, David. "Vyhotovení 3D modelu části budovy SPŠ stavební Brno." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2021. http://www.nusl.cz/ntk/nusl-444256.
Haltmar, Jan. "Využití laserového skenování v informačním modelování budov." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2019. http://www.nusl.cz/ntk/nusl-400156.
Crabtree Gärdin, David, and Alexander Jimenez. "Optical methods for 3D-reconstruction of railway bridges : Infrared scanning, Close range photogrammetry and Terrestrial laser scanning." Thesis, Luleå tekniska universitet, Byggkonstruktion och brand, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-67716.
Massafra, Angelo. "La modellazione parametrica per la valutazione degli stati deformativi delle capriate lignee con approccio HBIM. Evoluzione della fabbrica e della copertura del teatro comunale di Bologna." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.
Lukášová, Pavlína. "Cloud Computing jako nástroj BCM." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-75556.
Anagnostopoulos, Ioannis. "Generating As-Is BIMs of existing buildings : from planar segments to spaces." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/281699.
Islam, Md Zahidul. "A Cloud Based Platform for Big Data Science." Thesis, Linköpings universitet, Programvara och system, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-103700.
Talevi, Iacopo. "Big Data Analytics and Application Deployment on Cloud Infrastructure." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14408/.
McCaul, Christopher Francis. "Big Data: Coping with Data Obesity in Cloud Environments." Thesis, Ulster University, 2017. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.724751.
Martins, Pedro Miguel Pereira Serrano. "Evaluation and optimization of a session-based middleware for data management." Master's thesis, Faculdade de Ciências e Tecnologia, 2014. http://hdl.handle.net/10362/12609.
The current massive daily production of data has created an unprecedented opportunity for information extraction in many domains. However, this huge rise in the quantity of generated data that needs to be processed, stored, and delivered on time has created several new challenges. In an effort to address these challenges, [Dom13] proposed a middleware built around the concept of a Session, capable of dynamically aggregating, processing and disseminating large amounts of data to groups of clients depending on their interests. However, this middleware is deployed on a commercial cloud with limited processing support in order to reduce its costs. Moreover, it does not explore the scalability and elasticity capabilities provided by the cloud infrastructure, which presents a problem even if the associated costs may not be a concern. This thesis proposes to improve the middleware's performance and to add the capability of scaling inside a cloud by requesting or dismissing additional instances. Additionally, this thesis also addresses the scalability and cost problems by exploring alternative deployment scenarios for the middleware that consider free infrastructure providers and open-source cloud management providers. To achieve this, an extensive evaluation of the middleware's architecture is performed using a profiling tool and several test applications. This information is then used to propose a set of solutions for the performance and scalability problems, after which a subset of these is implemented and tested again to evaluate the gained benefits.
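As a rough illustration of the scale-out/scale-in capability described in this abstract, the sketch below shows a simple threshold-based policy that decides whether to request or dismiss instances from the observed load. The function name, thresholds and load metric are hypothetical assumptions for illustration only and are not taken from the thesis.

```python
def scaling_decision(queued_requests: int, instances: int,
                     high_water: float = 50.0, low_water: float = 10.0) -> int:
    """Return +1 to request an instance, -1 to dismiss one, 0 to hold."""
    if instances == 0:
        return 1
    load_per_instance = queued_requests / instances
    if load_per_instance > high_water:
        return 1
    if load_per_instance < low_water and instances > 1:
        return -1
    return 0

print(scaling_decision(queued_requests=600, instances=4))  # 1 -> scale out
print(scaling_decision(queued_requests=20, instances=4))   # -1 -> scale in
```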
Navarro, Martín Joan. "From cluster databases to cloud storage: Providing transactional support on the cloud." Doctoral thesis, Universitat Ramon Llull, 2015. http://hdl.handle.net/10803/285655.
Over the past three decades, technology constraints (e.g., capacity of storage devices, communication network bandwidth) and an ever-increasing set of user demands (e.g., information structures, data volumes) have driven the evolution of distributed databases. Since the flat-file data repositories developed in the early eighties, there have been important advances in concurrency control algorithms, replication protocols, and transaction management. However, modern concerns in data storage posed by Big Data and cloud computing—aimed at overcoming the scalability and elasticity limitations of classic databases—are pushing practitioners to relax some important properties featured by transactions, which excludes several applications that are unable to fit in this strategy due to their intrinsic transactional nature. The purpose of this thesis is to address two important challenges still latent in distributed databases: (1) the scalability limitations of transactional databases and (2) providing transactional support on cloud-based storage repositories. Analyzing the traditional concurrency control and replication techniques, used by classic databases to support transactions, is critical to identify the reasons that make these systems degrade their throughput when the number of nodes and/or the amount of data rockets. Besides, this analysis is devoted to justifying the design rationale behind cloud repositories, in which transactions have generally been neglected. Furthermore, enabling applications which are strongly dependent on transactions to take advantage of the cloud storage paradigm is crucial for their adaptation to current data demands and business models. This dissertation starts by proposing a custom protocol simulator for static distributed databases, which serves as a basis for revising and comparing the performance of existing concurrency control protocols and replication techniques. As this thesis is especially concerned with transactions, the effects of different transaction profiles under different conditions on database scalability are studied. This analysis is followed by a review of existing cloud storage repositories—which claim to be highly dynamic, scalable, and available—leading to an evaluation of the parameters and features that these systems have sacrificed in order to meet current large-scale data storage demands. To further explore the possibilities of the cloud computing paradigm in a real-world scenario, a cloud-inspired approach to store data from Smart Grids is presented. More specifically, the proposed architecture combines classic database replication techniques and epidemic update propagation with the design principles of cloud-based storage. The key insights collected when prototyping the replication and concurrency control protocols in the database simulator, together with the experiences derived from building a large-scale storage repository for Smart Grids, are wrapped up into what we have coined Epidemia: a storage infrastructure conceived to provide transactional support on the cloud. In addition to inheriting the benefits of highly-scalable cloud repositories, Epidemia includes a transaction management layer that forwards client transactions to a hierarchical set of data partitions, which allows the system to offer different consistency levels and elastically adapt its configuration to incoming workloads. Finally, experimental results highlight the feasibility of our contribution and encourage practitioners to further research in this area.
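The routing idea described above can be pictured with the following much-simplified sketch: each write is forwarded to a partition chosen from the key, and the number of replicas updated synchronously depends on the consistency level requested by the client. The Partition class, level names and replica layout are illustrative assumptions, not Epidemia's actual interface.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Partition:
    name: str
    replicas: list = field(default_factory=list)   # ordered dicts: primary first

    def write(self, key, value, consistency="eventual"):
        # "strong": synchronously update every replica; "eventual": primary only
        # (background propagation is left out of this sketch).
        targets = self.replicas if consistency == "strong" else self.replicas[:1]
        for replica in targets:
            replica[key] = value
        return len(targets)   # number of replicas acknowledged

def route(partitions, key):
    """Pick a partition deterministically from the key's hash."""
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return partitions[digest % len(partitions)]

partitions = [Partition(f"p{i}", replicas=[{}, {}, {}]) for i in range(4)]
print(route(partitions, "meter-42").write("meter-42", 17.3, consistency="strong"))  # 3
```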
Kemp, Gavin. "CURARE : curating and managing big data collections on the cloud." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1179/document.
The emergence of new platforms for decentralized data creation, such as sensor and mobile platforms, and the increasing availability of open data on the Web are adding to the number of data sources inside organizations and bring an unprecedented volume of Big Data to be explored. The notion of data curation has emerged to refer to the maintenance of data collections and the preparation and integration of datasets, combining them to perform analytics. Curation tasks include extracting explicit and implicit meta-data, and matching and enriching semantic metadata to add quality to the data. Next-generation data management engines should promote techniques with a new philosophy to cope with the deluge of data. They should aid the user in understanding the data collections' content and provide guidance to explore data. A scientist can explore data collections stepwise and stop when the content and quality reach a satisfactory point. Our work adopts this philosophy, and the main contribution is a data collection curation approach and exploration environment named CURARE. CURARE is a service-based system for curating and exploring Big Data. CURARE implements a data collection model that we propose, used for representing collection content in terms of structural and statistical meta-data organised under the concept of a view. A view is a data structure that provides an aggregated perspective of the content of a data collection and its several associated releases. CURARE provides tools focused on computing and extracting views using data analytics methods, as well as functions for exploring (querying) meta-data. Exploiting Big Data requires a substantial number of decisions to be made by data analysts to determine the best way to store, share and process data collections in order to get the maximum benefit and knowledge from them. Instead of manually exploring data collections, CURARE provides tools, integrated in an environment, for assisting data analysts in determining the best collections that can be used for achieving an analytics objective. We implemented CURARE and explain how to deploy it on the cloud using data science services on top of which CURARE services are plugged. We have conducted experiments to measure the cost of computing views based on datasets from Grand Lyon and Twitter, to provide insight into the interest of our data curation approach and environment.
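As a toy illustration of the "view" concept described above, i.e. aggregated structural and statistical meta-data over a data collection, the sketch below summarises a list of records. The field names and statistics are assumptions for illustration and do not reproduce CURARE's actual schema.

```python
import statistics

def build_view(records):
    """Summarise a list of dict records into compact structural/statistical meta-data."""
    keys = sorted({k for r in records for k in r})
    numeric = {k: [r[k] for r in records if isinstance(r.get(k), (int, float))] for k in keys}
    return {
        "n_records": len(records),
        "attributes": keys,
        "stats": {k: {"min": min(v), "max": max(v), "mean": statistics.mean(v)}
                  for k, v in numeric.items() if v},
    }

sample = [{"station": "A", "pm10": 21.0}, {"station": "B", "pm10": 35.5}]
print(build_view(sample))
```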
Huang, Xueli. "Achieving Data Privacy and Security in Cloud." Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/372805.
The growing concerns about the privacy of data stored in public clouds have restrained the widespread adoption of cloud computing. The traditional method to protect data privacy is to encrypt data before it is sent to the public cloud, but this approach always introduces heavy computation, especially for image and video data, which are far larger than text data. Another way is to take advantage of a hybrid cloud by separating sensitive data from non-sensitive data and storing them in a trusted private cloud and an un-trusted public cloud respectively. But if we adopt this method directly, all the images and videos containing sensitive data have to be stored in the private cloud, which makes the method meaningless. Moreover, the emergence of the Software-Defined Networking (SDN) paradigm, which decouples the control logic from the closed and proprietary implementations of traditional network devices, enables researchers and practitioners to design new innovative network functions and protocols in a much easier, more flexible, and more powerful way. The data plane asks the control plane to update flow rules when the data plane receives new network packets that it does not know how to handle, and the control plane then dynamically deploys and configures flow rules according to the data plane's requests, which allows the whole network to be managed and controlled efficiently. However, this kind of reactive control model could be exploited by hackers launching Distributed Denial-of-Service (DDoS) attacks that send large amounts of new requests from the data plane to the control plane.

For image data, we divide the image into pieces of equal size to speed up the encryption process, and propose two kinds of methods to cut the relationship between the edges. One is to add random noise to each piece; the other is to design a one-to-one mapping function for each piece that maps each pixel value to a different one, which cuts off the relationship between pixels as well as the edges. Our mapping function takes a random parameter as input so that each piece can randomly choose a different mapping. Finally, we shuffle the pieces with another random parameter, which makes the problem of recovering the shuffled image NP-complete.

For video data, we propose two different methods, one for intra frames (I-frames) and one for inter frames (P-frames), based on their different characteristics. A hybrid selective video encryption scheme for H.264/AVC based on the Advanced Encryption Standard (AES) and the video data themselves is proposed for I-frames. For each P-slice of a P-frame, we extract only a small part of it into the private cloud, based on the characteristics of the intra prediction mode, which efficiently prevents the P-frame from being decoded.

For clouds running SDN, we propose a framework to keep the controller safe from DDoS attacks. We first periodically predict the number of new requests for each switch based on its previous information, and the new requests are sent to the controller if the predicted total number of new requests is below a threshold. Otherwise, these requests are directed to the security gateway to check whether there is an attack among them. The requests that caused the dramatic decrease in entropy are filtered out by our algorithm, and rules for these requests are created and sent to the controller. The controller then sends the rules to each switch so that flows matching the rules are directed to a honeypot.
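The block-shuffling step described above can be pictured with the following toy sketch, which splits a grayscale image into equal-size tiles and permutes them with a seeded pseudo-random generator so that the permutation can be inverted by the key holder. It is a simplified stand-in that omits the per-tile pixel remapping and noise steps of the actual scheme, and all parameter values are made up.

```python
import random

def shuffle_blocks(image, block, seed):
    """image: 2D list of pixel rows; block: tile size dividing both dimensions."""
    h, w = len(image), len(image[0])
    tiles = [[row[x:x + block] for row in image[y:y + block]]
             for y in range(0, h, block) for x in range(0, w, block)]
    order = list(range(len(tiles)))
    random.Random(seed).shuffle(order)              # permutation is invertible given the seed
    shuffled = [tiles[i] for i in order]
    per_row = w // block
    out = []                                        # reassemble tiles row-major
    for ty in range(h // block):
        for r in range(block):
            out.append(sum((shuffled[ty * per_row + tx][r] for tx in range(per_row)), []))
    return out

img = [[(x + y) % 256 for x in range(8)] for y in range(8)]
print(shuffle_blocks(img, block=4, seed=1234)[0])   # first row of the shuffled image
```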
Tang, Yuzhe. "Secure and high-performance big-data systems in the cloud." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53995.
Jayapandian, Catherine Praveena. "Cloudwave: A Cloud Computing Framework for Multimodal Electrophysiological Big Data." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1405516626.
Saker, Vanessa. "Automated feature synthesis on big data using cloud computing resources." Master's thesis, University of Cape Town, 2020. http://hdl.handle.net/11427/32452.
Woodworth, Jason W. "Secure Semantic Search over Encrypted Big Data in the Cloud." Thesis, University of Louisiana at Lafayette, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10286646.
Cloud storage is a widely used service for both personal and enterprise demands. However, despite its advantages, many potential users with sensitive data refrain from fully using the service due to valid concerns about data privacy. An established solution to this problem is to perform encryption on the client's end. This approach, however, restricts data processing capabilities (e.g. searching over the data). In particular, searching semantically with real-time response is of interest to users with big data. To address this, this thesis introduces an architecture for semantically searching encrypted data using cloud services. It presents a method that accomplishes this by extracting and encrypting key phrases from uploaded documents and comparing them to queries that have been expanded with semantic information and then encrypted. It presents an additional method that builds off of this and uses topic-based clustering to prune the amount of searched data and improve performance times at big-data scale. Results of experiments carried out on real datasets with fully implemented prototypes show that results are accurate and searching is efficient.
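A minimal sketch of the index/query flow described in this abstract is given below: key phrases are replaced by keyed HMAC tags before upload, and a query is expanded (here with a hand-written, purely illustrative synonym map) before being tagged and matched. The key, synonym table and helper names are assumptions; the real system relies on proper key-phrase extraction and a semantic expansion source rather than this stub.

```python
import hmac, hashlib

KEY = b"client-side secret"                          # never leaves the client
SYNONYMS = {"car": ["automobile", "vehicle"]}        # illustrative expansion source

def tag(term: str) -> str:
    return hmac.new(KEY, term.lower().encode(), hashlib.sha256).hexdigest()

def index_document(doc_id, key_phrases, index):
    for phrase in key_phrases:                       # phrases extracted before upload
        index.setdefault(tag(phrase), set()).add(doc_id)

def search(query, index):
    terms = [query] + SYNONYMS.get(query, [])        # semantic expansion, then tagging
    hits = set()
    for term in terms:
        hits |= index.get(tag(term), set())
    return hits

index = {}
index_document("doc1", ["automobile", "engine"], index)
print(search("car", index))                          # {'doc1'} via the expanded term
```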
Foschini, Federico. "Amber: a Cloud Service Architecture proposal." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13840/.
Giraud, Matthieu. "Secure Distributed MapReduce Protocols : How to have privacy-preserving cloud applications?" Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC033/document.
In the age of social networks and connected objects, vast and diverse data are produced at every moment. The analysis of these data has given rise to a new science called "Big Data". To best handle this constant flow of data, new computation methods have emerged. This thesis focuses on cryptography applied to the processing of large volumes of data, with the aim of protecting user data. In particular, we focus on securing algorithms that use the distributed MapReduce computing paradigm to perform a number of primitives essential for data processing, ranging from the calculation of graph metrics (e.g. PageRank) to SQL queries (i.e. set intersection, aggregation, natural join).

In the first part of this thesis, we discuss matrix multiplication. We first describe a standard, secure matrix multiplication for the MapReduce architecture that is based on Paillier's additive encryption scheme to guarantee the confidentiality of the data. The proposed algorithms correspond to a specific security hypothesis: collusion or not of MapReduce cluster nodes, the general security model being honest-but-curious. The aim is to protect the confidentiality of both matrices, as well as the final result, and this for all participants (matrix owners, calculation nodes, user wishing to compute the result). We also use the Strassen-Winograd matrix multiplication algorithm, whose asymptotic complexity is O(n^log2(7)), or about O(n^2.81), which is an improvement over standard matrix multiplication. A new version of this algorithm adapted to the MapReduce paradigm is proposed. The security assumption adopted here is limited to non-collusion between the cloud and the end user. This version also uses the Paillier encryption scheme.

The second part of this thesis focuses on data protection when relational algebra operations are delegated to a public cloud server using the MapReduce paradigm. In particular, we present a secure intersection solution that allows a cloud user to obtain the intersection of n > 1 relations belonging to n data owners. In this solution, all data owners share a key, and a selected data owner shares a key with each of the remaining owners. Therefore, while this specific data owner stores n keys, the other owners only store two keys. The encryption of the real relation tuples consists in combining asymmetric encryption with a pseudo-random function. Once the data is stored in the cloud, each reducer is assigned a specific relation. If there are n different elements, XOR operations are performed. The proposed solution is very effective. Next, we describe privacy-preserving variants of the grouping and aggregation operations and evaluate them in terms of performance and security. The proposed solutions combine the use of pseudo-random functions with homomorphic encryption for the COUNT, SUM and AVG operations, and order-preserving encryption for the MIN and MAX operations. Finally, we offer secure versions of two protocols (cascade and hypercube) adapted to the MapReduce paradigm. The solutions consist in using pseudo-random functions to perform equality checks and thus allow join operations when common components are detected. All the solutions described above are evaluated and their security proven.
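The pseudo-random-function intersection idea can be illustrated with the simplified sketch below: data owners sharing a key map every element to a keyed tag, and an untrusted reducer intersects the tag sets without ever seeing plaintext values. This is only a stand-in under a single shared key; the thesis's actual protocol additionally combines asymmetric encryption, per-owner keys and XOR-based equality checks.

```python
import hmac, hashlib
from functools import reduce

SHARED_KEY = b"key agreed by the data owners"   # assumption: distributed out of band

def prf(value: str) -> str:
    """Keyed pseudo-random tag hiding the plaintext value."""
    return hmac.new(SHARED_KEY, value.encode(), hashlib.sha256).hexdigest()

def encode_relation(relation):
    """'Map' step, run locally by each data owner before upload."""
    return {prf(v) for v in relation}

def intersect_tags(encoded_relations):
    """'Reduce' step, run by the untrusted cloud on tags only."""
    return reduce(set.intersection, encoded_relations)

owners = [{"alice", "bob", "carol"}, {"bob", "carol", "dave"}, {"carol", "erin"}]
common = intersect_tags([encode_relation(r) for r in owners])
print(len(common))  # 1 -> only "carol" appears in all three relations
```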
Al, Buhussain Ali. "Design and Analysis of an Adjustable and Configurable Bio-inspired Heuristic Scheduling Technique for Cloud Based Systems." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34794.
Barbosa, Margarida de Carvalho Jerónimo. "As-built building information modeling (BIM) workflows." Doctoral thesis, Universidade de Lisboa, Faculdade de Arquitetura, 2018. http://hdl.handle.net/10400.5/16380.
Building information modeling (BIM) is most often used for the construction of new buildings. By using BIM in such projects, collaboration among stakeholders in an architecture, engineering and construction project is improved. This scenario might also be targeted for interventions in existing buildings. This thesis intends to enhance processes of recording, documenting and managing information by establishing a set of workflow guidelines to efficiently model existing structures with BIM tools from point cloud data, complemented with any other appropriate methods. There are several challenges hampering BIM software adoption for planning interventions in existing buildings. Volk et al. (2014) outline that the main obstacles to as-built BIM adoption are: the modeling/conversion effort required to turn captured building data into semantic BIM objects; the difficulty of maintaining information in a BIM; and the difficulties of handling the uncertain data, objects, and relations that occur in existing buildings. From this analysis, a case was developed for devising BIM workflow guidelines for modeling existing buildings. The proposed content for BIM guidelines includes tolerances and standards for modeling existing building elements. This allows stakeholders to have a common understanding and agreement of what is supposed to be modeled and exchanged. In this thesis, the authors investigate a set of research questions that were formed and posed, framing the obstacles and directing the research focus in four parts: 1. the different kinds of building data acquired; 2. the different kinds of building data analysis processes; 3. the use of standards and as-built BIM; and 4. as-built BIM workflows and guidelines for architectural offices.

From this research, the authors conclude that there is a need for better use of the documentation on which architectural intervention project decisions are based. Different kinds of data, not just geometric data, are needed as a basis for the analysis of the current building state. Non-geometric information can refer to physical characteristics of the built fabric, such as materials, appearance and condition. Furthermore, environmental, structural and mechanical building performance, as well as cultural, historical and architectural values, style and age, are vital to understanding the current state of the building. This information is necessary for further analysis, allowing an understanding of the actions needed to intervene. Accurate and up-to-date information can be generated through ADP and TLS surveys. The final products of ADP and TLS are point clouds, which can be used to complement each other. The combination of these techniques with a traditional RTS survey provides an accurate and up-to-date base that, along with other existing information, allows the planning of building interventions. As-built BIM adoption problems refer mainly to the analysis and generation of building geometry, which is usually a step prior to linking non-geometric building information. For this reason, the present thesis focuses mainly on finding guidelines to decrease the difficulty of generating as-built BIM elements. To handle uncertain data and unclear or hidden semantic information, one can complement the original data with additional missing information. The workflows in the present thesis mainly address missing visible information.

In the case of refurbishment projects, the hidden information can be acquired to some extent with ADP or TLS surveys after the demolition of some elements and wall layers. This allows a better understanding of the non-visible material layers of a building element whenever there is a partial demolition. This process is only useful if a part of the element material is removed; it cannot be applied to elements that are not intervened. The handling of visible missing data, objects and relations can be done by integrating different kinds of data from different kinds of sources. Workflows to connect them in a more integrated way should be implemented. Different workflows can create additional missing information, used to complement or as a basis for decision-making when no data are available. Regarding the addition of missing data through point cloud generation, the case studies outlined the importance of planning the survey, with all parties understanding what the project needs are. In addition to accuracy, the level of interpretation and the modelling tolerances required by the project must also be agreed and understood. Not all survey tools and methods are suitable for all buildings: the scale, materials and accessibility of the building play a major role in survey planning. To handle the high modeling/conversion effort, one has to understand the current workflows used to analyse building geometry. As-built BIMs are mostly generated manually from CAD drawings and/or PCM data, which are used as a geometric basis from which information is extracted. The information used to plan the building intervention should be checked, confirming that it represents the as-is state of the building. The 3D survey techniques that capture the as-is state of the building should be integrated into the as-built BIM workflow, so that intervention decisions are made on captured building data. The output of these techniques should be integrated with different kinds of data to provide the most accurate and complete basis. The architectural office should have the technical skills to know what to ask for and how to use it appropriately. Modeling requirements should focus primarily on the content of this process: what to model, how to develop the elements in the model, what information the model should contain, and how information in the model should be exchanged. The point cloud survey should be done after stipulating the project goal, standards, tolerances and modeling content. Tolerances and modeling guidelines change across companies and countries. Regardless of these differences, the standards documents have the purpose of producing and receiving information in a consistent data format and in efficient exchange workflows between project stakeholders. Critical thinking about the modeling workflow, and communication and agreement between all parties involved in the project, are the prime products of the guidelines in this thesis. The establishment and agreement of modeling tolerances, and of the level of development and detail present in the BIMs, between the different parties involved in the project is more important than which of the existing definitions currently in use by the AEC industry is chosen. Automated or semi-automated tools for element shape extraction, the elimination or reduction of repetitive tasks during BIM development, and the analysis of environment or scenario conditions are also ways of decreasing the modeling effort.

One of the reasons why standards are needed is to structure and improve collaboration, not only with outside parties but also inside architectural offices. Data and workflow standards are hard to implement efficiently in daily practice, often resulting in confusing data and workflows, which reduce the quality of communication and project outputs. As-built BIM standards, exactly like BIM standards for new buildings, contribute to the creation of reliable and useful information. To update a BIM during the building life-cycle, one needs to acquire information on the as-is building state. Monitoring data, whether consisting of photos, PCM, sensor data, or data resulting from the comparison of PCM and BIMs, can be a way of updating existing BIMs. It allows information to be added continuously, documenting the building's evolution and history, and makes it possible to evaluate preventive interventions for its enhancement. BIM environments are not often used to document existing buildings or interventions in existing buildings. The authors propose to improve this situation by using BIM standards and/or guidelines, and give an initial overview of the components that should be included in such a standard and/or guideline.
Safieddine, Ibrahim. "Optimisation d'infrastructures de cloud computing sur des green datacenters." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM083/document.
Next-generation green datacenters were designed for optimized consumption and an improved quality of service, as expressed in Service Level Agreements (SLA). However, in recent years the datacenter market has been growing rapidly, and the concentration of computing power is increasingly high, thereby increasing electrical power and cooling consumption. A datacenter consists of computing resources, cooling systems, and power distribution. Many research studies have focused on reducing the consumption of datacenters to improve the PUE, while guaranteeing the same level of service. Some works aim at dynamically sizing resources according to the load in order to reduce the number of running servers; others seek to optimize the cooling system, which represents an important part of total consumption. In this thesis, in order to reduce the PUE, we study the design of an autonomous system for global cooling optimization, based on external data sources such as the outside temperature and weather forecasts, coupled with an overall IT load prediction module to absorb peaks of activity and optimize active resources at a lower cost while preserving service quality. To ensure a better SLA, we propose a distributed architecture to detect complex operational anomalies in real time by analyzing large data volumes from thousands of sensors deployed in the datacenter. Early identification of abnormal behaviors allows better reactivity in dealing with threats that may impact the quality of service, with autonomous control loops that automate administration. We evaluate the performance of our contributions on data collected from an operating datacenter hosting real applications.
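As a much-reduced illustration of the anomaly-detection layer described above, the sketch below flags sensor readings that deviate strongly from a sliding-window baseline. The window size and threshold are arbitrary assumptions, and the real system works on distributed streams from thousands of sensors rather than a single series.

```python
from collections import deque
from statistics import mean, pstdev

def detect_anomalies(readings, window=20, threshold=3.0):
    """Flag readings deviating more than `threshold` sigmas from a sliding-window baseline."""
    recent, alerts = deque(maxlen=window), []
    for t, value in enumerate(readings):
        if len(recent) == window:
            mu, sigma = mean(recent), pstdev(recent)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                alerts.append((t, value))
        recent.append(value)
    return alerts

temps = [22.0 + 0.1 * (i % 5) for i in range(100)]
temps[60] = 35.0                      # injected fault
print(detect_anomalies(temps))        # [(60, 35.0)]
```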
Chihoub, Houssem Eddine. "Managing consistency for big data applications : tradeoffs and self-adaptiveness." Thesis, Cachan, Ecole normale supérieure, 2013. http://www.theses.fr/2013DENS0059/document.
In the era of Big Data, data-intensive applications handle extremely large volumes of data while requiring fast processing times. A large number of such applications run in the cloud in order to benefit from cloud elasticity, easy on-demand deployments, and cost-efficient Pay-As-You-Go usage. In this context, replication is an essential feature of the cloud for dealing with Big Data challenges: it enables high availability through multiple replicas, fast data access to local replicas, fault tolerance, and disaster recovery. However, replication introduces the major issue of data consistency across the different copies. Consistency management is critical for Big Data systems. Strong consistency models introduce serious limitations to system scalability and performance due to the required synchronization efforts. In contrast, weak and eventual consistency models reduce the performance overhead and enable high levels of availability; however, these models may tolerate, under certain scenarios, too much temporal inconsistency. In this PhD thesis, we address the issue of consistency tradeoffs in large-scale Big Data systems and applications. We first focus on consistency management at the storage system level. Accordingly, we propose an automated self-adaptive model (named Harmony) that scales the consistency level up or down at runtime when needed, in order to provide performance as high as possible while preserving the application's consistency requirements. In addition, we present a thorough study of the impact of consistency management on the monetary cost of running in the cloud. We then leverage this study to propose a cost-efficient consistency tuning approach (named Bismar) in the cloud. In a third direction, we study the impact of consistency management on energy consumption within the data center. Based on our findings, we investigate adaptive configurations of the storage system cluster that target energy saving. To complete our system-side study, we focus on the application level. Applications differ, and so do their consistency requirements; understanding such requirements at the storage system level is not possible. Therefore, we propose an application behavior model that captures the consistency requirements of an application. Based on this model, we propose an online prediction approach (named Chameleon) that adapts to application-specific needs and provides customized consistency.
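A minimal sketch of the self-adaptive idea is shown below: the consistency level is strengthened when the observed rate of stale reads exceeds what the application tolerates, and relaxed again when there is a comfortable margin. The level names, thresholds and control rule are assumptions for illustration, not Harmony's actual policy.

```python
LEVELS = ["ONE", "QUORUM", "ALL"]     # weakest to strongest (hypothetical names)

def next_level(current: str, stale_read_rate: float, tolerated: float) -> str:
    i = LEVELS.index(current)
    if stale_read_rate > tolerated and i < len(LEVELS) - 1:
        return LEVELS[i + 1]          # too many stale reads: strengthen consistency
    if stale_read_rate < 0.5 * tolerated and i > 0:
        return LEVELS[i - 1]          # comfortable margin: relax for performance
    return current

level = "ONE"
for observed in [0.02, 0.09, 0.12, 0.01, 0.003]:
    level = next_level(level, observed, tolerated=0.05)
    print(observed, "->", level)      # ONE, QUORUM, ALL, QUORUM, ONE
```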
Pagliari, Alessio. "Network as an On-Demand Service for Multi-Cloud Workloads." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.
Madonia, Tommaso. "Container-based spot market in the cloud: design of a bid advisor." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13367/.
Flatt, Taylor. "CrowdCloud: Combining Crowdsourcing with Cloud Computing for SLO Driven Big Data Analysis." OpenSIUC, 2017. https://opensiuc.lib.siu.edu/theses/2234.
Golchay, Roya. "From mobile to cloud : Using bio-inspired algorithms for collaborative application offloading." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI009.
Not bounded by time and place, and now offering a wide range of capabilities, smartphones are all-in-one, always-connected devices - the favorite devices selected by users as the most effective, convenient and necessary communication tools. Current applications developed for smartphones face growing demands: for functionality from users, for data collection and storage from IoT devices in the vicinity, and for computing resources for data analysis and user profiling; while, at the same time, they have to fit into a compact and constrained design, a limited energy budget, and a relatively resource-poor execution environment. Using resource-rich systems is the classic solution introduced in Mobile Cloud Computing to overcome these mobile device limitations by remotely executing all or part of an application in cloud environments. The technique is known as application offloading. Offloading to a cloud - implemented as a geographically distant data center - however introduces a network latency that is not acceptable to smartphone users. Hence, massive offloading to a centralized architecture creates a bottleneck that prevents the scalability required by the expanding market of IoT devices. Fog Computing has been introduced to bring storage and computation capabilities back into the user's vicinity, or close to where they are needed. Some architectures are emerging, but few algorithms exist to deal with the dynamic properties of these environments. In this thesis, we focus on designing ACOMMA, an Ant-inspired Collaborative Offloading Middleware for Mobile Applications that allows application partitions to be offloaded dynamically - and simultaneously - to several remote clouds or to spontaneously created local clouds that include devices in the vicinity. The main contributions of this thesis are twofold. While many middlewares have dealt with one or more offloading challenges, few have proposed an open, service-based architecture that is easy to use from any mobile device without special requirements. Among the main challenges are the issues of what and when to offload in a dynamically changing environment where the mobile device profile, context, and server properties play a considerable role in effectiveness. To this end, we develop bio-inspired decision-making algorithms: a dynamic bi-objective decision-making process with learning, and a decision-making process that collaborates with other mobile devices in the vicinity. We define an offloading mechanism with fine-grained, method-level application partitioning on the call graph. We use ant colony algorithms to bi-objectively optimize CPU consumption and total execution time, including network latency.
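The bi-objective trade-off behind the offloading decision can be pictured with the following greedy, single-call sketch that scores each candidate execution site by a weighted sum of estimated time and energy and picks the cheapest. The thesis itself uses ant colony optimization over the application's call graph; this simplified stand-in and all its numbers and site names are made up.

```python
def offload_choice(cpu_cycles, input_bytes, sites, w_time=0.5, w_energy=0.5):
    """Pick the execution site with the lowest weighted time/energy score."""
    best, best_score = None, float("inf")
    for name, s in sites.items():
        transfer = input_bytes / s["bandwidth"] if s["remote"] else 0.0
        exec_time = cpu_cycles / s["speed"]
        energy = transfer * s["tx_power"] + (0.0 if s["remote"] else exec_time * s["cpu_power"])
        score = w_time * (transfer + exec_time) + w_energy * energy
        if score < best_score:
            best, best_score = name, score
    return best

sites = {
    "local":     {"remote": False, "speed": 1e9,  "bandwidth": 1.0, "tx_power": 0.0, "cpu_power": 2.0},
    "cloudlet":  {"remote": True,  "speed": 8e9,  "bandwidth": 5e6, "tx_power": 1.2, "cpu_power": 0.0},
    "far_cloud": {"remote": True,  "speed": 2e10, "bandwidth": 1e6, "tx_power": 1.2, "cpu_power": 0.0},
}
print(offload_choice(cpu_cycles=4e9, input_bytes=2e6, sites=sites))   # cloudlet
```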
Romanazzi, Stefano. "Water Supply Network Management: Sensor Analysis using Google Cloud Dataflow." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.
Find full textHe, Yijun, and 何毅俊. "Protecting security in cloud and distributed environments." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B49617631.
Full text
Ribot, Stephane. "Adoption of Big Data And Cloud Computing Technologies for Large Scale Mobile Traffic Analysis." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE3049.
Full text
A new economic paradigm is emerging as enterprises generate and manage increasing amounts of data and look to technologies such as cloud computing and Big Data to improve data-driven decision making and, ultimately, performance. Mobile service providers are an example of firms looking to monetize the mobile data they collect. Our thesis explores the determinants of cloud computing adoption and of Big Data adoption at the user level. We employ a quantitative research methodology, operationalized through a cross-sectional survey so that temporal consistency could be maintained across all variables. The task-technology fit (TTF) model was supported by results analyzed using partial least squares (PLS) structural equation modeling (SEM), which reflect positive relationships between individual, technology, and task factors and TTF for mobile data analysis. Our research makes two contributions: the development of a new TTF construct - a task-Big Data/cloud computing technology fit model - and the testing of that construct in a model that overcomes the rigidity of the original TTF model by addressing technology through five subconstructs related to the technology platform (Big Data) and the technology infrastructure (cloud computing intention to use). These findings provide direction to mobile service providers for implementing cloud-based Big Data tools in order to enable data-driven decision making and monetize the output of mobile data traffic analysis.
Domingos, João Nuno Silva Tabar. "On the cloud deployment of a session abstraction for service/data aggregation." Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/9923.
Full text
The global cyber-infrastructure comprises a growing number of resources spanning several abstraction layers. These resources, which can include wireless sensor devices or mobile networks, share common requirements such as richer interconnection capabilities and increasing data consumption demands. Additionally, the service model is now widely adopted, supporting the development and execution of distributed applications. In this context, new challenges are emerging around the "big data" topic, including service access optimizations such as data-access context sharing, more efficient data filtering/aggregation mechanisms, and adaptable service access models that can respond to context changes. The service access characteristics can be aggregated to capture specific interaction models. Moreover, ubiquitous service access is a growing requirement, particularly for mobile clients such as tablets and smartphones. The Session concept aggregates these service access characteristics into specific interaction models, which can then be reused in similar contexts. Existing Session abstraction implementations also allow dynamic reconfiguration of these interaction models, so that the model can adapt to context changes based on service, client, or underlying communication-medium variables. Cloud computing, on the other hand, provides ubiquitous access along with large-scale data persistence and processing services. This thesis proposes a Session abstraction implementation deployed on a Cloud platform in the form of a middleware. The middleware captures rich, dynamic interaction models between users with similar interests and provides a generic mechanism for interacting with datasources over multiple protocols. Such an abstraction contextualizes service/user interactions and can be reused by other users in similar contexts. The Session implementation also provides data persistence by saving all data in transit in a Cloud-based repository. The middleware thus delivers richer datasource-access interaction models and dynamic reconfigurations, and allows the integration of heterogeneous datasources. The solution also provides ubiquitous access, allowing client connections from standard Web browsers or Android-based mobile devices.
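The abstract above describes a Session abstraction that groups datasource access behind one reusable interaction model and persists the data that flows through it. The sketch below is a minimal, hypothetical rendering of that idea; the class and method names are invented for illustration and are not the middleware's real API.

```python
# Minimal sketch of a Session abstraction over heterogeneous datasources (illustrative only).
class DatasourceAdapter:
    """Wraps one protocol-specific datasource behind a common read() call."""
    def __init__(self, name, fetch):
        self.name, self.fetch = name, fetch

    def read(self, query):
        return self.fetch(query)

class Session:
    """Aggregates datasource access for one interaction context and logs all traffic,
    standing in for persistence in a cloud-based repository."""
    def __init__(self, context):
        self.context = context
        self.adapters = {}
        self.log = []

    def attach(self, adapter):
        self.adapters[adapter.name] = adapter

    def query(self, source, request):
        result = self.adapters[source].read(request)
        self.log.append((self.context, source, request, result))  # persist data in transit
        return result

# Usage: two heterogeneous sources shared through one reusable session
session = Session(context="sensor-dashboard")
session.attach(DatasourceAdapter("rest", lambda q: {"level_cm": 142}))
session.attach(DatasourceAdapter("mqtt", lambda q: 141.7))
print(session.query("rest", "/stations/42"), session.query("mqtt", "stations/42/level"))
```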
Boretti, Gabriele. "Sistemi cloud per l'analisi di big data: BigQuery, la soluzione proposta da Google." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18613/.
Full textDi, Sheng, and 狄盛. "Optimal divisible resource allocation for self-organizing cloud." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B4703130X.
Full textMa, Ka-kui, and 馬家駒. "Lightweight task mobility support for elastic cloud computing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B47869513.
Full text
Palanisamy, Balaji. "Cost-effective and privacy-conscious cloud service provisioning: architectures and algorithms." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/52157.
Full textOlsson, Fredrik. "Feature Based Learning for Point Cloud Labeling and Grasp Point Detection." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-150785.
Full textIkken, Sonia. "Efficient placement design and storage cost saving for big data workflow in cloud datacenters." Thesis, Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0020/document.
Full text
Typical cloud big data systems are workflow-based, including MapReduce, which has emerged as the paradigm of choice for developing large-scale data-intensive applications. The data generated by such systems are huge, valuable, and stored at multiple geographical locations for reuse. Indeed, workflow systems, composed of jobs using collaborative task-based models, present new dependency and intermediate-data-exchange needs. This gives rise to new issues when selecting distributed data and storage resources, so that tasks and jobs execute on time and resource usage is cost-efficient. Furthermore, task processing performance is governed by the efficiency of intermediate data management. In this thesis we tackle the problem of intermediate data management across cloud multi-datacenters by considering the requirements of the workflow applications that generate the data. To this end, we design and develop models and algorithms for the big data placement problem in the underlying geo-distributed cloud infrastructure, so that the data management cost of these applications is minimized. The first problem addressed is the intermediate data access behavior of tasks running in a MapReduce-Hadoop cluster. Our approach develops and explores a Markov model that uses the spatial locality of intermediate data blocks and analyzes spill-file sequentiality through a prediction algorithm. Secondly, the thesis deals with minimizing the storage cost of intermediate data placement in federated cloud storage. Through a federation mechanism, we propose an exact ILP algorithm to assist multiple cloud datacenters in hosting the generated intermediate data dependencies between pairs of files. The proposed algorithm takes into account scientific user requirements, data dependency, and data size. Finally, a more generic problem is addressed that involves two variants of the placement problem: splittable and unsplittable intermediate data dependencies. The main goal is to minimize the operational data cost according to inter- and intra-job dependencies.
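The abstract above formulates intermediate data placement across federated datacenters as a cost-minimization problem solved exactly with ILP. The sketch below illustrates the flavor of such a cost model with a simple greedy stand-in rather than an exact ILP solver; the datacenter prices, file sizes, and dependencies are assumptions made for illustration.

```python
# Greedy stand-in for cost-aware intermediate data placement (illustrative only).
DATACENTERS = {"dc-eu": 0.020, "dc-us": 0.023, "dc-asia": 0.025}  # assumed $/GB/month storage
TRANSFER = 0.01                                                   # assumed $/GB between datacenters

FILES = {"map_out_1": 40, "map_out_2": 25, "reduce_in": 60}       # intermediate file sizes in GB
DEPENDENCIES = [("map_out_1", "reduce_in"), ("map_out_2", "reduce_in")]

def placement_cost(placement):
    """Storage cost of every placed file plus transfer cost for each dependency
    whose two files sit in different datacenters."""
    storage = sum(FILES[f] * DATACENTERS[dc] for f, dc in placement.items())
    transfer = sum(FILES[a] * TRANSFER
                   for a, b in DEPENDENCIES
                   if a in placement and b in placement and placement[a] != placement[b])
    return storage + transfer

def greedy_placement():
    """Place files one by one in the datacenter that increases total cost the least
    (the thesis instead solves this kind of problem exactly with ILP)."""
    placement = {}
    for f in FILES:
        placement[f] = min(DATACENTERS, key=lambda dc: placement_cost({**placement, f: dc}))
    return placement, placement_cost(placement)

print(greedy_placement())
```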
Li, Zhen. "CloudVista: a Framework for Interactive Visual Cluster Exploration of Big Data in the Cloud." Wright State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wright1348204863.
Full textSellén, David. "Big Data analytics for the forest industry : A proof-of-conceptbuilt on cloud technologies." Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-28541.
Full textLee, Kai-wah, and 李啟華. "Mesh denoising and feature extraction from point cloud data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42664330.
Full textDelehag, Lundmark Joel. "Photogrammetry for health monitoring of bridges : Using point clouds for deflection measurements and as-built BIM modelling." Thesis, Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-75953.
Full textAl-Odat, Zeyad Abdel-Hameed. "Analyses, Mitigation and Applications of Secure Hash Algorithms." Diss., North Dakota State University, 2020. https://hdl.handle.net/10365/32058.
Full text