
Dissertations / Theses on the topic 'BIM Cloud'



Consult the top 50 dissertations / theses for your research on the topic 'BIM Cloud.'




1

Magda, Jakub. "Využití laserového skenování v informačním modelování budov." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2020. http://www.nusl.cz/ntk/nusl-414311.

Full text
Abstract:
This thesis deals with creating a BIM model using laser scanning. It includes information about laser scanning, BIM and the process of modelling. The result of the thesis is an information model created in the Revit software.
2

Alreshidi, Eissa. "Towards facilitating team collaboration during construction project via the development of cloud-based BIM governance solution." Thesis, Cardiff University, 2015. http://orca.cf.ac.uk/88955/.

Full text
Abstract:
Construction projects involve multi-discipline, multi-actor collaboration, and during their lifecycle enormous amounts of data are generated. This data is often sensitive, raising major concerns related to access rights, ownership, intellectual property (IP) and security. Thus, dealing with this information raises several issues, such as data inconsistency, different versions of data, data loss etc. Therefore, the collaborative Building Information Modelling (BIM) approach has recently been considered a useful contributory technique to minimise the complexity of team collaboration during construction projects. Furthermore, it has been argued that there is a role for Cloud technology in facilitating team collaboration across a building's lifecycle, by applying the ideologies of BIM governance. Therefore, this study investigates and seeks to develop a BIM governance solution utilising a Cloud infrastructure. The study employed two research approaches: the first being a wide consultation with key BIM experts taking the form of (i) a comprehensive questionnaire, followed by (ii) several semi-structured interviews. The second approach was an iterative software engineering approach including (i) software modelling, using Business Process Model and Notation (BPMN) and Unified Modelling Language (UML), and (ii) software prototype development. The findings reveal several remaining barriers to BIM adoption, including Information and Communication Technology (ICT) and collaboration issues, therefore highlighting an urgent need to develop a BIM governance solution underpinned by Cloud technology to tackle these barriers and issues. The key findings from this research led to: (a) the development of a BIM governance framework (G-BIM); (b) definition of functional, non-functional, and domain-specific requirements for developing a Cloud-based BIM governance platform (GovernBIM); (c) development of a set of BPMN diagrams to describe the internal and external business procedures of the GovernBIM platform lifecycle; (d) evaluation of several fundamental use cases for the adoption of the GovernBIM platform; (e) presentation of a core BIM governance model (class diagram) to present the internal structure of the GovernBIM platform; (f) provision of a well-structured, Cloud-based architecture to develop a GovernBIM platform for practical implementation; and (g) development of a Cloud-based prototype focused on the main identified functionalities of BIM governance. Although a number of concerns remain (e.g. privacy and security), the proposed Cloud-based GovernBIM solution opens up an opportunity to provide increased control over the collaborative process and to resolve associated issues, e.g. ownership, data inconsistencies, and intellectual property. Finally, it presents a road map for further development of Cloud-based BIM governance platforms.
3

Longo, Rosario Alessandro. "Dalla generazione di modelli 3D densi mediante TLS e fotogrammetria alla modellazione BIM." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13284/.

Full text
Abstract:
This thesis investigates procedures for surveying objects using as much of the geometric information obtainable from a dense point cloud, generated by photogrammetry or TLS, as possible, producing a 3D model that can be imported into an FEM environment. A first test was carried out on a small structure, 1.2x0.5x0.2 m, in order to define repeatable analysis procedures. The first procedure moves from the point cloud ("Cloud") to the solid object ("Solid") to the finite element model ("FEM") and is therefore called the "CSF method"; the second, in which the model of the structure is built with BIM software, is simply called the "BIM method". Once the feasibility of the procedure had been demonstrated, it was validated on a large historical monument, the Arch of Augustus in Rimini, comparing the results with those of other theses on the same structure, in particular with 2D FEM models and with models obtained from a point cloud using CAD methods and Cloud2FEM, a scientific software package developed at DICAM. Two types of analysis were carried out on the arch, a linear analysis under self-weight and a modal analysis, and the results of the various methods proved compatible both in terms of displacements, 0.1-0.2 mm, and of natural frequencies, although the natural frequencies of the BIM model are closer to those of the cloud-based models than to those of the CAD model, and the fourth mode shape shows larger differences. The comparison with the natural frequencies of the 2D FEM model returned larger percentage differences, due to the 2D nature of that model and to the absence of the adjacent masonry. Finally, the normal stresses of the CSF and BIM models were compared with those obtained from the FEM model, with differences below 1.28 kg/cm2 for the vertical normal stresses and on the order of 10^-2 kg/cm2 for the horizontal ones.
4

Staufčík, Jakub. "Využití laserového skenování v informačním modelování budov." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2019. http://www.nusl.cz/ntk/nusl-400177.

Full text
Abstract:
This thesis deals with the creation of a BIM model using laser scanning. The first part of the thesis describes basic information about building information modelling (BIM). The next section describes the process of creating a BIM model, from data acquisition to the modelling of the particular building. The model was created in Revit.
5

Taher, Abdo, and Benyamin Ulger. "Tillämpning av BIM i ett byggnadsprojekt : Centrum för idrott och kultur i Knivsta." Thesis, Mälardalens högskola, Akademin för ekonomi, samhälle och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55198.

Full text
Abstract:
The construction industry is a conservative sector in which traditional working methods prevent digitalisation from reaching its full potential. BIM, which stands for building information modelling or building information management, is the concept at the centre of any discussion of digitalisation in construction projects. Despite the advantages of BIM, the majority of construction projects are still carried out in a more or less traditional way. This thesis examines the CIK project, the Centre for Sports and Culture (Centrum för idrott och kultur) in Knivsta, which was designed and produced with traditional methods. Personal experience and a number of personal communications make it clear that the level of knowledge about the practical application of BIM is very low in the construction industry, and this lack of practical knowledge leads to an underdeveloped digitalisation. The purpose of this thesis is to describe how the CIK project was carried out, and to investigate whether the CIK project could have been designed with BIM as the working method and what changes this would have meant for the project. The method consisted of a literature study of the concept of BIM and of how BIM is applied in the construction process, together with a literature study of Revit, BIMeye and StreamBIM, the BIM tools used in this thesis. A case study of the CIK project was then carried out in which the building was described and documents and interviews were analysed. Finally, parts of the CIK project were remodelled and observed using the BIM tools mentioned above. The results first show how the traditional working methods were carried out and thus shaped the CIK project. The absence of detailed requirements from the politicians and the clients resulted in many ambiguities when the requested product was to be designed. The architects used traditional sketching methods to find solutions to the requested needs. The cost estimate and the time schedule were created and adjusted manually, separately from the 3D model. The 3D models in the CIK project were used for visualisation and for coordination and clash detection, but the contracted documents for the various deliveries were traditional 2D drawings. The document analysis showed that the architects produced over 300 drawings and over 100 different doors, presented on a number of different drawings. Revisions during production were worked into the drawings and clouded to mark the changes, and a separate memo was created in which the revisions were presented in more detail. When construction was completed, further corrections were worked into the documents by the consultants; selected documents, chosen in consultation with the clients, were then re-stamped as as-built documents and delivered as PDF files to the facility managers. The results further show how the CIK project could have been carried out with BIM. CIK with BIM would require at least two new roles, a BIM strategist and a BIM coordinator. At the start of the project, the BIM strategist, together with the clients, draws up a requirement specification in the form of a BIM manual describing standards, delivery specifications, communication methods and so on; it should also state how the coming model is to be enriched with information and which BIM tools are to be used. The responsible BIM coordinator then has the task of ensuring that the requirements set for the BIM project are followed.
During the design stage, the computational power of parametric and generative design should be used to find different solutions to the requirements, and simulations and analyses should be used throughout the project to take the requirements and needs into account and ensure that they are met. The cost estimate and the time schedule should be linked to the BIM model so that costs and time consumption are always correct and up to date. With cloud services such as BIMeye, all data and information from the BIM model can be managed and various reports of the information exported by everyone in the project. Via, for example, StreamBIM, the production team then retrieves all the necessary information from the BIM model in order to build CIK without traditional drawings. Important detail drawings and supplementary documents, such as the door card created by the authors, should be linked to the objects in StreamBIM. Throughout production, the BIM model should be continuously updated so that a digital twin of the real building can be created and handed over to the clients for facility management; the BIM model is then used and enriched throughout the building's lifetime. The results were then discussed, as were the method and the limitations of the work. The CIK project is proof that it is possible to work in a traditional way and still produce good buildings. However, a BIM-based way of working could have created added value for the clients by quality-assuring the whole process from idea to facility management. For BIM to be applied, however, clear and specified requirements from the clients are needed, and the authors therefore believe that national requirements would speed up digitalisation in the construction industry. BIM justifies itself because changes become more costly in later stages than early in the design phase; more time and money should therefore be spent on analysing and managing building information early in order not to risk larger costs later in the project. Because the concept of BIM is broad and perceptions vary between actors in the industry, the collection of scientific literature was made more difficult; on the other hand, the authors' experience of design work on the architectural side made it easier to assess the sources critically. The conclusions that can be drawn from this thesis are that the management of building information is an important aspect to consider during the construction process. For complex projects such as CIK, BIM would have meant a completely new way of managing and centralising information by linking data to the objects in the model. The point of BIM projects is to constantly turn to the model, or to the cloud service's database linked to the model, to retrieve the necessary information. The result is therefore a release from the information islands and the duplicated work that arose in the CIK project with a traditional design process.
Purpose: The purpose of this study is to describe how the CIK project was executed. Furthermore, it is investigated how the CIK project could have been designed with BIM as a working method and what changes this would entail for the project. Method: This study consisted of a literature study of BIM and a case study of the CIK project. The case study included a description of the building, and documents and interviews were analyzed. In addition, remodeling and observations of the CIK project were performed with BIM tools. Results: The results initially show how the traditional working methods were implemented in the CIK project. The initial lack of requirements created ambiguities. The architects used traditional sketching methods for the design. Calculation and scheduling were handled separately from the 3D model. The 3D models in the CIK project were used for visualizations and coordination. However, the contracted documents for the various deliveries were traditional 2D drawings. The architects produced over 300 drawings and over 100 different doors that were presented on several different drawings. At the end of production, selected PDF documents were re-stamped, in consultation with the clients, for facility management. Furthermore, the results show how the CIK project could have been carried out with a BIM approach. Initially, a BIM manual is created by a BIM strategist to specify the requirements. During design, parametric and generative design are used to find different solutions that meet the requirements. The cost estimate and the schedule must be linked to the BIM model. All information management takes place in cloud services such as BIMeye. Through StreamBIM, the production team then retrieves all the necessary information from the BIM model. Additional detail drawings should be linked to the objects in StreamBIM. During production, the BIM model is continuously updated before delivery to the clients. Conclusions: The conclusions that can be drawn from this study are that information management is an important aspect to address during the construction process. For the CIK project, BIM would mean a completely new way of managing and centralizing information by linking data to the objects in the model. The point of BIM projects is to constantly turn to the model, or to the database linked to the model, to retrieve the necessary information. The result is therefore a release from the information islands and the duplicated work that arise in a traditional design process.
6

Thomson, C. P. H. "From point cloud to building information model : capturing and processing survey data towards automation for high quality 3D models to aid a BIM process." Thesis, University College London (University of London), 2016. http://discovery.ucl.ac.uk/1485847/.

Full text
Abstract:
Building Information Modelling has, more than any previous initiative, established itself as the process by which operational change can occur, driven by a desire to eradicate inefficiencies in time and value and requiring a change of approach to the whole lifecycle of construction, from design through construction to operation and eventual demolition. BIM should provide a common digital platform which allows different stakeholders to supply and retrieve information, thereby reducing waste through enhanced decision making. Through the provision of measurement and representative digital geometry for construction and management purposes, surveying is very much a part of BIM. Given that all professions involved with construction have to consider the way in which they handle data to fit with the BIM process, it stands to reason that Geomatic or Land Surveyors play a key part. This is further encouraged by the fact that 3D laser scanning has been adopted as the primary measurement technique for geometry capture for BIM. It is also supported by a laser scanning workstream from the UK Government-backed BIM Task Group. Against this backdrop, the research in this thesis investigates the 3D modelling aspects of BIM, from initial geometry capture in the real world to the generation and storage of the virtual world model, while keeping the workflow and outputs compatible with the BIM process. The focus is on a key part of the workflow for capturing as-built conditions: geometry creation from point clouds. This area is considered a bottleneck in the BIM process for existing assets, not helped by their often poor or non-existent documentation. Automated modelling is seen as commercially desirable, with the goal of reducing time, and therefore cost, and making laser scanning a more viable proposition for a range of tasks in the lifecycle.
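The geometry-creation bottleneck described above is typically attacked with robust plane extraction from the point cloud. The following minimal sketch is not from the thesis; the synthetic data, thresholds and function names are assumptions, and it only illustrates the RANSAC-style plane fitting that underpins automated as-built modelling.

```python
import numpy as np

def fit_plane(p1, p2, p3):
    """Plane through three points, returned as (unit normal n, d) with n.x + d = 0."""
    n = np.cross(p2 - p1, p3 - p1)
    norm = np.linalg.norm(n)
    if norm < 1e-12:
        return None  # degenerate sample (collinear points)
    n = n / norm
    return n, -np.dot(n, p1)

def ransac_plane(points, dist_thresh=0.02, iterations=500, seed=0):
    """Return (normal, d, inlier_mask) of the dominant plane found by RANSAC."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        model = fit_plane(*sample)
        if model is None:
            continue
        n, d = model
        inliers = np.abs(points @ n + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model[0], best_model[1], best_inliers

if __name__ == "__main__":
    # Synthetic "wall": a noisy vertical plane at x = 2.0 plus random clutter.
    rng = np.random.default_rng(1)
    wall = np.column_stack([np.full(2000, 2.0) + rng.normal(0, 0.005, 2000),
                            rng.uniform(0, 5, 2000), rng.uniform(0, 3, 2000)])
    clutter = rng.uniform(0, 5, (500, 3))
    n, d, inliers = ransac_plane(np.vstack([wall, clutter]))
    print(f"plane normal ~ {np.round(n, 3)}, {inliers.sum()} inlier points")
```

In a real pipeline this step would be repeated on the remaining points to peel off successive planar segments before classifying them as walls, floors or ceilings.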
7

Eliasson, Oscar, and Adam Söderberg. "Projektering av dörrmiljöer - metoder och informationshantering." Thesis, Tekniska Högskolan, Jönköping University, JTH, Byggnadsteknik och belysningsvetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-50368.

Full text
Abstract:
Purpose: Construction projects have become more and more complex, with increasing demands on quality, environment, and sustainability. With this development, BIM models have become a major part of construction projects. The requirements and complexity create large amounts of information in the projects, which makes the models heavy and hard to work with. One sub-process that contributes to the complexity and the amount of information is the design of door environments. Because the models are growing, solutions are emerging to facilitate the information management linked to them; one such solution is databases linked to the model. The study therefore investigates whether a database service linked to a model can make the design of door environments more efficient. Method: To achieve the goal and answer the study's questions, a qualitative approach was used. A literature study was carried out to gather facts for the problem description and the theoretical framework. The empirical data are based on six semi-structured interviews as well as an observation and a test of BIMEYE. Based on the empirical data, the research questions and the selected theories, an analysis was then developed. Findings: The results show that the design of door environments depends on all requirements being identified at the beginning of a project and on knowledgeable project members being involved during door design, in order to identify the functions of the complex door environments. To present the doors and their functions more clearly, the respondents preferred door environment drawings where each door is presented separately. Problems with the coordination of information, and with information that disappears, can be solved by gathering the information in a database so that it is stored in one place, where it can be made available for processing by everyone involved in the project. The study shows that cloud-based databases linked to BIM models can streamline information management during the design work, as the database is the source of information for several different tools. Implications: Door environments will remain complex and contain a large number of functions. To facilitate and streamline the work process, the cloud service BIMEYE or a similar service can be used. Such a service contributes to more secure information management and will reduce the number of misses during review. When transitioning to a database-based way of working, it is recommended, based on the study's results, that the employees are trained and that a standardized workflow is developed. Limitations: The work was limited to studying the design process for door environments. It is therefore uncertain whether the results of the study can be applied to other sub-processes during the design phase. The study was also delimited to BIMEYE, so the results may not be applicable to other similar cloud services. Furthermore, it cannot be ruled out that the results do not apply to ArchiCad and Simplebim, because the study focused on Revit and its connection to BIMEYE.
8

Martinini, Elena. "Building Information Modeling: analisi e utilizzo." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8272/.

Full text
Abstract:
From artistic depiction to digital modelling, by way of technical drawing, the representation of the architectural project has undergone significant evolutions over time, which have only recently culminated in the use of cognitive models capable of collecting and organising the wealth of information that revolves around the entire building process. The increasingly widespread use of computer tools, together with the coordination of the many specialist disciplines involved in the project, has in recent years encouraged the adoption of Building Information Modeling, a process capable of revolutionising the construction world by covering multiple aspects of the life cycle of a building. This thesis specifically presents the stages that allowed BIM to take shape. Better management capability, a common language among designers, optimisation of resources and costs, together with convincing and accurate control of the work phases, are some of the potentials not yet fully expressed by Building Information Modeling, which is destined to become a strategic awareness in the cultural background of the contemporary professional.
9

Penk, David. "Vyhotovení 3D modelu části budovy SPŠ stavební Brno." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2021. http://www.nusl.cz/ntk/nusl-444256.

Full text
Abstract:
The thesis deals with the creation of a 3D model from data collected by laser scanning. The first part deals with the theoretical foundations of building information modeling and the laser scanning method. The rest of the work describes in detail the process from data collection to the creation of the model. Most of the space is devoted to work in the Revit software environment.
10

Haltmar, Jan. "Využití laserového skenování v informačním modelování budov." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2019. http://www.nusl.cz/ntk/nusl-400156.

Full text
11

Crabtree, Gärdin David, and Alexander Jimenez. "Optical methods for 3D-reconstruction of railway bridges : Infrared scanning, Close range photogrammetry and Terrestrial laser scanning." Thesis, Luleå tekniska universitet, Byggkonstruktion och brand, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-67716.

Full text
Abstract:
Forecasts for the coming years estimate a growing demand for transport. As the railway sector in Europe has developed over many years, the infrastructure presents performance issues because, among other factors, asset maintenance activities are difficult and time consuming. There are currently 4000 railway bridges in Sweden managed by Trafikverket, which are subject to inspections at least every six years. The most common survey is done visually to determine the physical and functional condition of the bridges as well as to find damages that may exist on them. Because visual inspection is a subjective evaluation technique, the results of these bridge inspections may vary from inspector to inspector. The data collection is time consuming and recorded in standard inspection reports, which may not provide sufficient visualization of damages. The inspector also needs to move around the bridge at close distance, which could lead to unsafe working conditions. 3D modelling technology is becoming more and more common. Methods such as Close Range Photogrammetry (CRP) and Terrestrial Laser Scanning (TLS) are starting to be used for architecture and heritage preservation as well as engineering applications. Infrared (IR) scanning also shows potential for creating 3D models but has not yet been used for structural analysis and inspections. The result of these methods is a point cloud, a 3D representation of an object as points that can be used for creating as-built Building Information Modeling (BIM) models. In this study, the authors put these three methods to the test to see whether IR scanning and CRP, like TLS, are suitable ways to gather data for 3D reconstruction of concrete railway bridges in a fast, safe and non-disturbing manner. For this, the three technologies are applied to six bridges chosen by Trafikverket. The further aim is to determine whether the 3D reconstructions can be used to acquire BIM information to, among other things, create as-built drawings and perform structural evaluations. The study shows that IR scanning and CRP, as well as TLS, have great potential for 3D reconstruction of concrete railway bridges in fast, safe and non-disturbing ways. Still, the technologies need further development before we can rely on them completely.
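Comparing the point clouds produced by the three capture methods is usually done with cloud-to-cloud distances. The sketch below is not from the thesis; the synthetic clouds and noise levels are assumptions, and it only illustrates the nearest-neighbour comparison one might run between, say, a TLS reference scan and a CRP reconstruction.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(reference, test):
    """Nearest-neighbour distance from every point in `test` to the `reference` cloud."""
    tree = cKDTree(reference)
    distances, _ = tree.query(test)
    return distances

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    tls_cloud = rng.uniform(0, 10, (50_000, 3))                           # stand-in for a TLS scan
    crp_cloud = tls_cloud[:20_000] + rng.normal(0, 0.01, (20_000, 3))     # noisier "CRP" copy
    d = cloud_to_cloud_distances(tls_cloud, crp_cloud)
    print(f"mean deviation {d.mean() * 1000:.1f} mm, "
          f"95th percentile {np.percentile(d, 95) * 1000:.1f} mm")
```

Summary statistics such as the mean deviation and a high percentile give a quick, quantitative sense of how closely the cheaper capture method tracks the reference geometry.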
12

Massafra, Angelo. "La modellazione parametrica per la valutazione degli stati deformativi delle capriate lignee con approccio HBIM. Evoluzione della fabbrica e della copertura del teatro comunale di Bologna." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
This thesis is part of a broad, already ongoing research protocol concerning the study of large-span timber trusses located in buildings of significant historical value. The experimental method has been refined through successive approximations and corrections following its application to several case studies and, based on the feedback obtained along the way, is continuously being updated. The thesis aims on the one hand at a new, in-depth implementation of the method, and on the other at its application in order to analyse and interpret the movements and deformations of the roof system of the Teatro Comunale in Bologna. From the point cloud of the entire attic, acquired by laser scanner, the individual trusses are extracted and, using parametric modelling software, algorithms are built that generate a three-dimensional model for each truss. The comparison between these models and the initial point cloud makes it possible to read the trusses in detail, analyse their displacements and deformations, derive point-by-point and comparative information on their behaviour, draw global conclusions on the state of health of the entire attic and, if necessary, foresee and design possible repair or structural strengthening interventions. The fully parametrised structure of the new version of the method has directed the study towards the search for a correlation between generative algorithms and the field of Building Information Modeling, proving to be a tool with a wide range of possible connections to other important research topics concerning the digitisation of the built heritage. A direct link with BIM software can finally allow a direct relationship with structural analysis software, forming a single workflow that, starting from the digital survey by laser scanner, arrives at a computational model of the objects studied.
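The core of the comparison between the parametric truss model and the scan is measuring how far the surveyed points deviate from an idealised member axis. The sketch below is not from the thesis; the span, sag and noise values are assumptions, and it only illustrates the point-to-axis deviation measurement in NumPy.

```python
import numpy as np

def deflection_from_axis(points, axis_start, axis_end):
    """Perpendicular distance of each scanned point from the ideal straight member axis."""
    axis = axis_end - axis_start
    axis_unit = axis / np.linalg.norm(axis)
    rel = points - axis_start
    along = rel @ axis_unit                      # component along the axis
    closest = np.outer(along, axis_unit)         # foot of the perpendicular
    return np.linalg.norm(rel - closest, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    span = 18.0                                   # assumed tie-beam span in metres
    x = rng.uniform(0, span, 4000)
    sag = 0.04 * np.sin(np.pi * x / span)         # assumed 4 cm mid-span sag
    pts = np.column_stack([x,
                           rng.normal(0, 0.003, 4000),
                           -sag + rng.normal(0, 0.003, 4000)])
    d = deflection_from_axis(pts, np.array([0.0, 0.0, 0.0]), np.array([span, 0.0, 0.0]))
    print(f"max deviation from the ideal axis: {d.max() * 1000:.0f} mm")
```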
13

Lukášová, Pavlína. "Cloud Computing jako nástroj BCM." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-75556.

Full text
Abstract:
This thesis deals with possible interconnections between two concepts that play a big role in the contemporary business and IT world: Business Continuity Management and Cloud Computing. The thesis identifies areas where the two concepts complement each other, where Cloud Computing brings new opportunities for Business Continuity Management, and where problems could arise during a particular implementation. From the BCM perspective the focus lies on IT services; from the Cloud Computing perspective the thesis deals especially with security aspects. The thesis also examines the characteristics of higher education and the basic differences from the commercial sphere. Based on the defined differences and the identified interconnections between BCM and Cloud Computing, the thesis argues for the use of a suitable Cloud Computing solution in higher education with regard to Business Continuity improvement. A multi-criteria comparison of several Infrastructure-as-a-Service solutions stems from this analysis, focusing on technical, financial, and Business Continuity aspects. The results of this comparison, together with conclusions from the previous chapters, serve as input for a subsequent practical proposal of a Cloud Computing solution and its verification against Business Continuity improvement under the specific conditions of the University of Economics in Prague. The proposal is also represented by a strategic map.
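A multi-criteria comparison of the kind mentioned above is commonly expressed as a weighted-sum scoring matrix. The sketch below is hypothetical: the provider names, scores and weights are assumptions, and only the three criterion groups (technical, financial, Business Continuity) come from the abstract.

```python
# Hypothetical weights and 1-5 scores; only the weighted-sum technique is the point.
weights = {"technical": 0.4, "financial": 0.3, "business_continuity": 0.3}

scores = {
    "Provider A": {"technical": 4, "financial": 3, "business_continuity": 5},
    "Provider B": {"technical": 5, "financial": 2, "business_continuity": 4},
    "Provider C": {"technical": 3, "financial": 5, "business_continuity": 3},
}

def weighted_total(provider_scores):
    """Weighted sum of criterion scores for one IaaS candidate."""
    return sum(weights[criterion] * score for criterion, score in provider_scores.items())

ranking = sorted(scores.items(), key=lambda item: weighted_total(item[1]), reverse=True)
for provider, provider_scores in ranking:
    print(f"{provider}: {weighted_total(provider_scores):.2f}")
```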
14

Anagnostopoulos, Ioannis. "Generating As-Is BIMs of existing buildings : from planar segments to spaces." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/281699.

Full text
Abstract:
As-Is Building Information Models aid in the management, maintenance and renovation of existing buildings. However, most existing buildings do not have an accurate geometric depiction of their As-Is conditions. The process of generating As-Is models of existing structures involves practitioners, who manually convert Point Cloud Data (PCD) into semantically meaningful 3D models. This process requires a significant amount of manual effort and time. Previous research has been able to model objects by segmenting the point clouds into planes and classifying each one separately into classes, such as walls, floors and ceilings; this is insufficient for modelling, as BIM objects are composed of multiple planes that form volumetric objects. This thesis introduces a novel method that focuses on the geometric creation of As-Is BIMs with enriched information. It tackles the problem by detecting objects, modelling them and enriching the model with spaces and object adjacencies from PCD. The first step of the proposed method detects objects by exploiting the relationships the segments should satisfy to be grouped into one object. It further proposes a method for detecting slabs with variations in height by finding local maxima in the point density. The second step models the geometry of walls and finally enriches the model with closed spaces encoded in the Industry Foundation Classes (IFC) standard. The method uses the point cloud density of detected walls to determine their width by projecting the wall in two directions and finding the edges with the highest density. It identifies adjacent walls by finding gaps or intersections between walls and exploits wall adjacency to correct their boundaries, creating an accurate 3D geometry of the model. Finally, the method detects closed spaces by using a shortest-path algorithm. The method was tested on three original PCDs representing office floors. It detects objects of the classes wall, floor and ceiling with an accuracy of approximately 96%. The precision and recall for the room detection were found to be 100%.
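The wall-width step described above (project the wall points onto the normal direction and find the two highest-density edges) can be illustrated with a simple histogram. The sketch below is not the thesis implementation; the synthetic wall, bin size and function names are assumptions.

```python
import numpy as np

def wall_width_from_density(points, normal, bin_size=0.005):
    """Estimate wall thickness as the distance between the two densest bins of the
    point distribution projected onto the wall normal (the two wall faces)."""
    normal = normal / np.linalg.norm(normal)
    offsets = points @ normal
    bin_edges = np.arange(offsets.min(), offsets.max() + bin_size, bin_size)
    counts, edges = np.histogram(offsets, bins=bin_edges)
    top_two = np.argsort(counts)[-2:]                      # indices of the two densest bins
    centres = (edges[top_two] + edges[top_two + 1]) / 2.0  # bin centres = face positions
    return abs(centres[1] - centres[0])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Two parallel "faces" of a wall, 20 cm apart, with a little scanner noise.
    face_a = np.column_stack([rng.normal(0.00, 0.002, 5000),
                              rng.uniform(0, 4, 5000), rng.uniform(0, 3, 5000)])
    face_b = np.column_stack([rng.normal(0.20, 0.002, 5000),
                              rng.uniform(0, 4, 5000), rng.uniform(0, 3, 5000)])
    width = wall_width_from_density(np.vstack([face_a, face_b]), np.array([1.0, 0.0, 0.0]))
    print(f"estimated wall width: {width * 100:.1f} cm")   # close to 20 cm for this synthetic wall
```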
15

Islam, Md Zahidul. "A Cloud Based Platform for Big Data Science." Thesis, Linköpings universitet, Programvara och system, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-103700.

Full text
Abstract:
With the advent of cloud computing, resizable, scalable infrastructures for data processing are now available to everyone. Software platforms and frameworks that support data-intensive distributed applications, such as Amazon Web Services and Apache Hadoop, give users the necessary tools and infrastructure to work with thousands of scalable computers and process terabytes of data. However, writing scalable applications that run on top of these distributed frameworks is still a demanding and challenging task. The thesis aimed to advance the core scientific and technological means of managing, analyzing, visualizing, and extracting useful information from large data sets, collectively known as "big data". The term "big data" in this thesis refers to large, diverse, complex, longitudinal and/or distributed data sets generated from instruments, sensors, internet transactions, email, social networks, twitter streams, and/or all digital sources available today and in the future. We introduced architectures and concepts for implementing a cloud-based infrastructure for analyzing large volumes of semi-structured and unstructured data. We built and evaluated an application prototype for collecting, organizing, processing, visualizing and analyzing data from the retail industry gathered from indoor navigation systems and social networks (Twitter, Facebook etc.). Our finding was that developing a large-scale data analysis platform is often quite complex when the processed data are expected to grow continuously in the future. The architecture varies depending on the requirements. If we want to build a data warehouse and analyze the data afterwards (batch processing), the best choice will be Hadoop clusters with Pig or Hive; this architecture has been proven at Facebook and Yahoo for years. On the other hand, if the application involves real-time data analytics, the recommendation will be Hadoop clusters with Storm, which has been used successfully at Twitter. After evaluating the developed prototype we introduced a new architecture able to handle large-scale batch and real-time data. We also proposed an upgrade of the existing prototype to handle real-time indoor navigation data.
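The real-time path mentioned above (Storm-style stream analytics) can be approximated in a few lines with a sliding-window aggregation. The sketch below is a toy stand-in, not the thesis prototype; the event keys and window length are assumptions.

```python
from collections import Counter, deque
import time

class SlidingWindowCounter:
    """Count events per key over the last `window_seconds`, a toy stand-in for a
    Storm-style real-time aggregation bolt."""

    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.events = deque()          # (timestamp, key) pairs in arrival order
        self.counts = Counter()

    def add(self, key, timestamp=None):
        ts = time.time() if timestamp is None else timestamp
        self.events.append((ts, key))
        self.counts[key] += 1
        self._expire(ts)

    def _expire(self, now):
        # Drop events that fell out of the window and decrement their counts.
        while self.events and self.events[0][0] < now - self.window:
            _, old_key = self.events.popleft()
            self.counts[old_key] -= 1

    def top(self, n=3):
        return self.counts.most_common(n)

if __name__ == "__main__":
    counter = SlidingWindowCounter(window_seconds=60)
    for i, store in enumerate(["store_a", "store_b", "store_a", "store_c", "store_a"]):
        counter.add(store, timestamp=1000.0 + i)   # synthetic indoor-navigation visit events
    print(counter.top())
```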
16

Talevi, Iacopo. "Big Data Analytics and Application Deployment on Cloud Infrastructure." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14408/.

Full text
Abstract:
This dissertation describes a project that began in October 2016. It was born from a collaboration between Mr. Alessandro Bandini and me, and has been developed under the supervision of Professor Gianluigi Zavattaro. The main objective was to study, and in particular to experiment with, cloud computing in general and its potential in the field of data processing. Cloud computing is a utility-oriented and Internet-centric way of delivering IT services on demand. The first chapter is a theoretical introduction to cloud computing, analyzing the main aspects, the keywords, and the technologies behind clouds, as well as the reasons for the success of this technology and its problems. After the introductory section, I briefly describe the three main cloud platforms on the market. During this project we developed a simple social network; consequently, in the third chapter I analyze the social network development, with the initial solution realized through Amazon Web Services and the steps we took to obtain the final version using Google Cloud Platform and its characteristics. To conclude, the last section is dedicated to data processing and contains an initial theoretical part describing MapReduce and Hadoop, followed by a description of our analysis. We used Google App Engine to execute these computations on a large dataset. I explain the basic idea, the code and the problems encountered.
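The MapReduce model referred to above can be shown compactly with an in-memory map, shuffle and reduce over a toy corpus. This is a generic illustration of the paradigm, not the dissertation's App Engine code; the documents and the word-count task are assumptions.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: emit (word, 1) pairs, as a mapper would do per input record."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by key across mapper outputs."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the grouped values, here by summing the counts."""
    return {key: sum(values) for key, values in groups.items()}

if __name__ == "__main__":
    documents = ["cloud computing on demand", "big data in the cloud", "cloud cloud cloud"]
    mapped = chain.from_iterable(map_phase(doc) for doc in documents)
    print(reduce_phase(shuffle(mapped)))   # e.g. {'cloud': 5, 'computing': 1, ...}
```

In a Hadoop deployment the map and reduce functions run on separate workers and the shuffle happens over the network, but the data flow is the same.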
17

McCaul, Christopher Francis. "Big Data: Coping with Data Obesity in Cloud Environments." Thesis, Ulster University, 2017. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.724751.

Full text
18

Martins, Pedro Miguel Pereira Serrano. "Evaluation and optimization of a session-based middleware for data management." Master's thesis, Faculdade de Ciências e Tecnologia, 2014. http://hdl.handle.net/10362/12609.

Full text
Abstract:
Dissertation submitted to obtain the Master's degree in Informatics Engineering
The current massive daily production of data has created an unprecedented opportunity for information extraction in many domains. However, this huge rise in the quantity of generated data that needs to be processed, stored, and delivered in a timely manner has created several new challenges. In an effort to attack these challenges, [Dom13] proposed a middleware built around the concept of a Session, capable of dynamically aggregating, processing and disseminating large amounts of data to groups of clients depending on their interests. However, this middleware is deployed on a commercial cloud with limited processing support in order to reduce its costs. Moreover, it does not explore the scalability and elasticity capabilities provided by the cloud infrastructure, which presents a problem even if the associated costs may not be a concern. This thesis proposes to improve the middleware's performance and to add the capability of scaling inside a cloud by requesting or dismissing additional instances. Additionally, this thesis also addresses the scalability and cost problems by exploring alternative deployment scenarios for the middleware that consider free infrastructure providers and open-source cloud management providers. To achieve this, an extensive evaluation of the middleware's architecture is performed using a profiling tool and several test applications. This information is then used to propose a set of solutions for the performance and scalability problems, and a subset of these is implemented and tested again to evaluate the benefits gained.
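The scaling capability described above (requesting or dismissing instances in response to load) usually boils down to a threshold rule with a cooldown. The sketch below is hypothetical, not the middleware's actual policy; the thresholds, limits and class name are assumptions.

```python
import time

class ScalingPolicy:
    """Toy threshold-based autoscaler: add an instance when average session load is high,
    remove one when it is low, with a cooldown to avoid oscillation."""

    def __init__(self, scale_up_at=0.75, scale_down_at=0.25,
                 min_instances=1, max_instances=10, cooldown_s=120):
        self.up, self.down = scale_up_at, scale_down_at
        self.min, self.max = min_instances, max_instances
        self.cooldown = cooldown_s
        self.last_action = 0.0

    def decide(self, avg_load, current_instances, now=None):
        now = time.time() if now is None else now
        if now - self.last_action < self.cooldown:
            return 0                                    # still cooling down, do nothing
        if avg_load > self.up and current_instances < self.max:
            self.last_action = now
            return +1                                   # request one more instance
        if avg_load < self.down and current_instances > self.min:
            self.last_action = now
            return -1                                   # dismiss one instance
        return 0

if __name__ == "__main__":
    policy = ScalingPolicy()
    print(policy.decide(avg_load=0.9, current_instances=2, now=1000.0))  # +1: scale up
    print(policy.decide(avg_load=0.9, current_instances=3, now=1010.0))  # 0: within cooldown
```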
19

Navarro, Martín Joan. "From cluster databases to cloud storage: Providing transactional support on the cloud." Doctoral thesis, Universitat Ramon Llull, 2015. http://hdl.handle.net/10803/285655.

Full text
Abstract:
Over the past three decades, technology constraints (e.g., the capacity of storage devices, communication network bandwidth) and an ever-increasing set of user demands (e.g., information structures, data volumes) have driven the evolution of distributed databases. Since the flat-file data repositories developed in the early eighties, there have been important advances in concurrency control algorithms, replication protocols, and transaction management. However, modern concerns in data storage posed by Big Data and cloud computing, related to overcoming the scalability and elasticity limitations of classic databases, are pushing practitioners to relax some important properties featured by transactions, which excludes several applications that are unable to fit this strategy due to their intrinsic transactional nature. The purpose of this thesis is to address two important challenges still latent in distributed databases: (1) the scalability limitations of transactional databases and (2) providing transactional support on cloud-based storage repositories. Analyzing the traditional concurrency control and replication techniques, used by classic databases to support transactions, is critical to identify the reasons that make these systems degrade their throughput when the number of nodes and/or the amount of data grows rapidly. Moreover, this analysis is devoted to justifying the design rationale behind cloud repositories, in which transactions have generally been neglected. Furthermore, enabling applications that are strongly dependent on transactions to take advantage of the cloud storage paradigm is crucial for their adaptation to current data demands and business models. This dissertation starts by proposing a custom protocol simulator for static distributed databases, which serves as a basis for revising and comparing the performance of existing concurrency control protocols and replication techniques. As this thesis is especially concerned with transactions, the effects of different transaction profiles under different conditions on database scalability are studied. This analysis is followed by a review of existing cloud storage repositories (which claim to be highly dynamic, scalable, and available), leading to an evaluation of the parameters and features that these systems have sacrificed in order to meet current large-scale data storage demands. To further explore the possibilities of the cloud computing paradigm in a real-world scenario, a cloud-inspired approach to store data from Smart Grids is presented. More specifically, the proposed architecture combines classic database replication techniques and epidemic update propagation with the design principles of cloud-based storage. The key insights collected when prototyping the replication and concurrency control protocols in the database simulator, together with the experiences derived from building a large-scale storage repository for Smart Grids, are wrapped up into what we have coined Epidemia: a storage infrastructure conceived to provide transactional support on the cloud. In addition to inheriting the benefits of highly scalable cloud repositories, Epidemia includes a transaction management layer that forwards client transactions to a hierarchical set of data partitions, which allows the system to offer different consistency levels and elastically adapt its configuration to incoming workloads. Finally, experimental results highlight the feasibility of our contribution and encourage practitioners to further research in this area.
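The transaction-management layer described above (forwarding client transactions to partitions while offering different consistency levels) can be sketched as key-hash routing plus a replica-acknowledgement rule. This is a simplified illustration, not Epidemia's actual design; the partition count, replica naming and consistency levels are assumptions borrowed from common quorum-based systems.

```python
import hashlib

REPLICAS_PER_PARTITION = 3

def partition_for(key, num_partitions):
    """Deterministically map a transaction key to a partition (hash partitioning)."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

def required_acks(consistency, replicas=REPLICAS_PER_PARTITION):
    """How many replica acknowledgements a write needs for a given consistency level."""
    return {"ONE": 1, "QUORUM": replicas // 2 + 1, "ALL": replicas}[consistency]

def route_transaction(key, num_partitions, consistency="QUORUM"):
    """Decide where a client transaction goes and how many acks it must collect."""
    partition = partition_for(key, num_partitions)
    return {
        "partition": partition,
        "replicas": [f"partition{partition}-replica{i}" for i in range(REPLICAS_PER_PARTITION)],
        "acks_needed": required_acks(consistency),
    }

if __name__ == "__main__":
    print(route_transaction("meter-0042", num_partitions=8, consistency="ONE"))
    print(route_transaction("meter-0042", num_partitions=8, consistency="ALL"))
```

Relaxing the consistency level lowers the number of acknowledgements per transaction, which is exactly the scalability/consistency trade-off the abstract refers to.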
20

Kemp, Gavin. "CURARE : curating and managing big data collections on the cloud." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1179/document.

Full text
Abstract:
The emergence of new platforms for decentralized data creation, such as sensor and mobile platforms, and the increasing availability of open data on the Web are adding to the number of data sources inside organizations and bring an unprecedented amount of Big Data to be explored. The notion of data curation has emerged to refer to the maintenance of data collections and the preparation and integration of datasets, combining them to perform analytics. Curation tasks include extracting explicit and implicit metadata, as well as semantic metadata matching and enrichment to add quality to the data. Next-generation data management engines should promote techniques with a new philosophy to cope with the deluge of data. They should aid the user in understanding the content of data collections and provide guidance for exploring the data. A scientist can explore data collections step by step and stop when the content and quality reach a satisfactory level. Our work adopts this philosophy, and the main contribution is a data collection curation approach and exploration environment named CURARE. CURARE is a service-based system for curating and exploring Big Data. CURARE implements a data collection model, which we propose, for representing the content of collections in terms of structural and statistical metadata organised under the concept of a view. A view is a data structure that provides an aggregated perspective of the content of a data collection and its several associated releases. CURARE provides tools focused on computing and extracting views using data analytics methods, as well as functions for exploring (querying) metadata. Exploiting Big Data requires data analysts to make a substantial number of decisions to determine the best way to store, share and process data collections in order to get the maximum benefit and knowledge from them. Instead of manually exploring data collections, CURARE provides tools, integrated in an environment, for assisting data analysts in determining which collections are best suited to achieving a given analytics objective. We implemented CURARE and explain how to deploy it on the cloud using data science services on top of which the CURARE services are plugged. We have conducted experiments to measure the cost of computing views based on datasets from Grand Lyon and Twitter, to provide insight into the interest of our data curation approach and environment.
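The notion of a view as an aggregated, statistical summary of a data collection release can be illustrated with a minimal sketch; the pandas-based code below assumes tabular CSV releases with made-up file names and is not CURARE's actual view model.

    import pandas as pd

    def build_view(df: pd.DataFrame, release_id: str) -> dict:
        """Aggregate structural and statistical metadata for one release."""
        view = {"release": release_id, "rows": len(df), "attributes": {}}
        for col in df.columns:
            series = df[col]
            meta = {
                "dtype": str(series.dtype),
                "missing": int(series.isna().sum()),
                "distinct": int(series.nunique()),
            }
            if pd.api.types.is_numeric_dtype(series):
                meta.update(min=float(series.min()), max=float(series.max()),
                            mean=float(series.mean()))
            view["attributes"][col] = meta
        return view

    # Usage: compare two releases of the same collection at the metadata level
    # (the CSV file names are hypothetical).
    v1 = build_view(pd.read_csv("release_2016.csv"), "2016")
    v2 = build_view(pd.read_csv("release_2017.csv"), "2017")
    drift = {c: (v1["attributes"][c]["distinct"], v2["attributes"][c]["distinct"])
             for c in v1["attributes"] if c in v2["attributes"]}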
APA, Harvard, Vancouver, ISO, and other styles
21

Huang, Xueli. "Achieving Data Privacy and Security in Cloud." Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/372805.

Full text
Abstract:
Computer and Information Science
Ph.D.
Growing concerns about the privacy of data stored in the public cloud have restrained the widespread adoption of cloud computing. The traditional method of protecting data privacy is to encrypt data before it is sent to the public cloud, but this approach always introduces heavy computation, especially for image and video data, which involve far larger volumes than text. Another way is to take advantage of a hybrid cloud by separating sensitive from non-sensitive data and storing them in a trusted private cloud and an un-trusted public cloud respectively. But if we adopt this method directly, all images and videos containing sensitive data have to be stored in the private cloud, which makes the method meaningless. Moreover, the emergence of the Software-Defined Networking (SDN) paradigm, which decouples the control logic from the closed and proprietary implementations of traditional network devices, enables researchers and practitioners to design innovative network functions and protocols in a much easier, more flexible, and more powerful way. The data plane asks the control plane to update flow rules when it receives new network packets it does not know how to handle, and the control plane then dynamically deploys and configures flow rules according to the data plane's requests, which allows the whole network to be managed and controlled efficiently. However, this reactive control model could be exploited by attackers launching Distributed Denial-of-Service (DDoS) attacks that send a large number of new requests from the data plane to the control plane. For image data, we divide the image into equal-sized pieces to speed up the encryption process and propose two kinds of methods to cut the relationship between edges. One is to add random noise to each piece; the other is to design a one-to-one mapping function for each piece that maps each pixel value to a different one, which cuts off the relationship between pixels as well as edges. Our mapping function takes a random parameter as input so that each piece can randomly choose a different mapping. Finally, we shuffle the pieces with another random parameter, which makes the problem of recovering the shuffled image NP-complete. For video data, we propose two different methods for intra frames (I-frames) and inter frames (P-frames), based on their different characteristics. A hybrid selective video encryption scheme for H.264/AVC, based on the Advanced Encryption Standard (AES) and the video data themselves, is proposed for I-frames. For each P-slice of a P-frame, we only abstract a small part of it into the private cloud, based on the characteristics of the intra prediction mode, which efficiently prevents the P-frame from being decoded. For clouds running SDN, we propose a framework to keep the controller safe from DDoS attacks. We first periodically predict the number of new requests for each switch based on its previous information, and the new requests are sent to the controller if the predicted total is less than a threshold. Otherwise, these requests are directed to the security gateway to check whether an attack is among them. The requests that cause a dramatic decrease in entropy are filtered out by our algorithm, and rules for these requests are created and sent to the controller. The controller sends the rules to each switch so that flows matching the rules are directed to a honeypot.
Temple University--Theses
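The piece-wise scrambling idea summarized in the abstract (equal-sized pieces, a per-piece one-to-one pixel mapping, then a shuffle of the pieces) can be sketched as follows for a grayscale image held in a NumPy array; the piece size, key handling and random source are illustrative and are not taken from the thesis.

    import numpy as np

    def scramble(img: np.ndarray, piece: int, seed: int) -> np.ndarray:
        """Split an image into equal pieces, remap pixel values per piece,
        then shuffle the pieces, in the spirit of the scheme sketched above."""
        rng = np.random.default_rng(seed)
        h, w = img.shape
        assert h % piece == 0 and w % piece == 0, "image must tile evenly"
        blocks = []
        for y in range(0, h, piece):
            for x in range(0, w, piece):
                block = img[y:y + piece, x:x + piece]
                # One-to-one mapping: a random permutation of the 256 gray
                # levels, drawn independently for every piece.
                mapping = rng.permutation(256).astype(img.dtype)
                blocks.append(mapping[block])
        order = rng.permutation(len(blocks))          # shuffle the pieces
        shuffled = [blocks[i] for i in order]
        per_row = w // piece
        rows = [np.hstack(shuffled[r * per_row:(r + 1) * per_row])
                for r in range(h // piece)]
        return np.vstack(rows)

    # Example: scramble a random 64x64 8-bit image with 16x16 pieces.
    image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    cipher = scramble(image, piece=16, seed=42)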
APA, Harvard, Vancouver, ISO, and other styles
22

Tang, Yuzhe. "Secure and high-performance big-data systems in the cloud." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53995.

Full text
Abstract:
Cloud computing and big data technology continue to revolutionize how computing and data analysis are delivered today and in the future. To store and process fast-changing big data, various scalable systems (e.g. key-value stores and MapReduce) have recently emerged in industry. However, there is a huge gap between what these open-source software systems can offer and what real-world applications demand. First, scalable key-value stores are designed for simple data access methods, which limits their use in advanced database applications. Second, existing systems in the cloud need automatic performance optimization for better resource management with minimized operational overhead. Third, the demand continues to grow for privacy-preserving search and information sharing between autonomous data providers, as exemplified by healthcare information networks. My Ph.D. research aims at bridging these gaps. First, I proposed HINDEX, for secondary index support on top of write-optimized key-value stores (e.g. HBase and Cassandra). To update the index structure efficiently in the face of an intensive write stream, HINDEX synchronously executes append-only operations and defers the expensive so-called index-repair operations. The core contribution of HINDEX is a scheduling framework for deferred and lightweight execution of index repairs. HINDEX has been implemented and is currently being transferred to an IBM big data product. Second, I proposed Auto-pipelining for automatic performance optimization of streaming applications on multi-core machines. The goal is to prevent the bottleneck scenario in which the streaming system is blocked by a single core while all other cores are idling, which wastes resources. To partition the streaming workload evenly across all the cores and to search for the best partitioning among many possibilities, I proposed a heuristic-based search strategy that achieves locally optimal partitioning with lightweight search overhead. The key idea is to use a white-box approach to search for the theoretically best partitioning and then use a black-box approach to verify the effectiveness of that partitioning. The proposed technique, called Auto-pipelining, is implemented on IBM Stream S. Third, I proposed ε-PPI, a suite of privacy-preserving index algorithms that allow data sharing among unknown parties while maintaining a desired level of data privacy. To differentiate the privacy concerns of different persons, I proposed a personalized privacy definition and substantiated this new privacy requirement through the injection of false positives in the published ε-PPI data. To construct the ε-PPI securely and efficiently, I proposed to optimize the performance of multi-party computations, which are otherwise expensive; the key idea is to use an inexpensive addition-homomorphic secret-sharing mechanism and to perform the distributed computation in a scalable P2P overlay.
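The append-only indexing idea behind HINDEX can be illustrated with a toy sketch: index entries are appended synchronously on every write, while removal of stale entries is queued as a deferred repair and filtered out at read time. The data structures below are plain Python stand-ins, not the HBase/Cassandra integration described in the thesis.

    from collections import defaultdict, deque

    class TinySecondaryIndex:
        """Write-optimized secondary index: appends now, repairs later."""
        def __init__(self):
            self.base = {}                      # primary key -> value
            self.index = defaultdict(set)       # value -> set of primary keys
            self.repairs = deque()              # deferred cleanup of stale entries

        def put(self, key, value):
            old = self.base.get(key)
            self.base[key] = value
            self.index[value].add(key)          # synchronous append-only step
            if old is not None and old != value:
                self.repairs.append((old, key)) # stale entry fixed lazily

        def repair(self, budget=100):
            """Run a bounded batch of deferred index repairs (e.g. off-peak)."""
            while self.repairs and budget > 0:
                stale_value, key = self.repairs.popleft()
                if self.base.get(key) != stale_value:
                    self.index[stale_value].discard(key)
                budget -= 1

        def query(self, value):
            # Filter out entries whose repair has not run yet.
            return [k for k in self.index.get(value, ()) if self.base.get(k) == value]

    idx = TinySecondaryIndex()
    idx.put("row1", "red"); idx.put("row1", "blue")
    print(idx.query("red"))   # [] even before repair, thanks to read-time filtering
    idx.repair()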
APA, Harvard, Vancouver, ISO, and other styles
23

Jayapandian, Catherine Praveena. "Cloudwave: A Cloud Computing Framework for Multimodal Electrophysiological Big Data." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1405516626.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Saker, Vanessa. "Automated feature synthesis on big data using cloud computing resources." Master's thesis, University of Cape Town, 2020. http://hdl.handle.net/11427/32452.

Full text
Abstract:
The data analytics process has many time-consuming steps. Combining data that sits in a relational database warehouse into a single relation while aggregating important information in a meaningful way and preserving relationships across relations, is complex and time-consuming. This step is exceptionally important as many machine learning algorithms require a single file format as an input (e.g. supervised and unsupervised learning, feature representation and feature learning, etc.). An analyst is required to manually combine relations while generating new, more impactful information points from data during the feature synthesis phase of the feature engineering process that precedes machine learning. Furthermore, the entire process is complicated by Big Data factors such as processing power and distributed data storage. There is an open-source package, Featuretools, that uses an innovative algorithm called Deep Feature Synthesis to accelerate the feature engineering step. However, when working with Big Data, there are two major limitations. The first is the curse of modularity - Featuretools stores data in-memory to process it and thus, if data is large, it requires a processing unit with a large memory. Secondly, the package is dependent on data stored in a Pandas DataFrame. This makes the use of Featuretools with Big Data tools such as Apache Spark, a challenge. This dissertation aims to examine the viability and effectiveness of using Featuretools for feature synthesis with Big Data on the cloud computing platform, AWS. Exploring the impact of generated features is a critical first step in solving any data analytics problem. If this can be automated in a distributed Big Data environment with a reasonable investment of time and funds, data analytics exercises will benefit considerably. In this dissertation, a framework for automated feature synthesis with Big Data is proposed and an experiment conducted to examine its viability. Using this framework, an infrastructure was built to support the process of feature synthesis on AWS that made use of S3 storage buckets, Elastic Cloud Computing services, and an Elastic MapReduce cluster. A dataset of 95 million customers, 34 thousand fraud cases and 5.5 million transactions across three different relations was then loaded into the distributed relational database on the platform. The infrastructure was used to show how the dataset could be prepared to represent a business problem, and Featuretools used to generate a single feature matrix suitable for inclusion in a machine learning pipeline. The results show that the approach was viable. The feature matrix produced 75 features from 12 input variables and was time efficient with a total end-to-end run time of 3.5 hours and a cost of approximately R 814 (approximately $52). The framework can be applied to a different set of data and allows the analysts to experiment on a small section of the data until a final feature set is decided. They are able to easily scale the feature matrix to the full dataset. This ability to automate feature synthesis, iterate and scale up, will save time in the analytics process while providing a richer feature set for better machine learning results.
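The Deep Feature Synthesis step discussed above can be sketched with a small two-table example; the code assumes the Featuretools 1.x API and invented column names, and it mirrors neither the thesis dataset nor its AWS deployment.

    import pandas as pd
    import featuretools as ft

    customers = pd.DataFrame({"customer_id": [1, 2],
                              "joined": pd.to_datetime(["2019-01-01", "2019-06-01"])})
    transactions = pd.DataFrame({"transaction_id": [10, 11, 12],
                                 "customer_id": [1, 1, 2],
                                 "amount": [25.0, 40.0, 10.0]})

    es = ft.EntitySet(id="retail")
    es = es.add_dataframe(dataframe_name="customers", dataframe=customers,
                          index="customer_id", time_index="joined")
    es = es.add_dataframe(dataframe_name="transactions", dataframe=transactions,
                          index="transaction_id")
    es = es.add_relationship("customers", "customer_id",
                             "transactions", "customer_id")

    # Deep Feature Synthesis: aggregate transaction columns up to each customer.
    feature_matrix, feature_defs = ft.dfs(entityset=es,
                                          target_dataframe_name="customers",
                                          agg_primitives=["sum", "mean", "count"],
                                          max_depth=2)
    print(feature_matrix.head())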
APA, Harvard, Vancouver, ISO, and other styles
25

Woodworth, Jason W. "Secure Semantic Search over Encrypted Big Data in the Cloud." Thesis, University of Louisiana at Lafayette, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10286646.

Full text
Abstract:

Cloud storage is a widely used service for both personal and enterprise demands. However, despite its advantages, many potential users with sensitive data refrain from fully using the service due to valid concerns about data privacy. An established solution to this problem is to perform encryption on the client's end. This approach, however, restricts data processing capabilities (e.g. searching over the data). In particular, searching semantically with real-time response is of interest to users with big data. To address this, this thesis introduces an architecture for semantically searching encrypted data using cloud services. It presents a method that accomplishes this by extracting and encrypting key phrases from uploaded documents and comparing them to queries that have been expanded with semantic information and then encrypted. It presents an additional method that builds off of this and uses topic-based clustering to prune the amount of searched data and improve performance times at big-data scale. Results of experiments carried out on real datasets with fully implemented prototypes show that results are accurate and searching is efficient.
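The core matching idea, comparing encrypted key phrases against an encrypted, semantically expanded query, can be sketched with a keyed HMAC as the searchable token; the toy synonym table stands in for the semantic expansion step, and none of this reflects the thesis's actual index format or clustering.

    import hmac, hashlib

    KEY = b"shared-secret-key"          # held by the client, never by the cloud

    def token(phrase: str) -> str:
        """Deterministic keyed token: equal phrases yield equal tokens,
        but the cloud cannot invert them without the key."""
        return hmac.new(KEY, phrase.lower().encode(), hashlib.sha256).hexdigest()

    # Client side: extract key phrases from a document and upload only tokens.
    doc_phrases = ["cloud storage", "data privacy", "encryption"]
    encrypted_index = {token(p) for p in doc_phrases}

    # Client side: expand the query semantically before tokenizing it.
    SYNONYMS = {"privacy": ["confidentiality", "data privacy"]}   # toy thesaurus
    def expand(query: str) -> set:
        terms = {query}
        terms.update(SYNONYMS.get(query, []))
        return {token(t) for t in terms}

    # Cloud side: match without ever seeing plaintext phrases or queries.
    hits = encrypted_index & expand("privacy")
    print(bool(hits))    # True: "data privacy" matched via expansion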

APA, Harvard, Vancouver, ISO, and other styles
26

Foschini, Federico. "Amber: a Cloud Service Architecture proposal." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13840/.

Full text
Abstract:
Nowadays, Cloud, Internet of Things and Big Data are among the topics of greatest interest, considering the increasingly central role that data is assuming. Indeed, new kinds of solutions are increasingly dedicated to the collection and management of data in order to support the analysis processes meant to improve a company's business. The design processes for these solutions are often particularly complex and require significant effort, thereby increasing their complexity. Moreover, the design of such solutions is closely tied to the platform and technologies used, which introduces critical issues related to these dependencies. This thesis proposes the definition of an architecture model aimed at supporting the design of different types of Cloud solutions, in order to facilitate, simplify and improve their design. One of the goals of this model is transparency with respect to technological aspects. Indeed, it is described how this architecture model can act as a wrapper between the implemented solution and the platform hosting it, thus making the technological complexity transparent. The architecture's modules can therefore be distributed across different platforms without affecting their characteristics and functionality, thanks to their independence. The thesis also presents a real case study showing how a Web system can be ported to the Cloud by adopting the proposed architecture. The system addressed in the case study is UniSAS, a solution belonging to the Smart Living domain dedicated to improving accessibility and access control. The architecture defined and described in this thesis therefore makes it possible to optimize the effort and resources needed to design multi-platform, multi-context Cloud solutions, with the goal of reducing their complexity and critical issues.
APA, Harvard, Vancouver, ISO, and other styles
27

Giraud, Matthieu. "Secure Distributed MapReduce Protocols : How to have privacy-preserving cloud applications?" Thesis, Université Clermont Auvergne‎ (2017-2020), 2019. http://www.theses.fr/2019CLFAC033/document.

Full text
Abstract:
À l’heure des réseaux sociaux et des objets connectés, de nombreuses et diverses données sont produites à chaque instant. L’analyse de ces données a donné lieu à une nouvelle science nommée "Big Data". Pour traiter du mieux possible ce flux incessant de données, de nouvelles méthodes de calcul ont vu le jour. Les travaux de cette thèse portent sur la cryptographie appliquée au traitement de grands volumes de données, avec comme finalité la protection des données des utilisateurs. En particulier, nous nous intéressons à la sécurisation d’algorithmes utilisant le paradigme de calcul distribué MapReduce pour réaliser un certain nombre de primitives (ou algorithmes) indispensables aux opérations de traitement de données, allant du calcul de métriques de graphes (e.g. PageRank) aux requêtes SQL (i.e. intersection d’ensembles, agrégation, jointure naturelle). Nous traitons dans la première partie de cette thèse de la multiplication de matrices. Nous décrivons d’abord une multiplication matricielle standard et sécurisée pour l’architecture MapReduce qui est basée sur l’utilisation du chiffrement additif de Paillier pour garantir la confidentialité des données. Les algorithmes proposés correspondent à une hypothèse spécifique de sécurité : collusion ou non des nœuds du cluster MapReduce, le modèle général de sécurité étant honnête mais curieux. L’objectif est de protéger la confidentialité de l’une et l’autre matrice, ainsi que le résultat final, et ce pour tous les participants (propriétaires des matrices, nœuds de calcul, utilisateur souhaitant calculer le résultat). D’autre part, nous exploitons également l’algorithme de multiplication de matrices de Strassen-Winograd, dont la complexité asymptotique est O(n^log2(7)) soit environ O(n^2.81) ce qui est une amélioration par rapport à la multiplication matricielle standard. Une nouvelle version de cet algorithme adaptée au paradigme MapReduce est proposée. L’hypothèse de sécurité adoptée ici est limitée à la non-collusion entre le cloud et l’utilisateur final. La version sécurisée utilise comme pour la multiplication standard l’algorithme de chiffrement Paillier. La seconde partie de cette thèse porte sur la protection des données lorsque des opérations d’algèbre relationnelle sont déléguées à un serveur public de cloud qui implémente à nouveau le paradigme MapReduce. En particulier, nous présentons une solution d’intersection sécurisée qui permet à un utilisateur du cloud d’obtenir l’intersection de n > 1 relations appartenant à n propriétaires de données. Dans cette solution, tous les propriétaires de données partagent une clé et un propriétaire de données sélectionné partage une clé avec chacune des clés restantes. Par conséquent, alors que ce propriétaire de données spécifique stocke n clés, les autres propriétaires n’en stockent que deux. Le chiffrement du tuple de relation réelle consiste à combiner l’utilisation d’un chiffrement asymétrique avec une fonction pseudo-aléatoire. Une fois que les données sont stockées dans le cloud, chaque réducteur (Reducer) se voit attribuer une relation particulière. S’il existe n éléments différents, des opérations XOR sont effectuées. La solution proposée reste donc très efficace. Par la suite, nous décrivons les variantes des opérations de regroupement et d’agrégation préservant la confidentialité en termes de performance et de sécurité. 
Les solutions proposées associent l’utilisation de fonctions pseudo-aléatoires à celle du chiffrement homomorphe pour les opérations COUNT, SUM et AVG et à un chiffrement préservant l’ordre pour les opérations MIN et MAX. Enfin, nous proposons les versions sécurisées de deux protocoles de jointure (cascade et hypercube) adaptées au paradigme MapReduce. Les solutions consistent à utiliser des fonctions pseudo-aléatoires pour effectuer des contrôles d’égalité et ainsi permettre les opérations de jointure lorsque des composants communs sont détectés.(...)
In the age of social networks and connected objects, a great variety of data is produced at every moment. The analysis of these data has given rise to a new science called "Big Data". To best handle this constant flow of data, new calculation methods have emerged. This thesis focuses on cryptography applied to the processing of large volumes of data, with the aim of protecting user data. In particular, we focus on securing algorithms that use the distributed computing MapReduce paradigm to perform a number of primitives (or algorithms) essential for data processing, ranging from the calculation of graph metrics (e.g. PageRank) to SQL queries (i.e. set intersection, aggregation, natural join). In the first part of this thesis, we discuss the multiplication of matrices. We first describe a standard and secure matrix multiplication for the MapReduce architecture that is based on Paillier's additive encryption scheme to guarantee the confidentiality of the data. The proposed algorithms correspond to a specific security hypothesis, collusion or not of the MapReduce cluster nodes, the general security model being honest-but-curious. The aim is to protect the confidentiality of both matrices, as well as the final result, and this for all participants (matrix owners, computation nodes, and the user wishing to obtain the result). On the other hand, we also use the Strassen-Winograd matrix multiplication algorithm, whose asymptotic complexity is O(n^log2(7)), or about O(n^2.81), which is an improvement over standard matrix multiplication. A new version of this algorithm adapted to the MapReduce paradigm is proposed. The security assumption adopted here is limited to non-collusion between the cloud and the end user. As with the standard multiplication, the secure version uses Paillier's encryption scheme. The second part of this thesis focuses on data protection when relational algebra operations are delegated to a public cloud server using the MapReduce paradigm. In particular, we present a secure intersection solution that allows a cloud user to obtain the intersection of n > 1 relations belonging to n data owners. In this solution, all data owners share a key, and a selected data owner additionally shares a key with each of the remaining owners. Therefore, while this specific data owner stores n keys, the other owners only store two keys. The encryption of each relation tuple consists in combining the use of asymmetric encryption with a pseudo-random function. Once the data is stored in the cloud, each reducer is assigned a specific relation. If there are n different elements, XOR operations are performed. The proposed solution is thus very efficient. Next, we describe privacy-preserving variants of the grouping and aggregation operations in terms of performance and security. The proposed solutions combine the use of pseudo-random functions with homomorphic encryption for the COUNT, SUM and AVG operations and with order-preserving encryption for the MIN and MAX operations. Finally, we offer secure versions of two join protocols (cascade and hypercube) adapted to the MapReduce paradigm. The solutions consist in using pseudo-random functions to perform equality checks and thus allow join operations when common components are detected. All the solutions described above are evaluated and their security is proven.
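The flavour of the PRF-based equality checks mentioned above can be sketched as follows: each owner tags its elements with an HMAC under a shared key, and a reducer intersects the tags without ever seeing plaintext values. This is a simplification (no asymmetric layer, no MapReduce runtime) with made-up data.

    import hmac, hashlib

    SHARED_KEY = b"key-shared-by-all-data-owners"

    def prf(value: str) -> str:
        """Pseudo-random function used for oblivious equality checks."""
        return hmac.new(SHARED_KEY, value.encode(), hashlib.sha256).hexdigest()

    # Each data owner tags its relation locally and uploads only the tags;
    # the tag -> value map stays on the owner/user side in a real deployment.
    owners = {
        "owner_A": {"alice", "bob", "carol"},
        "owner_B": {"bob", "carol", "dave"},
        "owner_C": {"carol", "bob", "erin"},
    }
    tagged = {name: {prf(v): v for v in values} for name, values in owners.items()}

    # Reducer side: keep a tag only if it appears in all n uploads.
    n = len(tagged)
    counts = {}
    for mapping in tagged.values():
        for tag in mapping:
            counts[tag] = counts.get(tag, 0) + 1
    common_tags = {t for t, c in counts.items() if c == n}

    # The querying user, who also knows the key, maps tags back to values.
    intersection = {tagged["owner_A"][t] for t in common_tags}
    print(intersection)    # {'bob', 'carol'}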
APA, Harvard, Vancouver, ISO, and other styles
28

Al, Buhussain Ali. "Design and Analysis of an Adjustable and Configurable Bio-inspired Heuristic Scheduling Technique for Cloud Based Systems." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34794.

Full text
Abstract:
Cloud computing environments mainly focus on the delivery of resources, platforms, and infrastructure as services to users over the Internet. More specifically, the Cloud promises users access to a scalable amount of resources, making use of elasticity by scaling resource provisioning up and down depending on demand. Cloud technology has gained popularity in recent years as the next big step in the IT industry. The number of users of Cloud services has been increasing steadily, so efficient task scheduling is crucial for improving and maintaining performance. Moreover, those users have different SLAs that impose different demands on the cloud system. In this particular case, a scheduler is responsible for assigning tasks to virtual machines in an effective and efficient manner to meet the QoS promised to users. The scheduler needs to adapt to changes in the cloud environment along with the defined demand requirements. Hence, an Adjustable and Configurable Bio-inspired Heuristic scheduling technique for cloud-based systems (ACBH) is suggested. We also present an extensive comparative performance study of bio-inspired scheduling algorithms, namely Ant Colony Optimization (ACO) and Honey Bee Optimization (HBO). Furthermore, a network scheduling algorithm, Random Biased Sampling (RBS), is also evaluated. The study of bio-inspired techniques concluded that all the bio-inspired algorithms follow the same flow, which was later used in the development of ACBH. The experimental results have shown that ACBH achieves a 90% better execution time than its closest rival, ACO. ACBH also performs better in terms of the fairness of execution-time differences between tasks. HBO shows better scheduling when the objective consists mainly of cost. However, when there are multiple optimization objectives, ACBH performs best due to its configurability and adaptability.
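A stripped-down Ant Colony Optimization scheduler gives a feel for how bio-inspired heuristics assign tasks to virtual machines; the pheromone update, parameters and runtime matrix below are illustrative and not ACBH's actual algorithm.

    import random

    def aco_schedule(exec_time, n_ants=20, n_iter=50, alpha=1.0, beta=2.0,
                     rho=0.1, q=1.0, seed=0):
        """Tiny ACO sketch for assigning tasks to VMs.
        exec_time[t][v] = estimated runtime of task t on VM v; minimize makespan."""
        rng = random.Random(seed)
        n_tasks, n_vms = len(exec_time), len(exec_time[0])
        pheromone = [[1.0] * n_vms for _ in range(n_tasks)]
        best_assign, best_makespan = None, float("inf")

        for _ in range(n_iter):
            for _ in range(n_ants):
                assign, load = [], [0.0] * n_vms
                for t in range(n_tasks):
                    # Desirability combines pheromone and a greedy 1/time heuristic.
                    weights = [(pheromone[t][v] ** alpha) *
                               ((1.0 / exec_time[t][v]) ** beta) for v in range(n_vms)]
                    v = rng.choices(range(n_vms), weights=weights)[0]
                    assign.append(v)
                    load[v] += exec_time[t][v]
                makespan = max(load)
                if makespan < best_makespan:
                    best_assign, best_makespan = assign, makespan
            # Evaporate, then reinforce the best-so-far assignment.
            for t in range(n_tasks):
                for v in range(n_vms):
                    pheromone[t][v] *= (1.0 - rho)
            for t, v in enumerate(best_assign):
                pheromone[t][v] += q / best_makespan
        return best_assign, best_makespan

    times = [[4.0, 2.0, 6.0], [3.0, 5.0, 2.0], [5.0, 4.0, 3.0], [2.0, 6.0, 4.0]]
    print(aco_schedule(times))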
APA, Harvard, Vancouver, ISO, and other styles
29

Barbosa, Margarida de Carvalho Jerónimo. "As-built building information modeling (BIM) workflows." Doctoral thesis, Universidade de Lisboa, Faculdade de Arquitetura, 2018. http://hdl.handle.net/10400.5/16380.

Full text
Abstract:
Tese de Doutoramento em Arquitetura, com a especialização em Conservação e Restauro apresentada na Faculdade de Arquitetura da Universidade de Lisboa para obtenção do grau de Doutor.
As metodologias associadas ao software BIM (Building Information Modeling) representam nos dias de hoje um dos sistemas integrados mais utilizado para a construção de novos edifícios. Ao usar BIM no desenvolvimento de projetos, a colaboração entre os diferentes intervenientes num projeto de arquitetura, engenharia e construção, melhora de um modo muito significativo. Esta tecnologia também pode ser aplicada para intervenções em edifícios existentes. Na presente tese pretende-se melhorar os processos de registo, documentação e gestão da informação, recorrendo a ferramentas BIM para estabelecer um conjunto de diretrizes de fluxo de trabalho, para modelar de forma eficiente as estruturas existentes a partir de nuvens de pontos, complementados com outros métodos apropriados. Há vários desafios que impedem a adoção do software BIM para o planeamento de intervenções em edifícios existentes. Volk et al. (2014) indica que os principais obstáculos de adoção BIM são o esforço de modelação/conversão dos elementos do edifício captados em objetos BIM, a dificuldade em actualizar informação em BIM e as dificuldades em lidar com as incertezas associadas a dados, objetos e relações que ocorrem em edifícios existentes. A partir desta análise, foram desenvolvidas algumas diretrizes de fluxo de trabalho BIM para modelação de edifícios existentes. As propostas indicadas para as diretrizes BIM em edifícios existentes, incluem tolerâncias e standards para modelar elementos de edifícios existentes. Tal metodologia permite que as partes interessadas tenham um entendimento e um acordo sobre o que é suposto ser modelado. Na presente tese, foi investigado um conjunto de tópicos de pesquisa que foram formuladas e colocadas, enquadrando os diferentes obstáculos e direcionando o foco de pesquisa segundo quatro vectores fundamentais: 1. Os diferentes tipos de dados de um edifício que podem ser adquiridos a partir de nuvens de pontos; 2. Os diferentes tipos de análise de edifícios; 3. A utilização de standards e BIM para edifícios existentes; 4. Fluxos de trabalho BIM para edifícios existentes e diretrizes para ateliers de arquitectura. A partir da pesquisa efetuada, pode-se concluir que é há necessidade de uma melhor utilização da informação na tomada de decisão no âmbito de um projeto de intervenção arquitetónica. Diferentes tipos de dados, não apenas geométricos, são necessários como base para a análise dos edifícios. Os dados não geométricos podem referir-se a características físicas do tecido construído, tais como materiais, aparência e condição. Além disso, o desempenho ambiental, estrutural e mecânico de um edifício, bem como valores culturais, históricos e arquitetónicos, essenciais para a compreensão do seu estado atual. Estas informações são fundamentais para uma análise mais profunda que permita a compreensão das ações de intervenção que são necessárias no edifício. Através de tecnologias Fotogrametria (ADP) e Laser Scanning (TLS), pode ser gerada informação precisa e actual. O produto final da ADP e TLS são nuvens de pontos, que podem ser usadas de forma complementar. A combinação destas técnicas com o levantamento tradicional Robotic Total Station (RTS) fornece uma base de dados exata que, juntamente com outras informações existentes, permitem o planeamento adequado da intervenção. 
Os problemas de utilização de BIM para intervenção em edifícios existentes referem-se principalmente à análise e criação de geometria do edifício, o que geralmente é uma etapa prévia para a conexão de informação não-geométrica de edifícios. Por esta razão, a presente tese centra-se principalmente na busca de diretrizes para diminuir a dificuldade em criar os elementos necessários para o BIMs. Para tratar dados incertos e pouco claros ou informações semânticas não visíveis, pode-se complementar os dados originais com informação adicional. Os fluxos de trabalho apresentados na presente tese focam-se principalmente na falta de informação visível. No caso de projetos de remodelação, a informação não visível pode ser adquirida de forma limitada através de levantamentos ADP ou TLS após a demolição de alguns elementos e/ou camadas de parede. Tal metodologia permite um melhor entendimento das camadas de materiais não visíveis dos elementos do edifício, quando a intervenção é uma demolição parcial. Este processo é útil apenas se uma parte do material do elemento é removida e não pode ser aplicada a elementos não intervencionados. O tratamento da informação em falta pode ser feito através da integração de diferentes tipos de dados com diferentes origens. Devem ser implementados os fluxos de trabalho para a integração da informação. Diferentes fluxos de trabalho podem criar informação em falta, usada como complemento ou como base para a tomada de decisão quando não há dados disponíveis. Relativamente à adição de dados em falta através da geração de nuvem de pontos, os casos de estudo destacam a importância de planear o levantamento, fazendo com que todas as partes compreendam as necessidades associadas ao projeto. Além da precisão, o nível de tolerância de interpretação e modelação, requeridos pelo projeto, também devem ser acordados e entendidos. Nem todas as ferramentas e métodos de pesquisa são adequados para todos os edifícios. A escala, os materiais e a acessibilidade do edifício desempenham um papel importante no planeamento do levantamento. Para lidar com o elevado esforço de modelação, é necessário entender os fluxos de trabalho necessários para analisar a geometria dos elementos do edifício. Os BIMs construídos são normalmente gerados manualmente através de desenhos CAD e/ou nuvens de pontos. Estes são usados como base geométrica a partir da qual a informação é extraída. A informação utilizada para planear a intervenção do edifício deve ser verificada, confirmando se é uma representação do estado actual do edifício. As técnicas de levantamento 3D para capturar a condição atual do edifício devem ser integradas no fluxo de trabalho BIM, construído para capturar os dados do edifício sobre os quais serão feitas as decisões de intervenção. O resultado destas técnicas deve ser integrado com diferentes tipos de dados para fornecer uma base mais precisa e completa. O atelier de arquitetura deve estar habilitado com competências técnicas adequadas para saber o que pedir e o que utilizar da forma mais adequada. Os requisitos de modelação devem concentrar-se principalmente no conteúdo deste processo, ou seja, o que modelar, como desenvolver os elementos no modelo, quais as informações que o modelo deve conter e como deve ocorrer a troca de informações no modelo. O levantamento das nuvens de pontos deve ser efectuado após ter sido estipulado o objetivo do projeto, standards, tolerâncias e tipo de conteúdo na modelação. As tolerâncias e normas de modelação são diferentes entre empresas e países. 
Independentemente destas diferenças, os documentos standard têm como objetivo produzir e receber informação num formato de dados consistente e em fluxos de trabalho de troca eficiente entre os diferentes intervenientes do projeto. O pensamento crítico do fluxo de trabalho de modelação e a comunicação e acordo entre todas os intervenientes são os principais objetivos das diretrizes apresentadas nesta tese. O estabelecimento e o acordo de tolerâncias de modelação e o nível de desenvolvimento e detalhes presentes nas BIMs, entre as diferentes partes envolvidas no projeto, são mais importantes do que as definições existentes atualmente e que são utilizadas pela indústria da AEC. As ferramentas automáticas ou semi-automáticas para extração da forma geométrica, eliminação ou redução de tarefas repetitivas durante o desenvolvimento de BIMs e a análise de condições de ambiente ou de cenários, são também um processo de diminuição do esforço de modelação. Uma das razões que justifica a necessidade de standards é a estrutura e a melhoria da colaboração, não só para os intervenientes fora da empresa, mas também dentro dos ateliers de arquitetura. Os dados e standards de fluxo de trabalho são difíceis de implementar diariamente de forma eficiente, resultando muitas vezes em dados e fluxos de trabalho confusos. Quando tal situação ocorre, a qualidade dos resultados do projeto reduz-se e pode ficar comprometida. As normas aplicadas aos BIMs construídos, exatamente como as normas aplicadas aos BIMs para edifícios novos, contribuem para a criação de informação credível e útil. Para atualizar um BIMs durante o ciclo de vida de um edifício,é necessário adquirir a informação sobre o estado actual do edifício. A monitorização de dados pode ser composta por fotografias, PCM, dados de sensores, ou dados resultantes da comparação de PCM e BIMs e podem representar uma maneira de atualizar BIMs existentes. Isto permite adicionar continuamente informações, documentando a evolução e a história da construção e possibilita avaliar possíveis intervenções de prevenção para a sua valorização. BIM não é geralmente usado para documentar edifícios existentes ou intervenções em edifícios existentes. No presente trabalho propõe-se melhorar tal situação usando standards e/ou diretrizes BIM e apresentar uma visão inicial e geral dos componentes que devem ser incluídos em tais standards e/ou linhas de orientação.
ABSTRACT: Building information modeling (BIM) is most often used for the construction of new buildings. By using BIM in such projects, collaboration among stakeholders in an architecture, engineering and construction project is improved. This scenario might also be targeted for interventions in existing buildings. This thesis intends to enhance processes of recording, documenting and managing information by establishing a set of workflow guidelines to efficiently model existing structures with BIM tools from point cloud data, complemented with any other appropriate methods. There are several challenges hampering BIM software adoption for planning interventions in existing buildings. Volk et al. (2014) outlines that the as-built BIM adoption main obstacles are: the required modeling/conversion effort from captured building data into semantic BIM objects; the difficulty in maintaining information in a BIM; and the difficulties in handling uncertain data, objects, and relations occurring in existing buildings. From this analysis, it was developped a case for devising BIM workflow guidelines for modeling existing buildings. The proposed content for BIM guidelines includes tolerances and standards for modeling existing building elements. This allows stakeholders to have a common understanding and agreement of what is supposed to be modeled and exchanged.In this thesis, the authors investigate a set of research questions that were formed and posed, framing obstacles and directing the research focus in four parts: 1. the different kind of building data acquired; 2. the different kind of building data analysis processes; 3. the use of standards and as-built BIM and; 4. as-built BIM workflows and guidelines for architectural offices. From this research, the authors can conclude that there is a need for better use of documentation in which architectural intervention project decisions are made. Different kind of data, not just geometric, is needed as a basis for the analysis of the current building state. Non-geometric information can refer to physical characteristics of the built fabric, such as materials, appearance and condition. Furthermore environmental, structural and mechanical building performance, as well as cultural, historical and architectural values, style and age are vital to the understanding of the current state of the building. These information is necessary for further analysis allowing the understanding of the necessary actions to intervene. Accurate and up to date information information can be generated through ADP and TLS surveys. The final product of ADP and TLS are the point clouds, which can be used to complement each other. The combination of these techniques with traditional RTS survey provide an accurate and up to date base that, along with other existing information, allow the planning of building interventions. As-built BIM adoption problems refer mainly to the analysis and generation of building geometry, which usually is a previous step to the link of non-geometric building information. For this reason the present thesis focus mainly in finding guidelines to decrease the difficulty in generating the as-built-BIMs elements. To handle uncertain data and unclear or hidden semantic information, one can complement the original data with additional missing information. The workflows in the present thesis address mainly the missing visible information. 
In the case of refurbishment projects the hidden information can be acquired to some extend with ADP or TLS surveys after demolition of some elements and wall layers. This allows a better understanding of the non visible materials layers of a building element whenever it is a partial demolition. This process is only useful if a part of the element material is removed, it can not be applied to the non intervened elements. The handling of visible missing data, objects and relations can be done by integrating different kind of data from different kind of sources. Workflows to connect them in a more integrated way should be implemented. Different workflows can create additional missing information, used to complement or as a base for decision making when no data is available. Relating to adding missing data through point cloud data generation the study cases outlined the importance of planning the survey, with all parts understanding what the project needs are. In addition to accuracy, the level of interpretation and modelling tolerances, required by the project, must also be agreed and understood. Not all survey tools and methods are suitable for all buildings: the scale, materials and accessibility of building play a major role in the survey planning. To handle the high modeling/conversion effort one has to understand the current workflows to analyse building geometry. As-built BIMs are majorly manually generated through CAD drawings and/or PCM data. These are used as a geometric basis input from where information is extracted. The information used to plan the building intervention should be checked, confirming it is a representation of the as-is state of the building. The 3D surveys techniques to capture the as-is state of the building should be integrated in the as-built BIM workflow to capture the building data in which intervention decisions are made. The output of these techniques should be integrated with different kind of data to provide the most accurate and complete basis. The architectural company should have technical skills to know what to ask for and to use it appropriately. Modeling requirements should focus primarily on the content of this process: what to model, how to develop the elements in the model, what information should the model contain, and how should information in the model be exchanged. The point clouds survey should be done after stipulating the project goal, standards, tolerances and modeling content. Tolerances and modeling guidelines change across companies and countries. Regardless of these differences the standards documents have the purpose of producing and receiving information in a consistent data format, in efficient exchange workflows between project stakeholders. The critical thinking of the modeling workflow and, the communication and agreement between all parts involved in the project, is the prime product of this thesis guidelines. The establishment and agreement of modeling tolerances and the level of development and detail present in the BIMs, between the different parts involved on the project, is more important than which of the existing definitions currently in use by the AEC industry is chosen. Automated or semi-automated tools for elements shape extraction, elimination or reduction of repetitive tasks during the BIMs development and, analysis of environment or scenario conditions are also a way of decreasing the modeling effort. 
One of the reasons why standards are needed is the structure and improvement of the collaboration not only with outside parts but also inside architectural offices. Data and workflow standards are very hard to implement daily, in a practical way, resulting in confusing data and workflows. These reduce the quality of communication and project outputs. As-built BIM standards, exactly like BIM standards, contribute to the creation of reliable and useful information. To update a BIMs during the building life-cycle, one needs to acquire the as-is building state information. Monitoring data, whether consisted by photos, PCM, sensor data, or data resulting from the comparison of PCM and BIMs can be a way of updating existing BIMs. It allows adding continuously information, documenting the building evolution and story, and evaluating possible prevention interventions for its enhancement. BIM environments are not often used to document existing buildings or interventions in existing buildings. The authors propose to improve the situation by using BIM standards and/or guidelines, and the authors give an initial overview of components that should be included in such a standard and/or guideline.
APA, Harvard, Vancouver, ISO, and other styles
30

Safieddine, Ibrahim. "Optimisation d'infrastructures de cloud computing sur des green datacenters." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM083/document.

Full text
Abstract:
Les centres de données verts de dernière génération ont été conçus pour une consommation optimisée et une meilleure qualité du niveau de service SLA. Cependant,ces dernières années, le marché des centres de données augmente rapidement,et la concentration de la puissance de calcul est de plus en plus importante, ce qui fait augmenter les besoins en puissance électrique et refroidissement. Un centre de données est constitué de ressources informatiques, de systèmes de refroidissement et de distribution électrique. De nombreux travaux de recherche se sont intéressés à la réduction de la consommation des centres de données afin d'améliorer le PUE, tout en garantissant le même niveau de service. Certains travaux visent le dimensionnement dynamique des ressources en fonction de la charge afin de réduire le nombre de serveurs démarrés, d'autres cherchent à optimiser le système de refroidissement qui représente un part important de la consommation globale.Dans cette thèse, afin de réduire le PUE, nous étudions la mise en place d'un système autonome d'optimisation globale du refroidissement, qui se base sur des sources de données externes tel que la température extérieure et les prévisions météorologiques, couplé à un module de prédiction de charge informatique globale pour absorber les pics d'activité, pour optimiser les ressources utilisés à un moindre coût, tout en préservant la qualité de service. Afin de garantir un meilleur SLA, nous proposons une architecture distribuée pour déceler les anomalies de fonctionnements complexes en temps réel, en analysant de gros volumes de données provenant des milliers de capteurs du centre de données. Détecter les comportements anormaux au plus tôt, permet de réagir plus vite face aux menaces qui peuvent impacter la qualité de service, avec des boucles de contrôle autonomes qui automatisent l'administration. Nous évaluons les performances de nos contributions sur des données provenant d'un centre de donnée en exploitation hébergeant des applications réelles
Next-generation green datacenters are designed for optimized consumption and an improved Service Level Agreement (SLA). However, in recent years the datacenter market has been growing rapidly, and the concentration of computing power is increasingly high, which increases electrical power and cooling consumption. A datacenter consists of computing resources, cooling systems, and power distribution. Many research studies have focused on reducing the consumption of datacenters to improve the PUE while guaranteeing the same level of service. Some works aim at dynamically sizing resources according to the load in order to reduce the number of running servers; others seek to optimize the cooling system, which represents an important share of total consumption. In this thesis, in order to reduce the PUE, we study the design of an autonomous system for global cooling optimization, based on external data sources such as the outside temperature and weather forecasts, coupled with an overall IT load prediction module to absorb activity peaks, so as to optimize active resources at a lower cost while preserving service quality. To ensure a better SLA, we propose a distributed architecture that detects complex operational anomalies in real time by analyzing large data volumes from the thousands of sensors deployed in the datacenter. Early identification of abnormal behavior allows faster reaction to threats that may impact the quality of service, with autonomous control loops that automate administration. We evaluate the performance of our contributions on data collected from an operating datacenter hosting real applications.
APA, Harvard, Vancouver, ISO, and other styles
31

Chihoub, Houssem Eddine. "Managing consistency for big data applications : tradeoffs and self-adaptiveness." Thesis, Cachan, Ecole normale supérieure, 2013. http://www.theses.fr/2013DENS0059/document.

Full text
Abstract:
Dans l’ère de Big Data, les applications intensives en données gèrent des volumes de données extrêmement grand. De plus, ils ont besoin de temps de traitement rapide. Une grande partie de ces applications sont déployées sur des infrastructures cloud. Ceci est afin de bénéficier de l’élasticité des clouds, les déploiements sur demande et les coûts réduits strictement relatifs à l’usage. Dans ce contexte, la réplication est un moyen essentiel dans le cloud afin de surmonter les défis de Big Data. En effet, la réplication fournit les moyens pour assurer la disponibilité des données à travers de nombreuses copies de données, des accès plus rapide aux copies locales, la tolérance aux fautes. Cependant, la réplication introduit le problème majeur de la cohérence de données. La gestion de la cohérence est primordiale pour les systèmes de Big Data. Les modèles à cohérence forte présentent de grandes limitations aux aspects liées aux performances et au passage à l’échelle à cause des besoins de synchronisation. En revanche, les modèles à cohérence faible et éventuelle promettent de meilleures performances ainsi qu’une meilleure disponibilité de données. Toutefois, ces derniers modèles peuvent tolérer, sous certaines conditions, trop d’incohérence temporelle. Dans le cadre du travail de cette thèse, on s'adresse particulièrement aux problèmes liés aux compromis de cohérence dans les systèmes à large échelle de Big Data. Premièrement, on étudie la gestion de cohérence au niveau du système de stockage. On introduit un modèle de cohérence auto-adaptative (nommé Harmony). Ce modèle augmente et diminue de manière automatique le niveau de cohérence et le nombre de copies impliquées dans les opérations. Ceci permet de fournir de meilleures performances toute en satisfaisant les besoins de cohérence de l’application. De plus, on introduit une étude détaillée sur l'impact de la gestion de la cohérence sur le coût financier dans le cloud. On emploi cette étude afin de proposer une gestion de cohérence efficace qui réduit les coûts. Dans une troisième direction, on étudie les effets de gestion de cohérence sur la consommation en énergie des systèmes de stockage distribués. Cette étude nous mène à analyser les gains potentiels des reconfigurations adaptatives des systèmes de stockage en matière de réduction de la consommation. Afin de compléter notre travail au niveau système de stockage, on s'adresse à la gestion de cohérence au niveau de l’application. Les applications de Big Data sont de nature différente et ont des besoins de cohérence différents. Par conséquent, on introduit une approche de modélisation du comportement de l’application lors de ses accès aux données. Le modèle résultant facilite la compréhension des besoins en cohérence. De plus, ce modèle est utilisé afin de délivrer une cohérence customisée spécifique à l’application
In the era of Big Data, data-intensive applications handle extremely large volumes of data while requiring fast processing times. A large number of such applications run in the cloud in order to benefit from cloud elasticity, easy on-demand deployments, and cost-efficient Pay-As-You-Go usage. In this context, replication is an essential feature in the cloud for dealing with Big Data challenges: it enables high availability through multiple replicas, fast data access to local replicas, fault tolerance, and disaster recovery. However, replication introduces the major issue of data consistency across the different copies. Consistency management is critical for Big Data systems. Strong consistency models introduce serious limitations to system scalability and performance due to the required synchronization efforts. In contrast, weak and eventual consistency models reduce the performance overhead and enable high levels of availability. However, these models may tolerate, under certain scenarios, too much temporal inconsistency. In this Ph.D. thesis, we address this issue of consistency tradeoffs in large-scale Big Data systems and applications. We first focus on consistency management at the storage system level. Accordingly, we propose an automated self-adaptive model (named Harmony) that scales the consistency level up or down at runtime when needed, in order to provide as high performance as possible while preserving the application's consistency requirements. In addition, we present a thorough study of the impact of consistency management on the monetary cost of running in the cloud. We then leverage this study to propose a cost-efficient consistency tuning approach (named Bismar) in the cloud. In a third direction, we study the impact of consistency management on energy consumption within the data center. Based on our findings, we investigate adaptive configurations of the storage system cluster that target energy saving. In order to complete our system-side study, we focus on the application level. Applications are different and so are their consistency requirements, and understanding such requirements at the storage system level is not possible. Therefore, we propose an application behavior model that apprehends the consistency requirements of an application. Based on this model, we propose an online prediction approach (named Chameleon) that adapts to the application's specific needs and provides customized consistency.
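A toy version of the self-adaptive idea behind Harmony: estimate the probability of a stale read from the observed write rate and replication lag, and raise the per-operation consistency level only when that estimate exceeds the application's tolerance. The model and thresholds are deliberately simplified and are not the thesis's formulas.

    import math

    def stale_read_probability(write_rate_per_s: float, replication_lag_s: float) -> float:
        """Rough estimate: a read is stale if at least one write happened within
        the replication window before it (Poisson arrivals assumed)."""
        return 1.0 - math.exp(-write_rate_per_s * replication_lag_s)

    def choose_consistency(write_rate, lag, tolerated_stale_rate=0.05):
        """Scale the per-operation consistency level up or down at runtime."""
        p_stale = stale_read_probability(write_rate, lag)
        return "QUORUM" if p_stale > tolerated_stale_rate else "ONE"

    # Example: a quiet period vs. a write-heavy burst on the same key space.
    print(choose_consistency(write_rate=0.5, lag=0.02))    # ONE
    print(choose_consistency(write_rate=80.0, lag=0.05))   # QUORUM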
APA, Harvard, Vancouver, ISO, and other styles
32

Pagliari, Alessio. "Network as an On-Demand Service for Multi-Cloud Workloads." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
The PrEstoCloud project aims to enable on-demand resource scaling of Big Data applications to the cloud. In this context, we have to deal with the huge amount of data processed and, more particularly, with its transportation from one cloud to another. The scope of this thesis is to develop a network-level architecture that can easily deal with Big Data application challenges and be integrated into the PrEstoCloud consortium while staying transparent to the application level. However, connecting multiple cloud providers in this context presents a series of challenges: the architecture should adapt to a variable number of clouds to connect, it has to bypass the limitations of the cloud infrastructure and, most importantly, it must have a general design able to work with every cloud provider. In this report, we present a general VPN-based Inter-Cloud architecture able to work in every kind of environment. We implemented a prototype with IPSec and OpenVPN, connecting the i3s laboratory with Amazon AWS and Azure. We evaluate our architecture and the tools used in two ways: (i) we test the stability of the architecture over time via latency tests; (ii) we perform non-intrusive Pathload tests on Amazon, showing the usability of the available-bandwidth estimator in the cloud, the AWS network characteristics discovered through the tests, and a final comparison of the overhead of the VPN tools.
APA, Harvard, Vancouver, ISO, and other styles
33

Madonia, Tommaso. "Container-based spot market in the cloud: design of a bid advisor." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13367/.

Full text
Abstract:
The cloud gives anyone access to "infinite" computational resources that can be purchased over the Internet and be ready to use within a few seconds. Using the cloud saves costs while making it possible to run complex applications, such as machine learning workloads, at large scale. To exploit the infrastructure as much as possible, and thus increase profits, some cloud providers sell unused computational resources at discounted prices; these are assigned to the users willing to pay the most and are revoked if someone else makes a higher bid. This model is known as the cloud spot market. The goal of this thesis project was to integrate Apache Spark with a Kubernetes-based spot market and to build a "bid advisor" that users can employ to decide how much to pay for "spot" resources in order to obtain the desired performance. In addition, a solution was proposed to reduce the negative effects that the preemption of computational resources can have when running Spark-based applications. The final system was tested to verify its correct operation and to evaluate the quality of the predictions made by the bid advisor.
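One simple way a bid advisor can reason, sketched below, is to bid at the empirical price quantile matching the eviction probability the user is willing to accept; the price history is invented and this is not the advisor built in the thesis.

    import statistics

    def suggest_bid(price_history, acceptable_eviction_rate=0.10):
        """Pick the bid as the (1 - eviction_rate) empirical quantile of past
        prices: had we bid this amount, we would have been outbid roughly
        that fraction of the time."""
        prices = sorted(price_history)
        idx = min(len(prices) - 1,
                  int((1.0 - acceptable_eviction_rate) * len(prices)))
        return prices[idx]

    # Hourly spot prices observed over the last day (made-up values, $/hour).
    history = [0.031, 0.029, 0.034, 0.030, 0.052, 0.033, 0.031, 0.030,
               0.045, 0.032, 0.030, 0.029, 0.036, 0.031, 0.030, 0.078,
               0.031, 0.030, 0.033, 0.032, 0.029, 0.030, 0.035, 0.031]

    for rate in (0.25, 0.10, 0.02):
        print(f"accept ~{rate:.0%} evictions -> bid ${suggest_bid(history, rate):.3f}/h")
    print("median price:", statistics.median(history))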
APA, Harvard, Vancouver, ISO, and other styles
34

Flatt, Taylor. "CrowdCloud: Combining Crowdsourcing with Cloud Computing for SLO Driven Big Data Analysis." OpenSIUC, 2017. https://opensiuc.lib.siu.edu/theses/2234.

Full text
Abstract:
The evolution of structured data from simple rows and columns on a spreadsheet to more complex unstructured data such as tweets, videos, voice, and others has resulted in a need for more adaptive analytical platforms. It is estimated that upwards of 80% of data on the Internet today is unstructured, and there is a drastic need for crowdsourcing platforms to perform better in the wake of this tsunami of data. We investigated the employment of a monitoring service which would allow the system to take corrective action in the event the results were trending away from meeting the accuracy, budget, and time SLOs. Initial implementation and system validation have shown that taking corrective action generally leads to a better success rate in reaching the SLOs. A system which can dynamically adjust internal parameters in order to perform better can lead to more harmonious interactions between humans and machine algorithms and to more efficient use of resources.
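The corrective-action idea can be illustrated with a small control loop; the thresholds, action names and the linear projections below are assumptions, not the system's actual policy.

```python
from dataclasses import dataclass

@dataclass
class SLO:
    min_accuracy: float   # e.g. 0.9
    max_cost: float       # budget in dollars
    max_seconds: float    # wall-clock deadline

def corrective_action(progress, accuracy, cost, elapsed, slo):
    """Decide how to steer a crowd/cloud job drifting away from its SLOs.

    progress: fraction of tasks completed (0..1). Returns a named action.
    """
    projected_cost = cost / max(progress, 1e-6)      # naive linear projection
    projected_time = elapsed / max(progress, 1e-6)
    if accuracy < slo.min_accuracy:
        return "add_redundant_votes"       # ask more workers per item
    if projected_cost > slo.max_cost:
        return "shift_work_to_algorithms"  # cheaper machine answers for easy items
    if projected_time > slo.max_seconds:
        return "raise_task_price"          # attract workers faster
    return "continue"

print(corrective_action(progress=0.4, accuracy=0.93, cost=12.0, elapsed=600,
                        slo=SLO(min_accuracy=0.9, max_cost=25.0, max_seconds=1800)))
```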
APA, Harvard, Vancouver, ISO, and other styles
35

Golchay, Roya. "From mobile to cloud : Using bio-inspired algorithms for collaborative application offloading." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI009.

Full text
Abstract:
Not bounded by time and place, and now having a wide range of capabilities, smartphones are all-in-one, always-connected devices - the favorite devices selected by users as the most effective, convenient and necessary communication tools. Current applications developed for smartphones have to face a growing demand for functionality from users, for data collection and storage from IoT devices in the vicinity, and for computing resources for data analysis and user profiling, while - at the same time - they have to fit into a compact and constrained design, a limited energy budget, and a relatively resource-poor execution environment. Using resource-rich systems is the classic solution introduced in Mobile Cloud Computing to overcome these mobile device limitations by remotely executing all or part of an application in cloud environments; the technique is known as application offloading. Offloading to a cloud - implemented as a geographically distant data center - however introduces a network latency that is not acceptable to smartphone users. Hence, massive offloading to a centralized architecture creates a bottleneck that prevents the scalability required by the expanding market of IoT devices. Fog Computing has been introduced to bring storage and computation capabilities back into the user's vicinity or close to where they are needed. Some architectures are emerging, but few algorithms exist to deal with the dynamic properties of these environments. In this thesis, we focus on designing ACOMMA, an Ant-inspired Collaborative Offloading Middleware for Mobile Applications that allows application partitions to be offloaded dynamically - and simultaneously - to several remote clouds or to spontaneously created local clouds including devices in the vicinity. The main contributions of this thesis are twofold. While many middlewares deal with one or more offloading challenges, few propose an open, service-based architecture that is easy to use on any mobile device without any special requirement. Among the main challenges are the questions of what and when to offload in a dynamically changing environment where the mobile device profile, the context, and the server properties play a considerable role in effectiveness. To this end, we develop bio-inspired decision-making algorithms: a dynamic bi-objective decision-making process with learning, and a decision-making process carried out in collaboration with other mobile devices in the vicinity. We define an offloading mechanism with fine-grained, method-level application partitioning on the application's call graph. We use ant colony algorithms to optimize, bi-objectively, CPU consumption and total execution time, including network latency. We show that ant colony algorithms adapt easily to context changes, can be made very efficient by adding string-matching-based caching, and readily allow the dissemination of the application profile in order to create collaborative offloading in the vicinity.
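A toy sketch of the kind of ant-colony search involved, with the bi-objective reduced to a weighted sum for brevity (a simplification of the thesis's approach): each "ant" samples a local/remote placement per method, guided by pheromone values that good placements reinforce. The call graph, costs, bandwidth and energy model below are all made up for illustration.

```python
import random

# Toy call graph: method -> (cpu_cost_local, cpu_cost_remote, data_to_transfer_mb)
METHODS = {
    "parse":  (2.0, 0.5, 1.0),
    "filter": (1.5, 0.4, 4.0),
    "detect": (8.0, 1.0, 0.5),
    "render": (1.0, 0.8, 6.0),
}
BANDWIDTH_MBPS = 2.0   # assumed uplink
LATENCY_S = 0.05

def evaluate(placement):
    """Return (energy_proxy, total_time) for a placement {method: 'local'|'remote'}."""
    energy, total_time = 0.0, 0.0
    for m, (local_c, remote_c, data_mb) in METHODS.items():
        if placement[m] == "local":
            energy += local_c              # local CPU drains the battery
            total_time += local_c
        else:
            transfer = data_mb / BANDWIDTH_MBPS + LATENCY_S
            energy += 0.2 * transfer       # radio cost only
            total_time += remote_c + transfer
    return energy, total_time

def ant_colony(iterations=50, ants=20, evaporation=0.3, w_energy=0.5):
    pher = {(m, c): 1.0 for m in METHODS for c in ("local", "remote")}
    best, best_score = None, float("inf")
    for _ in range(iterations):
        for _ in range(ants):
            placement = {m: random.choices(["local", "remote"],
                                           [pher[(m, "local")], pher[(m, "remote")]])[0]
                         for m in METHODS}
            energy, total_time = evaluate(placement)
            score = w_energy * energy + (1 - w_energy) * total_time
            if score < best_score:
                best, best_score = placement, score
            for m, c in placement.items():     # reinforce good choices
                pher[(m, c)] += 1.0 / score
        for key in pher:                       # evaporation step
            pher[key] *= (1 - evaporation)
    return best, best_score

print(ant_colony())
```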
APA, Harvard, Vancouver, ISO, and other styles
36

Romanazzi, Stefano. "Water Supply Network Management: Sensor Analysis using Google Cloud Dataflow." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
The growing field of IoT increases the amount of time series data produced every day. With such an information overload, it is necessary to promptly clean and process this information, extracting meaningful knowledge and avoiding raw data storage. Nowadays, cloud infrastructures can absorb this processing demand by providing new models for defining data-parallel processing pipelines, such as the Apache Beam unified model, which evolved from Google Cloud Dataflow and the MapReduce paradigm. The projects of this thesis were implemented during a three-month internship at Injenia srl and follow exactly this trail, processing externally acquired IoT data through a cleansing and a processing phase in order to obtain data ready to feed neural networks. The sewerage project acquires signals from IoT sensors of a sewerage infrastructure and aims at predicting signal trends over the near future. The aqueduct project acquires the same type of information from aqueduct plants and aims to reduce the false alarm rate of the telecontrol system. Given the good results of both projects, it can be concluded that the data processing phase produced high-quality information, which is the main objective of this thesis.
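A minimal Beam pipeline in this spirit (a sketch, not the Injenia pipelines): read raw sensor CSV lines, drop malformed rows, and compute per-sensor means over one-minute windows. The bucket paths, CSV layout and window size are assumptions, and running on Dataflow would additionally require the usual pipeline options.

```python
import csv

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.combiners import Mean
from apache_beam.transforms.window import FixedWindows, TimestampedValue

def parse_row(line):
    """Parse 'sensor_id,unix_ts,value'; return None for malformed rows."""
    try:
        sensor_id, ts, value = next(csv.reader([line]))
        return sensor_id, float(ts), float(value)
    except (ValueError, StopIteration):
        return None

def run(input_glob="gs://my-bucket/raw/*.csv",
        output_prefix="gs://my-bucket/clean/means"):
    with beam.Pipeline(options=PipelineOptions()) as p:
        (p
         | "Read" >> beam.io.ReadFromText(input_glob)
         | "Parse" >> beam.Map(parse_row)
         | "DropMalformed" >> beam.Filter(lambda r: r is not None)
         | "Timestamp" >> beam.Map(lambda r: TimestampedValue((r[0], r[2]), r[1]))
         | "Window" >> beam.WindowInto(FixedWindows(60))   # 1-minute windows
         | "MeanPerSensor" >> Mean.PerKey()
         | "Format" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]:.3f}")
         | "Write" >> beam.io.WriteToText(output_prefix))

if __name__ == "__main__":
    run()
```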
APA, Harvard, Vancouver, ISO, and other styles
37

He, Yijun, and 何毅俊. "Protecting security in cloud and distributed environments." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B49617631.

Full text
Abstract:
Encryption helps to ensure that information within a session is not compromised. Authentication and access control measures ensure legitimate and appropriate access to information, and prevent inappropriate access to such resources. While encryption, authentication and access control each has its own responsibility in securing a communication session, a combination of these three mechanisms can provide much better protection for information. This thesis addresses encryption, authentication and access control related problems in cloud and distributed environments, since these problems are very common in modern organizational environments. The first one is a User-friendly Location-free Encryption System for Mobile Users (UFLE). It is an encryption and authentication system which provides maximum security for sensitive data in distributed environments - corporate, home and outdoor scenarios - but requires minimum user effort (i.e. no biometric entry, or possession of cryptographic tokens) to access the data. It lets users securely and easily access data at any time and in any place, and avoids data breaches due to stolen or lost laptops and USB flash drives. The multi-factor authentication protocol provided in this scheme is also applicable to cloud storage. The second one is a Simple Privacy-Preserving Identity-Management for Cloud Environment (SPICE). It is the first digital identity management system that can satisfy "unlinkability" and "delegatable authentication" in addition to other desirable properties in the cloud environment. Unlinkability ensures that none of the cloud service providers (CSPs), even if they collude, can link the transactions of the same user. On the other hand, delegatable authentication is unique to the cloud platform, in which several CSPs may join together to provide a packaged service, with one of them being the source provider which interacts with the clients and performs authentication, while the others are receiving CSPs which will be transparent to the clients. The authentication should be delegatable such that a receiving CSP can authenticate a user without direct communication with either the user or the registrar, and without fully trusting the source CSP. The third one addresses the re-encryption-based access control issue in cloud and distributed storage. We propose the first non-transferable proxy re-encryption scheme [16], which successfully achieves the non-transferable property. Proxy re-encryption allows a third party (the proxy) to re-encrypt a ciphertext which has been encrypted for one party, without seeing the underlying plaintext, so that it can be decrypted by another. A proxy re-encryption scheme is said to be non-transferable if the proxy and a set of colluding delegatees cannot re-delegate decryption rights to other parties. The scheme can be utilized by a content owner to delegate content decryption rights to users in untrusted cloud storage. The advantages of using such a scheme are that decryption keys are managed by the content owner, and the plaintext is always hidden from the cloud provider.
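To make the proxy re-encryption idea concrete, here is a toy, ElGamal-style sketch in the spirit of the classic BBS98 construction - not the non-transferable scheme proposed in the thesis - using deliberately tiny, insecure parameters. It needs Python 3.8+ for pow(x, -1, m).

```python
# Toy parameters: p = 2q + 1 with q prime; g generates the order-q subgroup mod p.
P, Q, G = 23, 11, 4

def keygen(secret):
    return secret, pow(G, secret, P)               # (sk, pk = g^sk)

def encrypt(m, pk):
    r = 7                                          # fixed "random" exponent for the demo
    return (m * pow(G, r, P)) % P, pow(pk, r, P)   # (m*g^r, g^{a*r})

def rekey(sk_a, sk_b):
    return (sk_b * pow(sk_a, -1, Q)) % Q           # rk = b * a^{-1} mod q

def reencrypt(ct, rk):
    c1, c2 = ct
    return c1, pow(c2, rk, P)                      # g^{a*r} -> g^{b*r}, plaintext never seen

def decrypt(ct, sk):
    c1, c2 = ct
    g_r = pow(c2, pow(sk, -1, Q), P)               # recover g^r
    return (c1 * pow(g_r, -1, P)) % P

a_sk, a_pk = keygen(6)
b_sk, b_pk = keygen(9)
ct_for_a = encrypt(9, a_pk)
ct_for_b = reencrypt(ct_for_a, rekey(a_sk, b_sk))  # done by the proxy
assert decrypt(ct_for_b, b_sk) == 9
print("re-encryption ok")
```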
published_or_final_version
Computer Science
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
38

Ribot, Stephane. "Adoption of Big Data And Cloud Computing Technologies for Large Scale Mobile Traffic Analysis." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE3049.

Full text
Abstract:
A new economic paradigm is emerging as a result of enterprises generating and managing increasing amounts of data and looking to technologies like cloud computing and Big Data to improve data-driven decision making and, ultimately, performance. Mobile service providers are an example of firms that are looking to monetize the collected mobile data. Our thesis explores the determinants of cloud computing and Big Data adoption at the user level. Based on a hypothetico-deductive approach grounded in classical adoption theories, the hypotheses and the conceptual model are inspired by Goodhue's task-technology fit (TTF) model; the proposed factors include Big Data and cloud computing, the task, the technology, the individual, TTF, use, and realized impacts, and the thesis puts forward seven hypotheses that specifically address the weaknesses of previous models. We employ a quantitative research methodology, operationalized through a cross-sectional survey of 169 researchers contributing to mobile data analysis so that temporal consistency could be maintained for all variables. The TTF model was supported by results analyzed using partial least squares (PLS) structural equation modeling (SEM), which reflects positive relationships between individual, technology and task factors on TTF for mobile data analysis. Our research makes two contributions: the development of a new TTF construct - a task-Big Data/cloud computing technology fit model - and the testing of that construct in a model that overcomes the rigidity of the original TTF model by addressing technology through five subconstructs related to the technology platform (Big Data) and the technology infrastructure (cloud computing intention to use). These findings provide direction to mobile service providers for the implementation of cloud-based Big Data tools in order to enable data-driven decision-making and monetize the output from mobile data traffic analysis.
APA, Harvard, Vancouver, ISO, and other styles
39

Domingos, João Nuno Silva Tabar. "On the cloud deployment of a session abstraction for service/data aggregation." Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/9923.

Full text
Abstract:
Dissertation submitted to obtain the Master's degree in Informatics Engineering (Engenharia Informática)
The global cyber-infrastructure comprises a growing number of resources, spanning several abstraction layers. These resources, which can include wireless sensor devices or mobile networks, share common requirements such as richer inter-connection capabilities and increasing data consumption demands. Additionally, the service model is now widely spread, supporting the development and execution of distributed applications. In this context, new challenges are emerging around the "big data" topic. These challenges include service access optimizations, such as data-access context sharing, more efficient data filtering/aggregation mechanisms, and adaptable service access models that can respond to context changes. The service access characteristics can be aggregated to capture specific interaction models. Moreover, ubiquitous service access is a growing requirement, particularly regarding mobile clients such as tablets and smartphones. The Session concept aggregates the service access characteristics, creating specific interaction models which can then be re-used in similar contexts. Existing Session abstraction implementations also allow dynamic reconfiguration of these interaction models, so that a model can adapt to context changes based on service, client or underlying communication medium variables. Cloud computing, on the other hand, provides ubiquitous access, along with large-scale data persistence and processing services. This thesis proposes a Session abstraction implementation, deployed on a Cloud platform, in the form of a middleware. This middleware captures rich, dynamic interaction models between users with similar interests, and provides a generic mechanism for interacting with datasources based on multiple protocols. Such an abstraction contextualizes service/user interactions and can be reused by other users in similar contexts. This Session implementation also permits data persistence by saving all data in transit in a Cloud-based repository. The aforementioned middleware delivers richer datasource-access interaction models and dynamic reconfigurations, and allows the integration of heterogeneous datasources. The solution also provides ubiquitous access, allowing client connections from standard Web browsers or Android-based mobile devices.
APA, Harvard, Vancouver, ISO, and other styles
40

Boretti, Gabriele. "Sistemi cloud per l'analisi di big data: BigQuery, la soluzione proposta da Google." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18613/.

Full text
Abstract:
The purpose of this thesis is to analyse BigQuery, the Google Cloud service used for storing, analysing and monitoring large databases. To do so, the components surrounding it are first described, offering a complete overview of Cloud Computing, from the minimum requirements for its operation, such as the Internet, up to its most complex uses, such as artificial intelligence. A frame of reference on the development and use of Big Data is provided, laying the foundations for a better understanding of what is examined in depth later. The whole thesis is broadly oriented towards the biomedical field: the topics covered are all related to healthcare, starting from the foundations of the thesis concerning the Internet, linked to the Internet of Things (IoT) and to telemedicine, whose close connection with Cloud Computing is shown. The use of artificial intelligence is also addressed in the clinical field, looking at its various uses and advantages, up to the topic of medical datasets. The thesis then analyses the various possibilities offered by data analysis and, as far as healthcare is concerned, inevitably addresses the topics of privacy and security. At this point the Internet-Database-Cloud cycle is closed with BigQuery. After a general description of the other services offered by Google Cloud, necessary to understand how BigQuery integrates with them, the thesis delves into the analysis of how it works. A description of its structure and of the available analysis methods is provided, together with some use cases, citing various articles and sources. The last part of the thesis presents the results obtained from experimenting with the tool on several datasets, both containing clinical data and provided by Google.
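A minimal example of querying BigQuery from Python with the official client library; the project, dataset, table and column names below are placeholders, not data used in the thesis.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

def mean_heart_rate_per_patient(project_id, table="my_dataset.vitals"):
    """Run a simple aggregation; dataset/table/column names are assumed."""
    client = bigquery.Client(project=project_id)
    query = f"""
        SELECT patient_id, AVG(heart_rate) AS avg_hr
        FROM `{table}`
        WHERE heart_rate IS NOT NULL
        GROUP BY patient_id
        ORDER BY avg_hr DESC
        LIMIT 10
    """
    for row in client.query(query).result():   # blocks until the job finishes
        print(row.patient_id, round(row.avg_hr, 1))

if __name__ == "__main__":
    mean_heart_rate_per_patient("my-gcp-project")
```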
APA, Harvard, Vancouver, ISO, and other styles
41

Di, Sheng, and 狄盛. "Optimal divisible resource allocation for self-organizing cloud." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B4703130X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Ma, Ka-kui, and 馬家駒. "Lightweight task mobility support for elastic cloud computing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B47869513.

Full text
Abstract:
Cloud computing has become popular nowadays. It allows applications to use the enormous resources in the clouds. Combined with mobile computing, it has evolved into mobile cloud computing. With the use of clouds, mobile applications can offload tasks to clouds in a client-server model. For cloud computing, migration is an important function for supporting elasticity. Lightweight and portable task migration support allows better resource utilization and data access locality, which are essential for the success of cloud computing. Various migration techniques are available, such as process migration, thread migration, and virtual machine live migration. However, for these existing migration techniques, migrations are too coarse-grained and costly, and this offsets the benefits of migration. Besides, the migration path is monotonic, so mobile and cloud resources cannot both be utilized. In this study, we propose a new computation migration technique called stack-on-demand (SOD). This technique is based on the stack structure of tasks: computation migration is carried out by exporting parts of the execution state, to achieve lightweight and flexible migration. Compared to traditional task migration techniques, SOD allows lightweight computation migration and enables dynamic execution flows in a multi-domain workflow style. Thanks to this lightweight design, tasks of a large process can be migrated from clouds to small-capacity devices, such as an iPhone, in order to use the unique resources, such as photos, found on those devices. Various techniques have been introduced to support this lightweight feature. To allow efficient access to remote objects during task migration, we propose an object faulting technique for efficient detection of remote objects, which avoids checking object status. To allow portable, lightweight application-level migration, an asynchronous migration technique and a twin method hierarchy instrumentation technique are proposed. We implement the SOD concept as a middleware in a mobile cloud environment to allow transparent execution migration of Java programs. Results show that the SOD migration cost is quite low compared with several existing migration mechanisms. We also conduct experiments with mobile devices to demonstrate the elasticity of SOD, in which server-side heavyweight processes can run adaptively on mobile devices to use the unique resources of those devices; on the other hand, mobile devices can seamlessly offload tasks to cloud nodes to use cloud resources. In addition, the system incorporates a restorable communication layer, which allows parallel programs to communicate properly during SOD migration.
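The core idea - shipping only the top of a task's execution stack - can be caricatured in a few lines. This is purely illustrative: the thesis works at the level of Java execution state, not Python dictionaries.

```python
import json

# A task's execution state as a stack of frames (toy representation).
stack = [
    {"method": "main",          "locals": {"job_id": 42}},
    {"method": "analyse_photo", "locals": {"photo": "IMG_001.jpg"}},
    {"method": "detect_faces",  "locals": {"step": 3}},   # top of stack
]

def export_top_frames(stack, k=1):
    """Ship only the k top frames; the rest of the stack stays on the origin node."""
    return json.dumps(stack[-k:]), stack[:-k]

def resume_remotely(exported):
    frames = json.loads(exported)
    top = frames[-1]
    # A real runtime would re-enter the method at the recorded point;
    # here we just pretend to finish it and return a result.
    return {"method": top["method"], "result": "2 faces found"}

exported, retained = export_top_frames(stack, k=1)
print("kept locally:", [f["method"] for f in retained])
print("remote result:", resume_remotely(exported))
```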
published_or_final_version
Computer Science
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
43

Palanisamy, Balaji. "Cost-effective and privacy-conscious cloud service provisioning: architectures and algorithms." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/52157.

Full text
Abstract:
Cloud Computing represents a recent paradigm shift that enables users to share and remotely access high-powered computing resources (both infrastructure and software/services) contained in off-site data centers, thereby allowing a more efficient use of hardware and software infrastructures. This growing trend in cloud computing, combined with the demands for Big Data and Big Data analytics, is driving the rapid evolution of datacenter technologies towards more cost-effective, consumer-driven, more privacy-conscious and technology-agnostic solutions. This dissertation takes a systematic approach to developing system-level techniques and algorithms to tackle the challenges of large-scale data processing in the Cloud and of scaling and delivering privacy-aware services with anytime-anywhere availability. We analyze the key challenges in effective provisioning of Cloud services in the context of MapReduce-based parallel data processing, considering the concerns of cost-effectiveness, performance guarantees and user privacy, and we develop a suite of solution techniques, architectures and models to support cost-optimized and privacy-preserving service provisioning in the Cloud. At the cloud resource provisioning tier, we develop a utility-driven MapReduce Cloud resource planning and management system called Cura for cost-optimally allocating resources to jobs. While existing services require users to select a number of complex cluster and job parameters and use those potentially sub-optimal per-job configurations, the Cura resource management achieves global resource optimization in the cloud by minimizing cost and maximizing resource utilization. We also address the challenges of resource management and job scheduling for large-scale parallel data processing in the Cloud in the presence of the networking and storage bottlenecks commonly experienced in Cloud data centers. We develop Purlieus, a self-configurable, locality-based data and virtual machine management framework that enables MapReduce jobs to access their data either locally or from close-by nodes, including all input, output and intermediate data, achieving significant improvements in job response time. We then extend our cloud resource management framework to support privacy-preserving data access and efficient privacy-conscious query processing. Concretely, we propose and implement VNCache, an efficient solution for MapReduce analysis of cloud-archived log data for privacy-conscious enterprises. Through a seamless data streaming and prefetching model in VNCache, Hadoop jobs begin execution as soon as they are launched, without requiring any a priori downloading. At the cloud consumer tier, we develop mix-zone-based techniques for delivering anonymous cloud services to mobile users on the move through Mobimix, a novel road-network mix-zone-based framework that enables real-time, location-based service delivery without disclosing the content or location privacy of the consumers.
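The data-locality idea behind Purlieus can be sketched as a simple placement heuristic; the data structures and the fallback rule below are assumptions for illustration, not the framework's actual scheduler.

```python
from collections import Counter

def pick_vm_host(job_blocks, block_locations, host_capacity):
    """Place a job's VM on the host that already stores most of its input blocks.

    job_blocks:      block ids the job will read
    block_locations: block id -> set of hosts storing a replica
    host_capacity:   host -> free VM slots
    """
    local_blocks = Counter()
    for b in job_blocks:
        for host in block_locations.get(b, ()):
            local_blocks[host] += 1
    # Prefer the host with the most local data among those with free capacity.
    candidates = [h for h in local_blocks if host_capacity.get(h, 0) > 0]
    if not candidates:
        return max(host_capacity, key=host_capacity.get)   # fall back to least-loaded
    return max(candidates, key=lambda h: local_blocks[h])

locations = {"b1": {"n1", "n2"}, "b2": {"n2"}, "b3": {"n3"}}
print(pick_vm_host(["b1", "b2", "b3"], locations, {"n1": 2, "n2": 1, "n3": 0}))  # -> n2
```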
APA, Harvard, Vancouver, ISO, and other styles
44

Olsson, Fredrik. "Feature Based Learning for Point Cloud Labeling and Grasp Point Detection." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-150785.

Full text
Abstract:
Robotic bin picking is the problem of emptying a bin of randomly distributed objects through a robotic interface. This thesis examines an SVM approach to extract grasping points for a vacuum-type gripper. The SVM is trained on synthetic data and used to classify the points of a non-synthetic 3D-scanned point cloud as either graspable or non-graspable. The classified points are then clustered into graspable regions from which the grasping points are extracted. The SVM models and the algorithm as a whole are trained and evaluated against cubic and cylindrical objects. Separate SVM models are trained for each type of object, in addition to one model trained on a dataset containing both types of objects. It is shown that the performance of the SVM in terms of accuracy is dependent on the objects and their geometrical properties. Further, it is shown that the algorithm is reasonably robust in terms of successfully picking objects, regardless of the scale of the objects.
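A compact sketch of this classify-then-cluster pipeline using scikit-learn, with synthetic data standing in for real per-point features and DBSCAN standing in for the thesis's region clustering step (both are assumptions for illustration):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import DBSCAN

# Synthetic training data: one feature vector per point (e.g. local flatness,
# curvature, normal angle to the gripper axis); labels 1 = graspable, 0 = not.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)  # toy labelling rule

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

# "Scanned" cloud: xyz coordinates plus the same 3 features per point.
xyz = rng.uniform(0, 0.2, size=(300, 3))
feats = rng.normal(size=(300, 3))
graspable = xyz[clf.predict(feats) == 1]

# Cluster graspable points into contiguous regions; grasp point = region centroid.
labels = DBSCAN(eps=0.02, min_samples=5).fit_predict(graspable)
for region in set(labels) - {-1}:                     # -1 marks noise points
    centroid = graspable[labels == region].mean(axis=0)
    print(f"candidate grasp point for region {region}: {np.round(centroid, 3)}")
```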
APA, Harvard, Vancouver, ISO, and other styles
45

Ikken, Sonia. "Efficient placement design and storage cost saving for big data workflow in cloud datacenters." Thesis, Evry, Institut national des télécommunications, 2017. http://www.theses.fr/2017TELE0020/document.

Full text
Abstract:
Typical cloud Big Data systems are workflow-based, including MapReduce, which has emerged as the paradigm of choice for developing large-scale, data-intensive applications. The data generated by such systems are huge, valuable, and stored at multiple geographical locations for reuse. Indeed, workflow systems, composed of jobs using collaborative task-based models, present new needs in terms of dependency and intermediate data exchange. This gives rise to new issues when selecting distributed data and storage resources, so that the execution of tasks or jobs finishes on time and resource usage remains cost-efficient. Furthermore, the performance of task processing is governed by the efficiency of intermediate data management. In this thesis we tackle the problem of intermediate data management in cloud multi-datacenters by considering the requirements of the workflow applications that generate the data. To this end, we design and develop models and algorithms for the Big Data placement problem in the underlying geo-distributed cloud infrastructure, so that the data management cost of these applications is minimized. The first problem addressed is the study of the intermediate data access behavior of tasks running in a MapReduce-Hadoop cluster. Our approach develops and explores a Markov model that uses the spatial locality of intermediate data blocks and analyzes spill file sequentiality through a prediction algorithm. Secondly, this thesis deals with minimizing the storage cost of intermediate data placement in federated cloud storage. Through a federation mechanism, we propose an exact ILP algorithm to assist multiple cloud datacenters hosting the generated intermediate data dependencies of pairs of files. The proposed algorithm takes into account scientific user requirements, data dependency and data size. Finally, a more generic problem is addressed that involves two variants of the placement problem: splittable and unsplittable intermediate data dependencies. The main goal is to minimize the operational data cost according to inter- and intra-job dependencies.
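A toy ILP in this spirit, written with the PuLP modelling library (an assumption - the thesis formulates its own, richer model with pairwise file dependencies and user requirements): choose one datacenter per intermediate file to minimize storage plus transfer cost.

```python
import pulp  # pip install pulp

# Toy instance: place intermediate files in one of several datacenters.
files = {"f1": 120, "f2": 80, "f3": 200}                # size in GB
dcs = {"dc1": 0.020, "dc2": 0.023, "dc3": 0.018}        # $/GB/month storage price
# Transfer cost ($/GB) paid when a job in dc_j reads a file stored in dc_i.
transfer = {(i, j): (0.0 if i == j else 0.01) for i in dcs for j in dcs}
# Which datacenter's jobs consume each file (single consumer, to stay tiny).
consumer = {"f1": "dc2", "f2": "dc1", "f3": "dc3"}

prob = pulp.LpProblem("intermediate_data_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("place", (list(files), list(dcs)), cat="Binary")

# Each file is stored in exactly one datacenter.
for f in files:
    prob += pulp.lpSum(x[f][d] for d in dcs) == 1

# Objective: storage cost plus transfer cost to the consuming datacenter.
prob += pulp.lpSum(
    x[f][d] * files[f] * (dcs[d] + transfer[(d, consumer[f])])
    for f in files for d in dcs
)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for f in files:
    chosen = next(d for d in dcs if pulp.value(x[f][d]) > 0.5)
    print(f, "->", chosen)
```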
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Zhen. "CloudVista: a Framework for Interactive Visual Cluster Exploration of Big Data in the Cloud." Wright State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=wright1348204863.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Sellén, David. "Big Data analytics for the forest industry : A proof-of-concept built on cloud technologies." Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-28541.

Full text
Abstract:
Large amounts of data in various forms are generated at a fast pace in today's society. This is commonly referred to as "Big Data". Making use of Big Data has become increasingly important both for business and in research. The forest industry generates large amounts of data during the different processes of forest harvesting. In Sweden, forest information is sent to SDC, the information hub for the Swedish forest industry. In 2014, SDC received reports on 75.5 million m3fub from harvester and forwarder machines. These machines use a global standard called StanForD 2010 for communication and to create reports about harvested stems. The arrival of scalable cloud technologies that combine Big Data with machine learning makes it interesting to develop an application to analyze the large amounts of data produced by the forest industry. In this study, a proof-of-concept has been implemented to analyze harvest production reports that follow the StanForD 2010 standard. The system consists of a back-end and a front-end application and is built using cloud technologies such as Apache Spark and Hadoop. System tests have proven that the concept successfully handles storage, processing and machine learning on gigabytes of HPR files. It is capable of extracting information from raw HPR data into datasets and supports a machine learning pipeline with pre-processing and K-Means clustering. The proof-of-concept has provided a code base for further development of a system that could be used to find valuable knowledge for the forest industry.
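A sketch of such a Spark-based clustering step (the Parquet path, column names and choice of k are assumptions; the proof-of-concept itself first extracts these features from StanForD 2010 HPR files):

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("hpr-clustering").getOrCreate()

# Assumed schema for stem records already extracted from HPR files:
# species (string), dbh_mm, height_dm, volume_m3sub (doubles).
stems = spark.read.parquet("hdfs:///forestry/stems.parquet")

assembler = VectorAssembler(
    inputCols=["dbh_mm", "height_dm", "volume_m3sub"], outputCol="features")
dataset = assembler.transform(
    stems.dropna(subset=["dbh_mm", "height_dm", "volume_m3sub"]))

model = KMeans(k=5, seed=1, featuresCol="features").fit(dataset)
clustered = model.transform(dataset)          # adds a 'prediction' column
clustered.groupBy("prediction").count().show()
spark.stop()
```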
APA, Harvard, Vancouver, ISO, and other styles
48

Lee, Kai-wah, and 李啟華. "Mesh denoising and feature extraction from point cloud data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42664330.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Delehag, Lundmark Joel. "Photogrammetry for health monitoring of bridges : Using point clouds for deflection measurements and as-built BIM modelling." Thesis, Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-75953.

Full text
Abstract:
Road and railway bridges play a crucial role in keeping the Swedish infrastructure network working smoothly and the traffic flowing. Damage to a bridge can have catastrophic consequences if it is not corrected properly and in due time. Trafikverket in Sweden is responsible for the inspection and maintenance of approximately 20,600 bridges throughout the country. This huge number of bridges requires large resources in the form of machinery and experienced bridge inspectors who assess the state of the bridges on the spot. At present, the state of a bridge is to a large extent determined by visual inspection and by manually taking measurements to assess its condition. This approach means that the assessment of the condition of the bridge is largely subjective and varies between cases depending on the inspector's experience. New approaches that could both make it easier for inspectors to make objective decisions and reduce the risk associated with the inspection work are therefore being researched. In this thesis, Close Range Photogrammetry is evaluated as a means for assessing deflection on concrete bridges and for creating as-built BIMs for documentation and visualization of the actual condition of a bridge. To evaluate the technique, both laboratory experiments and field work are conducted. Laboratory tests are conducted on concrete slabs that are subjected to pressure to inflict deflection on them. The concrete slabs are photographed using close range photogrammetric techniques at different values of deflection. The photographs are later processed into a point cloud in which measurements of deflection are taken and compared to what is measured using displacement transducers during the tests. The field work consists of photographing a railway bridge using close range photogrammetry and building a point cloud out of the photographs. This point cloud is then used as a basis for evaluating how a point cloud generated through close range photogrammetry can be used to create as-built Building Information Models. Results from the laboratory experiments show that changes in deflection can be visualized by overlapping point clouds generated at different loading stages using the software Cloud Compare. The distance, i.e. the deflection, can then be measured in the software. The point cloud generated through the field work resulted in an as-built BIM of the railway bridge containing the basic elements. No hard conclusions can be drawn as to how well the method in this thesis can be used to measure deflection on real concrete bridges: the test basis is too small and the human factor may have affected the results. The results nevertheless show that millimeter distances can be measured in the point clouds, which indicates that, with the right approach, Close Range Photogrammetry can be used to measure deflections with good precision. Point clouds generated through Close Range Photogrammetry work well as a basis for creating as-built BIMs. The colored point cloud is an advantage over other techniques that generate data in gray scale, because it makes it easier to distinguish elements from each other and to detect any deficiencies. To create complete as-built BIMs, more than just a point cloud is needed, as it only visualizes the shell of the captured object.
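The cloud-to-cloud comparison performed in Cloud Compare can be approximated in a few lines with a nearest-neighbour distance between the reference and loaded point clouds; the synthetic slab and the sag profile below are made up to illustrate the computation, not data from the thesis.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_deflection(reference, loaded):
    """Nearest-neighbour distance from each loaded-state point to the reference cloud.

    reference, loaded: (N, 3) arrays of xyz coordinates in metres.
    Returns per-point distances; their upper range indicates the deflection.
    """
    tree = cKDTree(reference)
    distances, _ = tree.query(loaded, k=1)
    return distances

# Synthetic slab: the loaded cloud sags up to 3 mm at mid-span.
x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 0.5, 50))
ref = np.column_stack([x.ravel(), y.ravel(), np.zeros(x.size)])
sag = -0.003 * np.sin(np.pi * x.ravel())          # deflection profile along x
loaded = ref + np.column_stack([np.zeros(x.size), np.zeros(x.size), sag])

d = cloud_to_cloud_deflection(ref, loaded)
print(f"95th percentile distance: {np.percentile(d, 95) * 1000:.2f} mm")
```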
APA, Harvard, Vancouver, ISO, and other styles
50

Al-Odat, Zeyad Abdel-Hameed. "Analyses, Mitigation and Applications of Secure Hash Algorithms." Diss., North Dakota State University, 2020. https://hdl.handle.net/10365/32058.

Full text
Abstract:
Cryptographic hash functions are among the most widely used cryptographic primitives, with the purpose of ensuring the integrity of systems or data. Hash functions are also utilized in conjunction with digital signatures to provide authentication and non-repudiation services. Secure Hash Algorithms have been developed over time by the National Institute of Standards and Technology (NIST) for security, optimal performance, and robustness. The best-known hash standards are SHA-1, SHA-2, and SHA-3. A secure hash algorithm is considered weak if its security requirements have been broken. The main security attacks that threaten the secure hash standards are collision and length extension attacks. The collision attack works by finding two different messages that lead to the same hash. The length extension attack extends the message payload to produce a valid hash digest. Both attacks have already broken some hash standards that follow the Merkle-Damgård construction. This dissertation proposes methodologies to improve and strengthen weak hash standards against collision and length extension attacks. We propose collision-detection approaches that help to detect a collision attack before it takes place. Besides, a proper replacement, supported by a proper construction, is proposed. The collision detection methodology helps to protect weak primitives from any possible collision attack using two approaches: the first employs a near-collision detection mechanism that was proposed by Marc Stevens, and the second is our proposal. Moreover, this dissertation proposes a model that protects secure hash functions from collision and length extension attacks. The model employs the sponge structure to construct a hash function, and the resulting function is strong against collision and length extension attacks. Furthermore, to keep the general structure of the Merkle-Damgård functions, we propose a model that replaces the SHA-1 and SHA-2 hash standards using the Merkle-Damgård construction. This model employs the compression function of SHA-1, the function manipulators of SHA-2, and the 10*1 padding method. In the case of big data over the cloud, this dissertation presents several schemes to ensure data security and authenticity. The schemes include secure storage, anonymous privacy-preserving, and auditing of big data over the cloud.
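The length extension weakness of plain Merkle-Damgård constructions can be demonstrated with a toy hash (the compression function and padding below are deliberately simplistic and insecure, and this is not the dissertation's scheme): knowing only H(m) and len(m), an attacker computes a valid digest for m plus a chosen suffix.

```python
# Toy Merkle-Damgard construction over 8-byte blocks, for illustration only.
BLOCK = 8
IV = 0x0123456789ABCDEF
MASK = (1 << 64) - 1

def compress(state, block):
    return ((state ^ int.from_bytes(block, "big")) * 0x100000001B3 + 0x9E3779B9) & MASK

def pad(msg_len):
    # 0x80, zeros, then the message length in the last byte (toy padding scheme).
    zeros = (BLOCK - (msg_len + 2) % BLOCK) % BLOCK
    return b"\x80" + b"\x00" * zeros + bytes([msg_len % 256])

def md_hash(msg, state=IV, length_for_padding=None):
    data = msg + pad(len(msg) if length_for_padding is None else length_for_padding)
    for i in range(0, len(data), BLOCK):
        state = compress(state, data[i:i + BLOCK])
    return state

secret = b"user=alice&role=user"          # attacker does not know the content...
digest = md_hash(secret)                  # ...but knows the digest and the length
suffix = b"&role=admin"

glue = pad(len(secret))
target = secret + glue + suffix           # message the attacker wants a digest for
forged = md_hash(suffix, state=digest, length_for_padding=len(target))

assert forged == md_hash(target)          # extension succeeds without knowing `secret`
print("length extension forgery matches the real digest")
```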
APA, Harvard, Vancouver, ISO, and other styles