
Dissertations / Theses on the topic 'Analytics infrastructure'

Consult the top 43 dissertations / theses for your research on the topic 'Analytics infrastructure.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Talevi, Iacopo. "Big Data Analytics and Application Deployment on Cloud Infrastructure." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14408/.

Full text
Abstract:
This dissertation describes a project begun in October 2016. It grew out of a collaboration between Mr. Alessandro Bandini and me, and was developed under the supervision of Professor Gianluigi Zavattaro. The main objective was to study, and in particular to experiment with, cloud computing in general and its potential for data processing. Cloud computing is a utility-oriented and Internet-centric way of delivering IT services on demand. The first chapter is a theoretical introduction to cloud computing, analyzing its main aspects, keywords, and underlying technologies, as well as the reasons for the success of this technology and its problems. After the introduction, I briefly describe the three main cloud platforms on the market. During this project we developed a simple social network. Consequently, in the third chapter I analyze its development, from the initial solution built on Amazon Web Services to the steps we took to reach the final version on Google Cloud Platform, with its characteristics. To conclude, the last section is devoted to data processing and contains an initial theoretical part describing MapReduce and Hadoop, followed by a description of our analysis. We used Google App Engine to run these computations on a large dataset. I explain the basic idea, the code, and the problems encountered.
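The MapReduce idea the abstract refers to can be shown in a few lines. The sketch below is an illustrative word count, not code from the project itself: a map phase emits (key, value) pairs, and a reduce phase groups the pairs by key and aggregates them.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    # Map: emit a (word, 1) pair for every word in every input record.
    for record in records:
        for word in record.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    # Shuffle + reduce: group the pairs by key and sum the counts.
    result = {}
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        result[key] = sum(count for _, count in group)
    return result

docs = ["cloud computing on demand", "cloud data on demand"]
print(reduce_phase(map_phase(docs)))   # e.g. {'cloud': 2, 'computing': 1, ...}
```

A real framework distributes the map and reduce phases across machines; the sorting step here stands in for the shuffle.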
APA, Harvard, Vancouver, ISO, and other styles
2

Stouten, Floris. "Big data analytics attack detection for Critical Information Infrastructure Protection." Thesis, Luleå tekniska universitet, Datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-59562.

Full text
Abstract:
According to ESG's 2015 report Cyber Supply Chain Security Revisited, attacks on critical information infrastructure are increasing in volume and sophistication, with destructive consequences (ESG, 2015). In a world of connectivity and data dependency, cyber-crime is on the rise, causing many disruptions to our way of living. Our society relies on critical information infrastructures for its social and economic well-being, and these infrastructures are becoming more complex due to the many systems integrated into them. Over the past years, various research contributions have provided intrusion detection solutions to address these complex attack problems. Even so, shortcomings remain: false positives and false negatives in attack detection are known weaknesses that must be addressed. This study contributes by designing an IT artifact framework, based on the Design Science Research Methodology (DSRM), that addresses these shortcomings. The framework consists of big data analytics technology that provides attack detection. The research outcomes show that the designed IT artifact framework, using big data analytics technology, offers a possible solution to the shortcomings. Built on open-source technology, the framework can provide attack detection and may improve false-positive and false-negative rates. Three main modules were designed and demonstrated, using a hybrid detection approach to address the shortcomings. This research can therefore benefit Critical Information Infrastructure Protection (CIIP) in Sweden in detecting attacks, and it can potentially be applied to various network infrastructures.
3

Dahan, Mathieu. "Strategic and analytics-driven inspection operations for critical infrastructure resilience." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123226.

Full text
Abstract:
Thesis: Ph.D. in Civil Engineering and Computation, Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2019. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 213-221).

Resilience of infrastructure networks is a key requirement for a functioning modern society. These networks work continuously to enable the delivery of critical services such as water, natural gas, and transportation. However, recent natural disasters and cyber-physical security attacks have demonstrated that the lack of effective failure detection and identification capabilities is one of the main contributors to the economic losses and safety risks faced by service utilities. This thesis focuses on both strategic and operational aspects of inspection processes for large-scale infrastructure networks, with the goal of improving their resilience to reliability and security failures. We address three combinatorial problems: (i) strategic inspection for detecting adversarial failures; (ii) strategic interdiction of malicious network flows; (iii) analytics-driven inspection for localizing post-disaster failures. We exploit the structural properties of these problems to develop new and practically relevant solutions for the inspection of large-scale networks, along with approximation guarantees. First, we address the question of determining a randomized inspection strategy with the minimum number of detectors that ensures a target detection performance against multiple adversarial failures in the network. This question can be formulated as a mathematical program whose constraints involve the Nash equilibria of a large strategic game. We solve this inspection problem with a novel approach that relies on the submodularity of the detection model and on solutions of minimum set cover and maximum set packing problems. Second, we consider a generic network security game between a routing entity that sends its flow through the network and an interdictor who simultaneously interdicts multiple edges. By proving the existence of a probability distribution on a partially ordered set that satisfies a set of constraints, we show that the equilibrium properties of the game can be described using primal and dual solutions of a minimum-cost circulation problem. Our analysis provides a new characterization of the critical network components in strategic flow interdiction problems. Finally, we develop an analytics-driven approach for localizing failures under uncertainty. We utilize the information provided by failure prediction models to calibrate the generic formulation of a team orienteering problem with stochastic rewards and service times. We derive a compact mixed-integer programming formulation of the problem that computes an optimal a-priori routing of the inspection teams. Using the data collected by a major gas utility after an earthquake, we demonstrate the value of predictive analytics for improving their response operations.

"The work in this thesis was supported in part by the Singapore National Research Foundation through the Singapore-MIT Alliance for Research and Technology (SMART), the DoD Science of Security Research Lablet (SOS), the MIT Schoettler Fellowship, and FORCES (Foundations Of Resilient CybEr-Physical Systems), which receives support from the National Science Foundation (NSF award numbers CNS-1238959, CNS-1238962, CNS-1239054, CNS-1239166), NSF CAREER award CNS-1453126, and the AFRL LABLET - Science of Secure and Resilient Cyber-Physical Systems (Contract ID: FA8750-14-2-0180, SUB 2784-018400)." -- Pages 5 and 6
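The minimum set cover subproblem mentioned in the abstract admits a classic greedy approximation. The sketch below is a generic illustration, not the thesis's actual algorithm, and the detector locations and component sets are hypothetical: repeatedly place a detector at the location that monitors the most still-uncovered components.

```python
def greedy_set_cover(universe, candidates):
    # Classic greedy approximation for minimum set cover (ln n factor):
    # repeatedly pick the candidate set covering the most uncovered
    # elements. Here: place detectors until every component is monitored.
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda loc: len(candidates[loc] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("some components cannot be covered")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

# Hypothetical network: 5 components, 3 candidate detector locations.
components = {1, 2, 3, 4, 5}
detectors = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}}
print(greedy_set_cover(components, detectors))   # → ['A', 'C']
```

The thesis couples structures like this with game-theoretic constraints; the greedy step only shows the covering idea.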
4

de, Battista Nicholas. "Wireless technology and data analytics for structural health monitoring of civil infrastructure." Thesis, University of Sheffield, 2013. http://etheses.whiterose.ac.uk/6161/.

Full text
Abstract:
The aim of this research was to investigate how wireless technology, combined with data analytics, can be used for effective structural health monitoring (SHM) of civil infrastructure. Two main applications were investigated, for which wireless sensor networks (WSNs) were integrated into complete SHM solutions: (1) long-term quasi-static monitoring on a suspension bridge, and (2) temporary monitoring of pedestrian bridge vibration. In the first application, a commercial off-the-shelf WSN was used to acquire and transmit data from extensometers measuring the longitudinal deck displacement of the Tamar Bridge in the UK. Six months of displacement data were analysed in conjunction with the ambient and structural temperature data acquired from a separate monitoring system on the bridge. Empirical models were fitted to relate the deck displacement to various combinations of temperatures. Comparisons of each model’s prediction accuracy showed that the practice of estimating a suspension bridge deck’s thermal expansion based solely on the air temperature is overly simplistic. The deck displacement was predicted more accurately by considering instead the temperatures of the deck itself and of the underlying structure. In preparation for the second application, a number of indoor tests and short-term deployments on full-scale structures were carried out using an existing prototype WSN, to assess its suitability for vibration monitoring. Subsequently, an embedded data processing method was developed by adapting various signal processing techniques and combining them in sequence. The method was then programmed on the WSN, which was integrated into autonomous SHM systems deployed to monitor two in-service, multi-span pedestrian bridges in Singapore for two weeks. 
The wireless sensor nodes periodically acquired ambient vibration response data and processed them in a decentralised manner to extract and transmit useful results pertaining to the bridges’ response and modal properties. These results showed that the dynamic properties of the bridges were not affected significantly by the diurnal usage pattern or by the vibration amplitude. The maximum vibration levels recorded on both bridges were found to be within the limits recommended in design guides. Wireless technology has the potential to make SHM viable for a much broader range of civil structures than it is at the moment. While some WSNs are readily applicable for quasi-static monitoring, considerable development and system integration effort are required to use existing wireless technology in a reliable SHM system for dynamic monitoring.
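One basic building block of the decentralised processing described above is estimating a structure's dominant vibration frequency on the node itself. The following is a minimal peak-picking sketch assuming a clean, synthetic acceleration record; it is not the thesis's embedded method, and the sampling rate and signal are invented for the example.

```python
import numpy as np

def dominant_frequency(signal, fs):
    # Frequency (Hz) of the largest spectral peak: the simplest
    # peak-picking step for estimating a fundamental vibration mode.
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float(freqs[np.argmax(spectrum)])

fs = 100.0                             # assumed sampling rate in Hz
t = np.arange(0.0, 10.0, 1.0 / fs)     # a 10-second record
accel = np.sin(2.0 * np.pi * 2.5 * t)  # synthetic 2.5 Hz response
print(dominant_frequency(accel, fs))   # → 2.5
```

Embedded implementations add windowing, averaging, and multi-peak extraction, but the FFT-and-argmax core is the same.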
5

Khaghani, Farnaz. "Resilience-based Operational Analytics of Transportation Infrastructure: A Data-driven Approach for Smart Cities." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/99206.

Full text
Abstract:
Studying recurrent mobility perturbations, such as traffic congestion, is a major concern of engineers, planners, and authorities, as these perturbations not only bring about delay and inconvenience but also have negative consequences such as greenhouse gas emissions, increased fuel consumption, and safety issues. In this dissertation, we proposed using the resilience concept, which has commonly been used for assessing the impact of extreme events and disturbances on the transportation system, for high-probability, low-impact (HPLI) events to (a) provide a performance assessment framework for transportation systems' response to traffic congestion, (b) investigate the role of transit modes in the resilience of urban roadways to congestion, and (c) study the impact of network topology on the resilience of roadway performance. We proposed a multi-dimensional approach to characterize the resilience of urban transportation roadways under recurrent congestion. The resilience concept provides an effective benchmark for comparative performance and for identifying the behavior of the system during the discharging process of congestion. To this end, we used a Data Envelopment Analysis (DEA) approach to integrate multiple resilience-oriented attributes and estimate the efficiency (resilience) of roadways relative to the frontier. Our results from an empirical study of California highways using PeMS data show the potential of the multi-dimensional approach to increase information gain and differentiate between the severity of congestion across a transportation network. Leveraging this resilience-based characterization of recurrent disruptions, in the second study we investigated the role of the multi-modal resourcefulness of urban transportation systems, in terms of diversity and equity, in the resilience of roadways to daily congestion. We looked at physical infrastructure availability and distribution (i.e., diversity) and at accessibility and coverage, which capture socio-economic factors (i.e., equity), to understand more comprehensively the role of resourcefulness in resilience. We conducted this investigation using a GPS dataset of taxi trips in the Washington DC metropolitan area in 2017. Our results demonstrated a strong correlation of trips' resilience with transportation equity and, to a lesser extent, with transportation diversity. Furthermore, we found that the impact of equity and diversity is mostly seen at the recovery stage of resilience. In the third study, we looked at another aspect of transportation supply in urban areas: spatial configuration and topology. The goal of this study was to investigate the role of network topology and configuration in resilience to congestion. We used OSMnx, a toolkit for street network analysis based on data from OpenStreetMap, to model and analyze urban roadway network configurations. We further employed a multidimensional visualization strategy using radar charts to compare the topology of street networks on a single graphic. Leveraging the geometric descriptors of radar charts, we used compactness and the Jaccard index to quantitatively compare the topology profiles. We used the same taxi-trip dataset as in the second study to characterize resilience and identify its correlation with network topology. The results indicated a strong correlation between resilience and betweenness centrality, diameter, and PageRank, among other features of a transportation network. We further examined roadway capacity as a common cause of the strong correlation between network features and resilience.
We found that the strong correlation of link-related features such as diameter could be due to their role in capacity, which shares a common cause with resilience.

Doctor of Philosophy

Transportation infrastructure systems are among the most fundamental facilities and systems in urban areas because of the role they play in mobility, the economy, and environmental sustainability. Given this importance, ensuring their resilience to regular disruptions such as traffic congestion is a priority for engineers and policymakers. The resilience of transportation systems has often been studied in the context of disasters or extreme events. However, minor disturbances such as everyday operational traffic situations can also play an important part in reducing the efficiency of transportation systems and should be considered in the overall resilience of the systems. The current literature does not consider traffic performance through the lens of resilience, despite its importance in evaluating the overall performance of roads. This research addresses this gap by leveraging the concept of resilience to evaluate roadway performance and by identifying the role of urban characteristics in enhancing resilience. We first characterized resilience by considering the performance of roadways over time, from the occurrence of a disruption to the point when system performance returns to a stable state. Through a case study of some of the major highways in the Los Angeles metropolitan area, leveraging data from the Performance Measurement System (PeMS), we investigated how the proposed multi-dimensional approach to quantifying resilience adds value to road network performance assessment and the corresponding decision-making. In the second and third parts of this dissertation, we looked at urban infrastructure elements and how they affect resilience to regular disruptive congestion events.
Specifically, in the second study, we focused on the presence of alternative transit modes such as bus, metro, and bike in urban areas. We utilized the concepts of diversity and equity to assess the opportunities these modes provide for people as alternative means of mobility. The proposed metrics capture not only the physical attributes of multi-modal transportation systems (i.e., the availability and distribution of transit modes in urban areas) but also socio-economic factors (i.e., the number of people who could potentially use each transit mode). In the third study, we investigated how the form and topology of urban road networks (i.e., the structure of roadway networks) affect their resilience to recurrent congestion. We presented our findings as a case study in the Washington DC area. Results indicated a strong correlation between resilience and resourcefulness, as well as topology features. The findings allow decision-makers to make more informed design and operational decisions and to better incorporate urban characteristics into the priority-setting process.
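A common single-number formulation of the resilience idea used throughout this abstract is the ratio of actual to baseline performance over the disruption window. The sketch below uses made-up speed values and is a deliberate simplification, not the dissertation's multi-dimensional DEA approach.

```python
def resilience(performance, baseline):
    # Resilience as the ratio of the area under the measured performance
    # curve to the area under the baseline over the disruption window:
    # 1.0 means no performance loss; values near 0 mean near-total loss.
    actual = sum(performance)
    target = baseline * len(performance)
    return actual / target

# Hypothetical speeds (mph) sampled during a congestion episode,
# against a free-flow baseline of 60 mph.
speeds = [60, 40, 30, 45, 60]
print(resilience(speeds, 60.0))   # ≈ 0.783, i.e. about 22% of performance lost
```

The multi-dimensional approach in the dissertation integrates several such attributes (e.g., depth and duration of the performance drop) rather than a single ratio.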
6

Moscoso-Zea, Oswaldo. "A Hybrid Infrastructure of Enterprise Architecture and Business Intelligence & Analytics to Empower Knowledge Management in Education." Doctoral thesis, Universidad de Alicante, 2019. http://hdl.handle.net/10045/97408.

Full text
Abstract:
The large volumes of data (Big Data) generated on a global scale and within organizations, along with the knowledge that resides in people and in business processes, make organizational knowledge management (KM) very complex. Proper KM can be a source of opportunities and competitive advantage for organizations that use their data intelligently and subsequently generate knowledge from them. Two of the fields that support KM, and that have grown rapidly in recent years, are business intelligence (BI) and enterprise architecture (EA). On the one hand, BI makes it possible to exploit the information stored in data warehouses using operations such as slice, dice, roll-up, and drill-down; this information is obtained from the operational databases through an extraction, transformation, and loading (ETL) process. On the other hand, EA allows institutions to establish methods that support the creation, sharing, and transfer of the knowledge that resides in people and processes through the use of blueprints and models. One of the objectives of KM is to create a culture in which tacit knowledge (knowledge that resides in a person) stays in an organization when qualified and expert personnel leave the institution, or when changes are required in the organizational structure, in computer applications, or in the technological infrastructure. In higher education institutions (HEIs), the lack of an adequate KM approach for handling data is an even greater problem due to the nature of this industry. Generally, HEIs have very little interdependence between departments and faculties; in other words, there is low standardization, redundancy of information, and constant duplication of applications and functionalities across departments, which makes these organizations inefficient.
That is why the research performed in this dissertation has focused on finding an adequate KM method and on identifying the right technological infrastructure to support the management of information across all the knowledge dimensions: people, processes, and technology. The overall objective is to discover innovative mechanisms to improve education and the services that HEIs offer to their students and teachers by improving their processes. Despite the existence of some initiatives and papers on KM frameworks, we were not able to find a standard framework that supports or guides KM initiatives. In addition, the KM frameworks found in the literature do not present practical mechanisms for gathering and analyzing all the knowledge dimensions to facilitate the implementation of KM projects. The core contribution of this thesis is a hybrid infrastructure for KM based on EA and BI, developed through empirical research and taking as reference the framework developed for KM. The proposed infrastructure will help HEIs improve education in a general way by analyzing reliable, cleaned data and integrating analytics from the perspective of EA. EA analytics takes into account the interdependence between the objects that make up the organization: people, processes, applications, and technology. The presented infrastructure opens the door to research projects that expand the types of knowledge generated, by integrating the information from the applications found in data warehouses with the information about people and organizational processes found in EA repositories. To validate the proposal, a case study was carried out in a university, with promising initial results. As future work, it is planned to automate different HEI activities through a software development methodology based on EA models. It is also intended to develop a KM system that enables new types of analytics that would be impossible to obtain from transactional or multidimensional databases alone.
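The OLAP operations named in the abstract (slice, dice, roll-up, drill-down) can be imitated on a toy fact table in plain Python. The table, dimensions, and numbers below are invented for illustration and are not tied to any real data warehouse.

```python
from collections import defaultdict

# A toy fact table, invented for illustration:
# (year, department, course, students_enrolled)
facts = [
    (2018, "CS",   "Databases", 40),
    (2018, "CS",   "Networks",  35),
    (2018, "Math", "Algebra",   50),
    (2019, "CS",   "Databases", 45),
    (2019, "Math", "Algebra",   55),
]

def roll_up(rows, dims):
    # Roll-up: aggregate the measure upward along the kept dimensions
    # (e.g. from course level up to department level).
    totals = defaultdict(int)
    for row in rows:
        totals[tuple(row[d] for d in dims)] += row[-1]
    return dict(totals)

def slice_cube(rows, dim, value):
    # Slice: fix one dimension to a single value.
    return [row for row in rows if row[dim] == value]

print(roll_up(facts, dims=(0, 1)))                      # totals by year and department
print(roll_up(slice_cube(facts, 0, 2019), dims=(1,)))   # the 2019 slice, by department
```

In a warehouse these operations run over a multidimensional cube; the dictionary aggregation only mirrors their semantics.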
7

Park, Yujin. "Essays in Geospatial Modeling of Urban Green Infrastructure." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1588547971708147.

Full text
8

Singh, Viraj. "How can California Best Promote Electric Vehicle Adoption? The Effect of Public Charging Station Availability on EV Adoption." Scholarship @ Claremont, 2019. https://scholarship.claremont.edu/pomona_theses/204.

Full text
Abstract:
To promote higher air quality and reduce greenhouse gas emissions, the Californian government is investing heavily in public charging infrastructure to meet its electric vehicle adoption goal of five million zero-emission vehicles on the road by 2030. This thesis investigates the effect of public charging infrastructure availability on electric vehicle adoption at the zip code level in California. The analysis also considers other factors that may influence electric vehicle adoption, such as education level, income, commute time, gas prices, and public transportation rates. Linear regressions were run using data from the U.S. Department of Energy Alternative Fuels Data Center, IHS Markit vehicle registration data, and the U.S. Census Bureau. The findings suggest that public charging infrastructure availability correlates significantly and positively with electric vehicle registrations, and they support continued investment in public charging infrastructure as a means of promoting electric vehicle adoption.
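A regression of the kind the abstract describes can be sketched with ordinary least squares. The numbers below are hypothetical, invented purely to illustrate the setup: registrations regressed on charging-station counts plus an intercept, with no controls.

```python
import numpy as np

# Hypothetical zip-code-level data, invented for illustration only:
# public charging stations and EV registrations per zip code.
stations = np.array([1, 2, 4, 5, 8, 10], dtype=float)
evs = np.array([12, 18, 30, 33, 52, 61], dtype=float)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones_like(stations), stations])
beta, *_ = np.linalg.lstsq(X, evs, rcond=None)
intercept, slope = beta
print(f"EVs ~ {intercept:.1f} + {slope:.1f} * stations")
```

The thesis's actual specification includes the socio-economic controls listed above; this sketch shows only the bivariate core.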
9

Peiro, Sajjad Hooman. "Towards Unifying Stream Processing over Central and Near-the-Edge Data Centers." Licentiate thesis, KTH, Programvaruteknik och Datorsystem, SCS, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-193582.

Full text
Abstract:
In this thesis, our goal is to enable and achieve effective and efficient real-time stream processing in a geo-distributed infrastructure by combining the power of central data centers and micro data centers. Our research focus is to address the challenges of distributing stream processing applications and placing them closer to data sources and sinks. We enable applications to run in a geo-distributed setting and provide solutions for the network-aware placement of distributed stream processing applications across geo-distributed infrastructures. First, we evaluate Apache Storm, a widely used open-source distributed stream processing system, in a community network cloud as an example of a geo-distributed infrastructure. Our evaluation exposes new requirements for stream processing systems to function in a geo-distributed infrastructure. Second, we propose a solution to facilitate the optimal placement of stream processing components on geo-distributed infrastructures. We present a novel method for partitioning a geo-distributed infrastructure into a set of computing clusters, each called a micro data center. According to our results, we can increase the minimum available bandwidth in the network and, likewise, reduce the average latency to less than 50% of its original value. Next, we propose a parallel and distributed graph partitioner, called HoVerCut, for fast partitioning of streaming graphs. Since much data can be represented in the form of graphs, graph partitioning can be used to assign graph elements to different data centers to provide data locality for efficient processing. Last, we provide an approach, called SpanEdge, that enables stream processing systems to work on a geo-distributed infrastructure. SpanEdge unifies stream processing over central and near-the-edge data centers (micro data centers). As a proof of concept, we implement SpanEdge by extending Apache Storm, enabling it to run across multiple data centers.
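Streaming graph partitioners such as HoVerCut assign each arriving edge to a partition in a single pass. The sketch below is a generic greedy heuristic in that spirit, not HoVerCut's actual algorithm: prefer partitions that already host an edge's endpoints, and break ties by load.

```python
from collections import defaultdict

def partition_stream(edges, k):
    # One-pass greedy edge partitioning in the spirit of vertex-cut
    # streaming partitioners: prefer partitions that already host an
    # edge's endpoints; fall back to the least-loaded partition.
    placement = defaultdict(set)   # vertex -> partitions that hold a copy of it
    load = [0] * k
    assignment = []
    for u, v in edges:
        both = placement[u] & placement[v]
        either = placement[u] | placement[v]
        candidates = both or either or set(range(k))
        p = min(candidates, key=lambda i: load[i])
        assignment.append(p)
        placement[u].add(p)
        placement[v].add(p)
        load[p] += 1
    return assignment

# A triangle plus a disjoint edge: the triangle stays together,
# and the disjoint edge goes to the emptier partition.
print(partition_stream([(1, 2), (2, 3), (1, 3), (4, 5)], k=2))
```

Real partitioners like HoVerCut also bound vertex replication and run this logic in parallel over windows of the stream.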
10

Xu, Haowen. "Data-driven framework for forecasting sedimentation at culverts." Diss., University of Iowa, 2019. https://ir.uiowa.edu/etd/6892.

Full text
Abstract:
The increasing intensity and frequency of precipitation in recent decades, combined with human interventions in watersheds, have drastically altered the natural regimes of water and sediment transport in watersheds across the contiguous United States. Sediment-transport-related concerns include the sustainability of aquatic biology, the stability of river morphology, and the security and vulnerability of various riverine structures. In the present context, the concerns relate to the acceleration of upland erosion (sediment production) and in-stream sediment-transport processes that eventually lead to sediment accumulation at culverts (structures that pass streams under roadways). This has become a widespread nuisance for many transportation agencies in the United States, as it has a direct bearing on maintaining normal culvert operations during extreme flows, when these waterway crossings are essential for the communities they serve. Despite the prevalence of culvert sedimentation, current specifications for culvert design do not typically consider aspects of sediment transport and deposition. The overall study objective is to systematically identify the likelihood of culvert sedimentation as a function of stream and culvert geometry, along with landscape characteristics (process drivers of culvert sedimentation) in the culvert drainage area. The ideal approach to predicting sedimentation is to track sediment sources dislocated from the watershed, their overland movement, and their delivery into the streams using physics-based modeling. However, there are considerable knowledge gaps in addressing sedimentation at culverts as an end-to-end process, especially in connecting upland with in-stream processes and simulating sediment deposition at culverts in non-uniform, unsteady flows, while also taking into account vegetation growth in the vicinity of culverts.
It is, therefore, no surprise that existing research, textbooks, and guidelines do not typically provide adequate information on sediment control at culverts. This dissertation presents a generalizable data-driven framework that integrates various machine-learning and visual analytics techniques with GIS in a web-based geospatial platform to explore the complex environmental processes of culvert sedimentation. The framework offers systematic procedures for (1) classifying the culvert sedimentation degree using a time series of aerial images; (2) identifying key process drivers from a variety of environmental and culvert structural characteristics through feature selection and interactive visual interfaces; (3) supporting human interactions to perceive empirical relationships between drivers and the culvert sedimentation degree through multivariate geovisualization and Self-Organizing Maps (SOM); and (4) forecasting culvert sedimentation potential across Iowa using machine learning algorithms. Developed using modular design and atop national datasets, the framework is generalizable and extendable, and can therefore be applied to address similar river management issues, such as habitat deterioration and water pollution, at the contiguous-US scale. The platform developed through this Ph.D. study offers a web-based problem-solving environment for a) managing inventory and retrieving culvert structural information; b) integrating diverse culvert-related datasets (e.g., culvert inventory, hydrological and land use data, and observations on the degree of sedimentation in the vicinity of culverts) in a digital repository; c) supporting culvert field inspections and real-time data collection through mobile devices; and d) hosting the data-driven framework for exploring culvert sedimentation drivers and forecasting culvert sedimentation potential across Iowa.
Insights provided through the data-driven framework can be applied to support decisions for culvert management and sedimentation mitigation, as well as to provide suggestions on parameter selections for the design of these structures.
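The Self-Organizing Map mentioned in the framework can be written compactly: each training sample pulls its best-matching unit, and a shrinking neighborhood of that unit's grid neighbours, toward itself. This is a minimal generic SOM with invented defaults, not the dissertation's implementation.

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=500, lr0=0.5, seed=0):
    # Minimal Self-Organizing Map: each sample pulls its best-matching
    # unit (BMU) and, through a shrinking Gaussian neighborhood, the
    # BMU's grid neighbours toward itself.
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))
    coords = np.array([(r, c) for r in range(h) for c in range(w)], dtype=float)
    sigma0 = max(h, w) / 2.0
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = int(np.argmin(((weights - x) ** 2).sum(axis=1)))
        frac = t / iters
        lr = lr0 * (1.0 - frac)                # learning rate decays to 0
        sigma = sigma0 * (1.0 - frac) + 1e-9   # neighborhood shrinks
        dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        influence = np.exp(-dist2 / (2.0 * sigma ** 2))
        weights += lr * influence[:, None] * (x - weights)
    return weights

def bmu_index(weights, x):
    # Index of the unit whose weight vector is closest to sample x.
    return int(np.argmin(((weights - x) ** 2).sum(axis=1)))
```

After training, mapping each culvert's feature vector to its BMU places similar sites on nearby grid cells, which is what makes the SOM useful for the visual exploration the abstract describes.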
11

Ask, Jacob. "Selected Trends and Space Technologies Expected to Shape the Next Decade of SSC Services." Thesis, KTH, Rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-261779.

Full text
Abstract:
Since the early 2000s the space industry has undergone significant changes, such as the advent of reusable launch vehicles and an increase in commercial opportunities. This new space age is characterized by a dynamic entrepreneurial climate, lowered barriers to accessing space, and the emergence of new markets. New business models are being developed by many actors, and the merging of space and other sectors continues, facilitating innovative and disruptive opportunities. Already established companies are adapting in various ways as efforts to stay relevant gain attention. The pace of development, previously determined exclusively by governmental programs, is now largely set by private and commercial ventures. Relating to all the trends, new technologies, and driving forces in the space industry is no trivial matter. By analyzing and examining identified trends and technologies, the author has attempted to discern those that will have a significant impact on the industrial environment during the next decade. Market assessments have been summarized and interviews have been carried out. Discussions and conclusions relating to the services provided by the Swedish Space Corporation are presented. This report is intended to update the reader on the current status of the space industry, introduce concepts, and provide relevant commentary on many important trends.

Jacob Ask is pursuing a Master of Science degree in Aerospace Engineering at KTH Royal Institute of Technology in Stockholm, Sweden. Christer Fuglesang is a professor in Space Travel, director of the KTH Space Center, and responsible for the Aerospace Engineering master's programme; he serves as the examiner for this master's thesis project. Linda Lyckman is the Head of Business & Technology Innovation at SSC and the supervisor for this master's thesis project.
APA, Harvard, Vancouver, ISO, and other styles
12

Borke, Lukas. "Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA." Doctoral thesis, Humboldt-Universität zu Berlin, 2017. http://dx.doi.org/10.18452/18307.

Full text
Abstract:
With the growing popularity of GitHub, the largest host of source code and collaboration platform in the world, it has evolved into a Big Data resource offering a variety of Open Source repositories (OSR). At present, there are more than one million organizations on GitHub, among them Google, Facebook, Twitter, Yahoo, CRAN, RStudio, D3, Plotly and many more. GitHub provides an extensive REST API, which enables scientists to retrieve valuable information about the software and research development life cycles. Our research pursues two main objectives: (I) provide an automatic OSR categorization system for data science teams and software developers promoting discoverability, technology transfer and coexistence; (II) establish visual data exploration and topic driven navigation of GitHub organizations for collaborative reproducible research and web deployment. To transform Big Data into value, in other words into Smart Data, storing and processing of the data semantics and metadata is essential. Further, the choice of an adequate text mining (TM) model is important. The dynamic calibration of metadata configurations, TM models (VSM, GVSM, LSA), clustering methods and clustering quality indices will be shortened as "smart clusterization". Data-Driven Documents (D3) and Three.js (3D) are JavaScript libraries for producing dynamic, interactive data visualizations, featuring hardware acceleration for rendering complex 2D or 3D computer animations of large data sets. Both techniques enable visual data mining (VDM) in web browsers, and will be abbreviated as D3-3D. Latent Semantic Analysis (LSA) measures semantic information through co-occurrence analysis in the text corpus. Its properties and applicability for Big Data analytics will be demonstrated. 
"Smart clusterization" combined with the dynamic VDM capabilities of D3-3D will be summarized under the term "Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA".
APA, Harvard, Vancouver, ISO, and other styles
13

Gasiea, Yousef Ali. "An analytical decision approach to rural telecommunication infrastructure selection." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/an-analytical-decision-approach-to-rural-telecommunication-infrastructure-selection(9238e16c-71e6-4b5c-b9c6-0b824bd0e3ed).html.

Full text
Abstract:
Telecommunications infrastructure is recognised as the fundamental factor for economic and social development, for it is the platform of communication and transaction within and beyond geographical boundaries. It is a necessity for social benefits, growth, connection and competition, even more so in rural communities in developing countries. Its acquisition entails great investment, and the emergence of various technologies makes the selection a critical task. The research described in this thesis is concerned with a comprehensive examination of, and analytical procedures for, the selection of technologies for rural telecommunications infrastructure. A structured systematic approach is deemed necessary to reduce the time and effort in the decision-making process. A literature review was carried out to explore the knowledge in the areas of Multi-Criteria Decision-Making (MCDM) approaches, with particular focus on analytical decision processes. The findings indicate that the Analytic Hierarchy Process (AHP) and Analytic Network Process (ANP) are powerful decision methods capable of modelling such a complex problem. Initially, an AHP model is formulated; however, since the problem at hand involves many interactions and dependencies, a more holistic method is required to overcome its shortcomings by allowing for dependencies and feedback within the structure. Hence, the ANP is adopted and its network is established to represent the problem, making way for telecommunications experts to provide their judgements on the elements within the structure. The data collected are used to estimate the relative influence from which the overall synthesis is derived, forming a general ANP model for such a rural telecommunications selection problem. 
To provide a more wide-ranging investigation into selecting a potential rural telecommunications infrastructure, another systematic analysis that utilises a BOCR-based (Benefits, Opportunities, Costs and Risks) ANP was conducted. The obtained results indicate that Microwave technology is the most preferred alternative within the context of the developing countries. Sensitivity analysis was performed to show the robustness of the obtained results. This framework provides the structure and the flexibility required for such decisions. It enables decision makers to examine the strengths and weaknesses of the problem by comparing several technology options with respect to an appropriate gauge for judgement. Moreover, using the ANP, the criteria for such a technology selection task were clearly identified and the problem was structured systematically. A case study was carried out in Libya involving its main telecommunications infrastructure provider to demonstrate how such rural technology selection decisions can be made within a specific developing country's rural area. Based on the results of this case study, which were in agreement with the focus group's expectations, it can be concluded that the application of the ANP in the selection of telecommunications technology is indeed beneficial. In addition, it is believed that telecommunications planners could, by the use of data pertaining to another rural area, utilise the developed model to propose appropriate solutions. If new criteria and/or alternatives emerge to satisfy changing business needs, they can also be included in the ANP model.
APA, Harvard, Vancouver, ISO, and other styles
14

Tran, Khanh-Toan. "Efficient complex service deployment in cloud infrastructure." Thesis, Evry-Val d'Essonne, 2013. http://www.theses.fr/2012EVRY0038/document.

Full text
Abstract:
The purpose of the work in this thesis is to provide the Service Provider a solution which is capable of deploying complex services in a cloud automatically and cost-effectively. The first contribution allows the Service Provider to construct complex services requested by the clients from basic services at his disposal. 
The construction must be efficient in terms of execution time and operation cost while respecting the client’s constraints. We present an analytic model for this problem and propose a heuristic solution whose performance is 20-30% better than other approaches. The second contribution solves the problem of deploying the services while considering end-users’ demands. To ensure the quality of services provided to end-users, not only one instance but a set of service replicas is deployed in the network. How the service is duplicated and distributed depends on the demands of its end-users, which change constantly in quantity as well as distribution, complicating the problem. Thus the provisioning mechanism has to be capable of adapting to changes in the network, including changes in end-users’ demands. Our third contribution is a system using OpenStack which allows the Service Provider to deploy complex services in a cloud that spans different locations (multi-site cloud). Given a client’s request, the system automatically calculates the optimal provisioning plan and deploys it while respecting the client’s constraints.
APA, Harvard, Vancouver, ISO, and other styles
15

Szimba, Eckhard. "Interdependence between transport infrastructure projects : an analytical framework applied to priority transport infrastructure projects of the European Union /." Baden-Baden : Nomos, 2008. http://www.gbv.de/dms/zbw/551048557.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Szimba, Eckhard. "Interdependence between transport infrastructure projects an analytical framework applied to priority transport infrastructure projects of the European Union." Baden-Baden Nomos, 2006. http://d-nb.info/986532800/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Mahal, Mohammed. "Fatigue Behaviour of RC beams Strengthened with CFRP : Analytical and Experimental investigations." Doctoral thesis, Luleå tekniska universitet, Byggkonstruktion och -produktion, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-26738.

Full text
Abstract:
Repeated cyclic loading of reinforced concrete (RC) structures such as bridges can cause reduced service life and structure failure due to fatigue even when the stress ranges applied to the structural components are very low. These problems can be mitigated by using fiber-reinforced polymer (FRP) composites to increase the structures’ load carrying capacity and fatigue life or service life. Strengthening of this sort may be a suitable way to prolong the service life of concrete structures. FRP strengthening involves externally bonding a plate, sheet or rod of the strengthening material to the surface of the concrete member or placing the strengthening element in grooves cut into the member’s surface. The bonding of plates or sheets to the surface is often referred to as EBR (externally bonded reinforcement) whereas the placement of strengthening bars in grooves carved into the member’s surface is referred to as NSM (Near Surface Mounted) reinforcement. When this research project was initiated, it was not clear whether EBR or NSM strengthening was more effective at alleviating the effects of fatigue loading, and there were many aspects of their use that warranted further investigation. The main objectives of the work presented in this thesis were to study the behaviour of materials and structures under fatigue loading, to assess the structural challenges presented by fatigue loading of members strengthened with EBR plates or NSM bars, and to identify analytical models suitable for the design and analysis of FRP-strengthening elements and strengthened concrete members. The scientific approach adopted in this work is based on experimental fatigue loading tests of RC beams strengthened with EBR plates and NSM bars together with the development and assessment of analytical methods for describing the fatigue behaviour of tested strengthened beams and numerical models for predicting the behaviour of bond joints under fatigue loading. 
The analytical models were then verified against experimental results. The theoretical and experimental studies were supported by a state-of-the-art literature review that was conducted to gather existing knowledge concerning FRP strengthening of RC members and their fatigue behaviour at the material and structural levels.
APA, Harvard, Vancouver, ISO, and other styles
18

Zuo, Ting. "Synthetic Modeling Analytics of Bike-Transit Integration Over Auto-Dependent Infrastructural System." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1613751016160793.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

MUKHERJEE, ANINDO. "AN INTEGRATED ARCHITECTURE FOR MULTI-HOP INFRASTRUCTURE WIRELESS NETWORKS." University of Cincinnati / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1155834305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Galal, Hana Sherin. "Integrating sustainability in municipal wastewater infrastructure decision-analysis using the analytic hierarchy process." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44590.

Full text
Abstract:
New regulations from the Canadian Council of Ministers of the Environment, released in 2009, require all wastewater treatment plants in Canada to produce effluent of secondary treatment levels. To comply with the new law, many Canadian municipalities using primary treatment plants must retrofit or renew their old systems. There is increasing pressure from stakeholder groups and policy makers to select new infrastructure using triple-bottom-line (economic, environmental and social) analyses. The present study aims to illuminate how differing preferences among experts from different stakeholder groups influence what is considered to be the ‘most sustainable’ wastewater treatment system. Through the use of policy documents, academic literature, and the Analytic Hierarchy Process (AHP, a decision support tool), an objectives hierarchy was constructed. The objectives hierarchy was made up of four criteria and 13 indicators. Five wastewater experts were asked to use pair-wise comparisons to score the indicators and criteria of the constructed objectives hierarchy and provide their opinions on the same. In addition, four low-footprint wastewater treatment alternatives were selected for review. One of the participants was asked to rank the four alternatives with regard to their performance on the selected indicators. This ranking, in combination with the rankings of the indicators and criteria previously made by the five experts, was used to indicate the preferred alternatives for each of the separate participants. Then, the overall prioritization of the alternatives was used to carry out a sensitivity analysis. In terms of results, this study of sustainability indicators for wastewater treatment selection showed that the most contentious indicators among those studied were Initial Costs and Long Term Costs, Effluent Quality and Aesthetics. 
Additionally, the study showed that the Sequencing Batch Reactor was identified as the ‘most sustainable’ alternative by the average scores of all five participants and separately by four of the five participants.
APA, Harvard, Vancouver, ISO, and other styles
21

Oseku-Afful, Thomas. "The use of Big Data Analytics to protect Critical Information Infrastructures from Cyber-attacks." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-59779.

Full text
Abstract:
Unfortunately, cyber-attacks, which are a consequence of our increasing dependence on digital technology, are a phenomenon that we have to live with today. As technology becomes more advanced and complex, so have the types of malware used in these cyber-attacks. Currently, targeted cyber-attacks directed at CIIs such as financial institutions and telecom companies are on the rise. A particular group of malware known as APTs, which are used for targeted attacks, are very difficult to detect and prevent due to their sophisticated and stealthy nature. These malware are able to attack and wreak havoc in the targeted system within a matter of seconds; this is very worrying because traditional cyber security defence systems cannot handle these attacks. The solution, as proposed by some in the industry, is the use of BDA systems. However, whilst it appears that BDA has achieved greater success at large companies, little is known about its success at smaller companies. Also, there is a scarcity of research addressing how BDA is deployed for the purpose of detecting and preventing cyber-attacks on CII. This research examines and discusses the effectiveness of the use of BDA for detecting cyber-attacks and also describes how such a system is deployed. To establish the effectiveness of using BDA, a survey by questionnaire was conducted. The target audience of the survey were large corporations that were likely to use such systems for cyber security. The research concludes that a BDA system is indeed a powerful and effective tool, and currently the best method for protecting CIIs against the range of stealthy cyber-attacks. Also, a description of how such a system is deployed is abstracted into a model of meaningful practice.
APA, Harvard, Vancouver, ISO, and other styles
22

Suthakar, Uthayanath. "A scalable data store and analytic platform for real-time monitoring of data-intensive scientific infrastructure." Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/15788.

Full text
Abstract:
Monitoring data-intensive scientific infrastructure in real time, such as jobs, data transfers, and hardware failures, is vital for efficient operation. Due to the high volume and velocity of events that are produced, traditional methods are no longer optimal. Several techniques, as well as enabling architectures, are available to support the Big Data issue. In this respect, this thesis complements existing survey work by contributing an extensive literature review of both traditional and emerging Big Data architectures. Scalability, low latency, fault-tolerance, and intelligence are key challenges of the traditional architecture. However, Big Data technologies and approaches have become increasingly popular for use cases that demand scalable, data-intensive (parallel) processing, fault-tolerance (data replication), and support for low-latency computations. In the context of a scalable data store and analytics platform for monitoring data-intensive scientific infrastructure, the Lambda Architecture was adapted and evaluated on the Worldwide LHC Computing Grid, where it has proven effective. This is especially true for computationally and data-intensive use cases. In this thesis, an efficient strategy for the collection and storage of large volumes of data for computation is presented. By moving the transformation logic out of the data pipeline and into the analytics layers, the architecture and overall process are simplified. Time utilised is reduced, untampered raw data are kept at the storage level for fault-tolerance, and the required transformation can be done when needed. An optimised Lambda Architecture (OLA), which involved modelling an efficient way of joining the batch layer and streaming layer with minimum code duplication in order to support scalability, low latency, and fault-tolerance, is presented. A few models were evaluated: a pure streaming layer, a pure batch layer, and the combination of both batch and streaming layers. 
Experimental results demonstrate that the OLA performed better than the traditional architecture as well as the Lambda Architecture. The OLA was also enhanced by adding an intelligence layer for predicting data access patterns. The intelligence layer actively adapts and updates the model built by the batch layer, which eliminates the re-training time while providing a high level of accuracy using the Deep Learning technique. The fundamental contribution to knowledge is a scalable, low-latency, fault-tolerant, intelligent, and heterogeneous-based architecture for monitoring a data-intensive scientific infrastructure, which can benefit from Big Data technologies and approaches.
APA, Harvard, Vancouver, ISO, and other styles
23

Halkola, E. (Eija). "Participation in infrastructuring the future school: a nexus analytic inquiry." Doctoral thesis, Oulun yliopisto, 2015. http://urn.fi/urn:isbn:9789526210322.

Full text
Abstract:
Abstract In information systems (IS) research, there is increasing interest in understanding complex and large-scale efforts. This study examines the complexity involved in infrastructuring within a novel context: the educational network of a Finnish city. A unique aspect is this study’s focus on children’s participation in infrastructuring. It contributes to the existing body of literature by addressing the concepts of discourses in place, interaction order, and historical body, as drawn from nexus analysis. It offers these concepts as theoretical tools for understanding the complexity of infrastructuring, and furthers exploration of user participation in IS research through a careful analysis of actor participation in the infrastructuring venture in question. The findings of this study foreground a multitude of actors, both adults and children, as well as their various activities and the versatility of the involved objects of design. The central social actors include educational officials, schools (teachers, headmasters, and pupils), researchers, and local and global companies. Infrastructuring in this research includes planned activities concerning developed solutions, but also emergent activities for adapting planned solutions to local schools’ everyday practices, revealing the intimate intertwining of practices and technologies. In particular, the past temporal horizon and shared histories of the communities involved are highlighted in terms of the concept of the historical body. The analysis on the historical bodies of the actors foregrounded the local aspects that were appreciated, but also challenged in infrastructuring. 
The concept of interaction order was found to be useful for analyzing the heterogeneity, multivoicedness, tensions between local and global dimensions, and power aspects inherent in infrastructuring. As a practical implication, the nexus analytic concepts of historical body and interaction order are suggested as means of better understanding local settings and the power relationships of various actors, and are also useful for practitioners preparing for infrastructuring.
APA, Harvard, Vancouver, ISO, and other styles
24

Thorpe, David Stuart. "A process for the management of physical infrastructure." Thesis, Queensland University of Technology, 1998. https://eprints.qut.edu.au/36067/7/36067_Digitsed_Thesis.pdf.

Full text
Abstract:
Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore present a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no one best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment, are important in allocating scarce resources and making sound decisions. Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in the analysis, prioritisation and decision making process. This is achieved through the use of practical, relatively simple tools, integrated in a modular flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that is an extension of methodologies for modelling data flows in information systems. 
This process divides the infrastructure management process over time into self-contained modules that are based on a particular set of activities, the information flows between which are defined by the interfaces and relationships between them. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, through using individual infrastructure assets and their related projects as the basis of the network analysis process. It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives, and therefore a number of high level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two stage approach that rationalises then weights objectives, using a paired comparison process, ensures that the objectives required to be met are both kept to the minimum number required and are fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, utility functions being proposed where there is risk, or a trade-off situation applies. Variability is considered important in the infrastructure life cycle, the approach used being based on analytical principles but incorporating randomness in variables where required. The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate way of analysis, provided boundary conditions and requirements for linkages to other modules are met. 
Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects, and consequences of change over the life cycle, as well as variability and the other matters discussed above. It has also highlighted the requirement to use judgment where required, and for organisations that own and manage infrastructure to retain intellectual knowledge regarding that infrastructure. It is considered that the methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, prioritisation of a number of projects, and identification of the principal issues in the infrastructure life cycle.
APA, Harvard, Vancouver, ISO, and other styles
25

Wagner, Edward Dishman. "Public Key Infrastructure (PKI) And Virtual Private Network (VPN) Compared Using An Utility Function And The Analytic Hierarchy Process (AHP)." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/32685.

Full text
Abstract:
This paper compares two technologies, Public Key Infrastructure (PKI) and Virtual Private Network (VPN). PKI and VPN are two approaches currently in use to resolve the problem of securing data in computer networks. Making this comparison difficult is the lack of available data. Additionally, an organization will make its decision based on circumstances unique to its information security needs. Therefore, this paper will illustrate a method using a utility function and the Analytic Hierarchy Process (AHP) to determine which technology is better under a hypothetical set of circumstances. This paper will explain each technology, establish parameters for a hypothetical comparison, and discuss the capabilities and limitations of both technologies.
Master of Arts
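The utility-function-plus-AHP comparison described in this abstract rests on Saaty-style pairwise judgments. A minimal sketch of the weighting step follows; the criteria and judgment values are hypothetical, and the row geometric mean is used as a common approximation of the principal-eigenvector priorities:

```python
import math

def ahp_weights(matrix):
    """AHP priority weights via the row geometric mean approximation."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(matrix, weights):
    """Saaty's consistency ratio; CR < 0.10 is conventionally acceptable."""
    n = len(matrix)
    # lambda_max estimated as the mean of (A w)_i / w_i
    aw = [sum(a * w for a, w in zip(row, weights)) for row in matrix]
    lambda_max = sum(x / w for x, w in zip(aw, weights)) / n
    ci = (lambda_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    return ci / ri

# Hypothetical 1-9 scale judgments for three criteria, e.g. security
# moderately preferred to cost, strongly preferred to ease of deployment.
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w = ahp_weights(A)
cr = consistency_ratio(A, w)
```

For a 3×3 matrix the random index is 0.58; a consistency ratio below 0.10 is conventionally taken to mean the pairwise judgments are acceptably consistent.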
APA, Harvard, Vancouver, ISO, and other styles
26

Lindfeldt, Olov. "Railway operation analysis : Evaluation of quality, infrastructure and timetable on single and double-track lines with analytical models and simulation." Doctoral thesis, KTH, Trafik och Logistik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-12727.

Full text
Abstract:
This thesis shows the advantages of simple models for analysis of railway operation. It presents two tools for infrastructure and timetable planning. It shows how the infrastructure can be analysed through fictive line designs, how the timetable can be treated as a variable and how delays can be used as performance measures. The thesis also gives examples of analyses of complex traffic situations through simulation experiments. Infrastructure configuration, timetable design and delays play important roles in the competitiveness of railway transportation. This is especially true on single-track lines where the run times and other timetable related parameters are severely restricted by crossings (train meetings). The first half of this thesis focuses on the crossing time, i.e. the time loss that occurs in crossing situations. A simplified analytical model, SAMFOST, has been developed to calculate the crossing time as a function of infrastructure configuration, vehicle properties, timetable and delays for two crossing trains. Three measures of timetable flexibility are proposed and they can be used to evaluate how infrastructure configuration, vehicle properties, punctuality etc affect possibilities to alter the timetable. Double-track lines operated with mixed traffic show properties similar to those of single-tracks. In this case overtakings imply scheduled delays as well as risk of delay propagation. Two different methods are applied for analysis of double-tracks: a combinatorial, mathematical model (TVEM) and simulation experiments. TVEM, Timetable Variant Evaluation Model, is a generic model that systematically generates and evaluates timetable variants. This method is especially useful for mixed traffic operation where the impact of the timetable is considerable. TVEM may also be used for evaluation of different infrastructure designs. 
Analyses performed in TVEM show that the impact on capacity from the infrastructure increases with speed differences and frequency of service for the passenger trains, whereas the impact of the timetable is strongest when the speed differences are low and/or the frequency of passenger services is low. Simulation experiments were performed to take delays and perturbations into account. A simulation model was set up in the micro-simulation tool RailSys and calibrated against real operational data. The calibrated model was used for multi-factor analysis through experiments where infrastructure, timetable and perturbation factors were varied according to an experimental design and evaluated through response surface methods. The additional delay was used as the response variable. Timetable factors, such as frequency of high-speed services and freight train speed, turned out to be of great importance for the additional delay, whereas some of the perturbation factors, i.e. entry delays, only showed a minor impact. The infrastructure factor, distance between overtaking stations, showed complex relationships with several interactions, principally with timetable factors.
QC20100622
Framtida infrastruktur och kvalitet i tågföring
APA, Harvard, Vancouver, ISO, and other styles
27

Hossain, Mohammad. "Performance evaluation of public-private partnerships (PPPs) in developing countries: A case study of Bangladesh." Thesis, Griffith University, 2018. http://hdl.handle.net/10072/385936.

Full text
Abstract:
Since their emergence in the early 1990s, PPP options have become increasingly popular with the governments of both developed and developing countries. On average, US$95 billion was invested annually in developing countries through PPP arrangements up until 2017. However, a mixed result is documented with respect to their performance. PPP arrangements include multiple stakeholders that have diverse interests associated with their particular affiliations, and accordingly the performance expectations of these stakeholders also differ. Traditional approaches to performance evaluation are unable to capture all of the expectations to be included in the process of PPP project evaluation. Hence, using appropriate performance indicators and analysing their relative importance in influencing the performance score of particular projects remains unexplored in the developing country context. Against this backdrop, this study examines current practices of PPP performance evaluation, develops a framework of weighted performance indicators for developing countries and applies the model to a number of PPP projects in Bangladesh. A mixed-method approach has been used, which includes the analytical hierarchy process (AHP) for establishing weights of the key performance areas (KPAs) and associated indicators and a case study method for applying the developed model to selected PPP projects in Bangladesh. Results show that ‘financing’, ‘planning and initiation’ and ‘transparency and accountability’ are the most important KPAs in evaluating PPP performances in Bangladesh and ‘feasibility analysis’, ‘life cycle evaluation and monitoring’ and ‘optimal risk allocation’ are the most significant performance indicators. Unlike traditional performance evaluation methods, a prioritised set of performance indicators and KPAs for the PPPs of Bangladesh has been identified.
The findings also reveal that sincere government commitment is relatively more important for the success of PPPs than the enactment of enabling legislation in the context of developing countries. This suggests more efforts are required to be employed by the host government to build confidence in the private partner selected for engagement in PPP arrangements. Furthermore, a framework for performance evaluation of power sector PPPs, based on the KPAs, has been proposed. This could be used for evaluating the performance of power PPPs in a more objective and systematic way in Bangladesh and other South Asian countries. Finally, the weighted process applied to the various performance indicators provides an improved understanding of the relative significance of KPAs and their component indicators. Attaching weights to the KPAs and performance indicators of PPPs, and applying those weights to derive individual project scores in a developing country context, especially in Bangladesh, represents an innovation and thus a contribution to the PPP performance literature. Awareness of the outcomes of the weighted performance evaluation process developed in this study could help project implementers and regulators prioritise their attention and resource allocation decisions related to achievement of performance improvement on the more significant key performance areas. The weighted process is expected to contribute to reducing biases of either perceived Likert scaled scores or only the weightings in PPP performance evaluation.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Dept Account, Finance & Econ
Griffith Business School
Full Text
APA, Harvard, Vancouver, ISO, and other styles
28

Sagnol, Loba. "Experimental and analytical study of the reinforcement of pavements by glass fibre grids." Thesis, Strasbourg, 2017. http://www.theses.fr/2017STRAD042.

Full text
Abstract:
This PhD study evaluates the impact of glass fibre grids, used to reinforce asphalt structures, on the bonding between two asphalt layers, the fatigue life and the stiffness modulus of reinforced cylindrical specimens, as well as on the deflections measured on a reinforced in-situ road section. Shear tests (LEUTNER) as well as modulus and fatigue tests (ITT) were conducted on reinforced and unreinforced specimens, using different grids, different emulsions and different emulsion quantities. For these tests, an outdoor test surface was constructed, from which the specimens were extracted. An in-situ road test section was also constructed, reinforced with 3 different grids and having two reference sections. The deflections of the road were determined before and after the construction works. A model of the structure, based on the deflection measurements, was developed.
APA, Harvard, Vancouver, ISO, and other styles
29

Демидова, Р. Э., та R. E. Demidova. "Автоматизация бизнес-процесса по работе с клиентами с применением CRM-системы : магистерская диссертация". Master's thesis, б. и, 2020. http://hdl.handle.net/10995/93437.

Full text
Abstract:
In the modern world, the market for photographic services is developing by leaps and bounds. Photo salons and photo studios are especially popular among novice businesspeople. To boost sales, many companies increasingly offer the option to book a client online using a CRM system. CRM systems are relevant for several reasons: the development of business processes, the growing role of the Internet and of mobile devices in business, and the development and falling cost of data-processing technologies and IT infrastructure. The fact that the technical capabilities of CRM systems have matured also played an important role: they make it possible to track the buyer's path from the moment of first contact through to repeat sales, creating a sales technology that links advertising, analytical tools and the sales system itself into a single system. The results of this final qualifying work will be used in a photo studio when implementing the AppEvent CRM system.
APA, Harvard, Vancouver, ISO, and other styles
30

Masotto, Nicola. "La valutazione ambientale nelle trasformazioni territoriali in ambito alpino: applicazione del Metodo AHP." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3423142.

Full text
Abstract:
The activities carried out for this research have been developed following a methodological pathway that starts from a cultural question, i.e. which planning "strategies" can be adopted in an alpine geographic area undergoing high depopulation trends despite a certain degree of economic maturity, as in the case of Belluno province. To this end, all possible "policies" have been analysed with the goal of addressing this critical situation, starting from the consideration that the geographical isolation of this territory is caused by the absence of an alpine road pass to the North and the infrastructural deficit of the whole Province. The main issues addressed are the following:
• Accessibility, to understand how it can influence territorial disparities;
• The social demands of the territory of Belluno Province with reference to accessibility to the North;
• The transport scenarios that are being strategically developed in relation to national and international connections (what we could define as "new communication strategies");
• The elaboration and application of a new evaluation model as decision support, the Analytic Hierarchy Process (AHP), within a process of Strategic Environmental Assessment (SEA).
In particular, the last aspect has been developed by identifying an evaluation model, among the many in the literature, which could improve the effectiveness of the SEA process, a general method to evaluate the sustainability of programming processes (such as the "policies" of territorial development) and planning. In this sense the AHP has been considered, and it has been developed not only to test its effectiveness as an evaluation instrument, able to produce important results to support decisions, but also to verify whether this model can be used in the general SEA process.
This experimentation of integrating the AHP model into the SEA has thus shown that the AHP can increase the effectiveness of the SEA (if the AHP model is applied during the ex ante stage of the SEA), by improving its strategic significance in the sustainability checks of large-scale planning and programming actions.
APA, Harvard, Vancouver, ISO, and other styles
31

Ribeiro, Washington Fábio de Souza. "Um modelo de maturidade organizacional em um conjunto de processos para a biblioteca ITIL v3." Universidade Católica de Brasília, 2018. https://bdtd.ucb.br:8443/jspui/handle/tede/2487.

Full text
Abstract:
With the growing need for an adequate information technology structure in organizations, regardless of their size, IT service management is increasingly important, and should be effective and, above all, efficient in the pursuit of cost reduction combined with improved service delivery. In this way, deploying models and frameworks of best IT management practices is, today, an essential activity of companies that seek excellence. Another equally important activity is assessing the maturity of IT management processes. The IT Infrastructure Library (ITIL) is designed for IT service managers to increase the effectiveness of their management as well as the assertiveness of their decisions, by better managing their environments. This research aims to contribute to the area of IT service management, generating inputs for IT managers' decision making in the development of a process-based organizational maturity model for ITIL v3. The proposed MA_ITIL model is composed of three layers based on the AHP (Analytic Hierarchy Process) method, in accordance with the international process maturity standard ISO/IEC 33004 of 2015, which replaced ISO/IEC 15504-4 of 2003, thus addressing new distinctions between organizational process maturity and process capability.
The research was developed in five stages: reviewing the literature, elaborating a maturity model, designing a survey based on the proposed model, applying the questionnaire to ITIL v3 specialists and, finally, describing the comments and conclusions of the study, so that IT service managers can have mechanisms for managing and measuring the maturity of their ITIL v3 processes, which will assist in the management of technological services.
APA, Harvard, Vancouver, ISO, and other styles
32

"Energy Analytics for Infrastructure: An Application to Institutional Buildings." Doctoral diss., 2017. http://hdl.handle.net/2286/R.I.45586.

Full text
Abstract:
Commercial buildings in the United States account for 19% of total annual energy consumption. The Commercial Building Energy Consumption Survey (CBECS), which serves as the benchmark for commercial buildings, provides critical input for EnergyStar models. Smart energy management technologies, sensors, innovative demand response programs, and updated versions of certification programs increase the opportunity to mitigate energy-related problems (blackouts and overproduction) and guide energy managers in optimizing consumption characteristics. With increasing advancements in technologies relying on 'Big Data,' codes and certification programs such as those of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) and Leadership in Energy and Environmental Design (LEED) evaluate buildings during the pre-construction phase, mostly using quantitative and qualitative values calculated from energy models such as EnergyPlus and eQUEST. However, energy consumption analysis through Knowledge Discovery in Databases (KDD) is not commonly used by energy managers, creating the need for a better energy analytics framework. The dissertation utilizes Interval Data (ID) and establishes three different frameworks to identify electricity losses, predict electricity consumption and detect anomalies using data mining, deep learning, and mathematical models. The process of energy analytics integrates with computational science and contributes to several objectives, which are to: 1. develop a framework to identify both technical and non-technical losses using clustering and semi-supervised learning techniques; 2. develop an integrated framework to predict electricity consumption using a wavelet-based data transformation model and deep learning algorithms; and 3. develop a framework to detect anomalies using ensemble empirical mode decomposition and isolation forest algorithms.
Building on a thorough research background, the first phase details data analytics performed on the demand-supply database to determine potential energy loss reductions. The data preprocessing and electricity prediction framework in the second phase integrates mathematical models and deep learning algorithms to accurately predict consumption. The third phase employs a data decomposition model and data mining techniques to detect anomalies in institutional buildings.
Dissertation/Thesis
Doctoral Dissertation Civil, Environmental and Sustainable Engineering 2017
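The third-phase anomaly detection above relies on isolation forests. A toy, pure-Python sketch of the underlying idea on synthetic one-dimensional consumption readings follows; the dissertation's actual pipeline combines ensemble empirical mode decomposition with isolation forests, and the values, tree count and depth limit here are illustrative only:

```python
import random

def path_length(x, data, depth=0, limit=12):
    """Depth at which x is isolated from data by random split points (1-D)."""
    if depth >= limit or len(data) <= 1:
        return depth
    lo, hi = min(data), max(data)
    if lo == hi:
        return depth
    split = random.uniform(lo, hi)
    # keep only the points that fall on the same side of the split as x
    same_side = [v for v in data if (v < split) == (x < split)]
    return path_length(x, same_side, depth + 1, limit)

def avg_path(x, data, trees=200, seed=0):
    """Average isolation depth over an ensemble of random trees.

    Outliers tend to be isolated after very few splits, so a short
    average path signals an anomaly.
    """
    random.seed(seed)  # fixed seed keeps this sketch reproducible
    return sum(path_length(x, data) for _ in range(trees)) / trees

# Synthetic hourly consumption readings (kWh): a tight cluster plus one spike.
readings = [100 + (i % 11) - 5 for i in range(50)] + [300.0]
spike_depth = avg_path(300.0, readings)   # isolated quickly -> anomalous
typical_depth = avg_path(100, readings)   # buried in the cluster
```

Production implementations (e.g. scikit-learn's IsolationForest) subsample the data per tree and normalize path lengths into a score, but the short-path-equals-anomaly intuition is the same.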
APA, Harvard, Vancouver, ISO, and other styles
33

(9746357), Justin Anthony Mahlberg. "Evaluating Vehicle Data Analytics for Assessing Road Infrastructure Functionality." Thesis, 2020.

Find full text
Abstract:
The Indiana Department of Transportation (INDOT) manages and maintains over 3,000 miles of interstates across the state. Assessing lane marking quality is an important part of agency asset tracking and typically occurs annually. The current process requires agency staff to travel the road and collect representative measurements. This is quite challenging for high-volume multi-lane facilities. Furthermore, it does not scale well to the additional 5,200 centerline miles of non-interstate routes.

Modern vehicles now carry technology called Lane Keep Assist (LKA) that monitors lane markings and notifies the driver of deviations from the lane. This thesis evaluates the feasibility of monitoring when LKA systems can and cannot detect lane markings as an alternative to traditional pavement marking asset management techniques. This information could also indicate which corridors are prepared for level 3 autonomous vehicle travel and which locations need additional attention.

In this study, a 2019 Subaru Legacy with LKA technology was utilized to detect pavement markings in both directions along Interstates I-64, I-65, I-69, I-70, I-74, I-90, I-94 and I-465 in Indiana during the summer of 2020. The data was collected in the right-most lane for all interstates except for work zones that required temporary lane changes. The data was collected utilizing two GoPro cameras, one facing the dashboard collecting LKA information and one facing the roadway collecting photos of the user's experience. Images were taken at a 0.5-second frequency and were GPS-tagged. Data collection covered over 2,500 miles, and approximately 280,000 images were analyzed. The data provided outputs of: No Data, Excluded, Both Lanes Not Detected, Right Lane Not Detected, Left Lane Not Detected, and Both Lanes Detected.

The data was processed and analyzed to create spatial plots signifying locations where markings were detectable and locations where markings were undetected. Overall, across 2,500 miles of travel (right lane only), 77.6% of the pavement markings were classified as both detected. The study found:

• 2.6% of the lane miles were not detected on both the left and right side
• 5.2% of the lane miles were not detected on the left side
• 2.0% of the lane miles were not detected on the right side

Lane changes, inclement weather, and congestion caused 12.5% of the right travel lane miles to be excluded. The methodology utilized in this study provides an opportunity to complement the current methods used by transportation agencies to evaluate pavement marking quality.

The thesis concludes by recommending large-scale harvesting of LKA data from a variety of vendors so that complete lane coverage during all weather and light conditions can be collected, giving agencies an accurate assessment of how their pavement markings perform with modern LKA technology. Not only will this assist in identifying areas in need of pavement marking maintenance, it will also provide a framework for agencies and vehicle OEMs to initiate dialog on best practices for marking lines and exchanging information.
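The percentage breakdown reported in this abstract can be reproduced in spirit with a simple aggregation over per-frame labels. The frame counts below are hypothetical, and each 0.5 s frame is weighted equally rather than by travelled distance:

```python
from collections import Counter

# Hypothetical per-frame LKA outputs, one label per 0.5 s GPS-tagged image.
frames = (
    ["Both Lanes Detected"] * 776
    + ["Left Lane Not Detected"] * 52
    + ["Right Lane Not Detected"] * 20
    + ["Both Lanes Not Detected"] * 26
    + ["Excluded"] * 125
    + ["No Data"] * 1
)

def detection_shares(frames):
    """Share of each detection state among usable frames.

    'Excluded' and 'No Data' frames (lane changes, weather, congestion)
    are reported separately, mirroring the study's exclusion handling.
    """
    counts = Counter(frames)
    skip = ("Excluded", "No Data")
    usable = sum(v for k, v in counts.items() if k not in skip)
    shares = {k: v / usable for k, v in counts.items() if k not in skip}
    excluded_share = sum(counts[k] for k in skip) / len(frames)
    return shares, excluded_share

shares, excluded = detection_shares(frames)
```

A distance-weighted version would sum GPS segment lengths per label instead of counting frames, which matters wherever speed varies along the route.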
APA, Harvard, Vancouver, ISO, and other styles
34

Jang, Jinwoo. "Development of Data Analytics and Modeling Tools for Civil Infrastructure Condition Monitoring Applications." Thesis, 2016. https://doi.org/10.7916/D82N52HN.

Full text
Abstract:
This dissertation focuses on the development of data analytics approaches to two distinct important condition monitoring applications in civil infrastructure: structural health monitoring and road surface monitoring. In the first part, measured vibration responses of a major long-span bridge are used to identify its modal properties. Variations in natural frequencies over a daily cycle have been observed with measured data, which are probably due to environmental effects such as temperature and traffic. With a focus on understanding the relationships between natural frequencies and temperatures, a controlled simulation-based study is conducted with the use of a full-scale finite element (FE) model and four regression models. In addition to the temperature effect study, the identified modal properties and the FE model are used to explore both deterministic and probabilistic model updating approaches. In the deterministic approach (sensitivity-based model updating), the regularization technique is applied to deal with a trade-off between natural frequency and mode shape agreements. Specific nonlinear constraints on mode shape agreements are suggested here. Their capabilities to adjust mode shape agreements are validated with the FE model. To the best of the author's knowledge, the sensitivity-based clustering technique, which enables one to determine efficient updating parameters based on a sensitivity analysis, has not previously been applied to any civil structure. Therefore, this technique is adapted and applied to a full-scale bridge model for the first time to highlight its capability and robustness to select physically meaningful updating parameters based on the sensitivity of natural frequencies with respect to both mass and stiffness-related physical parameters. Efficient and physically meaningful updating parameters are determined by the sensitivity-based clustering technique, resulting in an updated model that has a better agreement with measured data sets. 
When it comes to the probabilistic approach, the application of Bayesian model updating to large-scale civil structures based on real data is very rare and challenging due to the high level of uncertainties associated with the complexity of a large-scale model and variations in natural frequencies and mode shapes identified from real measured data. In this dissertation, the full-scale FE model is updated via the Bayesian model updating framework in an effort to explore the applicability of Bayesian model updating to a more complex and realistic problem. Uncertainties of updating parameters, uncertainty reductions due to information provided by data sets, and uncertainty propagation to modal properties of the FE model are estimated based on generated posterior samples. In the second part of this dissertation, an innovative framework is developed to collect pavement distress data via multiple vehicles. Vehicle vibration responses are used to detect isolated pavement distress and rough road surfaces. GPS positioning data are used to localize identified road conditions. A real-time local data logging algorithm is developed to increase the efficiency of data logging in each vehicle client. Supervised machine learning algorithms are implemented to classify measured dynamic responses into three categories. Since data are collected from multiple vehicles, a trajectory clustering algorithm is introduced to integrate various trajectories to provide a compact format of information about road surface conditions. The suggested framework is tested and evaluated in real road networks.
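As a simplified stand-in for the supervised classification step in the second part, windowed RMS features of a vertical-acceleration trace might be thresholded as follows. The signal and thresholds are synthetic, and the dissertation trains machine-learning classifiers rather than using fixed thresholds:

```python
import math

def window_rms(signal, size):
    """Root-mean-square of vertical acceleration per non-overlapping window."""
    return [
        math.sqrt(sum(v * v for v in signal[i:i + size]) / size)
        for i in range(0, len(signal) - size + 1, size)
    ]

def classify(rms_values, smooth_max=0.3, rough_max=1.0):
    """Map each window's RMS to one of three road-condition classes."""
    labels = []
    for r in rms_values:
        if r <= smooth_max:
            labels.append("smooth")
        elif r <= rough_max:
            labels.append("rough")
        else:
            labels.append("isolated distress")  # e.g. a pothole impact spike
    return labels

# Synthetic trace: quiet pavement, a rough patch, then a pothole-like spike.
trace = [0.1] * 20 + [0.6] * 20 + [2.5] * 20
labels = classify(window_rms(trace, 10))
```

In a trained classifier the same windowed features would feed a supervised model, and each window's label would be paired with its GPS stamp for the trajectory clustering step.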
APA, Harvard, Vancouver, ISO, and other styles
35

"How to Think About Resilient Infrastructure Systems." Doctoral diss., 2018. http://hdl.handle.net/2286/R.I.49314.

Full text
Abstract:
abstract: Resilience is emerging as the preferred way to improve the protection of infrastructure systems beyond established risk management practices. Massive damages experienced during tragedies like Hurricane Katrina showed that risk analysis is incapable of preventing unforeseen infrastructure failures and shifted expert focus towards resilience to absorb and recover from adverse events. Recent, exponential growth in research is now producing consensus on how to think about infrastructure resilience centered on definitions and models from influential organizations like the US National Academy of Sciences. Despite widespread efforts, massive infrastructure failures in 2017 demonstrate that resilience is still not working, raising the question: Are the ways people think about resilience producing resilient infrastructure systems? This dissertation argues that established thinking harbors misconceptions about infrastructure systems that diminish attempts to improve their resilience. Widespread efforts based on the current canon focus on improving data analytics, establishing resilience goals, reducing failure probabilities, and measuring cascading losses. Unfortunately, none of these pursuits change the resilience of an infrastructure system, because none of them result in knowledge about how data is used, goals are set, or failures occur. Through the examination of each misconception, this dissertation results in practical, new approaches for infrastructure systems to respond to unforeseen failures via sensing, adapting, and anticipating processes. Specifically, infrastructure resilience is improved by sensing when data analytics include the modeler-in-the-loop, adapting to stress contexts by switching between multiple resilience strategies, and anticipating crisis coordination activities prior to experiencing a failure. Overall, results demonstrate that current resilience thinking needs to change because it does not differentiate resilience from risk.
The majority of research thinks resilience is a property that a system has, like a noun, when resilience is really an action a system does, like a verb. Treating resilience as a noun only strengthens commitment to risk-based practices that do not protect infrastructure from unknown events. Instead, switching to thinking about resilience as a verb overcomes prevalent misconceptions about data, goals, systems, and failures, and may bring a necessary, radical change to the way infrastructure is protected in the future.
Dissertation/Thesis
Doctoral Dissertation Civil, Environmental and Sustainable Engineering 2018
APA, Harvard, Vancouver, ISO, and other styles
36

Yarish, David. "Utilizing Crowd Sourced Analytics for Building Smarter Mobile Infrastructure and Achieving Better Quality of Experience." Thesis, 2015. http://hdl.handle.net/1828/7000.

Full text
Abstract:
There is great power in knowledge. Having insight into and predicting network events can be both informative and profitable. This thesis aims to assess how crowd-sourced network data collected on smartphones can be used to improve the quality of experience for users of the network and give network operators insight into how the network's infrastructure can also be improved. Over the course of a year, data has been collected and processed to show where networks have been performing well and where they are under-performing. The results of this collection aim to show that there is value in the collection of this data, and that this data cannot be adequately obtained without a device-side presence. The various graphs and histograms demonstrate that the quantities of measurements and speeds recorded vary by both the location and time of day. It is these variations that cannot be determined via traditional network-side measurements. During the course of this experiment, it was observed that certain times of day have much greater numbers of people using the network, and it is likely that the quantities of users on the network are correlated with the speeds observed at those times. Places of gathering such as malls and public areas had a higher user density, especially around noon, which is a normal time for people to take a break from the work day. Knowing exactly where and when an Access Point (AP) is utilized is important information when trying to identify how users are utilizing the network.
Graduate
davidyarish@gmail.com
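The kind of location-and-time aggregation this abstract describes (speeds varying by place and hour of day, and under-performing areas standing out) can be sketched as follows; the grid resolution, threshold, and sample layout are invented for illustration:

```python
from collections import defaultdict

def underperforming_cells(samples, threshold_mbps=5.0, min_count=2):
    """Group (lat, lon, hour, mbps) crowd-sourced samples into coarse grid
    cells per hour of day, and flag cells whose mean speed is below threshold."""
    buckets = defaultdict(list)
    for lat, lon, hour, mbps in samples:
        cell = (round(lat, 2), round(lon, 2), hour)  # ~1 km grid cell, per hour
        buckets[cell].append(mbps)
    flagged = {}
    for cell, speeds in buckets.items():
        if len(speeds) >= min_count:  # ignore cells with too few measurements
            mean = sum(speeds) / len(speeds)
            if mean < threshold_mbps:
                flagged[cell] = mean
    return flagged

# Hypothetical mall-area cell: congested at noon, fine at 9:00
samples = [(48.428, -123.361, 12, 2.0), (48.428, -123.361, 12, 3.0),
           (48.428, -123.361, 9, 20.0), (48.428, -123.361, 9, 18.0)]
print(underperforming_cells(samples))
```

Only the noon cell is flagged, mirroring the abstract's observation that the same location can perform very differently at different times of day.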
APA, Harvard, Vancouver, ISO, and other styles
37

Mihailescu, Madalin. "Low-cost Data Analytics for Shared Storage and Network Infrastructures." Thesis, 2013. http://hdl.handle.net/1807/35909.

Full text
Abstract:
Data analytics used to depend on specialized, high-end software and hardware platforms. Recent years, however, have brought forth the data-flow programming model, i.e., MapReduce, and with it a flurry of sturdy, scalable open-source software solutions for analyzing data. In essence, the commoditization of software frameworks for data analytics is well underway. Yet, up to this point, data analytics frameworks are still regarded as standalone, dedicated components; deploying these frameworks requires companies to purchase hardware to meet storage and network resource demands, and system administrators to handle management of data across multiple storage systems. This dissertation explores the low-cost integration of frameworks for data analytics within existing, shared infrastructures. The thesis centers on smart software being the key enabler for holistic commoditization of data analytics. We focus on two instances of smart software that aid in realizing the low-cost integration objective. For an efficient storage integration, we build MixApart, a scalable data analytics framework that removes the dependency on dedicated storage for analytics; with MixApart, a single, consolidated storage back-end manages data and services all types of workloads, thereby lowering hardware costs and simplifying data management. We evaluate MixApart at scale with micro-benchmarks and production workload traces, and show that MixApart provides faster or comparable performance to an analytics framework with dedicated storage. For an effective sharing of the networking infrastructure, we implement OX, a virtual machine management framework that allows latency-sensitive web applications to share the data center network with data analytics through intelligent VM placement; OX further protects all applications from hardware failures.
The two solutions allow the reuse of existing storage and networking infrastructures when deploying analytics frameworks, and substantiate our thesis that smart software upgrades can enable the end-to-end commoditization of analytics.
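The MapReduce data-flow model referenced above can be illustrated with the classic word count, here as a single-process toy mirroring the map, shuffle, and reduce stages that frameworks like Hadoop distribute across a cluster (not MixApart's implementation):

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: emit (word, 1) pairs, as a mapper task would."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group values by key across all mapper outputs."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

documents = ["analytics on shared storage", "shared network and shared storage"]
mapped = chain.from_iterable(map_phase(d) for d in documents)
counts = reduce_phase(shuffle(mapped))
print(counts["shared"])  # → 3
```

In a real deployment the mappers and reducers run on different machines and the shuffle moves data over the network, which is exactly the storage and network traffic that MixApart and OX are designed to manage on shared infrastructure.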
APA, Harvard, Vancouver, ISO, and other styles
38

Arantes, Janine Aldous. "Big data, black boxes and bias: the algorithmic identity and educational practice." Thesis, 2021. http://hdl.handle.net/1959.13/1430134.

Full text
Abstract:
Research Doctorate - Doctor of Philosophy (PhD)<br>This dissertation adds to a burgeoning conversation in education about the implications of commercial platforms being embedded in classrooms and educational practice. Drawing on a postdigital Deleuzian perspective, the study explores how Australian K-12 teachers are negotiating their educational practice as part of a broader data-driven infrastructure which includes predictive analytics and algorithmic bias. It does this by considering the changing role of the teacher’s digital profile through a transdisciplinary lens derived from Education, Media and Communications, and Learning Analytics. Drawing on twelve months of data generation, consisting of an online survey (N=129), two phases of interviews (N=40) with 23 educators from across Australia, and a platform analysis (Edmodo), the study illuminates a startling correlation between the commercial profiling of teachers and relatively intangible workplace hazards. The findings show that teachers are negotiating commercial platforms as a form of psychosocial risk in the workplace, yet not discussing their concerns due to fears of workplace victimization. As such, the study uses the findings to offer theoretical and practical approaches, aimed at improving workplace conditions for teachers. Introducing the eMorpheus Theory - a series of practical recommendations for teachers, schools and Australian Departments of Education, the study details a National Strategy in K-12 Educational Settings, suggests Policy and Legislation, and advises methods for Co-regulation and Self-regulation of commercial platforms and data stewardship in schools. The study concludes by detailing recommendations for further research as a result of the workplace issues identified in Australian educational settings.
APA, Harvard, Vancouver, ISO, and other styles
39

Li, Ming-Chien. "Fluorescent Reagents to Improve the Analytical Infrastructure of Capillary Electrophoretic Separations." Thesis, 2012. http://hdl.handle.net/1969.1/ETD-TAMU-2012-05-11195.

Full text
Abstract:
Two types of fluorescent molecules had been designed and synthesized to improve the analytical infrastructure of capillary electrophoretic separations. First, a hydrophilic version of the permanently cationic acridine-based fluorophore, HEG2Me2-DAA was synthesized. HEG2Me2-DAA has a lambda^ex max of 490 nm which matches the 488 nm line of the commonly used argon ion laser. The emission spectra of HEG2Me2-DAA are pH-independent. HEG2Me2-DAA was used in capillary electrophoresis with an aqueous background electrolyte and was found to be free of the detrimental peak tailing of the acridine orange-based fluorophore that was caused by adsorption on the inner wall of the fused silica capillary. Bovine serum albumin was labeled with excess of the designed amine reactive reagent and the lowest concentration at which the tagged bovine serum albumin was tested was 15 nM. Chicken ovalbumin was also labeled with FL-CA-PFP and analyzed by capillary isoelectric focusing (cIEF) with LIF detection. The pI values of the tagged proteins shifted in the alkaline direction by about 0.02 compared to the pI values of the non-tagged proteins. A tri-functional probe intended to enable selective enrichment and selective detection of a variety of molecules (e.g., natural products, pharmaceuticals, inhibitors, etc.) was also designed and synthetized by combining FL-CA with biotin and an azide group in a "proof-of-principle" level experiment. In cIEF, the profile of the pH gradient can only be determined with the help of pI markers. A large set of pyrene-based fluorescent pI markers was rationally designed to cover the pI range 3 to 10. To prove the feasibility of the proposed synthetic approach, the subgroup of the pI markers having the greatest structural complexity was synthesized and characterized. The classical zone electrophoretic pI determination methods failed due to severe chromatographic retention of the APTS based pI markers on the capillary wall. 
Exploratory work was done to design a new pI value determination method that combines the advantages of the immobilized pH gradient technology of the OFFGEL instrument and the carrier-ampholyte-based IEF technology. The method aspects of cIEF have also been improved in this work. The new segmented loading method yielded a more linear pH gradient than the previously known cIEF methods. To exploit a unique property of the newly developed fluorescent pI markers, we used them as pyrene-based ampholytic carbohydrate derivatizing reagents. The pI4 carbohydrate derivatization reagent proved advantageous over 8-aminopyrene-1,3,6-trisulfonic acid (APTS): the pI4 conjugates have higher molar absorbance at 488 nm than the APTS conjugates and become detectable in positive ion mode of MS affording better detection sensitivity.
APA, Harvard, Vancouver, ISO, and other styles
40

"Security Analysis of Interdependent Critical Infrastructures: Power, Cyber and Gas." Doctoral diss., 2018. http://hdl.handle.net/2286/R.I.51602.

Full text
Abstract:
abstract: Our daily life is becoming more and more reliant on services provided by infrastructures such as power, gas, and communication networks. Ensuring the security of these infrastructures is of utmost importance. This task becomes ever more challenging as the inter-dependence among these infrastructures grows and a security breach in one infrastructure can spill over to the others. The implication is that the security practices/analyses recommended for these infrastructures should be done in coordination. This thesis, focusing on the power grid, explores strategies to secure the system that look into the coupling of the power grid to the cyber infrastructure, used to manage and control it, and to the gas grid, which supplies an increasing amount of reserves to overcome contingencies. The first part (Part I) of the thesis, including chapters 2 through 4, focuses on the coupling of the power and the cyber infrastructure that is used for its control and operations. The goal is to detect malicious actors gaining information about the operation of the power grid in order to attack the system later. In chapter 2, we propose a hierarchical architecture that correlates the analysis of high-resolution Micro-Phasor Measurement Unit (microPMU) data and traffic analysis on the Supervisory Control and Data Acquisition (SCADA) packets, to infer the security status of the grid and detect the presence of possible intruders. An essential part of this architecture is tied to the analysis of the microPMU data. In chapter 3 we establish a set of anomaly detection rules on microPMU data that flag "abnormal behavior". A placement strategy of microPMU sensors is also proposed to maximize the sensitivity in detecting anomalies. In chapter 4, we focus on developing rules that can localize the source of an event using microPMU data, to further check whether a cyber attack is causing the anomaly, by correlating SCADA traffic with the microPMU data analysis results.
The thread that unifies the data analysis in this chapter is the fact that decisions are made without fully estimating the state of the system; on the contrary, decisions are made using a set of physical measurements that falls short, by orders of magnitude, of meeting the needs for observability. More specifically, in the first part of this chapter (sections 4.1-4.2), using microPMU data in the substation, methodologies for online identification of the source Thevenin parameters are presented. This methodology is used to identify reconnaissance activity on the normally-open switches in the substation, initiated by attackers to gauge their controllability over the cyber network. The applications of this methodology in monitoring the voltage stability of the grid are also discussed. In the second part of this chapter (sections 4.3-4.5), we investigate the localization of faults. Since the number of PMU sensors available to carry out the inference is insufficient to ensure observability, the problem can be viewed as that of under-sampling a "graph signal"; the analysis leads to a PMU placement strategy that can achieve the highest resolution in localizing the fault, for a given number of sensors. In both cases, the results of the analysis are leveraged in the detection of cyber-physical attacks, where microPMU data and relevant SCADA network traffic information are compared to determine if a network breach has affected the integrity of the system information and/or operations. In the second part of this thesis (Part II), the security analysis considers the adequacy and reliability of schedules for the gas and power network. Scheduling supply jointly in gas and power networks is motivated by the increasing reliance of power grids on natural gas generators (and, indirectly, on gas pipelines) as providers of critical reserves.
Chapter 5 focuses on unveiling the challenges and providing solutions to this problem.
Dissertation/Thesis
Doctoral Dissertation Electrical Engineering 2018
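A simple instance of the kind of anomaly detection rule described in chapter 3 (flagging "abnormal behavior" in a measurement stream) is a trailing-window z-score test; the window length and threshold below are illustrative choices, not the dissertation's actual microPMU rules:

```python
import math

def zscore_anomalies(samples, window=8, threshold=3.0):
    """Flag indices where a measurement deviates from the trailing-window
    mean by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(samples)):
        past = samples[i - window:i]
        mean = sum(past) / window
        var = sum((x - mean) ** 2 for x in past) / window
        std = math.sqrt(var) or 1e-9  # guard against a perfectly flat window
        if abs(samples[i] - mean) / std > threshold:
            flagged.append(i)
    return flagged

# Steady per-unit magnitude readings with one abrupt excursion
stream = [1.00, 1.01, 0.99, 1.00, 1.01, 1.00, 0.99, 1.00, 1.45, 1.00, 1.01]
print(zscore_anomalies(stream))  # → [8]
```

Real microPMU rules would combine several such tests over voltage, current, and phase-angle channels, but each individual rule reduces to flagging deviations from recently observed behavior.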
APA, Harvard, Vancouver, ISO, and other styles
41

LÁCHOVÁ, Veronika. "Plán na znovuobnovení kritické infrastruktury na místní úrovni." Master's thesis, 2008. http://www.nusl.cz/ntk/nusl-49428.

Full text
Abstract:
Critical infrastructure (CI) is one of the most important branches in crisis management. In recent years, CI demonstrated its importance on many occasions. My work is focused on the analysis of risks that can pose a threat to CI. This is because the most important part of securing CI is the prevention of, and preparation for, interruptions or damage. The risks in this part were specified using a FRAP analysis. I also identify the main reciprocal dependencies of all CI sectors to demonstrate which sector is the most important one. Finally, I specify the process of recovering or restoring these CI sectors. A new method was intentionally used for analytical-synthetic models. This method, in conjunction with the use of a FRAP analysis, is new to the branch of crisis management and was used for the first time. The main aim of my work is to improve knowledge about CI at a regional level, namely the city of České Budějovice. I submit new ways of identifying risks and resolving problems during the recovery phase, with a focus on CI.
APA, Harvard, Vancouver, ISO, and other styles
42

Pinto, Samantha Theresa. "Optimizing Airport Runway Performance by Managing Pavement Infrastructure." Thesis, 2012. http://hdl.handle.net/10012/6903.

Full text
Abstract:
The research described herein is composed of four major areas of practice. It examines the overall performance of runways and provides tools designed to improve current runway operations and management, with particular emphasis on contaminated surfaces. Presented in this thesis is an overview of how to design airport pavements in order to achieve optimal friction, specifically focusing on material selection and construction techniques for rigid and flexible pavements. Rubber buildup, and the impact rubber accumulation has on decreasing runway friction, particularly in a range of climatic conditions, are discussed. Four commonly used rubber removal techniques are presented and evaluated. Through this research, an analytical hierarchy process (AHP) decision making protocol was developed for incorporation into airport pavement management systems (APMS). Runway surface condition reporting practices used at the Region of Waterloo International Airport are evaluated and recommendations for improving current practices are identified. Runway surface condition reporting can be improved by removing subjectivity, reporting conditions to pilots in real time, standardizing terminology and measurement techniques, and including runway pictures or sketches to identify contaminant locations where possible. Reports should be incorporated and stored in the APMS. Aircraft braking systems and their effects on landing distances under contaminated conditions are discussed. This thesis presents a proposed solution for monitoring and measuring contaminated runway surfaces and identifying the risks associated with aircraft landing, using the Braking Availability Tester (BAT). Also proposed in this thesis is a testing framework for validating the Braking Availability Tester. The proposed BAT measures the interaction between aircraft antiskid braking systems and runway contaminants to determine landing distances more accurately.
Finally, this thesis includes a discussion explaining how pavement design, contaminant removal, results from friction tests, and results from the BAT can be incorporated into airport pavement management systems. APMS data can be analyzed to economically optimize and prioritize scheduling of pavement maintenance, preservation and rehabilitation treatments to maintain a high level of service, thereby contributing to runway safety and optimization.
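The analytical hierarchy process (AHP) protocol mentioned above turns pairwise importance judgements into priority weights. A minimal sketch using the common geometric-mean approximation of the principal eigenvector (the criteria and judgement values are invented for illustration):

```python
import math

def ahp_weights(matrix):
    """Priority weights from a pairwise-comparison matrix via the
    geometric-mean (row product) approximation of the principal eigenvector."""
    n = len(matrix)
    gms = [math.prod(row) ** (1.0 / n) for row in matrix]  # geometric mean of each row
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical judgements for three runway-maintenance criteria:
# friction restoration is judged 3x as important as cost and 5x as important as downtime
A = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]
w = ahp_weights(A)
print([round(x, 3) for x in w])
```

A full AHP application would also compute Saaty's consistency ratio to check that the judgements are not self-contradictory before using the weights to rank maintenance alternatives in the APMS.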
APA, Harvard, Vancouver, ISO, and other styles
43

Masotto, Nicola. "La Valutazione Ambientale nelle Trasformazioni Territoriali in Ambito Alpino: Applicazione del Metodo AHP." Doctoral thesis, 2018. http://hdl.handle.net/11577/3262836.

Full text
Abstract:
The activities carried out for this research followed a methodological path that starts from a question of a cultural nature: which planning "strategies" can be adopted in an Alpine geographic area affected by marked depopulation despite a certain economic maturity, as in the case of the province of Belluno. To this end, the possible "policies" that could be developed to address this critical condition were investigated, starting from the observation that the geographic isolation of this territory is caused by the absence of an Alpine pass to the north and by the infrastructure deficit of the entire province. The topics addressed are the following: accessibility, in order to understand how much it can influence territorial disparities; the social demands of the Belluno territory concerning accessibility towards the north; the transport scenarios taking shape at the strategic level with regard to national and international connections (what might be called the "new geographies of communications"); and the construction and application of a decision-support evaluation model, the Analytic Hierarchy Process (AHP), within a Strategic Environmental Assessment (SEA, Italian VAS) process. The last aspect, in particular, was developed by identifying an evaluation model, among the many present in the literature, capable of improving the effectiveness of the SEA process, the general method for assessing the sustainability of programming processes (such as territorial development "policies") and planning processes. In this sense, attention was focused on the AHP, which was developed not only to test its effectiveness as an evaluation tool, i.e., one capable of providing meaningful results in decision support, but also to verify whether this model can be placed within the general SEA process.
This experiment in integrating the AHP model into the SEA thus made it possible to demonstrate that the AHP can make the SEA more effective (if the AHP model is applied in the ex ante phase of the SEA), increasing its strategic significance in verifying the sustainability of large-scale planning and programming actions.
APA, Harvard, Vancouver, ISO, and other styles