
Dissertations / Theses on the topic 'Analytics'


Consult the top 50 dissertations / theses for your research on the topic 'Analytics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Weida, Petr. "Využití Google Analytics v eshopu." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-85117.

Abstract:
The main aim of this work is to describe and explain the possibilities of web analytics as a tool for effectively evaluating the performance of a website and its campaigns. It will benefit anyone who makes decisions about further web development and marketing based on measured data. The work is divided into three main parts: the first describes how to set up Google Analytics to measure a website, the second shows how to read and interpret the measured data, and the third, practical part applies both to an analysis of the prohifi.cz shop.
2

Kruse, Gustav, Lotta Åhag, Samuel Dahlback, and Albin Åbrink. "Seco Analytics." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-414862.

Abstract:
Forecasting is a powerful tool that can enable companies to save millions in revenue every year, provided the forecast is good enough. The problem lies in the "good enough" part. Many companies today use Excel to predict their future sales and trends. While this is a start, it is far from optimal. Seco Analytics aims to solve this issue by forecasting in an informative and easy manner. The web application uses the ARIMA analysis method to accurately calculate the trend for any country and product area selection. It also features external data that allows the user to compare internal data with relevant external data such as GDP, and to calculate the correlation for the countries and product areas selected. This thesis describes the development process of the Seco Analytics application.
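As a rough illustration of the ARIMA-based trend and correlation analysis this abstract describes, here is a minimal sketch assuming statsmodels and synthetic monthly data; the series, the ARIMA order, and the GDP stand-in are invented for the example, not taken from the thesis.

```python
# Hedged sketch: ARIMA trend forecasting plus a correlation check against an
# external indicator, in the spirit of the abstract above. All data synthetic.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
sales = np.cumsum(rng.normal(1.0, 5.0, 48)) + 100.0  # 48 months of fake sales
gdp = 0.8 * sales + rng.normal(0.0, 10.0, 48)        # fake external indicator

# Fit an ARIMA(1,1,1); a real application would select the order per series.
model = ARIMA(sales, order=(1, 1, 1)).fit()
print("6-month forecast:", np.round(model.forecast(steps=6), 1))

# Correlation between the internal series and the external one (e.g. GDP).
print("sales-GDP correlation:", round(float(np.corrcoef(sales, gdp)[0, 1]), 2))
```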
3

Santiteerakul, Wasana. "Trajectory Analytics." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc801885/.

Abstract:
The numerous surveillance videos recorded by a single stationary wide-angle-view camera motivate the use of a moving point as the representation of each small object in a wide video scene. The sequence of positions of each moving point can be used to generate a trajectory containing both spatial and temporal information about the object's movement. In this study, we investigate how the relationship between two trajectories can be used to recognize multi-agent interactions. For this purpose, we present a simple set of qualitative, atomic, disjoint trajectory-segment relations which can be utilized to represent the relationships between two trajectories. Given a pair of adjacent concurrent trajectories, we segment the trajectory pair to get an ordered sequence of related trajectory segments. Each pair of corresponding trajectory segments is then assigned a token associated with the trajectory-segment relation, which leads to the generation of a string called a pairwise trajectory-segment relationship sequence. From a group of pairwise trajectory-segment relationship sequences, we utilize an unsupervised learning algorithm, in particular k-medians clustering, to detect interesting patterns that can be used to classify lower-level multi-agent activities. We evaluate the effectiveness of the proposed approach by comparing the activity classes predicted by our method to the actual classes from a ground-truth set obtained through crowdsourcing. The results show that the relationships between a pair of trajectories can signify low-level multi-agent activities.
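To make the clustering step concrete, here is a toy sketch, assuming a hypothetical five-token relation alphabet and a bag-of-tokens encoding; it runs k-medians (coordinate-wise medians under L1 distance) over relationship sequences, but it is not the study's implementation.

```python
# Toy sketch (assumptions, not the thesis code): encode each pairwise
# trajectory-segment relationship sequence as a token-count vector, then
# cluster with k-medians.
import numpy as np

TOKENS = list("ABCDE")   # hypothetical alphabet of atomic segment relations

def to_vector(seq):
    """Count how often each relation token occurs in the sequence."""
    return np.array([seq.count(t) for t in TOKENS], dtype=float)

def k_medians(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each vector to the nearest center under L1 distance.
        labels = np.argmin(np.abs(X[:, None, :] - centers[None]).sum(-1), axis=1)
        # Move each center to the coordinate-wise median of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = np.median(X[labels == j], axis=0)
    return labels

seqs = ["AAB", "ABB", "CDE", "CCE", "AAAB", "DDE"]
labels = k_medians(np.stack([to_vector(s) for s in seqs]), k=2)
print(dict(zip(seqs, labels.tolist())))
```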
4

Mužík, Zbyněk. "Web Analytics." Master's thesis, Vysoká škola ekonomická v Praze, 2006. http://www.nusl.cz/ntk/nusl-295.

Abstract:
The thesis deals with measuring indicators related to the operation of websites and web applications, and with the technological tools serving that purpose: Web Analytics (WA). The main goal of the thesis is to test and compare selected representatives of these tools against objective criteria, and to critically assess the capabilities of WA tools in general. The first part describes the various ways of measuring traffic on the WWW and defines the related metrics. It also provides an overview of the available WA tools. An evaluation model for WA tools is then constructed, and six representative tools are rated against it. The evaluation takes the form of user testing on data from two real websites. Based on their preferences, the owners of these two websites are given a recommendation on the choice of a suitable WA tool. A further output of the thesis is the set of reports generated by the tested tools, describing activity on the examined websites.
5

Casari, Alice. "Business Analytics." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3846/.

6

Sosa, Fidel M. Eng Massachusetts Institute of Technology. "TaleBlazer analytics : automated anonymous analytics of mobile users' behavior." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91872.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 75).
TaleBlazer is an augmented-reality platform that lets users create location-based games for their mobile devices. In order to determine the efficacy and use cases for TaleBlazer games, it is necessary to capture data about user behavior. This thesis presents TaleBlazer Analytics, an automated system which collects and analyzes mobile users' behavior in TaleBlazer games. It details the development of the TaleBlazer Analytics system, comprised of the backend data collection service and the front-end data analysis user interface.
by Fidel Sosa.
M. Eng.
7

Carle, William R. II. "Active Analytics: Adapting Web Pages Automatically Based on Analytics Data." UNF Digital Commons, 2016. http://digitalcommons.unf.edu/etd/629.

Abstract:
Web designers are expected to perform the difficult task of adapting a site’s design to fit changing usage trends. Web analytics tools give designers a window into website usage patterns, but they must be analyzed and applied to a website's user interface design manually. A framework for marrying live analytics data with user interface design could allow for interfaces that adapt dynamically to usage patterns, with little or no action from the designers. The goal of this research is to create a framework that utilizes web analytics data to automatically update and enhance web user interfaces. In this research, we present a solution for extracting analytics data via web services from Google Analytics and transforming them into reporting data that will inform user interface improvements. Once data are extracted and summarized, we expose the summarized reports via our own web services in a form that can be used by our client side User Interface (UI) framework. This client side framework will dynamically update the content and navigation on the page to reflect the data mined from the web usage reports. The resulting system will react to changing usage patterns of a website and update the user interface accordingly. We evaluated our framework by assigning navigation tasks to users on the UNF website and measuring the time it took them to complete those tasks, one group with our framework enabled, and one group using the original website. We found that the group that used the modified version of the site with our framework enabled was able to navigate the site more quickly and effectively.
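A minimal sketch of the transformation step only, under assumptions: the report shape and paths below are hypothetical stand-ins for the summarized Google Analytics reports the framework exposes, and the ranking logic is simplified.

```python
# Hedged sketch: turn a summarized analytics report into a ranked navigation
# list. The report shape and the paths are hypothetical, not the actual UNF
# site or the Google Analytics API.
import json

report_json = json.loads("""
{"rows": [
  {"path": "/admissions", "pageviews": 9120},
  {"path": "/library",    "pageviews": 4310},
  {"path": "/athletics",  "pageviews": 7805}
]}
""")

def rank_navigation(report, top_n=3):
    """Order navigation links by observed pageviews, most visited first."""
    rows = sorted(report["rows"], key=lambda r: r["pageviews"], reverse=True)
    return [r["path"] for r in rows[:top_n]]

# A client-side UI framework would consume this ordering to reorder or
# highlight the page's navigation elements.
print(rank_navigation(report_json))
```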
8

Dibrova, Alisa. "Web analytics. Website analysis with Google Analytics and Yandex Metrics." Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-22200.

Abstract:
The scope of my research is web analytics. This paper describes the process of usability analysis of the website of Sharden Hus, a company situated in Stockholm. From the many existing web analysis tools, I chose two of the most popular ones, Google Analytics and Yandex Metrics. In similar projects that I have read, website redesign was based on both quantitative (statistical) and qualitative (user interviews, user tests) data. In contrast to previously carried out projects on website improvement with the help of similar tools, I decided to base the changes to the website only on quantitative data obtained with the Google and Yandex counters. This was done in order to determine whether, and how, the Google and Yandex tools can improve website performance, and to see whether web analytics counters can provide statistical data sufficient for correct interpretation by a web analytics designer, leading to improved website performance. The results of my study showed that Google and Yandex counters, isolated from qualitative methods, can improve website performance. In particular, the number of visits from Sweden almost doubled, the overall bounce rate fell, and the number of visits to the page containing order forms increased significantly.
9

Edris, Sarah R. "Improving TaleBlazer analytics." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106026.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 61).
TaleBlazer is a platform for creating and playing augmented reality location-based mobile games. TaleBlazer Analytics is an automated system for collecting and analyzing anonymized player data from these games. This thesis presents additions and improvements made to TaleBlazer Analytics to allow for a more in-depth view of data from individual games, as well as aggregated across games. The updated system will ultimately help researchers, game designers, partner organizations, and the TaleBlazer development team in better understanding how users play TaleBlazer games.
by Sarah R. Edris.
M. Eng.
10

Nau, Alexandra. "Social Media Analytics." Universität Leipzig, 2018. https://ul.qucosa.de/id/qucosa%3A31862.

Abstract:
The thesis examines a total of 25 free social media analytics tools and contributes to a systematic assessment of this class of application system within the framework of social customer relationship management. Contents: 1 Introduction (motivation, problem statement, approach); 2 Foundations (social media, social CRM, social media analysis, software analysis, prototyping); 3 Analysis of social media analysis tools (brief presentation of the individual tools, core functionality of free SMA applications, realizable use cases in social CRM, comparison with the functionality of a paid SMA application, discussion of differences); 4 Development of a selection aid (preliminary considerations, implementation, description); 5 Findings (results, deficits); 6 Outlook.
11

Nagin, Gleb. "Competing on analytics." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-164067.

Abstract:
Business analytics refers to the skills, technologies, applications and practices for continuous iterative exploration and investigation of past business performance to gain insight and drive business planning. Business analytics focuses on developing new insights and understanding of business performance based on data and statistical methods. Business intelligence traditionally focuses on using a consistent set of metrics both to measure past performance and to guide business planning, likewise based on data and statistical methods. Analytics makes extensive use of data, statistical and quantitative analyses, explanatory and predictive modeling, and fact-based management to drive decision making. Analytics may serve as input for human decisions or may drive fully automated decisions. In other words, querying, reporting, OLAP, and alert tools can answer questions such as what happened, how many, how often, where the problem is, and what actions are needed. Business analytics can answer questions like: why is this happening, what if these trends continue, what will happen next, and what is the best that can happen (optimization)? Examples of analytics applied in different areas: banks use data analysis to differentiate among customers based on credit risk and other characteristics, and to match them with appropriate product offerings. Harrah's (renamed Caesars Entertainment in 2010; a gaming corporation that owns and operates over 50 casinos and hotels and seven golf courses under several brands) uses analytics for customer loyalty programs. Deere & Company (a manufacturer of agricultural machinery such as tractors, combine harvesters and sprayers) saved more than $1 billion by implementing a new analytical tool to better optimize inventory. Domains with established analytics models include sales and retail, financial services, risk and credit, marketing, fraud, pricing, telecommunications, supply chain, transportation and many others.
12

Poppe, Olga. "Event stream analytics." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-dissertations/530.

Abstract:
Advances in hardware, software and communication networks have enabled applications to generate data at unprecedented volume and velocity. An important type of such data is the event stream, generated from financial transactions, health sensors, web logs, social media, mobile devices, and vehicles. The world is thus poised for a sea change in time-critical applications, from financial fraud detection to health care analytics, empowered by inferring insights from event streams in real time. Event processing systems continuously evaluate massive workloads of Kleene queries to detect and aggregate event trends of interest. Examples of these trends include check kites in financial fraud detection, irregular heartbeats in health care analytics, and vehicle trajectories in traffic control. These trends can be of any length; worse yet, their number may grow exponentially in the number of events. State-of-the-art systems do not offer practical solutions for trend analytics and thus suffer from long delays and high memory costs. In this dissertation, we propose the following event trend detection and aggregation techniques. First, we solve the trade-off between CPU processing time and memory usage while computing event trends over high-rate event streams: our event trend detection approach guarantees minimal CPU processing time given limited memory. Second, we compute online event trend aggregation at multiple granularity levels, from fine (per matched event) to medium (per event type) to coarse (per pattern), minimizing the number of aggregates and reducing both time and space complexity compared to the state-of-the-art approaches. Third, we share intermediate aggregates among multiple event sequence queries while avoiding the expensive construction of matched event sequences. In several comprehensive experimental studies, we demonstrate the superiority of the proposed strategies over state-of-the-art techniques with respect to latency, throughput, and memory costs.
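The exponential blow-up and the aggregation-without-construction idea can be illustrated with a toy example (not the dissertation's algorithm): counting, online, the event trends that match the Kleene pattern SEQ(A+, B) without materializing any matched sequence.

```python
# Illustrative sketch: count event trends matching SEQ(A+, B) in one pass.
# Enumerating the trends themselves can take exponential time and space;
# aggregating online stays O(1) per event.
def count_trends(stream):
    a_trends = 0   # number of nonempty subsequences of A events seen so far
    total = 0      # completed trends: some A-subsequence followed by a B
    for event in stream:
        if event == "A":
            # Each new A either starts a trend or extends every existing one.
            a_trends = 2 * a_trends + 1
        elif event == "B":
            total += a_trends
    return total

# Three A's admit 2**3 - 1 = 7 A-subsequences, each closed by the single B.
print(count_trends("AAAB"))  # -> 7
```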
13

Endert, Alex. "Semantic Interaction for Visual Analytics: Inferring Analytical Reasoning for Model Steering." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/28265.

Abstract:
User interaction in visual analytic systems is critical to enabling visual data exploration. Through interacting with visualizations, users engage in sensemaking, a process of developing and understanding relationships within datasets through foraging and synthesis. For example, two-dimensional layouts of high-dimensional data can be generated by dimension reduction models, and provide users with an overview of the relationships between information. However, exploring such spatializations can require expertise with the internal mechanisms and parameters of these models. The core contribution of this work is semantic interaction, capable of steering such models without requiring expertise in dimension reduction models, but instead leveraging the domain expertise of the user. Semantic interaction infers the analytical reasoning of the user with model updates, steering the dimension reduction model for visual data exploration. As such, it is an approach to user interaction that leverages interactions designed for synthesis, and couples them with the underlying mathematical model to provide computational support for foraging. As a result, semantic interaction performs incremental model learning to enable synergy between the user's insights and the mathematical model. The contributions of this work are organized by providing a description of the principles of semantic interaction, providing design guidelines through the development of a visual analytic prototype, ForceSPIRE, and the evaluation of the impact of semantic interaction on the analytic process. The positive results of semantic interaction open a fundamentally new design space for designing user interactions in visual analytic systems. This research was funded in part by the National Science Foundation, CCF-0937071 and CCF-0937133, the Institute for Critical Technology and Applied Science at Virginia Tech, and the National Geospatial-Intelligence Agency contract #HMI1582-05-1-2001.
Ph. D.
14

Dakela, Sibongiseni. "Web analytics strategy: a model for adopting and implementing advanced Web Analytics." Doctoral thesis, University of Cape Town, 2011. http://hdl.handle.net/11427/10288.

Abstract:
Includes bibliographical references (leaves 290-306).
Web Analytics (WA) is an evaluative technique originating from and driven by business in its need to get more value out of understanding the usage of its Web sites and the strategies therein. It is the measurement, collection, analysis and reporting of Internet data for the purposes of understanding and optimising Web usage for the online visitor, the online customer and the business with a Web site presence. Current WA practice is criticised because it involves mostly raw statistics, and the practice therefore tends to be inconsistent and misleading. Using grounded action research, personal observations and a review of online references, the study reviews the current state of WA to propose an appropriate model and guidelines for Web Analytics adoption and implementation in an electronic commerce organisation dealing with online marketing.
15

Koza, Jacob. "Active Analytics: Suggesting Navigational Links to Users Based on Temporal Analytics Data." UNF Digital Commons, 2019. https://digitalcommons.unf.edu/etd/892.

Abstract:
Front-end developers are tasked with keeping websites up-to-date while optimizing user experiences and interactions. Tools and systems have been developed to give these individuals granular analytic insight into who is interacting with their sites, with what, and how. These systems maintain a historical record of user interactions that can be leveraged for design decisions. Developing a framework to aggregate those historical usage records and using it to anticipate user interactions on a webpage could automate the task of optimizing web pages. In this research, a system called Active Analytics was created that takes Google Analytics historical usage data and provides a dynamic front-end system for automatically updating web page navigational elements. The previous year's data is extracted from Google Analytics and transformed into a summarization of top navigation steps. Once it is stored, a responsive front-end system selects from this data a three-week span from the previous year: the current week, the previous and the next. The most frequently reached pages, or their parent pages, have their navigational UI elements highlighted on a top-level or landing page, to reduce the effort needed to reach those pages. The Active Analytics framework was evaluated by recruiting volunteers and randomly assigning them one of two versions of a site, one with the framework and one without. It was found that users of the framework-enabled site were able to navigate the site more easily than with the original.
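A small sketch of the temporal selection as described, under the assumption that the three weeks are anchored to the Monday of the matching week one year earlier; leap-day handling is ignored.

```python
# Sketch of the assumed window logic: given today's date, pick the matching
# three-week span from the previous year (previous, current and next week).
from datetime import date, timedelta

def analytics_window(today):
    """Return (start, end) of the matching 3-week window one year back."""
    last_year = today.replace(year=today.year - 1)   # naive; Feb 29 would raise
    week_start = last_year - timedelta(days=last_year.weekday())  # that Monday
    return week_start - timedelta(weeks=1), week_start + timedelta(weeks=2)

start, end = analytics_window(date(2019, 4, 10))
print(start, "to", end)   # three weeks of last year's usage data
```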
16

Carpani, Valerio. "CNN-based video analytics." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Abstract:
The content of this thesis illustrates the six months' work done during my internship at TKH Security Solutions - Siqura B.V. in Gouda, Netherlands. The aim of this thesis is to investigate possible uses of convolutional neural networks from two different points of view: first, we propose a novel algorithm for person re-identification; second, we propose a deployment chain for bringing research concepts to product-ready solutions. In existing works, the person re-identification task is assumed to be independent of the person detection task. In this thesis, instead, we consider the two tasks as linked. In fact, features produced by an object detection convolutional neural network (CNN) contain useful information which is not used by current re-identification methods. We propose several solutions for learning a metric on CNN features to distinguish between different identities. The best of these solutions is then compared with state-of-the-art alternatives on the popular Market-1501 dataset. Results show that our method outperforms them in computational efficiency, with only a reasonable loss in accuracy. For this reason, we believe the proposed method can be more appropriate than current state-of-the-art methods in situations where computational efficiency is critical, such as embedded applications. The deployment chain we propose has two main goals: it must be flexible enough to accommodate new advances in network architecture, and it must be able to deploy neural networks on both server and embedded platforms. We tested several frameworks on several platforms and arrived at a deployment chain that relies on the open-source ONNX format.
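As a sketch of matching identities on detector features: the thesis learns a metric on CNN features, whereas the toy below substitutes plain cosine similarity on random vectors, so it only illustrates the matching step, not the proposed method.

```python
# Minimal sketch under stated assumptions: re-identify a person by comparing
# feature vectors with cosine similarity. Plain cosine is a stand-in for the
# learned metric described in the abstract.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
gallery = {f"id_{i}": rng.normal(size=256) for i in range(5)}  # stored identities
query = gallery["id_3"] + rng.normal(scale=0.1, size=256)      # noisy new sighting

best = max(gallery, key=lambda k: cosine(gallery[k], query))
print("matched identity:", best)  # expected: id_3
```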
17

Le, Quoc Do. "Approximate Data Analytics Systems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-234219.

Abstract:
Today, most modern online services make use of big data analytics systems to extract useful information from the raw digital data. The data normally arrives as a continuous data stream at a high speed and in huge volumes. The cost of handling this massive data can be significant. Providing interactive latency in processing the data is often impractical due to the fact that the data is growing exponentially and even faster than Moore's law predictions. To overcome this problem, approximate computing has recently emerged as a promising solution. Approximate computing is based on the observation that many modern applications are amenable to an approximate, rather than the exact, output. Unlike traditional computing, approximate computing tolerates lower accuracy to achieve lower latency by computing over a partial subset instead of the entire input data. Unfortunately, the advancements in approximate computing are primarily geared towards batch analytics and cannot provide low-latency guarantees in the context of stream processing, where new data continuously arrives as an unbounded stream. In this thesis, we design and implement approximate computing techniques for processing and interacting with high-speed and large-scale stream data to achieve low latency and efficient utilization of resources. To achieve these goals, we have designed and built the following approximate data analytics systems:
• StreamApprox—a data stream analytics system for approximate computing. This system supports approximate computing for low-latency stream analytics in a transparent way and has an ability to adapt to rapid fluctuations of input data streams. In this system, we designed an online adaptive stratified reservoir sampling algorithm to produce approximate output with bounded error.
• IncApprox—a data analytics system for incremental approximate computing. This system adopts approximate and incremental computing in stream processing to achieve high throughput and low latency with efficient resource utilization. In this system, we designed an online stratified sampling algorithm that uses self-adjusting computation to produce an incrementally updated approximate output with bounded error.
• PrivApprox—a data stream analytics system for privacy-preserving and approximate computing. This system supports high-utility and low-latency data analytics and preserves the user's privacy at the same time. The system is based on the combination of privacy-preserving data analytics and approximate computing.
• ApproxJoin—an approximate distributed joins system. This system improves the performance of joins, critical but expensive operations in big data systems. In this system, we employed a sketching technique (Bloom filter) to avoid shuffling non-joinable data items through the network, and proposed a novel sampling mechanism that executes during the join to obtain an unbiased representative sample of the join output.
Our evaluation based on micro-benchmarks and real-world case studies shows that these systems can achieve significant performance speedups compared to state-of-the-art systems while tolerating negligible accuracy loss in the analytics output. In addition, our systems allow users to systematically trade off accuracy against throughput and latency, and require no or minor modifications to existing applications.
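For flavor, a minimal fixed-size stratified reservoir sampler (classic Algorithm R run per stratum); StreamApprox's sampler is adaptive and comes with error bounds, which this sketch omits.

```python
# Sketch of the core idea only: one classic reservoir (Algorithm R) per
# stratum, giving a uniform fixed-size sample of each substream.
import random

def stratified_reservoir(stream, per_stratum=3, seed=42):
    random.seed(seed)
    reservoirs, seen = {}, {}
    for stratum, item in stream:                 # stream of (stratum, value)
        seen[stratum] = seen.get(stratum, 0) + 1
        res = reservoirs.setdefault(stratum, [])
        if len(res) < per_stratum:
            res.append(item)
        else:
            j = random.randrange(seen[stratum])  # keep with prob k/n
            if j < per_stratum:
                res[j] = item
    return reservoirs

stream = [("web", i) for i in range(100)] + [("mobile", i) for i in range(10)]
print(stratified_reservoir(stream))
```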
18

(UPC), Universidad Peruana de Ciencias Aplicadas. "Digital Analytics - SI367 201801." Universidad Peruana de Ciencias Aplicadas (UPC), 2018. http://hdl.handle.net/10757/623256.

Abstract:
A practice-oriented specialty course on Digital Analytics in the Information Systems Engineering degree programme, aimed at sixth-cycle students. It seeks to develop the general competence of quantitative reasoning and the specific competence of identifying the impact of engineering solutions in the global, economic and societal context, in line with ABET Student Outcome (h). Business leaders compete to turn information extracted from data into meaningful results, and the most successful apply analytics across their entire organization to make smarter decisions, act quickly and optimize outcomes. In this course, students therefore learn to analyze the available data on an organization's online activity and turn it into conclusions of value for the business; they learn how analysts describe, predict and inform business decisions, how to measure a website effectively and take advantage of the different tools, and how to understand the differences between metrics, KPIs and report categories. Students develop an analytical mindset that helps them make data-driven strategic decisions. The course is proposed as a discipline that delivers actionable insights, so that students can apply what they have learned in their own ventures.
19

Johnson, Kris (Kris Dianne). "Analytics for online markets." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98571.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 147-153).
Online markets are becoming increasingly important in today's world as more people gain access to the internet. Furthermore, the explosion of data that is collected via these online markets provides us with new opportunities to use analytics techniques to design markets and optimize tactical decisions. In this thesis, we focus on two types of online markets -- peer-to-peer networks and online retail markets -- to show how using analytics can make a valuable impact. We first study scrip systems which provide a non-monetary trade economy for exchange of resources; their most common application is in governing online peer-to-peer networks. We model a scrip system as a stochastic game and study system design issues on selection rules to match trade partners over time. We show the optimality of one particular rule in terms of maximizing social welfare for a given scrip system that guarantees players' incentives to participate, and we investigate the optimal number of scrips to issue under this rule. In the second part, we partner with Rue La La, an online retailer in the online flash sales industry where they offer extremely limited-time discounts on designer apparel and accessories. One of Rue La La's main challenges is pricing and predicting demand for products that it has never sold before. To tackle this challenge, we use machine learning techniques to predict demand of new products and develop an algorithm to efficiently solve the subsequent multi-product price optimization. We then create and implement this algorithm into a pricing decision support tool for Rue La La's daily use. We conduct a controlled field experiment which estimates an increase in revenue of the test group by approximately 10%. Finally, we extend our work with Rue La La to address a more dynamic setting where a retailer may choose to change the price of a product throughout the course of the selling season. We have developed an algorithm that extends the well-known multi-armed bandit algorithm called Thompson Sampling to consider a retailer's limited inventory constraints. Our algorithm has promising numerical performance results when compared to other algorithms developed for the same setting.
by Kris Johnson.
Ph. D.
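A hedged sketch of the baseline the last part builds on: standard Bernoulli Thompson Sampling over candidate prices with a naive stop at zero inventory. The thesis's algorithm handles the inventory constraint properly; the demand probabilities below are invented.

```python
# Illustrative baseline only, not the thesis's algorithm: Thompson Sampling
# picks the price whose sampled conversion rate yields the highest expected
# revenue, and stops when simulated inventory runs out.
import random

prices = [29.0, 39.0, 49.0]
true_buy_prob = {29.0: 0.30, 39.0: 0.20, 49.0: 0.10}   # hidden demand model
alpha = {p: 1 for p in prices}                         # Beta posterior params
beta = {p: 1 for p in prices}
inventory, revenue = 50, 0.0

random.seed(7)
while inventory > 0:
    sampled = {p: random.betavariate(alpha[p], beta[p]) * p for p in prices}
    p = max(sampled, key=sampled.get)                  # highest sampled revenue
    sold = random.random() < true_buy_prob[p]          # simulated customer
    if sold:
        alpha[p] += 1
        inventory -= 1
        revenue += p
    else:
        beta[p] += 1

print(f"revenue with 50 units: {revenue:.0f}")
```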
20

Canon, Moreno Javier Mauricio 1977. "Leading data analytics transformations." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111472.

Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 77-79).
The phenomenal success of big technology companies founded with a strong emphasis on data, has epitomized the rise of the new "digital economy." Large traditional organizations, that were not long ago "on top of the world" are now at a crossroads. Their business models seem threatened by newcomers as they face pressure to "transform" and "modernize." Publicity has reinforced the perception that data can now be exploited and turned into a source of competitive advantage. In this context, data analytics presumably offers a vehicle to hasten this transformation. Who are the individuals leading these transformation efforts? Where do they come from? What are their challenges and perspectives? This thesis attempted to answer these questions and by doing so, uncover the "faces behind the leadership titles." Interviews of 33 individuals leading data analytics in large traditional organizations and under different capacities, (i.e., at the C-Suite, at the senior leadership level and in middle management) had a few elements in common: They articulated the difficulty of change, and the significant challenges in balancing strategic design with political savviness and cultural awareness. At their core, these are true leadership stories. Change management processes and the "Three Perspectives on Organizations" framework offer mechanisms to better understand the root causes for inhibitors of transformation and provide a path to guide data analytics initiatives. Whether data analytics proves to be a "passing fad" or not, by now, it has served as a catalyst for large traditional organizations to embark on transformation initiatives and reexamine ways to remain relevant. Leadership stories will most certainly abound as these organizations attempt to find ways to survive and prosper in what is now the "digital age."
by Javier Mauricio Canon Moreno.
M.B.A.
21

Louth, Richard James. "Essays in quantitative analytics." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608849.

22

Steyn, H. J. "Advanced analytics strategy formulation." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/96091.

Abstract:
Thesis (MCom)--Stellenbosch University, 2014.
Despite the potentially large impact of advanced analytics on the performance of businesses around the world, its uptake and its application in an integrated and strategically aligned manner have been limited. The problem is more pronounced with specific reference to optimization: optimization methods lag behind other analytical methods, such as data visualization and predictive models, in their level of adoption in organizations. This research suggests that part of the problem of limited application and integration lies in an overall inability of companies to develop and implement an effective advanced analytics strategy. The primary objective of this research is therefore to establish an approach for developing an advanced analytics strategy for a company. In the absence of well-described examples or published research on the subject, it was necessary to generate insight and knowledge using a research approach that allowed a strategy to be developed, tested and improved over multiple cycles; such an approach presented itself in the form of action research. An initial advanced analytics strategy was developed for a subsidiary in a group of companies. The subsidiary specializes in the importation, distribution and marketing of industrial fasteners and has branches throughout South Africa. The strategy document was presented to the senior decision makers in the holding company for evaluation, and the feedback was used to formulate changes to the initial strategy aimed at improving its alignment with the decision makers' thinking on advanced analytics and increasing the probability of its implementation. The suggested changes from the first research cycle were used to define the second-cycle strategy framework, which included a strategy development process consisting of three main steps:
• Establishing business focus and relevance, which included an assessment of the value-creating potential of the business, identification and prioritization of value-creating opportunities, and an assessment of key underlying decision processes;
• Developing business-relevant concept applications, which included determining their potential value impact and creating a ranked pipeline of decision-optimization applications;
• Selecting concept applications and moving them into production.
The strategy development process was informed by a number of different models, methods and frameworks. The most important was a detailed valuation model of the company, which proved invaluable in identifying those aspects of the business where an improvement would yield the highest potential increase in shareholder value. The second-cycle strategy framework will be used to develop an improved version of the advanced analytics strategy for the researched company. Moreover, the generic nature of the framework allows it to be used in developing advanced analytics strategies for other companies.
23

Mai, Feng. "Essays in Business Analytics." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439295906.

24

Pesantez, Narvaez Jessica Estefania. "Risk Analytics in Econometrics." Doctoral thesis, Universitat de Barcelona, 2021. http://hdl.handle.net/10803/671864.

Abstract:
This thesis addresses the framework of risk analytics as a compendium of four main pillars: (i) big data, (ii) intensive programming, (iii) advanced analytics and machine learning, and (iv) risk analysis. Under the latter mainstay, the dissertation reviews potential hazards known as "extreme events" which can negatively impact the wellbeing of people, the profitability of firms, or the economic stability of a country, but which have been underestimated or treated incorrectly by traditional modelling techniques. The objective of this thesis is to develop econometric and machine learning algorithms that improve the predictive capacity for those extreme events and improve comprehension of the phenomena, in contrast to some modern advanced methods which are black boxes in terms of interpretation. The thesis presents seven chapters that make a methodological contribution to the existing literature by building techniques that transform the new valuable insights of big data into more accurate predictions that support decisions under risk, with increased robustness for more reliable and realistic results. The thesis focuses on extreme events encoded in a binary variable, commonly known as class-imbalanced data or rare events in binary response; in other words, data whose classes are not equally distributed. The research tackles real case studies in the field of risk and insurance, where it is highly important to specify the claim level of an event in order to foresee its impact and provide personalized treatment. After the introduction in Chapter 1, Chapter 2 proposes a weighting mechanism, incorporated in the weighted likelihood estimation of a generalized linear model, to improve the predictive performance in the highest and lowest deciles of prediction. Chapter 3 proposes two different weighting procedures for a logistic regression model with complex survey data or specifically designed sampling data, with the objective of controlling the randomness of the data and making the estimated model more sensitive. Chapter 4 offers a rigorous review of trials with modern and classical predictive methods, uncovering and discussing the efficiency of certain methods over others and showing which gaps in the machine learning literature can be addressed efficiently, and how. Chapter 5 proposes a novel boosting-based method that surpasses certain existing methods in predictive accuracy while recovering some interpretability of the model with imbalanced data. Chapter 6 develops another boosting-based algorithm which improves the predictive capacity for rare events and can be approximated as a generalized linear model in terms of interpretation. Finally, Chapter 7 contains the conclusions and final remarks. The thesis highlights the importance of developing alternative modelling algorithms that reduce uncertainty, especially when potential limitations impede knowing all the prior factors that influence the presence of a rare event or class imbalance. It merges two important approaches in the predictive modelling literature: econometrics and machine learning. All in all, this thesis contributes to enhancing the methodology of empirical analysis as practiced in many experimental and non-experimental sciences.
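In the spirit of the weighted estimation in Chapter 2, a minimal sketch assuming scikit-learn: a class-weighted logistic regression fitted to a synthetic rare-event sample. The generic class_weight option stands in for the thesis's own weighting mechanism.

```python
# Hedged sketch: up-weight the rare class in a logistic regression so its
# misclassification costs more in the fit. Data and weights are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 4))
logit = X @ np.array([1.5, -1.0, 0.5, 0.0]) - 4.0      # intercept makes 1s rare
y = (rng.random(5000) < 1 / (1 + np.exp(-logit))).astype(int)
print("event rate:", y.mean())                          # roughly a few percent

clf = LogisticRegression(class_weight={0: 1.0, 1: 10.0}).fit(X, y)
print("coefficients:", np.round(clf.coef_, 2))
```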
25

Soukup, Petr. "High-Performance Analytics (HPA)." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-165252.

Abstract:
The aim of this thesis on High-Performance Analytics is to give a structured overview of high-performance methods for data analysis. The introduction deals with definitions of primary and secondary data analysis, and with the primary systems that are not appropriate for analytical processing. The use of mobile devices, modern information technologies and other factors has rapidly changed the character of data. The major part of the thesis is devoted to the historical turn towards new approaches to analytical data processing, brought about by Big Data, a very frequent term these days. Towards the end of the thesis, the system resources that play a major role in the new approaches to analytical data processing, and in the technological solutions of High-Performance Analytics themselves, are discussed. The second, practical part of the thesis compares the performance of conventional methods for data analysis with one of the high-performance methods of High-Performance Analytics (specifically, In-Memory Analytics). The individual solutions are compared in the identical environment of a High-Performance Analytics server. The methods are applied to a data sample whose volume is increased after every round of measurement. The conclusion evaluates the test results and discusses the possible uses of the individual High-Performance Analytics methods.
26

Ahsan, Ramoza. "Time Series Data Analytics." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-dissertations/529.

Abstract:
Given the ubiquity of time series data, and the exponential growth of databases, there has recently been an explosion of interest in time series data mining. Finding similar trends and patterns among time series data is critical for many applications ranging from financial planning, weather forecasting, stock analysis to policy making. With time series being high-dimensional objects, detection of similar trends especially at the granularity of subsequences or among time series of different lengths and temporal misalignments incurs prohibitively high computation costs. Finding trends using non-metric correlation measures further compounds the complexity, as traditional pruning techniques cannot be directly applied. My dissertation addresses these challenges while meeting the need to achieve near real-time responsiveness. First, for retrieving exact similarity results using Lp-norm distances, we design a two-layered time series index for subsequence matching. Time series relationships are compactly organized in a directed acyclic graph embedded with similarity vectors capturing subsequence similarities. Powerful pruning strategies leveraging the graph structure greatly reduce the number of time series as well as subsequence comparisons, resulting in a several order of magnitude speed-up. Second, to support a rich diversity of correlation analytics operations, we compress time series into Euclidean-based clusters augmented by a compact overlay graph encoding correlation relationships. Such a framework supports a rich variety of operations including retrieving positive or negative correlations, self correlations and finding groups of correlated sequences. Third, to support flexible similarity specification using computationally expensive warped distance like Dynamic Time Warping we design data reduction strategies leveraging the inexpensive Euclidean distance with subsequent time warped matching on the reduced data. This facilitates the comparison of sequences of different lengths and with flexible alignment still within a few seconds of response time. Comprehensive experimental studies using real-world and synthetic datasets demonstrate the efficiency, effectiveness and quality of the results achieved by our proposed techniques as compared to the state-of-the-art methods.
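A simplified filter-and-refine sketch of the third idea, under assumptions: cheap Euclidean distance prunes candidates and full dynamic time warping re-ranks the survivors. The dissertation's reduction strategies and guarantees are more elaborate than this.

```python
# Hedged sketch: prune with Euclidean distance, refine survivors with DTW.
import numpy as np

def dtw(x, y):
    """Classic O(n*m) dynamic time warping distance."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

rng = np.random.default_rng(5)
query = np.sin(np.linspace(0, 6, 64))
candidates = [np.sin(np.linspace(0, 6, 64)) + rng.normal(0, s, 64)
              for s in (0.05, 0.5, 1.0, 2.0)]

# Filter: keep the 2 candidates closest in cheap Euclidean distance...
order = np.argsort([np.linalg.norm(query - c) for c in candidates])[:2]
# ...then refine: rank only those survivors by exact DTW.
best = min(order, key=lambda i: dtw(query, candidates[i]))
print("best match: candidate", int(best))
```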
27

Fors, Anton, and Emelie Ohlson. "Business analytics in traditional industries – tackling the new age of data and analytics." Thesis, Högskolan i Borås, Akademin för bibliotek, information, pedagogik och IT, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-10450.

Abstract:
Decision-making is no longer based on human preferences and expertise alone. The era of big data brings new challenges in business analytics for organizations that want a competitive advantage. Previous research shows that many studies have examined why this era is now crucial to organizations, but not how they can adapt to it. This case study offers a glimpse of how a traditional organization with an old mindset can catch up with the new technological advantages. The purpose of the study is to understand how a traditional company in Sweden is affected by analytics and whether analytics is valuable to it. To create our theoretical framework, we drew on peer-reviewed material as well as technology and science blogs from key experts in the field. The material examines the most essential and crucial elements within the area of business analytics and data management. The theoretical framework guided our work in formulating and refining the research question and the interview questions. The results of the study clearly show that our case company is on the right track with new development and projects, but there are still many milestones to reach before these are fulfilled. Issues within the company have to be solved, and the organization's culture needs to shift towards a more data-driven decision culture. The study gives a clear insight into the challenges that organizations have to face and overcome before making radical changes.
28

Gustafsson, Daniel. "Business Intelligence, Analytics and Human Capital: Current State of Workforce Analytics in Sweden." Thesis, Högskolan i Skövde, Institutionen för kommunikation och information, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-6034.

Abstract:
The way organizations make decisions today is very often based purely on intuition or gut feeling. Whether or not a decision carries high risk for the company's future, a manager's golden gut is the only thing that determines whether investments should be made. Analytics is the opposite of this intuition-based decision making. If taken seriously, almost all decisions in an organization are made on facts that are analytically derived from massive amounts of data from internal and external sources, from customer relationship systems to social networks. Business leaders are becoming more aware of analytically based decisions, and some use them more than others. Analytics is usually practiced in finance, customer relationships or marketing. There is, however, one area where analytics is practiced by only a small number of companies: the organization's workforce. The workforce is usually seen as one of the most complicated areas in which to practice analytics; an employee is, of course, more complicated than a product. Despite this, companies tend to forget that conducting analytics on employees is very similar to conducting analytics on customers, which has been practiced for many decades. Some organizations are showing great success with applications of Workforce Analytics (WA); most of them are located in the US or otherwise outside Sweden. This thesis investigates the extent to which Workforce Analytics is practiced in Sweden. Empirical findings show that some companies in Sweden do use WA, though not at the highest level of sophistication. They also show aspiration towards the idea of WA, and some are conducting various applications locally.
29

Hejl, Radomír. "Analytika obsahových webů." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-124785.

Abstract:
The thesis deals with the web analytics of content-based websites. Its primary aim is to design metrics for web analysis, together with ranges for those metrics, that allow the proprietor of content-based websites to evaluate the state of the web and its changes. This is followed by a practical example of working with the website metrics and of evaluating a web redesign with their help. The first and second chapters survey the literature on web analysis and specify the purpose of the thesis and its target group. In the sections that follow, I explain the theoretical starting points and major concepts in further detail. In the third chapter I describe the main goals of content-based websites, because the subsequently defined metrics should reflect and aim at these goals. I then highlight some problems specific to the analysis of content-based websites. The fifth chapter forms the crux of this work: first I define what makes a good metric, and then I present the design of the metrics for the analysis of content-based websites. For each proposed metric, I describe how to interpret its values, the possibilities for segmentation, and its relation to other metrics. The fifth chapter also includes an example of some of the metrics applied to real data from two content-based websites, with a description of how to work with these metrics.
30

Wex, Felix [Verfasser], and Dirk [Akademischer Betreuer] Neumann. "Coordination strategies and predictive analytics in crisis management = Koordinationsstrategien und Predictive Analytics im Krisenmanagement." Freiburg : Universität, 2013. http://d-nb.info/1114829102/34.

31

Akula, Venkata Ganesh Ashish. "Implementation of Advanced Analytics on Customer Satisfaction Process in Comparison to Traditional Data Analytics." University of Akron / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=akron1555612496986004.

32

Barracu, Maria Antonietta. "Tecniche, metodologie e strumenti per la Web Analytics, con particolare attenzione sulla Video Analytics." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amslaurea.unibo.it/1919/.

Abstract:
This thesis addresses the topic of video tracking, analysing the main techniques, methodologies and tools for video analytics. The entire work was carried out at the company BitBang, from the gathering of useful information and material to the writing of the thesis itself. At the same company I completed my internship, during which I deepened the practical aspects of web and video analytics, observing industry specialists at work and becoming familiar with data analysis tools through the use of the main web analytics platforms. To fully understand this subject, it was first necessary to know the basics of web analytics. The classic methodologies of web analytics are therefore illustrated, that is, how to analyse visitor behaviour on web pages with the metrics best suited to the different types of business, up to the newer technique of event tracking. Event tracking emerged shortly after multimedia content spread across web pages, which changed how users navigate and, consequently, created the need to track the new actions generated on such content in order to obtain a complete picture of the visitors' experience on the site. The data obtained with traditional web analytics methods are no longer sufficient; they must be integrated with new techniques, which are indispensable for obtaining a 360-degree overview of everything that happens on the site. This is where video tracking, called video analytics, is introduced. The main metrics for the analysis are illustrated, along with how to exploit them best depending on the type of website and the business purpose for which the video is used. To understand how to exploit video as a marketing tool and analyse visitor behaviour on it, it is first necessary to take a step back and review the main aspects related to video: from its production, to its insertion into web pages and the players used to do so, to its distribution through social network sites and across all the new devices and platforms connected to the network. In this respect, a general overview of the more technical aspects is provided, showing the differences between file formats and video formats, transmission techniques on the web, how to optimise the insertion of content into pages, a description of the most popular players for upload, and finally a brief look at the current state of the war between open-source and proprietary video formats on the web. The final section covers the more practical and experimental part of the work. Chapter 7 describes the main features of two of the most widely used web analytics platforms, one free, Google Analytics, and one commercial, Omniture SiteCatalyst, with particular attention to the metrics for video tracking and the differences between the two products. The characteristics of some platforms specific to video analytics are also illustrated, analysing the most interesting features offered, although there was no opportunity to test them in practice. The last chapter illustrates some practical applications of video analytics observed during the internship and thesis period at the company, describing in particular the problems encountered with the products used for tracking, the solutions proposed and the questions that still remain unresolved in this field.
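The video event tracking the abstract describes boils down to sending a small "hit" to a collector whenever the visitor interacts with the player. A minimal Python sketch follows; the endpoint URL and parameter names are hypothetical, loosely modeled on common web-analytics hit formats rather than any specific platform's API:

```python
import time
import urllib.parse
import urllib.request

COLLECTOR_URL = "https://analytics.example.com/collect"  # hypothetical endpoint

def track_video_event(video_id: str, action: str, position_s: float) -> None:
    """Send one video interaction (play, pause, progress milestone, ...) as a hit."""
    payload = {
        "t": "event",            # hit type
        "ec": "video",           # event category
        "ea": action,            # event action, e.g. "play" or "progress-50"
        "el": video_id,          # event label identifying the video
        "ev": int(position_s),   # event value: playhead position in seconds
        "ts": int(time.time()),  # client timestamp
    }
    data = urllib.parse.urlencode(payload).encode()
    urllib.request.urlopen(urllib.request.Request(COLLECTOR_URL, data=data))

# Example: report that a visitor reached the 50% milestone of a promo video
# track_video_event("homepage-promo", "progress-50", position_s=63.0)
```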
APA, Harvard, Vancouver, ISO, and other styles
33

Fotrousi, Farnaz, and Katayoun Izadyan. "Analytics-based Software Product Planning." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5053.

Full text
Abstract:
Context. Successful software product management is about developing the right software products for the right markets at the right time. The product manager, who carries responsibility for planning, requires but does not always have access to high-quality information for making the best possible planning decisions. This master thesis concentrates on proposing a solution that supports the planning of a software product by means of analytics. Objectives. The aim of the master thesis is to understand the potential of analytics in product planning decisions in a SaaS context. The thesis focuses on SaaS-based analytics used for portfolio management, product roadmapping, and release planning, and specifies how such analytics can be utilized for planning a software product. The study then devises an analytics-based method to enable software product planning. Methods. The study was designed with a mixed-methodology approach, which includes literature review and survey research as well as a case study under the framework of design science. The literature review was conducted to identify product planning decisions and the measurements that support them. A total of 17 interview-based surveys were conducted to investigate the impact of analytics on product planning decisions in a product roadmapping context. The results of the interviews culminated in an analytics-based planning method developed under the framework of design science. The designed analytics-based method was validated by a case study in order to measure the effectiveness of the solution. Results. The identified product planning decisions were summarized and categorized into a taxonomy of decisions divided into portfolio management, roadmapping, and release planning. The identified SaaS-based measurements were grouped into six categories, forming a taxonomy of measurements. The survey illustrated that the importance ratings of the measurement categories do not differ much across planning decisions. In the interviews, 61.8% of interviewees selected "very important" for the "Product" category, 58.8% for "Feature", and 64.7% for "Product healthiness". For the "Referral sources" category, 61.8% of responses were rated "not important". The "Technologies and Channels" and "Usage Pattern" categories were mostly rated "important", by 47.1% and 32.4% of the corresponding responses. The results also showed that product use, feature use, users of feature use, response time, product errors, and downtime are the top measurement attributes that a product manager prefers to use for product planning. Qualitative results identified product specification, product maturity and goals as factors affecting the importance of analytics for product planning, and in parallel specified strengths and weaknesses of analytical planning from product managers' perspectives. The analytics-based product planning method was developed with eleven main process steps, using the measurements and measurement scores resulting from the interviews, and was finally validated in a case study. The method can support all three areas of product planning (portfolio management, roadmapping, and release planning); however, it was validated only for roadmapping decisions in the current study. SaaS-based analytics are enablers for the method, but there might be other analytics that can support planning decisions as well. Conclusion.
The results of the interviews on roadmapping decisions indicated that different planning decisions assign similar importance to the measurement categories when planning a software product. Statistics about feature use, product use, response time, users, errors and downtime were recognized as the most important measurements for planning. Analytics increase knowledge about product usability and functionality, and can also help improve problem handling and client-side technologies. However, analytics have limitations regarding form-based customer feedback, development technologies, and the interpretation of some measurements in practice. Immature products are not able to make use of analytics. To create, remove, or enhance a feature, the data trend provides a wide view of feature desirability at the current or even a future time and clarifies how such changes can impact decision making. Features in the same context can be prioritized by comparing their measurement impacts. The analytics-based method covers both reactive and proactive planning.
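As an illustration of the prioritization step mentioned above (comparing the measurement impacts of features in the same context), a minimal Python sketch follows; the category weights and measurement values are hypothetical placeholders, not the scores elicited in the thesis interviews:

```python
# Hypothetical importance weights per measurement category (illustrative only;
# the thesis derives its own scores from 17 practitioner interviews).
CATEGORY_WEIGHTS = {"product_use": 0.25, "feature_use": 0.25, "users": 0.20,
                    "response_time": 0.10, "errors": 0.10, "downtime": 0.10}

def feature_score(measurements: dict) -> float:
    """Aggregate normalized measurement values into one planning score."""
    return sum(CATEGORY_WEIGHTS[k] * v for k, v in measurements.items())

features = {
    "export_to_pdf": {"product_use": 0.8, "feature_use": 0.9, "users": 0.7,
                      "response_time": 0.6, "errors": 0.9, "downtime": 1.0},
    "dark_mode":     {"product_use": 0.8, "feature_use": 0.2, "users": 0.3,
                      "response_time": 0.9, "errors": 0.8, "downtime": 1.0},
}
ranked = sorted(features, key=lambda f: feature_score(features[f]), reverse=True)
print(ranked)  # features in priority order for the next release plan
```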
APA, Harvard, Vancouver, ISO, and other styles
34

Saha, Shishir Kumar, and Mirza Mohymen. "Analytics for Software Product Planning." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3227.

Full text
Abstract:
Context. Software product planning involves product lifecycle management, roadmapping, release planning and requirements engineering. Requirements are collected and used together with criteria to define short-term plans (release plans) and long-term plans (roadmaps). The different stages of the product lifecycle determine whether a product is mainly evolved, extended, or simply maintained. When eliciting requirements and identifying criteria for software product planning, the product manager is confronted with statements about customer interests that do not correspond to their needs. Analytics summarize, filter, and transform measurements to obtain insights about what happened, how it happened, and why it happened. Analytics have been used for improving the usability of software solutions, monitoring the reliability of networks, and for performance engineering. However, the concept of using analytics to determine the evolution of a software solution is unexplored. In a context where a misunderstanding of users' needs can easily lead product design to failure, analytics support for software product planning can help reveal which features of the product are useful for users or customers. Objective. Given the lack of primary studies, the first step is to apply the concept of analytics for software product planning to the evolution of software solutions by understanding product usage measurement. This research therefore aims to understand relevant analytics of users' interaction with SaaS applications and, in addition, to identify an effective way to collect the right analytics and measure feature usage with respect to page-based and feature-based analytics, in order to provide decision support for software product planning. Methods. This research combines a literature review of the state of the art, to understand the research gap and related work and to find relevant analytics for software product planning, with market research comparing the features of different analytics tools to identify an effective way to collect relevant analytics. A prototype analytics tool is then developed to explore how feature usage of a SaaS website can be measured to provide decision support for software product planning. Finally, a software simulation is performed to understand the impact of page clutter, erroneous page presentation and feature spread with respect to page-based and feature-based analytics. Results. The literature review reveals studies describing related work on relevant categories of software analytics that are important for measuring software usage. A software-supported approach, developed from the feature-comparison results of different analytics tools, ensures an effective way of collecting analytics for product planners. Moreover, the study results can be used to understand the impact of page clutter, erroneous page presentation and feature spread with respect to page-based and feature-based analytics. The study reveals that page clutter, erroneous page presentation and feature spread exaggerate feature usage measurement with page-based analytics, but not with feature-based analytics. Conclusions. The research provides a wide set of evidence fostering the understanding of relevant analytics for software product planning. The results show SaaS product managers a way of measuring feature usage. Furthermore, feature usage measurement of SaaS websites helps product managers understand the impact of page clutter, erroneous page presentation and feature spread between page-based and feature-based analytics. Further case studies can be performed to evaluate the proposed solutions, tailored to company needs.
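The page-based versus feature-based distinction can be made concrete with a small sketch; the event records and feature names below are invented for illustration. With page-based counting, every feature on a cluttered page inherits all of that page's hits, while feature-based counting attributes one hit per actual feature use:

```python
from collections import Counter

# Each record is one tracked interaction: the page it happened on and the
# feature actually exercised (None for a plain page view).
events = [
    {"page": "/dashboard", "feature": "search"},
    {"page": "/dashboard", "feature": None},      # page view only
    {"page": "/dashboard", "feature": "export"},  # second feature, same page
    {"page": "/reports",   "feature": "export"},  # feature spread over pages
]

page_based = Counter(e["page"] for e in events)
feature_based = Counter(e["feature"] for e in events if e["feature"])

print(page_based)     # attributes all 3 /dashboard hits to every feature there
print(feature_based)  # counts each feature exactly once per real use
```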
APA, Harvard, Vancouver, ISO, and other styles
35

Erlandsson, Niklas. "Game Analytics och Big Data." Thesis, Mittuniversitetet, Avdelningen för arkiv- och datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-29185.

Full text
Abstract:
Game Analytics is a research field that appeared recently. Game developers have the ability to analyze how customers use their products down to every button pressed. This can result in large amounts of data and the challenge is to make sense of it all. The challenges with game data are often described with the same characteristics used to define Big Data: volume, velocity and variability. This should mean that there is potential for a fruitful collaboration. The purpose of this study is to analyze and evaluate what possibilities Big Data has to develop the Game Analytics field. To fulfill this purpose a literature review and semi-structured interviews with people active in the gaming industry were conducted. The results show that the sources agree that valuable information can be found within the data you can store, especially in the monetary, general and core values to the specific game. With more advanced analysis you may find other interesting patterns as well, but nonetheless the predominant way seems to be sticking to the simple variables and staying away from digging deeper. It is not because data handling or storing would be tedious or too difficult, but simply because the analysis would be too risky of an investment. Even if you have someone ready to take on all the challenges game data sets up, there is not enough trust in the answers or how useful they might be. Visions of the future within the field are very modest, and the nearest future seems to hold mostly efficiency improvements and a widening of the field, making it reach more people. This does not really pose any new demands or requirements on the data handling.
APA, Harvard, Vancouver, ISO, and other styles
36

Nguyen, Quyen Do. "Anomaly handling in visual analytics." Worcester, Mass. : Worcester Polytechnic Institute, 2008. http://www.wpi.edu/Pubs/ETD/Available/etd-122307-132119/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Lundblad, Patrik. "Applied Geovisual Analytics and Storytelling." Doctoral thesis, Linköpings universitet, Medie- och Informationsteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-91357.

Full text
Abstract:
'Geovisual Analytics' represents cross-disciplinary research that looks for innovative methods to interactively visualize and solve large spatio-temporal visualization problems for multivariate data through a visual discovery, reasoning and collaborative process. The name emphasizes the link with the well-known research discipline of Visual Analytics, and the field can be viewed as a sub-area with a specific focus on space and time, posing specific research problems. This thesis focuses on the design, implementation and evaluation of interactive analytical spatio-temporal and multivariate representations, demonstrated in several application scenarios, which contributes to our understanding of how technology, people and spatial representations of information work effectively together. Data are analysed through the use of coordinated and time-linked views controlled by a time slider. Trends are detected through several visual representations simultaneously, each of which is best suited to highlight different patterns and can help stimulate the analytical visual thinking process so characteristic of geovisual analytics reasoning. Interactive features include tooltips, brushing, highlighting, visual inquiry, and conditioned statistics filter mechanisms that can discover outliers and simultaneously update all views. To support knowledge capture and the communication and publishing of insights gained from the data exploration process, a visual storytelling concept with snapshots is introduced, whereby the author can capture the visual data exploration process and share it. Snapshots are memorized interactive visualization views that are captured and later recreated, so that the reader of the story can see the same interactive scenario as the author of the story. These snapshots are then part of a story in which the author writes an explanatory text and uses the snapshots to highlight key words. These highlights allow the reader to recreate the data views used by the author and guide the reader to the visual discoveries made. The contributions of this thesis are divided into two parts. The first part includes applications based on geovisual analytics methods for exploring complex weather data and finding patterns and relationships within the data. Previously, the use of data visualization had been very limited, and the introduction of geovisual analytics and the techniques used has significantly improved the visual analysis process as well as increased its flexibility. The results of this research are today used by SMHI to improve the optimization and safety of voyages and the monitoring of weather along Swedish roads. Formative evaluations were performed with domain analysts with the purpose of exploring qualitative usability issues with respect to visual and interactive representations. Furthermore, this thesis contributes a visual storytelling approach which aims at giving domain experts novel methods for capturing and sharing information discoveries in a way that lets the reader follow the process of visual exploration. This approach has been tested and verified within the domain of public statistics, where national and regional statistics are published to the public through the use of embedded interactive visualizations and a story that can engage the reader.
The concept of visual storytelling has also been introduced to educators, where stories are used as interactive teaching material for students to make national and regional statistics interactive and visually understandable to the students. It will also challenge the students to investigate new theories and then communicate them visually.
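The snapshot mechanism described above amounts to serializing enough view state to recreate a scene later. A minimal sketch, assuming a simplified set of view parameters (the actual applications persist far richer state than this):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Snapshot:
    """A memorized interactive view: enough state to recreate the scene."""
    dataset: str
    indicator: str
    region_filter: list      # regions selected by the author
    time_slider: int         # selected year on the time axis
    highlighted: list        # items brushed/highlighted by the author

def capture(view_state: Snapshot) -> str:
    return json.dumps(asdict(view_state))   # stored inside the story text

def restore(blob: str) -> Snapshot:
    return Snapshot(**json.loads(blob))     # reader clicks a highlighted word

story_snapshot = capture(Snapshot("regional-stats", "unemployment",
                                  ["Sweden"], 2009, ["Norrbotten"]))
print(restore(story_snapshot))
```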
APA, Harvard, Vancouver, ISO, and other styles
38

Doucet, Rachel A., Deyan M. Dontchev, Javon S. Burden, and Thomas L. Skoff. "Big data analytics test bed." Thesis, Monterey, California: Naval Postgraduate School, 2013. http://hdl.handle.net/10945/37615.

Full text
Abstract:
Approved for public release; distribution is unlimited
The proliferation of big data has significantly expanded the quantity and breadth of information throughout the DoD. The task of processing and analyzing this data has become difficult, if not infeasible, using traditional relational databases. The Navy has a growing priority for information processing, exploitation, and dissemination, which makes use of the vast network of sensors that produce a large amount of big data. This capstone report explores the feasibility of a scalable Tactical Cloud architecture that will harness and utilize the underlying open-source tools for big data analytics. A virtualized cloud environment was built and analyzed at the Naval Postgraduate School, which offers a test bed suitable for studying novel variations of these architectures. Further, the technologies used to implement the test bed demonstrate a sustainable methodology for rapidly configuring and deploying virtualized machines and provide an environment for performance benchmarking and testing. The capstone findings indicate the strategies and best practices to automate the deployment, provisioning and management of big data clusters. The functionality we seek to support is a far more general goal: finding open-source tools that help to deploy and configure large clusters for on-demand big data analytics.
APA, Harvard, Vancouver, ISO, and other styles
39

Hassan, Waqas. "Video analytics for security systems." Thesis, University of Sussex, 2013. http://sro.sussex.ac.uk/id/eprint/43406/.

Full text
Abstract:
This study has been conducted to develop robust event detection and object tracking algorithms that can be implemented in real-time video surveillance applications. The aim of the research has been to produce an automated video surveillance system that is able to detect and report potential security risks with minimum human intervention. Since the algorithms are designed to be implemented in real-life scenarios, they must be able to cope with strong illumination changes and occlusions. The thesis is divided into two major sections. The first section deals with event detection and edge-based tracking, while the second section describes colour measurement methods developed to track objects in crowded environments. The event detection methods presented in the thesis mainly focus on detection and tracking of objects that become stationary in the scene. Objects such as baggage left in public places or vehicles parked illegally can pose a serious security threat. A new pixel-based classification technique has been developed to detect objects of this type in cluttered scenes. Once detected, edge-based object descriptors are obtained and stored as templates for tracking purposes. The consistency of these descriptors is examined using an adaptive edge orientation based technique. Objects are tracked and alarm events are generated if the objects are found to be stationary in the scene after a certain period of time. To evaluate the full capabilities of the pixel-based classification and adaptive edge orientation based tracking methods, the model is tested using several hours of real-life video surveillance scenarios recorded at different locations and times of day from our own and publicly available databases (i-LIDS, PETS, MIT, ViSOR). The performance results demonstrate that the combination of pixel-based classification and adaptive edge orientation based tracking achieved a success rate of over 95%. The approach also yields better detection and tracking results when compared with other available state-of-the-art methods. In the second part of the thesis, colour-based techniques are used to track objects in crowded video sequences in circumstances of severe occlusion. A novel Adaptive Sample Count Particle Filter (ASCPF) technique is presented that improves the performance of the standard Sample Importance Resampling Particle Filter by up to 80% in terms of computational cost. An appropriate particle range is obtained for each object and the concept of adaptive samples is introduced to keep the computational cost down. The objective is to keep the number of particles to a minimum and only to increase them up to the maximum, as and when required. Variable standard deviation values for state vector elements have been exploited to cope with heavy occlusion. The technique has been tested on different video surveillance scenarios with variable object motion, strong occlusion and change in object scale. Experimental results show that the proposed method not only tracks the object with comparable accuracy to existing particle filter techniques but is up to five times faster. Tracking objects in a multi-camera environment is discussed in the final part of the thesis. The ASCPF technique is deployed within a multi-camera environment to track objects across different camera views. Such environments can pose difficult challenges such as changes in object scale and colour features as the objects move from one camera view to another.
Variable standard deviation values of the ASCPF have been utilized in order to cope with sudden colour and scale changes. As the object moves from one scene to another, the number of particles, together with the spread value, is increased to a maximum to reduce any effects of scale and colour change. Promising results are obtained when the ASCPF technique is tested on live feeds from four different camera views. It was found that not only did the ASCPF method result in the successful tracking of the moving object across different views but also maintained the real time frame rate due to its reduced computational cost thus indicating that the method is a potential practical solution for multi camera tracking applications.
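The adaptive-sample idea behind the ASCPF can be sketched roughly as follows (Python with NumPy). The adaptation rule here is driven by the effective sample size and is an illustration of the concept only, not the exact rule from the thesis; the motion and observation models are deliberately trivial placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def ascpf_step(particles, weights, observation, n_min=50, n_max=500):
    """One SIR update whose particle count adapts to weight degeneracy."""
    # Predict: random-walk motion model (placeholder for a real dynamic model)
    particles = particles + rng.normal(0.0, 1.0, size=particles.shape)
    # Update: weight by likelihood of the observation (toy Gaussian model)
    weights = weights * np.exp(-0.5 * (particles - observation) ** 2)
    weights /= weights.sum()
    # Effective sample size drives the adaptive particle count: high ambiguity
    # (degenerate weights, e.g. under occlusion) asks for more particles
    n_eff = 1.0 / np.sum(weights ** 2)
    n_next = int(np.clip(n_max * (1.0 - n_eff / len(particles)), n_min, n_max))
    # Resample n_next particles proportionally to their weights
    idx = rng.choice(len(particles), size=n_next, p=weights)
    return particles[idx], np.full(n_next, 1.0 / n_next)

particles = rng.normal(0.0, 5.0, size=200)
weights = np.full(200, 1.0 / 200)
particles, weights = ascpf_step(particles, weights, observation=2.5)
print(len(particles))  # count grows when the weight distribution is degenerate
```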
APA, Harvard, Vancouver, ISO, and other styles
40

Nguyen, Quyen Do. "Anomaly Handling in Visual Analytics." Digital WPI, 2007. https://digitalcommons.wpi.edu/etd-theses/1144.

Full text
Abstract:
"Visual analytics is an emerging field which uses visual techniques to interact with users in the analytical reasoning process. Users can choose the most appropriate representation that conveys the important content of their data by acting upon different visual displays. The data itself has many features of interest, including clusters, trends (commonalities) and anomalies. Most visualization techniques currently focus on the discovery of trends and other relations, where uncommon phenomena are treated as outliers and are either removed from the datasets or de-emphasized on the visual displays. Much less work has been done on the visual analysis of outliers, or anomalies. In this thesis, I will introduce a method to identify the different levels of “outlierness” by using interactive selection and other approaches to process outliers after detection. In one approach, the values of these outliers will be estimated from the values of their k-Nearest Neighbors and replaced to increase the consistency of the whole dataset. Other approaches will leave users with the choice of removing the outliers from the graphs or highlighting the unusual patterns on the graphs if points of interest lie in these anomalous regions. I will develop and test these anomaly handling methods within the XMDV Tool."
APA, Harvard, Vancouver, ISO, and other styles
41

Rawlani, Praynaa. "Graph analytics on relational databases." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/100670.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 99-100).
Graph analytics has become increasingly popular in recent years. Conventionally, data is stored in relational databases that have been refined over decades, resulting in highly optimized data processing engines. However, the awkwardness of expressing iterative queries in SQL makes the relational query-processing model inadequate for graph analytics, leading to many alternative solutions. Our research explores the possibility of combining a more natural query model with relational databases for graph analytics. In particular, we bring together a graph-natural vertex-centric query interface and highly optimized column-oriented relational databases, thus providing the efficiency of relational engines and the ease of use of new graph systems. Throughout the thesis, we used stochastic gradient descent, a loss-minimization algorithm applied in many machine learning and graph analytics queries, as the example iterative algorithm. We implemented two different approaches for emulating a vertex-centric interface on a leading column-oriented database, Vertica: disk-based and main-memory based. The disk-based solution stores data for each iteration in relational tables and allows for interleaving SQL queries with graph algorithms. The main-memory approach stores data in memory, allowing faster updates. We applied optimizations to both implementations, including refining logical and physical query plans, applying algorithm-level improvements and performing system-specific optimizations. The experiments and results show that the two implementations provide reasonable performance in comparison with popular graph processing systems. We present a detailed cost analysis of the two implementations and study the effect of each individual optimization on query performance.
by Praynaa Rawlani.
M. Eng.
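The disk-based emulation of a vertex-centric "superstep" on relational tables can be sketched as a join plus an aggregate, materialized into a new table each iteration. The sketch below uses SQLite for self-containment (the thesis targets Vertica) and a toy neighbour-averaging update rather than the stochastic gradient descent used in the thesis:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE edges (src INTEGER, dst INTEGER)")
cur.execute("CREATE TABLE vertex (id INTEGER PRIMARY KEY, val REAL)")
cur.executemany("INSERT INTO edges VALUES (?, ?)",
                [(1, 2), (2, 3), (3, 1), (1, 3)])
cur.executemany("INSERT INTO vertex VALUES (?, ?)",
                [(1, 0.0), (2, 3.0), (3, 6.0)])

for _ in range(5):  # five supersteps; each vertex averages its in-neighbours
    cur.execute("""
        CREATE TABLE next AS
        SELECT e.dst AS id, AVG(v.val) AS val
        FROM edges e JOIN vertex v ON v.id = e.src
        GROUP BY e.dst""")
    cur.execute("DROP TABLE vertex")
    cur.execute("ALTER TABLE next RENAME TO vertex")

print(cur.execute("SELECT id, val FROM vertex ORDER BY id").fetchall())
```

Keeping each iteration's state in a relational table is what allows ordinary SQL queries to be interleaved with the graph algorithm, which is the design point the abstract highlights.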
APA, Harvard, Vancouver, ISO, and other styles
42

Fagnan, David Erik. "Analytics for financing drug development." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98572.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 133-139).
Financing drug development has a particular set of challenges including long development times, high chance of failure, significant market valuation uncertainty, and high costs of development. The earliest stages of translational research pose the greatest risks, which have been termed the "valley of death" as a result of a lack of funding. This thesis focuses on an exploration of financial engineering techniques aimed at addressing these concerns. Despite the recent financial crisis, many suggest that securitization is an appropriate tool for financing such large social challenges. Although securitization has been demonstrated effectively at later stages of drug development for drug royalties of approved drugs, it has yet to be utilized at earlier stages. This thesis starts by extending the model of drug development proposed by Fernandez et al. (2012). These extensions significantly influence the resulting performance and optimal securitization structures. Budget-constrained venture firms targeting high financial returns are incentivized to fund only the best projects, thereby potentially stranding less-attractive projects. Instead, such projects have the potential to be combined in larger portfolios through techniques such as securitization which reduce the cost of capital. In addition to modeling extensions, we provide examples of a model calibrated to orphan drugs, which we argue are particularly suited to financial engineering techniques. Using this model, we highlight the impact of our extensions on financial performance and compare with previously published results. We then illustrate the impact of incorporating a credit enhancement or guarantee, which allows for added flexibility of the capital structure and therefore greater access to lower-cost capital. As an alternative to securitization, we provide some examples of a structured equity approach, which may allow for increased access to or efficiency of capital by matching investor objectives. Finally, we provide examples of optimizing the Sortino ratio through constrained Bayesian optimization.
by David Erik Fagnan.
Ph. D.
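The diversification logic that motivates pooling many risky drug projects can be illustrated with a toy Monte Carlo; all parameters below (success probability, costs, payoffs) are invented and far simpler than the calibrated model in the thesis, and for brevity one payoff draw per trial is shared across that trial's successes:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_portfolio(n_projects=20, p_success=0.1, cost=5.0,
                       payoff_mean=100.0, payoff_sd=50.0, n_trials=100_000):
    """Monte Carlo return of an equally funded portfolio of drug projects."""
    successes = rng.binomial(n_projects, p_success, size=n_trials)
    # One payoff draw per trial (simplification); negative draws floored at 0
    payoff = rng.normal(payoff_mean, payoff_sd, size=n_trials).clip(0)
    returns = successes * payoff / (n_projects * cost) - 1.0
    return returns

r = simulate_portfolio()
print(f"mean return {r.mean():.2f}, P(loss) {(r < 0).mean():.2%}")
```

Growing `n_projects` narrows the loss tail, which is the property that makes debt-like tranches on such a pool plausible in the first place.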
APA, Harvard, Vancouver, ISO, and other styles
43

Stone, T. R. "Computational analytics for venture finance." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1453383/.

Full text
Abstract:
This thesis investigates the application of computational analytics to the domain of venture finance – the deployment of capital to high-risk ventures in pursuit of maximising financial return. Traditional venture finance is laborious and highly inefficient. Whilst high street banks approve (or reject) personal loans in a matter of minutes, it takes an early-stage venture capital (VC) firm months to put a term sheet in front of a fledgling new venture. Whilst these are fundamentally different forms of finance (longer return period, larger investments, different risk profiles), a more data-informed and analytical approach to venture finance is foreseeable. We have surveyed existing software tools in relation to the venture capital investment process and stage of investment. We find that analytical tools are nascent and use of analytics in industry is limited. To date only a small handful of venture capital firms have publicly declared their use of computational analytical methods in their decision making and investment selection process. This research has been undertaken with several industry partners including venture capital firms, seed accelerators, universities and other related organisations. Within our research we have developed a prototype software tool NVANA: New Venture Analytics – for assessing new ventures and screening prospective deal flow. Over £20,000 in early-stage funding was distributed, with hundreds of new ventures assessed using the system. Both the limitations of our prototype and possible extensions are discussed. We have focused on computational analytics in the context of three sub-components of the NVANA system. First, improving the classification of private companies using supervised and multi-label classification techniques to develop a novel form of industry classification. Second, investigating the potential to benchmark private company performance based upon a company's "digital footprint". Finally, the novel application of collaborative filtering and content-based recommendation techniques to the domain of venture finance. We conclude by discussing the future potential for computational analytics to increase efficiency and performance within the venture finance domain. We believe there is clear scope for assisting the venture capital investment process. However, we have identified limitations and challenges in terms of access to data, stage of investment and adoption by industry.
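A content-based recommendation step of the kind the thesis applies to venture finance might look roughly like this; the ventures and their "digital footprint" feature vectors are entirely hypothetical, and cosine similarity stands in for whatever similarity measure the real system uses:

```python
import numpy as np

# Hypothetical feature vectors: each venture described by digital-footprint
# signals (e.g. web traffic growth, team size, sector weightings).
ventures = {"alpha": np.array([0.9, 0.2, 0.1]),
            "beta":  np.array([0.1, 0.8, 0.3]),
            "gamma": np.array([0.8, 0.3, 0.2])}

# Profile built from ventures this investor previously backed
investor_profile = np.mean([ventures["alpha"]], axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(ventures, key=lambda v: cosine(ventures[v], investor_profile),
                reverse=True)
print(ranked)  # gamma resembles alpha, so it ranks above beta
```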
APA, Harvard, Vancouver, ISO, and other styles
44

Naumanen, Hampus, Torsten Malmgård, and Eystein Waade. "Analytics tool for radar data." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-353857.

Full text
Abstract:
Analytics tool for radar data was a project that started when radar specialists at Saab needed to modernize the tools they use to analyze binary-encoded radar data. Today, the analysis is accomplished using inadequate and ineffective applications not designed for that purpose, which makes the analysis tedious and more difficult than it would be with an appropriate interface. The applications also had limitations regarding different radar systems, which restricted their usage significantly. The solution was to design new software that imports, translates and visualizes the data independently of the radar system. The software was developed with several parts that communicate with each other to translate a binary file. A binary file consists of a series of bytes containing the information of the targets and markers separating the revolutions of the radar. The byte stream is split according to the ASTERIX protocol, which defines the length of each Data Item, and the extracted positional values are stored in arrays. The code then converts the positional values to Cartesian coordinates and plots them on the screen. The software implements features such as play, pause, reverse and a plotting history that allow the user to analyze the data in a simple and user-friendly manner. There are also numerous ways the software could be extended. The code is constructed in such a way that new features can be implemented for additional analytical abilities without affecting the components already designed.
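The decoding pipeline described above (split the byte stream, extract positional values, convert to Cartesian) can be sketched as follows. Real ASTERIX Data Items are variable-length and defined per category, so the fixed record layout here is a simplification, although the range and azimuth scale factors follow common ASTERIX conventions:

```python
import math
import struct

# Hypothetical fixed-size plot record: range (LSB = 1/256 NM),
# azimuth (LSB = 360/65536 degrees), and a time-of-day field.
RECORD = struct.Struct(">HHI")

def decode_plot(buf: bytes, offset: int = 0):
    rng_raw, az_raw, tod = RECORD.unpack_from(buf, offset)
    rng_nm = rng_raw / 256.0
    az_rad = math.radians(az_raw * 360.0 / 65536.0)
    # North-referenced polar coordinates to screen Cartesian
    x = rng_nm * math.sin(az_rad)
    y = rng_nm * math.cos(az_rad)
    return x, y, tod

sample = RECORD.pack(2560, 16384, 123456)   # a target 10 NM due east
print(decode_plot(sample))                  # -> (10.0, ~0.0, 123456)
```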
APA, Harvard, Vancouver, ISO, and other styles
45

Komolafe, Tomilayo A. "Data Analytics for Statistical Learning." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/87468.

Full text
Abstract:
The prevalence of big data has rapidly changed the usage and mechanisms of data analytics within organizations. Big data is a widely-used term without a clear definition. The difference between big data and traditional data can be characterized by four Vs: velocity (speed at which data is generated), volume (amount of data generated), variety (the data can take on different forms), and veracity (the data may be of poor/unknown quality). As many industries begin to recognize the value of big data, organizations try to capture it through means such as: side-channel data in a manufacturing operation, unstructured text-data reported by healthcare personnel, various demographic information of households from census surveys, and the range of communication data that define communities and social networks. Big data analytics generally follows this framework: first, a digitized process generates a stream of data, this raw data stream is pre-processed to convert the data into a usable format, the pre-processed data is analyzed using statistical tools. In this stage, called statistical learning of the data, analysts have two main objectives (1) develop a statistical model that captures the behavior of the process from a sample of the data (2) identify anomalies in the process. However, several open challenges still exist in this framework for big data analytics. Recently, data types such as free-text data are also being captured. Although many established processing techniques exist for other data types, free-text data comes from a wide range of individuals and is subject to syntax, grammar, language, and colloquialisms that require substantially different processing approaches. Once the data is processed, open challenges still exist in the statistical learning step of understanding the data. Statistical learning aims to satisfy two objectives, (1) develop a model that highlights general patterns in the data (2) create a signaling mechanism to identify if outliers are present in the data. Statistical modeling is widely utilized as researchers have created a variety of statistical models to explain everyday phenomena such as predicting energy usage behavior, traffic patterns, and stock market behaviors, among others. However, new applications of big data with increasingly varied designs present interesting challenges. Consider the example of free-text analysis posed above. There's a renewed interest in modeling free-text narratives from sources such as online reviews, customer complaints, or patient safety event reports, into intuitive themes or topics. As previously mentioned, documents describing the same phenomena can vary widely in their word usage and structure. Another recent interest area of statistical learning is using the environmental conditions that people live, work, and grow in, to infer their quality of life. It is well established that social factors play a role in overall health outcomes, however, clinical applications of these social determinants of health is a recent and an open problem. These examples are just a few of many examples wherein new applications of big data pose complex challenges requiring thoughtful and inventive approaches to processing, analyzing, and modeling data. Although a large body of research exists in the area of anomaly detection increasingly complicated data sources (such as side-channel related data or network-based data) present equally convoluted challenges. 
For effective anomaly detection, analysts define parameters and rules, so that when large collections of raw data are aggregated, pieces of data that do not conform are easily noticed and flagged. In this work, I investigate the different steps of the data analytics framework and propose improvements for each step, paired with practical applications, to demonstrate the efficacy of my methods. This work focuses on the healthcare, manufacturing and social-networking industries, but the materials are broad enough to have wide applications across data analytics generally. My main contributions can be summarized as follows:
• In the big data analytics framework, raw data initially goes through a pre-processing step. Although many pre-processing techniques exist, there are several challenges in pre-processing text data, and I develop a pre-processing tool for text data.
• In the next step of the data analytics framework, there are challenges in both statistical modeling and anomaly detection.
  o I address the research area of statistical modeling in two ways:
    - There are open challenges in defining models to characterize text data. I introduce a community extraction model that autonomously aggregates text documents into intuitive communities/groups.
    - In health care, it is well established that social factors play a role in overall health outcomes; however, developing a statistical model that characterizes these relationships is an open research area. I developed statistical models for generalizing relationships between the social determinants of health of a cohort and general medical risk factors.
  o I address the research area of anomaly detection in two ways:
    - A variety of anomaly detection techniques exist already; however, some of these methods lack a rigorous statistical investigation, thereby making them ineffective to a practitioner. I identify critical shortcomings of a proposed network-based anomaly detection technique and introduce methodological improvements.
    - Manufacturing enterprises, which are now more connected than ever, are vulnerable to anomalies in the form of cyber-physical attacks. I developed a sensor-based side-channel technique for anomaly detection in a manufacturing process.
Ph. D.
The prevalence of big data has rapidly changed the usage and mechanisms of data analytics within organizations. The fields of manufacturing and healthcare are two examples of industries that are currently undergoing significant transformations due to the rise of big data. The addition of large sensory systems is changing how parts are being manufactured and inspected, and the prevalence of Health Information Technology (HIT) systems in healthcare is also changing the way healthcare services are delivered. These industries are turning to big data analytics in the hopes of acquiring many of the benefits other sectors are experiencing, including reducing cost, improving safety, and boosting productivity. However, there are many challenges that exist along the framework of big data analytics, from pre-processing raw data, to statistical modeling of the data, to identifying anomalies present in the data or process. This work offers significant contributions in each of the aforementioned areas and includes practical real-world applications. Big data analytics generally follows this framework: first, a digitized process generates a stream of data; this raw data stream is pre-processed to convert the data into a usable format; the pre-processed data is analyzed using statistical tools. In this stage, called 'statistical learning of the data', analysts have two main objectives: (1) develop a statistical model that captures the behavior of the process from a sample of the data; (2) identify anomalies or outliers in the process. In this work, I investigate the different steps of the data analytics framework and propose improvements for each step, paired with practical applications, to demonstrate the efficacy of my methods. This work focuses on the healthcare and manufacturing industries, but the materials are broad enough to have wide applications across data analytics generally. My main contributions can be summarized as follows:
• In the big data analytics framework, raw data initially goes through a pre-processing step. Although many pre-processing techniques exist, there are several challenges in pre-processing text data, and I develop a pre-processing tool for text data.
• In the next step of the data analytics framework, there are challenges in both statistical modeling and anomaly detection.
  o I address the research area of statistical modeling in two ways:
    - There are open challenges in defining models to characterize text data. I introduce a community extraction model that autonomously aggregates text documents into intuitive communities/groups.
    - In health care, it is well established that social factors play a role in overall health outcomes; however, developing a statistical model that characterizes these relationships is an open research area. I developed statistical models for generalizing relationships between the social determinants of health of a cohort and general medical risk factors.
  o I address the research area of anomaly detection in two ways:
    - A variety of anomaly detection techniques exist already; however, some of these methods lack a rigorous statistical investigation, thereby making them ineffective to a practitioner. I identify critical shortcomings of a proposed network-based anomaly detection technique and introduce methodological improvements.
    - Manufacturing enterprises, which are now more connected than ever, are vulnerable to anomalies in the form of cyber-physical attacks. I developed a sensor-based side-channel technique for anomaly detection in a manufacturing process.
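The text pre-processing step named in the contributions can be sketched minimally; the stop-word list and sample narrative below are illustrative only, not the thesis's actual tool:

```python
import re

STOPWORDS = {"the", "a", "of", "and", "to", "was", "in", "on"}

def preprocess(report: str) -> list:
    """Normalize a free-text narrative into content-bearing tokens."""
    text = report.lower()
    text = re.sub(r"[^a-z\s]", " ", text)   # strip digits and punctuation
    return [t for t in text.split() if t not in STOPWORDS and len(t) > 2]

print(preprocess("Patient fell in hallway B-2; the floor was wet."))
# -> ['patient', 'fell', 'hallway', 'floor', 'wet']
```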
APA, Harvard, Vancouver, ISO, and other styles
46

Yang, Xintian. "Towards large-scale network analytics." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1343680930.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Hahmann, Martin, Claudio Hartmann, Lars Kegel, Dirk Habich, and Wolfgang Lehner. "Big by blocks: Modular Analytics." De Gruyter, 2016. https://tud.qucosa.de/id/qucosa%3A72848.

Full text
Abstract:
Big Data and Big Data analytics have attracted major interest in research and industry and continue to do so. The high demand for capable and scalable analytics, in combination with the ever increasing number and volume of application scenarios and data, has led to a large and intransparent landscape full of versions, variants and individual algorithms. As this zoo of methods lacks a systematic way of description, understanding it is almost impossible, which severely hinders the effective application and efficient development of analytic algorithms. To solve this issue we propose our concept of modular analytics, which abstracts the essentials of an analytic domain and turns them into a set of universal building blocks. As arbitrary algorithms can be created from the same set of blocks, understanding is eased and development benefits from reusability.
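The building-block idea can be illustrated with plain function composition; the blocks below (detrend, smooth, threshold) are invented examples, not the block set proposed in the paper, but they show how different algorithms arise from the same reusable parts:

```python
from functools import reduce

# Each building block is a small function from series to series; an analytic
# algorithm is then just a composition of blocks.
def detrend(xs):   return [x - sum(xs) / len(xs) for x in xs]
def smooth(xs):    return [(a + b) / 2 for a, b in zip(xs, xs[1:])] + xs[-1:]
def threshold(xs): return [1 if abs(x) > 1.0 else 0 for x in xs]

def compose(*blocks):
    return lambda xs: reduce(lambda acc, f: f(acc), blocks, xs)

spike_detector = compose(detrend, smooth, threshold)   # one algorithm...
trend_view     = compose(detrend, smooth)              # ...another, same blocks

print(spike_detector([1.0, 1.1, 0.9, 5.0, 1.0]))
```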
APA, Harvard, Vancouver, ISO, and other styles
48

Valério, Miguel Gomes Lage. "Dicoogle analytics for business intelligence." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/17573.

Full text
Abstract:
Master's degree in Computer and Telematics Engineering
In recent decades, the number of medical imaging studies and the amount of associated metadata available have been rapidly increasing. These are mostly used to support medical diagnosis and treatment. Nonetheless, recent initiatives claim the usefulness of these studies to support research scenarios and to improve the medical institutions' business practices. However, their continuous production, as well as the tremendous amount of associated data, makes their analysis difficult with the conventional workflows devised up until this point. Current medical imaging repositories contain not only the images themselves, but also a wide range of valuable metadata. This creates an opportunity for the development of Business Intelligence and analytics techniques applied to this Big Data scenario. The exploration of such technologies has the potential of further increasing the efficiency and quality of medical practice. This thesis developed a novel automated methodology to derive knowledge from multimodal medical imaging repositories that does not disrupt the regular medical practice. The developed methods enable the application of statistical analysis and business intelligence techniques directly on top of live institutional repositories. The resulting application is a Web-based solution that provides an extensive dashboard, including complete charting and reporting options, combined with data mining components. Furthermore, the system enables the operator to set a multitude of queries, filters and operands through the use of an intuitive graphical interface.
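The kind of metadata aggregation such a system performs can be sketched against a folder of DICOM files. The sketch uses the Python pydicom package for illustration only; it is not Dicoogle's own API, and the repository path is hypothetical:

```python
from collections import Counter
from pathlib import Path

import pydicom  # assumes the pydicom package is installed

def modality_breakdown(repository: str) -> Counter:
    """Count DICOM files per modality across a repository folder."""
    counts = Counter()
    for path in Path(repository).rglob("*.dcm"):
        ds = pydicom.dcmread(path, stop_before_pixels=True)  # metadata only
        counts[ds.get("Modality", "UNKNOWN")] += 1
    return counts

# print(modality_breakdown("/data/pacs-archive"))  # e.g. Counter({'CT': 812, 'MR': 430})
```

Reading headers with `stop_before_pixels=True` keeps the scan cheap, which is what makes live analysis over an institutional repository plausible without disrupting clinical workflows.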
APA, Harvard, Vancouver, ISO, and other styles
49

Zahradník, Jan. "Využití Google Analytics v eshopu." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-162548.

Full text
Abstract:
The present thesis focuses on a specific e-shop operating mainly within the Czech market and the ways it uses one of the most significant web analysis tools, Google Analytics. The aim of the thesis is an analysis of the key areas: visitor analysis, visitor sourcing analysis and content analysis. Based on these, problem areas are identified and recommendations and suggestions are made which, once applied, should improve service quality, leading to increased revenue and better competitiveness within the market.
APA, Harvard, Vancouver, ISO, and other styles
50

Miloš, Marek. "Nástroje pro Big Data Analytics." Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-199274.

Full text
Abstract:
The thesis covers Big Data, a term for a specific approach to data analysis. It first defines the term and the need that gave rise to it: the rising demand for deeper data processing and analysis tools and methods. The thesis also covers some of the technical aspects of Big Data tools, focusing on Apache Hadoop in detail. The later chapters contain a Big Data market analysis and describe the biggest Big Data competitors and tools. The practical part of the thesis presents a way of using Apache Hadoop to perform data analysis with data from Twitter; the results are then visualized in Tableau.
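The Twitter analysis described above maps naturally onto Hadoop Streaming, where the mapper and reducer are plain scripts reading standard input. A minimal sketch (hashtag counting; the one-JSON-object-per-line input format and the "text" field are assumptions about the data layout, not details from the thesis):

```python
#!/usr/bin/env python
# Run the same file as mapper (no argument) and reducer ("reduce" argument);
# Hadoop Streaming sorts the mapper output by key between the two phases.
import json
import sys

def map_stdin():
    for line in sys.stdin:
        try:
            tweet = json.loads(line)          # one tweet JSON object per line
        except ValueError:
            continue                          # skip malformed records
        for word in tweet.get("text", "").split():
            if word.startswith("#"):
                print(f"{word.lower()}\t1")

def reduce_stdin():
    current, total = None, 0
    for line in sys.stdin:                    # input arrives sorted by key
        key, _, count = line.rstrip("\n").partition("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = key, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    (reduce_stdin if sys.argv[1:] == ["reduce"] else map_stdin)()
```

The aggregated tab-separated output can then be exported for visualization, for example in Tableau as the thesis does.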
APA, Harvard, Vancouver, ISO, and other styles
