To see the other types of publications on this topic, follow the link: Analytics Application.

Dissertations / Theses on the topic 'Analytics Application'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Analytics Application.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Talevi, Iacopo. "Big Data Analytics and Application Deployment on Cloud Infrastructure." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14408/.

Full text
Abstract:
This dissertation describes a project that began in October 2016. It was born from the collaboration between Mr. Alessandro Bandini and me, and has been developed under the supervision of Professor Gianluigi Zavattaro. The main objective was to study, and in particular to experiment with, cloud computing in general and its potential in the field of data processing. Cloud computing is a utility-oriented and Internet-centric way of delivering IT services on demand. The first chapter is a theoretical introduction to cloud computing, analyzing the main aspects, the keywords, and the technologies behind clouds, as well as the reasons for the success of this technology and its problems. After the introduction, I will briefly describe the three main cloud platforms on the market. During this project we developed a simple social network. Consequently, in the third chapter I will analyze the social network development, from the initial solution realized through Amazon Web Services to the steps we took to obtain the final version using Google Cloud Platform, with its characteristics. To conclude, the last section is dedicated to data processing and contains an initial theoretical part that describes MapReduce and Hadoop, followed by a description of our analysis. We used Google App Engine to execute these computations on a large dataset. I will explain the basic idea, the code and the problems encountered.
APA, Harvard, Vancouver, ISO, and other styles
2

Altskog, Tomas. "Customized Analytics Software : Investigating efficient development of an application." Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-27967.

Full text
Abstract:
Google Analytics is the most widely used web traffic analytics program in the world, with a wide array of functionality serving several different purposes for its users. However, training employees in the use of Google Analytics can be expensive and time consuming due to the generality of the software. The purpose of this thesis is to explore an alternative to having employees learn the default Google Analytics interface, and thus possibly to reduce training expenses. A prototype written in the Java programming language is developed which implements the MVC and facade software patterns for the purpose of making the development process more efficient. It contains a feature for retrieving custom reports from Google Analytics using Google's Core Reporting API; in addition, two web pages are integrated into the prototype using the Google Embed API. In the results, the prototype is used along with the software estimation method COCOMO to estimate the amount of effort required to develop a similar program. This is done by counting the prototype's source lines of code manually, following the guidelines given by the COCOMO manual, and then applying the result in the COCOMO estimation formula. The count of lines of code for the entire prototype is 567 and the count which considers reused code is 466. The value retrieved from the formula is 1.61±0.14 person-months for the estimation of the entire program and 1.31±0.16 for a program with reused code. The conclusion of the thesis is that the result from the estimation has several weaknesses and further research is necessary in order to improve its accuracy.
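The kind of effort estimate quoted above can be illustrated with a short Python sketch of the basic COCOMO effort equation. The SLOC counts (567 and 466) come from the abstract; the "organic mode" coefficients below are the textbook defaults and are an assumption here, so the output will not reproduce the thesis's calibrated figures of 1.61±0.14 and 1.31±0.16 person-months.

```python
# Minimal sketch of the basic COCOMO effort equation (organic mode).
# Coefficients a and b are the published textbook values, not the thesis's calibration.
def cocomo_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Effort in person-months = a * (KLOC ** b)."""
    return a * kloc ** b

for label, sloc in [("entire prototype", 567), ("reused-code count", 466)]:
    kloc = sloc / 1000.0
    print(f"{label}: {sloc} SLOC -> {cocomo_effort(kloc):.2f} person-months")
```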
APA, Harvard, Vancouver, ISO, and other styles
3

Lee, Hock Guan. "A study on predictive analytics application to ship machinery maintenance." Thesis, Monterey California. Naval Postgraduate School, 2013. http://hdl.handle.net/10945/37659.

Full text
Abstract:
Approved for public release; distribution is unlimited
Engine failures on ships are expensive and critically affect operational readiness due to long turn-around times for maintenance. Prior to an engine failure, there are signs of changes in engine characteristics, for example in exhaust gas temperature (EGT), which indicate that the engine is behaving abnormally. These are used as precursors for the modeling of failures. There is a threshold limit of 520 degrees Celsius for the EGT before human intervention is required. With this knowledge, time series forecasting is an appropriate technique to model the EGT as a function of operating hours and load and to predict when the threshold will be crossed. This allows maintenance to be scheduled just in time. When results depart from the predictive model, Cumulative Sum (CUSUM) control charts can then be used to monitor the change early, before an actual problem arises. This paper discusses and demonstrates the proof of principle for one engine and a particular operating profile of a commercial vessel with the use of predictive analytics. The realization with time series forecasting coupled with CUSUM control charts allows this approach to be extended to other attributes beyond EGT.
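As a rough, non-authoritative illustration of the monitoring scheme sketched in this abstract, the snippet below runs a one-sided CUSUM statistic over a simulated EGT series. Only the 520-degree alarm limit is taken from the abstract; the simulated drift, target mean, slack value and decision interval are invented for illustration.

```python
import numpy as np

# Illustrative one-sided (upper) CUSUM for drift in exhaust gas temperature (EGT).
# The 520 C limit comes from the abstract; mu0, k and h are assumed parameters.
rng = np.random.default_rng(0)
egt = 480 + 0.08 * np.arange(600) + rng.normal(0, 3, 600)  # simulated slow drift

mu0, k, h = 480.0, 1.5, 15.0   # target mean, slack value, decision interval (assumed)
s_pos, alarms = 0.0, []
for t, x in enumerate(egt):
    s_pos = max(0.0, s_pos + (x - mu0 - k))
    if s_pos > h:
        alarms.append(t)
        s_pos = 0.0  # reset after signalling

# The CUSUM chart flags the drift long before the hard 520 C limit is reached.
print("first CUSUM alarm at hour:", alarms[0] if alarms else None)
print("hours until EGT itself exceeds 520 C:", int(np.argmax(egt > 520)))
```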
APA, Harvard, Vancouver, ISO, and other styles
4

Mathonat, Romain. "Rule discovery in labeled sequential data : Application to game analytics." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI080.

Full text
Abstract:
Exploiter des jeux de données labelisés est très utile, non seulement pour entrainer des modèles et mettre en place des procédures d'analyses prédictives, mais aussi pour améliorer la compréhension d'un domaine. La découverte de sous-groupes a été l'objet de recherches depuis deux décennies. Elle consiste en la découverte de règles couvrants des ensembles d'objets ayant des propriétés intéressantes, qui caractérisent une classe cible donnée. Bien que de nombreux algorithmes de découverte de sous-groupes aient été proposés à la fois dans le cas des données transactionnelles et numériques, la découverte de règles dans des données séquentielles labelisées a été bien moins étudiée. Dans ce contexte, les stratégies d'exploration exhaustives ne sont pas applicables à des cas d'application rééls, nous devons donc nous concentrer sur des approches heuristiques. Dans cette thèse, nous proposons d'appliquer des modèles de bandit manchot ainsi que la recherche arborescente de Monte Carlo à l'exploration de l'espace de recherche des règles possibles, en utilisant un compromis exploration-exploitation, sur différents types de données tels que les sequences d'ensembles d'éléments, ou les séries temporelles. Pour un budget temps donné, ces approches trouvent un ensemble des top-k règles decouvertes, vis-à-vis de la mesure de qualité choisie. De plus, elles ne nécessitent qu'une configuration légère, et sont indépendantes de la mesure de qualité utilisée. A notre connaissance, il s'agit de la première application de la recherche arborescente de Monte Carlo au cas de la fouille de données séquentielles labelisées. Nous avons conduit des études appronfondies sur différents jeux de données pour illustrer leurs plus-values, et discuté leur résultats quantitatifs et qualitatifs. Afin de valider le bon fonctionnement d'un de nos algorithmes, nous proposons un cas d'utilisation d'analyse de jeux vidéos, plus précisémment de matchs de Rocket League. La decouverte de règles intéressantes dans les séquences d'actions effectuées par les joueurs et leur exploitation dans un modèle de classification supervisée montre l'efficacité et la pertinence de notre approche dans le contexte difficile et réaliste des données séquentielles de hautes dimensions. Elle permet la découverte automatique de techniques de jeu, et peut être utilisée afin de créer de nouveaux modes de jeu, d'améliorer le système de classement, d'assister les commentateurs de "e-sport", ou de mieux analyser l'équipe adverse en amont, par exemple
It is extremely useful to exploit labeled datasets not only to learn models and perform predictive analytics but also to improve our understanding of a domain and its available targeted classes. The subgroup discovery task has been considered for more than two decades. It concerns the discovery of rules covering sets of objects having interesting properties, e.g., they characterize a given target class. Though many subgroup discovery algorithms have been proposed for both transactional and numerical data, discovering rules within labeled sequential data has been much less studied. In that context, exhaustive exploration strategies cannot be used for real-life applications and we have to look for heuristic approaches. In this thesis, we propose to apply bandit models and Monte Carlo Tree Search to explore the search space of possible rules using an exploration-exploitation trade-off, on different data types such as sequences of itemsets or time series. For a given budget, they find a collection of the top-k best rules in the search space w.r.t. the chosen quality measure. They require only a light configuration and are independent of the quality measure used for pattern scoring. To the best of our knowledge, this is the first time that the Monte Carlo Tree Search framework has been exploited in a sequential data mining setting. We have conducted thorough and comprehensive evaluations of our algorithms on several datasets to illustrate their added value, and we discuss their qualitative and quantitative results. To assess the added value of one of our algorithms, we propose a use case of game analytics, more precisely Rocket League match analysis. Discovering interesting rules in sequences of actions performed by players and using them in a supervised classification model shows the efficiency and the relevance of our approach in the difficult and realistic context of high-dimensional data. It supports the automatic discovery of skills and it can be used to create new game modes, to improve the ranking system, to help e-sport commentators, or to better analyse opponent teams, for example.
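The exploration-exploitation idea underlying this thesis can be conveyed with a toy sketch that treats candidate rules as bandit arms and spends a fixed evaluation budget with UCB1. This is a simplified stand-in, not the authors' bandit or Monte Carlo Tree Search algorithms; the rules and the noisy quality function are invented for illustration.

```python
import math, random

# Toy sketch: candidate rules as bandit arms, a fixed budget of noisy quality
# evaluations, and UCB1 to balance exploration and exploitation.
random.seed(1)
true_quality = {f"rule_{i}": random.random() for i in range(20)}  # hypothetical rules

def evaluate(rule):                       # noisy quality measure (e.g. a WRAcc-like score)
    return true_quality[rule] + random.gauss(0, 0.1)

counts = {r: 0 for r in true_quality}
sums = {r: 0.0 for r in true_quality}
budget, t = 500, 0
for r in true_quality:                    # pull each arm once to initialise
    sums[r] += evaluate(r); counts[r] += 1; t += 1
while t < budget:
    ucb = {r: sums[r] / counts[r] + math.sqrt(2 * math.log(t) / counts[r])
           for r in true_quality}
    r = max(ucb, key=ucb.get)             # exploration-exploitation trade-off
    sums[r] += evaluate(r); counts[r] += 1; t += 1

top_k = sorted(true_quality, key=lambda r: sums[r] / counts[r], reverse=True)[:5]
print("top-5 rules by estimated quality:", top_k)
```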
APA, Harvard, Vancouver, ISO, and other styles
5

Reising, Justin. "Function Space Tensor Decomposition and its Application in Sports Analytics." Digital Commons @ East Tennessee State University, 2019. https://dc.etsu.edu/etd/3676.

Full text
Abstract:
Recent advancements in sports information and technology systems have ushered in a new age of applications of both supervised and unsupervised analytical techniques in the sports domain. These automated systems capture large volumes of data points about competitors during live competition. As a result, multi-relational analyses are gaining popularity in the field of Sports Analytics. We review two case studies of dimensionality reduction with Principal Component Analysis and latent factor analysis with Non-Negative Matrix Factorization applied in sports. Also, we provide a review of a framework for extending these techniques for higher order data structures. The primary scope of this thesis is to further extend the concept of tensor decomposition through the use of function spaces. In doing so, we address the limitations of PCA to vector and matrix representations and the CP-Decomposition to tensor representations. Lastly, we provide an application in the context of professional stock car racing.
APA, Harvard, Vancouver, ISO, and other styles
6

Berky, Levente. "Vizualizace dat pro Ansible Automation Analytics." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445590.

Full text
Abstract:
This thesis focuses on creating a web component for rendering charts from a structured data format (hereinafter referred to as the schema) and on creating a user interface for editing the schema for Ansible Automation Analytics. The thesis examines the current implementation of Ansible Automation Analytics and the corresponding API. It further investigates suitable libraries for chart rendering and describes the basics of the technologies used. The practical part describes the requirements for the component and covers the development and implementation of the plugin. The thesis also describes the testing process and plans for the future development of the plugin.
APA, Harvard, Vancouver, ISO, and other styles
7

Zhang, Liangwei. "Big Data Analytics for Fault Detection and its Application in Maintenance." Doctoral thesis, Luleå tekniska universitet, Drift, underhåll och akustik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-60423.

Full text
Abstract:
Big Data analytics has attracted intense interest recently for its attempt to extract information, knowledge and wisdom from Big Data. In industry, with the development of sensor technology and Information & Communication Technologies (ICT), reams of high-dimensional, streaming, and nonlinear data are being collected and curated to support decision-making. The detection of faults in these data is an important application in eMaintenance solutions, as it can facilitate maintenance decision-making. Early discovery of system faults may ensure the reliability and safety of industrial systems and reduce the risk of unplanned breakdowns. Complexities in the data, including high dimensionality, fast-flowing data streams, and high nonlinearity, impose stringent challenges on fault detection applications. From the data modelling perspective, high dimensionality may cause the notorious “curse of dimensionality” and lead to deterioration in the accuracy of fault detection algorithms. Fast-flowing data streams require algorithms to give real-time or near real-time responses upon the arrival of new samples. High nonlinearity requires fault detection approaches to have sufficiently expressive power and to avoid overfitting or underfitting problems. Most existing fault detection approaches work in relatively low-dimensional spaces. Theoretical studies on high-dimensional fault detection mainly focus on detecting anomalies on subspace projections. However, these models are either arbitrary in selecting subspaces or computationally intensive. To meet the requirements of fast-flowing data streams, several strategies have been proposed to adapt existing models to an online mode to make them applicable in stream data mining. But few studies have simultaneously tackled the challenges associated with high dimensionality and data streams. Existing nonlinear fault detection approaches cannot provide satisfactory performance in terms of smoothness, effectiveness, robustness and interpretability. New approaches are needed to address this issue. This research develops an Angle-based Subspace Anomaly Detection (ABSAD) approach to fault detection in high-dimensional data. The efficacy of the approach is demonstrated in analytical studies and numerical illustrations. Based on the sliding window strategy, the approach is extended to an online mode to detect faults in high-dimensional data streams. Experiments on synthetic datasets show the online extension can adapt to the time-varying behaviour of the monitored system and, hence, is applicable to dynamic fault detection. To deal with highly nonlinear data, the research proposes an Adaptive Kernel Density-based (Adaptive-KD) anomaly detection approach. Numerical illustrations show the approach’s superiority in terms of smoothness, effectiveness and robustness.
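A minimal sketch of the density-based, sliding-window monitoring idea discussed above is given below. It is not the ABSAD or Adaptive-KD algorithm from the thesis; the simulated data stream, window size and alarm threshold are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Sliding-window, density-based fault flagging: score each new sample against a
# KDE fitted on the most recent window and flag low-density points as candidates.
rng = np.random.default_rng(42)
stream = rng.normal(0, 1, (1000, 3))     # simulated 3-dimensional sensor stream
stream[700] += 6.0                       # inject one fault-like outlier

window, alarms = 200, []
for t in range(window, len(stream)):
    kde = gaussian_kde(stream[t - window:t].T)   # fit on recent history only
    score = kde(stream[t])[0]                    # density of the newest sample
    if score < 1e-4:                             # assumed alarm threshold
        alarms.append(t)

print("flagged time steps:", alarms)
```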
APA, Harvard, Vancouver, ISO, and other styles
8

Rezai, Arash. "Evaluation of development methods for mobile applications : Soundhailer’s site and iOS application." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191124.

Full text
Abstract:
To remain competitive and successful in today’s globalized market, companies need a strategy to ensure that they are constantly at the leading edge in terms of products and services. The implementation of a mobile application is one approach to fulfill this requirement. This report gives an overview of the topic by briefly introducing today’s tools for mobile application development and subsequently focusing on the Soundhailer application, which was developed by the author. The problem in focus is to find out whether a native or a web-based application is preferable as an iOS application production strategy for a start-up company. Moreover, the report gives an insight into a well-structured method that works well for setting up measuring points for a website, including Soundhailer’s, and into the actual realization of a development tool for iOS development. This insight builds on considerable help from a former student of the Royal Institute of Technology with previous experience in the area. To show possible similarities and differences between theory and reality, these experiences are then compared with the theoretical part. Finally, the results are critically discussed. Two versions of the application were developed, a native version and a web-based version, and the results show that both native and web-based applications can be convenient solutions for companies to implement and use. The results also provide a foundation upon which others can build and better understand how an iOS application is used and developed.
För att förbli konkurrenskraftiga och framgångsrika i dagens globaliserade marknad, behöver företagen en strategi för att se till att de ständigt är i framkant när det gäller produkter och tjänster. Att framställa en mobilapplikation är ett av många sätt för att nå upp till detta krav. Denna rapport ger en överblick över ämnet genom att först gå igenom dagens utvecklingsverktyg för mobilapplikationer och därefter fokusera på företaget Soundhailers mobilapplikation, eftersom denne har utvecklats av undertecknad. Problemet i fokus består av att ta reda på om en hårdvarukodad eller webbaserad applikation är att föredra för produktionsstrategin av en iOSapplikation för ett start-up-företag. Dessutom ger rapporten en inblick i en välstrukturerad metod som fungerar bra för att inrätta mätpunkter för en webbplats, med fokus på Soundhailers webbplats, samt det faktiska genomförandet av ett utvecklingsverktyg för iOS-utveckling. Denna insikt bygger på en hel del hjälp från en före detta elev på Kungliga Tekniska Högskolan som har tidigare erfarenheter inom området. För att sedan visa potentiella likheter och skillnader mellan teori och verklighet jämförs erfarenheterna med den teoretiska delen. Slutligen diskuteras resultaten kritiskt. Två versioner av applikationen har utvecklats, både en hårdvarukodad version och en webbaserad version, och resultaten visar att både hårdvarukodade och webbaserade applikationer kan vara praktiska lösningar som företag kan implementera och använda sig av. Resultaten ger också en grund på vilken andra kan bygga vidare på samt en bättre förståelse för hur en iOSapplikation kan användas och utvecklas
APA, Harvard, Vancouver, ISO, and other styles
9

Raveneau, Vincent. "Interaction in Progressive Visual Analytics : an application to progressive sequential pattern mining." Thesis, Nantes, 2020. http://www.theses.fr/2020NANT4022.

Full text
Abstract:
Le paradigme de Progressive Visual Analytics (PVA) a été proposé en réponse aux difficultés rencontrées par les Visual Analytics lors du traitement de données massives ou de l’utilisation d’algorithmes longs, par l’usage de résultats intermédiaires et par l’interaction entre humain et algorithmes en cours d’exécution. Nous nous intéressons d’abord à la notion d’“interaction”, mal définie en PVA, dans le but d’établir une vision structurée de ce qu’est l’interaction avec un algorithme en PVA. Nous nous intéressons ensuite à la conception et à l’implémentation d’un système et d’un algorithme progressif de fouille de motifs séquentiels, qui permettent d’explorer à la fois les motifs et les données sous-jacentes, en nous concentrant sur les interactions entre analyste et algorithme. Nos travaux ouvrent des perspectives concernant 1/ l’assistance de l’analyste dans ses interactions avec un algorithme dans un contexte de PVA; 2/ une exploration poussée des interactions en PVA; 3/ la création d’algorithmes nativement progressifs, ayant la progressivité et les interactions au cœur de leur conception
The Progressive Visual Analytics (PVA) paradigm has been proposed to alleviate difficulties of Visual Analytics when dealing with large datasets or time-consuming algorithms, by using intermediate results and interactions between the human and the running algorithm. Our work is twofold. First, by considering that the notion of “interaction” was not well defined for PVA, we focused on providing a structured vision of what interacting with an algorithm in PVA means. Second, we focused on the design and implementation of a progressive sequential pattern mining algorithm and system, allowing to explore both the patterns and the underlying data, with a focus on the analyst/algorithm interactions. The perspectives opened by our work deal with 1/ assisting analysts in their interactions with algorithm in PVA settings; 2/ further exploring interaction in PVA ; 3/ creating natively progressive algorithms, for which progressiveness and interaction are at the core of the design
APA, Harvard, Vancouver, ISO, and other styles
10

Abounia, Omran Behzad. "Application of Data Mining and Big Data Analytics in the Construction Industry." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu148069742849934.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Alsadhan, Majed. "An application of topic modeling algorithms to text analytics in business intelligence." Thesis, Kansas State University, 2014. http://hdl.handle.net/2097/17580.

Full text
Abstract:
Master of Science
Department of Computing and Information Sciences
Doina Caragea
William H. Hsu
In this work, we focus on the task of clustering businesses in the state of Kansas based on the content of their websites and their business listing information. Our goal is to cluster the businesses and overcome the challenges facing current approaches, such as data noise, a low number of clustered businesses, and the lack of an evaluation approach. We propose an LSA-based approach to analyze the businesses’ data and cluster those businesses by using the Bisecting K-Means algorithm. In this approach, we analyze the businesses’ data by using LSA and produce representations of the businesses in a reduced space. We then use these representations to cluster the businesses by applying the Bisecting K-Means algorithm. We also apply an existing LDA-based approach to cluster the businesses and compare the results with our proposed LSA-based approach at the end. In this work, we evaluate the results by using a human-expert-based evaluation procedure. At the end, we visualize the clusters produced in this work by using Google Earth and Tableau. According to our evaluation procedure, the LDA-based approach performed slightly better than the LSA-based approach. However, the LDA-based approach had some limitations: a low number of clustered businesses and the inability to produce a hierarchical tree for the clusters. With the LSA-based approach, we were able to cluster all the businesses and produce a hierarchical tree for the clusters.
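The LSA-plus-Bisecting-K-Means pipeline described in this abstract can be sketched with scikit-learn (BisectingKMeans requires version 1.1 or later). The toy documents below are invented stand-ins for the business website texts, which are not available here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import BisectingKMeans   # scikit-learn >= 1.1

# Toy stand-ins for business website texts.
docs = [
    "wheat farm equipment and grain storage",
    "organic grain and wheat harvesting services",
    "family law attorney and legal consultation",
    "criminal defense lawyer and court representation",
    "italian restaurant pizza and pasta dining",
    "mexican restaurant tacos and catering",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
lsa = TruncatedSVD(n_components=3, random_state=0)   # LSA = truncated SVD on tf-idf
reduced = lsa.fit_transform(tfidf)                   # businesses in a reduced space

labels = BisectingKMeans(n_clusters=3, random_state=0).fit_predict(reduced)
for doc, label in zip(docs, labels):
    print(label, doc)
```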
APA, Harvard, Vancouver, ISO, and other styles
12

Alvarsson, Andreas. "The development of a sports statistics web application : Sports Analytics and Data Models for a sports data web application." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138504.

Full text
Abstract:
Sports and technology have always co-operated to bring better and more specific sports statistics. The collection of sports game data, as well as the ability to generate valuable sports statistics from it, is growing. This thesis investigates the development of a sports statistics application that should be able to collect sports game data, structure the data according to suitable data models and show statistics in a proper way. The application was set to be a web application developed using modern web technologies. This purpose led to a comparison of different software stack solutions and web frameworks. A theoretical study of sports analytics was also conducted, which gave a foundation for how sports data could be stored and how valuable sports statistics could be generated. The resulting design of the prototype for the sports statistics application was evaluated. Interviews with persons working in sports contexts found the prototype to be both user-friendly and functional, and to fulfil its purpose of generating valuable statistics during sports games.
APA, Harvard, Vancouver, ISO, and other styles
13

Cui, Henggang. "Exploiting Application Characteristics for Efficient System Support of Data-Parallel Machine Learning." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/908.

Full text
Abstract:
Large scale machine learning has many characteristics that can be exploited in the system designs to improve its efficiency. This dissertation demonstrates that the characteristics of the ML computations can be exploited in the design and implementation of parameter server systems, to greatly improve the efficiency by an order of magnitude or more. We support this thesis statement with three case study systems, IterStore, GeePS, and MLtuner. IterStore is an optimized parameter server system design that exploits the repeated data access pattern characteristic of ML computations. The designed optimizations allow IterStore to reduce the total run time of our ML benchmarks by up to 50×. GeePS is a parameter server that is specialized for deep learning on distributed GPUs. By exploiting the layer-by-layer data access and computation pattern of deep learning, GeePS provides almost linear scalability from single-machine baselines (13× more training throughput with 16 machines), and also supports neural networks that do not fit in GPU memory. MLtuner is a system for automatically tuning the training tunables of ML tasks. It exploits the characteristic that the best tunable settings can often be decided quickly with just a short trial time. By making use of optimization-guided online trial-and-error, MLtuner can robustly find and re-tune tunable settings for a variety of machine learning applications, including image classification, video classification, and matrix factorization, and is over an order of magnitude faster than traditional hyperparameter tuning approaches.
APA, Harvard, Vancouver, ISO, and other styles
14

Zillies, Jan. "Gelatin Nanoparticles for Targeted Oligonucleotide Delivery to Kupffer Cells - Analytics, Formulation Development, Practical Application." Diss., lmu, 2007. http://nbn-resolving.de/urn:nbn:de:bvb:19-66165.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Yilmaz, Bertan. "Customer Analytics and Cluster Analysis : A Clustering Application for CustomerSegmentation Based on CX Data." Thesis, Uppsala universitet, Tillämpad matematik och statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-377522.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Lopez, Betsy Diamar Balbin, Jimmy Alexander Armas Aguirre, Diego Antonio Reyes Coronado, and Paola A. Gonzalez. "Wearable technology model to control and monitor hypertension during pregnancy." IEEE Computer Society, 2018. http://hdl.handle.net/10757/624723.

Full text
Abstract:
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher in which it has been published.
In this paper, we proposed a wearable technology model to control and monitor hypertension during pregnancy. We enhanced prior models by adding a series of health parameters that could potentially prevent and correct hypertension disorders in pregnancy. Our proposed model also emphasizes the application of real-time data analysis for the healthcare organization. In this process, we also assessed the current technologies and systems applications offered in the market. The model consists of four phases: 1. The health parameters of the patient are collected through a wearable device; 2. The data is received by a mobile application; 3. The data is stored in a cloud database; 4. The data is analyzed in real time using a data analytics application. The model was validated and piloted in a public hospital in Lima, Peru. The preliminary results showed an 11% increase in the number of controlled patients and a 7% reduction in maternal deaths, among other relevant health factors, allowing healthcare providers to take corrective and preventive actions.
Peer reviewed
APA, Harvard, Vancouver, ISO, and other styles
17

Yu, Xiang. "Analysis of new sentiment and its application to finance." Thesis, Brunel University, 2014. http://bura.brunel.ac.uk/handle/2438/9062.

Full text
Abstract:
We report our investigation of how news stories influence the behaviour of tradable financial assets, in particular, equities. We consider the established methods of turning news events into a quantifiable measure and explore the models which connect these measures to financial decision making and risk control. The study of our thesis is built around two practical, as well as, research problems which are determining trading strategies and quantifying trading risk. We have constructed a new measure which takes into consideration (i) the volume of news and (ii) the decaying effect of news sentiment. In this way we derive the impact of aggregated news events for a given asset; we have defined this as the impact score. We also characterise the behaviour of assets using three parameters, which are return, volatility and liquidity, and construct predictive models which incorporate impact scores. The derivation of the impact measure and the characterisation of asset behaviour by introducing liquidity are two innovations reported in this thesis and are claimed to be contributions to knowledge. The impact of news on asset behaviour is explored using two sets of predictive models: the univariate models and the multivariate models. In our univariate predictive models, a universe of 53 assets were considered in order to justify the relationship of news and assets across 9 different sectors. For the multivariate case, we have selected 5 stocks from the financial sector only as this is relevant for the purpose of constructing trading strategies. We have analysed the celebrated Black-Litterman model (1991) and constructed our Bayesian multivariate predictive models such that we can incorporate domain expertise to improve the predictions. Not only does this suggest one of the best ways to choose priors in Bayesian inference for financial models using news sentiment, but it also allows the use of current and synchronised data with market information. This is also a novel aspect of our work and a further contribution to knowledge.
APA, Harvard, Vancouver, ISO, and other styles
18

Benson, Derek. "Application of Data Analytics for Prediction of Suicide Rates at the State and National Levels." DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1993.

Full text
Abstract:
The increasing suicide rate in the United States has amplified the need to ensure that regions with high suicide risk receive adequate funding for programs and related resources for prevention methods. The way in which organizations dedicated to preventing suicides distribute funding could be improved with the development of predictive models for suicide rates. In this study, a multiple linear regression model at a national level was developed to identify relevant factors associated with suicide. The national level model was developed in two phases; the first using response variable data and explanatory variable data from the same time period, and the second with the response variable data shifted one time period to create a more accurate model for prediction. The models had k-fold R-squared values of 0.676 and 0.675. The national model identified four variables to include in a predictive state level model: Foreclosure Rates, Violent Crime Rates, Gini ratio, and Consumption Volume. In the second part of this study, the use of Twitter data in a state level model was evaluated. Tweet terms relating to suicide were identified in fifteen states over a thirty-one-day period and used to calculate three variables: Tweet rate, Favorite rate, and Retweet rate. Each of these three variables for the terms “suicide” and “suicidal” underwent an Analysis of Variance test (ANOVA) to check for differences between states. Each ANOVA test resulted in a p-value less than 0.0001, providing strong evidence that there was a difference in Tweet rate, Favorite rate, and Retweet rate for the two search phrases analyzed among the states. Next, a Pearson Product-Moment correlation coefficient and Pearson Rho correlation coefficient were evaluated for each Twitter variable and the states’ historical suicide rates. All computed correlation coefficients were between -0.15 and 0.3, suggesting that there is, at best, a weak correlation between the Twitter variables and a state’s historical suicide rate. The results from the Twitter data analysis suggest that it is too early to accurately incorporate such data into a state level multiple linear regression model. The results of this study would help in further development of a state level model that allows organizations dedicated to reducing suicides to allocate related resources more efficiently.
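Two of the statistical steps mentioned in this abstract, the one-way ANOVA on state-level tweet rates and the Pearson correlation against historical suicide rates, can be illustrated on synthetic data. None of the study's actual figures are used; all numbers below are invented.

```python
import numpy as np
from scipy.stats import pearsonr, f_oneway

rng = np.random.default_rng(7)

# (1) One-way ANOVA: does the daily tweet rate differ between (synthetic) states?
state_a = rng.poisson(12, 31)     # 31 days of tweet counts for a hypothetical state
state_b = rng.poisson(20, 31)
state_c = rng.poisson(15, 31)
f_stat, p_value = f_oneway(state_a, state_b, state_c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# (2) Pearson correlation between a Twitter variable and historical suicide rates.
tweet_rate = rng.normal(15, 4, 15)                          # one value per state
historical_rate = 14 + 0.1 * tweet_rate + rng.normal(0, 3, 15)
r, p = pearsonr(tweet_rate, historical_rate)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```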
APA, Harvard, Vancouver, ISO, and other styles
19

Matteuzzi, Tommaso. "Network diffusion methods for omics big bio data analytics and interpretation with application to cancer datasets." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13660/.

Full text
Abstract:
In current biomedical research, a fundamental step towards understanding the mechanisms at the root of a disease is the identification of disease modules, i.e. those subnetworks of the interactome, the network of protein-protein interactions, with a high number of genetic alterations. However, the incompleteness of the network and the high variability of the altered genes make solving this problem non-trivial. Physical methods that exploit the properties of diffusive processes on networks, which are the focus of this thesis, are the ones that achieve the best performance. In the first part of my work, I investigated the theory of diffusion and random walks on networks, finding interesting relations with clustering techniques and with other physical models whose dynamics are described by the Laplacian matrix. I then implemented a technique based on network diffusion and applied it to gene expression and somatic mutation data from three different types of cancer. The method is organized in two parts. After selecting a subset of the interactome nodes, we associate with each of them an initial quantity of information that reflects the "degree" of alteration of the gene. The diffusion algorithm propagates the initial information across the network, reaching the stationary state after a transient. At this point, the amount of fluid in each node is used to build a ranking of the genes. In the second part, disease modules are identified through a network resampling procedure. The analysis allowed us to identify a considerable number of genes already known in the literature on the cancer types studied, as well as a set of other related genes that could be interesting candidates for further investigation. Finally, through a Gene Set Enrichment procedure we tested the correlation of the identified modules with known biological pathways.
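A minimal sketch of the diffusion step described above is shown below, using a random walk with restart on a small stand-in graph. The graph, the seed genes and the restart probability are illustrative assumptions rather than the thesis's actual interactome, data or parameters.

```python
import numpy as np
import networkx as nx

# Random walk with restart: place initial "alteration" scores on seed nodes,
# propagate them over the network, and rank nodes by the stationary amount of fluid.
G = nx.karate_club_graph()                       # stand-in for the interactome
A = nx.to_numpy_array(G)
W = A / A.sum(axis=0, keepdims=True)             # column-normalised transition matrix

p0 = np.zeros(G.number_of_nodes())
for seed in (0, 33):                             # hypothetical altered genes
    p0[seed] = 0.5

alpha = 0.3                                      # restart probability (assumed)
p = p0.copy()
for _ in range(200):                             # iterate towards the stationary state
    p = (1 - alpha) * W @ p + alpha * p0

ranking = np.argsort(-p)                         # nodes ranked by diffused score
print("top-ranked nodes:", ranking[:5].tolist())
```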
APA, Harvard, Vancouver, ISO, and other styles
20

Aboturkia, Amna. "A Study of the Effectiveness of Mobile Technology in the Major Fields and Opioid Epidemic." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1562672587251166.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Bothorel, Gwenael. "Algorithmes automatiques pour la fouille visuelle de données et la visualisation de règles d’association : application aux données aéronautiques." Phd thesis, Toulouse, INPT, 2014. http://oatao.univ-toulouse.fr/13783/1/bothorel.pdf.

Full text
Abstract:
In recent years, we have witnessed a veritable explosion in data production in many fields, such as social networks and online commerce. This recent phenomenon is reinforced by the widespread adoption of connected devices, whose use has become almost constant. The aeronautical domain is no exception to this trend. Indeed, the growing need for data, driven by the evolution of air traffic management systems and by events, has given rise to an awareness of their importance and of a new way of approaching them, whether in terms of storage, availability or exploitation. Hosting capacities have been adapted and do not constitute a major difficulty; the difficulty lies rather in processing the information and extracting knowledge from it. Within the framework of Visual Analytics, an emerging discipline born out of the aftermath of the 2001 attacks, this extraction combines algorithmic and visual approaches in order to benefit simultaneously from human flexibility, creativity and knowledge, and from the computing power of computer systems. This thesis work focused on achieving this combination while keeping the human in a central, decision-making position. On the one hand, the user's visual exploration of the data drives the generation of association rules, which establish relations between them. On the other hand, these rules are exploited by automatically configuring the visualization of the data they concern, in order to highlight them. To this end, this bidirectional process between data and rules was formalized and then illustrated, using recent air traffic recordings, on the Videam platform that we developed. The platform integrates, in a modular and evolving environment, several HCI and algorithmic building blocks enabling interactive exploration of the data and of the association rules, while leaving the user in overall control of the process, in particular by parameterizing and steering the algorithms.
APA, Harvard, Vancouver, ISO, and other styles
22

Enoch, John. "Application of Decision Analytic Methods to Cloud Adoption Decisions." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-25560.

Full text
Abstract:
This thesis gives an example of how decision analytic methods can be applied to choices in the adoption of cloud computing. The lifecycle of IT systems from planning to retirement is rapidly changing. Making a technology decision that can be justified and explained in terms of outcomes and benefits can be increasingly challenging without a systematic approach underlying the decision making process. It is proposed that better, more informed cloud adoption decisions would be taken if organisations used a structured approach to frame the problem to be solved and then applied trade-offs using an additive utility model. The trade-offs that can be made in the context of cloud adoption decisions are typically complex and rarely intuitively obvious. A structured approach is beneficial in that it enables decision makers to define and seek outcomes that deliver optimum benefits, aligned with their risk profile. The case study demonstrated that proven decision tools are helpful to decision makers faced with a complex cloud adoption decision but are likely to be more suited to the more intractable decision situations.
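The additive utility model mentioned in this abstract can be illustrated with a few lines of Python. The criteria, weights and scores below are purely hypothetical and are not taken from the thesis.

```python
# Additive utility: each cloud adoption option is scored as a weighted sum of
# normalised criterion utilities (higher is better). All values are illustrative.
weights = {"cost": 0.4, "security": 0.35, "scalability": 0.25}

options = {
    "public cloud":  {"cost": 0.9, "security": 0.5, "scalability": 0.9},
    "private cloud": {"cost": 0.4, "security": 0.9, "scalability": 0.6},
    "hybrid cloud":  {"cost": 0.6, "security": 0.8, "scalability": 0.8},
}

for name, scores in options.items():
    utility = sum(weights[c] * scores[c] for c in weights)
    print(f"{name:13s} -> utility {utility:.2f}")
```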
APA, Harvard, Vancouver, ISO, and other styles
23

Svenningsson, Philip, and Maximilian Drubba. "How to capture that business value everyone talks about? : An exploratory case study on business value in agile big data analytics organizations." Thesis, Internationella Handelshögskolan, Jönköping University, IHH, Företagsekonomi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-48882.

Full text
Abstract:
Background: Big data analytics has been referred to as a hype over the past decade, making many organizations adopt data-driven processes to stay competitive in their industries. Many of the organizations adopting big data analytics use agile methodologies where the most important outcome is to maximize business value. Multiple scholars argue that big data analytics leads to increased business value; however, there is a theoretical gap within the literature about how agile organizations can capture this business value in a practically relevant way. Purpose: Building on a combined definition that capturing business value means being able to define, communicate and measure it, the purpose of this thesis is to explore how agile organizations capture business value from big data analytics, as well as to find out what aspects of value are relevant when defining it. Method: This study follows an abductive research approach by having a foundation in theory through the use of a qualitative research design. A single case study of Nike Inc. was conducted to generate the primary data for this thesis, where nine participants from different domains within the organization were interviewed and the results were analysed with a thematic content analysis. Findings: The findings indicate that, in order for agile organizations to capture business value generated from big data analytics, they need to (1) define the value through a synthesized value map, (2) establish a common language with the help of a business translator and agile methods, and (3) measure the business value before, during and after the development by using individually identified KPIs derived from the business value definition.
APA, Harvard, Vancouver, ISO, and other styles
24

Médoc, Nicolas. "A visual analytics approach for multi-resolution and multi-model analysis of text corpora : application to investigative journalism." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCB042/document.

Full text
Abstract:
À mesure que la production de textes numériques croît exponentiellement, un besoin grandissant d’analyser des corpus de textes se manifeste dans beaucoup de domaines d’application, tant ces corpus constituent des sources inépuisables d’information et de connaissance partagées. Ainsi proposons-nous dans cette thèse une nouvelle approche de visualisation analytique pour l’analyse de corpus textuels, mise en œuvre pour les besoins spécifiques du journalisme d’investigation. Motivées par les problèmes et les tâches identifiés avec une journaliste d’investigation professionnelle, les visualisations et les interactions ont été conçues suivant une méthodologie centrée utilisateur, impliquant l’utilisateur durant tout le processus de développement. En l’occurrence, les journalistes d’investigation formulent des hypothèses, explorent leur sujet d’investigation sous tous ses angles, à la recherche de sources multiples étayant leurs hypothèses de travail. La réalisation de ces tâches, très fastidieuse lorsque les corpus sont volumineux, requiert l’usage de logiciels de visualisation analytique se confrontant aux problématiques de recherche abordées dans cette thèse. D’abord, la difficulté de donner du sens à un corpus textuel vient de sa nature non structurée. Nous avons donc recours au modèle vectoriel et son lien étroit avec l’hypothèse distributionnelle, ainsi qu’aux algorithmes qui l’exploitent pour révéler la structure sémantique latente du corpus. Les modèles de sujets et les algorithmes de biclustering sont efficaces pour l’extraction de sujets de haut niveau. Ces derniers correspondent à des groupes de documents concernant des sujets similaires, chacun représenté par un ensemble de termes extraits des contenus textuels. Une telle structuration par sujet permet notamment de résumer un corpus et de faciliter son exploration. Nous proposons une nouvelle visualisation, une carte pondérée des sujets, qui dresse une vue d’ensemble des sujets de haut niveau. Elle permet d’une part d’interpréter rapidement les contenus grâce à de multiples nuages de mots, et d’autre part, d’apprécier les propriétés des sujets telles que leur taille relative et leur proximité sémantique. Bien que l’exploration des sujets de haut niveau aide à localiser des sujets d’intérêt ainsi que leur voisinage, l’identification de faits précis, de points de vue ou d’angles d’analyse, en lien avec un événement ou une histoire, nécessite un niveau de structuration plus fin pour représenter des variantes de sujet. Cette structure imbriquée révélée par Bimax, une méthode de biclustering basée sur des motifs avec chevauchement, capture au sein des biclusters les co-occurrences de termes partagés par des sous-ensembles de documents pouvant dévoiler des faits, des points de vue ou des angles associés à des événements ou des histoires communes. Cette thèse aborde les problèmes de visualisation de biclusters avec chevauchement en organisant les biclusters terme-document en une hiérarchie qui limite la redondance des termes et met en exergue les parties communes et distinctives des biclusters. Nous avons évalué l’utilité de notre logiciel d’abord par un scénario d’utilisation doublé d’une évaluation qualitative avec une journaliste d’investigation. En outre, les motifs de co-occurrence des variantes de sujet révélées par Bima. sont déterminés par la structure de sujet englobante fournie par une méthode d’extraction de sujet. 
Cependant, la communauté a peu de recul quant au choix de la méthode et son impact sur l’exploration et l’interprétation des sujets et de ses variantes. Ainsi nous avons conduit une expérience computationnelle et une expérience utilisateur contrôlée afin de comparer deux méthodes d’extraction de sujet. D’un côté Coclu. est une méthode de biclustering disjointe, et de l’autre, hirarchical Latent Dirichlet Allocation (hLDA) est un modèle de sujet probabiliste dont les distributions de probabilité forment une structure de bicluster avec chevauchement. (...)
As the production of digital texts grows exponentially, a greater need to analyze text corpora arises in various domains of application, insofar as they constitute inexhaustible sources of shared information and knowledge. We therefore propose in this thesis a novel visual analytics approach for the analysis of text corpora, implemented for the real and concrete needs of investigative journalism. Motivated by the problems and tasks identified with a professional investigative journalist, visualizations and interactions are designed through a user-centered methodology involving the user during the whole development process. Specifically, investigative journalists formulate hypotheses and explore exhaustively the field under investigation in order to multiply sources showing pieces of evidence related to their working hypothesis. Carrying out such tasks in a large corpus is however a daunting endeavor and requires visual analytics software addressing several challenging research issues covered in this thesis. First, the difficulty to make sense of a large text corpus lies in its unstructured nature. We resort to the Vector Space Model (VSM) and its strong relationship with the distributional hypothesis, leveraged by multiple text mining algorithms, to discover the latent semantic structure of the corpus. Topic models and biclustering methods are recognized to be well suited to the extraction of coarse-grained topics, i.e. groups of documents concerning similar topics, each one represented by a set of terms extracted from textual contents. We provide a new Weighted Topic Map visualization that conveys a broad overview of coarse-grained topics by allowing quick interpretation of contents through multiple tag clouds while depicting the topical structure such as the relative importance of topics and their semantic similarity. Although the exploration of the coarse-grained topics helps locate topic of interest and its neighborhood, the identification of specific facts, viewpoints or angles related to events or stories requires finer level of structuration to represent topic variants. This nested structure, revealed by Bimax, a pattern-based overlapping biclustering algorithm, captures in biclusters the co-occurrences of terms shared by multiple documents and can disclose facts, viewpoints or angles related to events or stories. This thesis tackles issues related to the visualization of a large amount of overlapping biclusters by organizing term-document biclusters in a hierarchy that limits term redundancy and conveys their commonality and specificities. We evaluated the utility of our software through a usage scenario and a qualitative evaluation with an investigative journalist. In addition, the co-occurrence patterns of topic variants revealed by Bima. are determined by the enclosing topical structure supplied by the coarse-grained topic extraction method which is run beforehand. Nonetheless, little guidance is found regarding the choice of the latter method and its impact on the exploration and comprehension of topics and topic variants. Therefore we conducted both a numerical experiment and a controlled user experiment to compare two topic extraction methods, namely Coclus, a disjoint biclustering method, and hierarchical Latent Dirichlet Allocation (hLDA), an overlapping probabilistic topic model. The theoretical foundation of both methods is systematically analyzed by relating them to the distributional hypothesis. 
The numerical experiment provides statistical evidence of the difference between the resulting topical structure of both methods. The controlled experiment shows their impact on the comprehension of topic and topic variants, from analyst perspective. (...)
APA, Harvard, Vancouver, ISO, and other styles
25

Atif, Lynda. "P©, une approche collaborative d'analyse des besoins et des exigences dirigée par les problèmes : le cas de développement d'une application Analytics RH." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLED042/document.

Full text
Abstract:
Le développement des systèmes d’information numériques et plus particulièrement les systèmes interactifs d’aide à la décision (SIAD) orientés données (Application Analytics) rencontre des échecs divers.La plupart des études montrent que ces échecs de projets de développement de SIAD relève de la phase d’analyse des besoins et des exigences. Les exigences qu'un système doit satisfaire sont insuffisamment définies à partir de besoins réels des utilisateurs finaux.D’un point de vue théorique, l’analyse de l’état de l’art, mais également du contexte industriel particulier, conduit donc à porter une attention particulière à cette phase et à élaborer une approche collaborative d’analyse des besoins et des exigences dirigée par les problèmes.Un système d’aide à la décision est avant tout un système d’aide à la résolution de problèmes et le développement de ce type d’artefact ne peut donc se faire sans avoir convenablement identifié en amont les problèmes de décision auxquels font face les utilisateurs décideurs, afin d’en déduire les exigences et le type de SIAD.Cette approche, par un renversement de la primauté implicite de la solution technique par rapport à la typologie des problèmes de décision, a été explicitée et mise en œuvre pour le développement d’une Application Analytics qui a permis d’atteindre l’objectif attendu : un système efficace et qui satisfasse d’un triple point de vue technique, fonctionnel et ergonomique, ses différents utilisateurs finaux
The design of digital information systems, especially interactive data-driven decision support systems (DSS) (Analytics Applications), often misses its target. Most studies have shown that the sources of most DSS design failures are rooted in the analysis of the users’ needs and of the requirements a system has to meet and comply with. From a theoretical point of view, the analysis of the state of the art, combined with the analysis of specific industrial contexts, leads to a focus on this critical step, and consequently to the development of a collaborative, problem-driven requirements engineering approach. A DSS, first and foremost, is a problem-solving support system. This implies that developing such an artefact cannot be done without an adequate upstream identification of the end-users’ decision problems, prior to defining the decision makers’ requirements and the appropriate type of DSS. Characterized by the reversal of the implicit primacy of the technical solution over the typology of decision problems, this approach has been elaborated and implemented to design an Analytics Application. As a result, it made it possible to reach the expected objective: an effective system that meets the different end-users’ expectations from a technical, functional and ergonomic standpoint.
APA, Harvard, Vancouver, ISO, and other styles
26

Gallego-Durán, Francisco J. "Estimating difficulty of learning activities in design stages: A novel application of Neuroevolution." Doctoral thesis, Universidad de Alicante, 2015. http://hdl.handle.net/10045/53697.

Full text
Abstract:
In every learning or training environment, exercises are the basis for practical learning. Learners need to practice in order to acquire new abilities and perfect those gained previously. However, not every exercise is valid for every learner: learners require exercises that match their ability levels. Hence, the difficulty of an exercise could be defined as the amount of effort that a learner requires to successfully complete the exercise (its learning cost). Difficulties that are too high tend to discourage learners and make them drop out, whereas difficulties that are too low are perceived as unchallenging, resulting in loss of interest. Correctly estimating difficulties is a hard and error-prone problem that tends to be solved manually using domain-expert knowledge. Underestimating or overestimating difficulty creates a problem for learners, increasing dropout rates in learning environments. This paper presents a novel approach to improve difficulty estimations by using Neuroevolution. The method is based on measuring the computational cost that Neuroevolution algorithms require to successfully complete a given exercise and establishing similarities with previously gathered information from learners. For the specific experiments presented, a game called PLMan has been used. PLMan is a PacMan-like game in which users have to program the Artificial Intelligence of the main character using a Prolog knowledge base. Results show that there exists a correlation between students’ learning costs and those of Neuroevolution. This suggests that the approach is valid, and the measured difficulty of Neuroevolution algorithms may be used as an estimation of students’ difficulty in the proposed environment.
APA, Harvard, Vancouver, ISO, and other styles
27

Wiltshire, Serge William. "On The Application Of Computational Modeling To Complex Food Systems Issues." ScholarWorks @ UVM, 2019. https://scholarworks.uvm.edu/graddis/1077.

Full text
Abstract:
Transdisciplinary food systems research aims to merge insights from multiple fields, often revealing confounding, complex interactions. Computational modeling offers a means to discover patterns and formulate novel solutions to such systems-level problems. The best models serve as hubs—or boundary objects—which ground and unify a collaborative, iterative, and transdisciplinary process of stakeholder engagement. This dissertation demonstrates the application of agent-based modeling, network analytics, and evolutionary computational optimization to the pressing food systems problem areas of livestock epidemiology and global food security. It is comprised of a methodological introduction, an executive summary, three journal-article formatted chapters, and an overarching discussion section. Chapter One employs an agent-based computer model (RUSH-PNBM v.1.1) developed to study the potential impact of the trend toward increased producer specialization on resilience to catastrophic epidemics within livestock production chains. In each run, an infection is introduced and may spread according to probabilities associated with the various modes of contact between hog producer, feed mill, and slaughter plant agents. Experimental data reveal that more-specialized systems are vulnerable to outbreaks at lower spatial densities, have more abrupt percolation transitions, and are characterized by less-predictable outcomes; suggesting that reworking network structures may represent a viable means to increase biosecurity. Chapter Two uses a calibrated, spatially-explicit version of RUSH-PNBM (v.1.2) to model the hog production chains within three U.S. states. Key metrics are calculated after each run, some of which pertain to overall network structures, while others describe each actor’s positionality within the network. A genetic programming algorithm is then employed to search for mathematical relationships between multiple individual indicators that effectively predict each node’s vulnerability. This “meta-metric” approach could be applied to aid livestock epidemiologists in the targeting of biosecurity interventions and may also be useful to study a wide range of complex network phenomena. Chapter Three focuses on food insecurity resulting from the projected gap between global food supply and demand over the coming decades. While no single solution has been identified, scholars suggest that investments into multiple interventions may stack together to solve the problem. However, formulating an effective plan of action requires knowledge about the level of change resulting from a given investment into each wedge, the time before that effect unfolds, the expected baseline change, and the maximum possible level of change. This chapter details an evolutionary-computational algorithm to optimize investment schedules according to the twin goals of maximizing global food security and minimizing cost. Future work will involve parameterizing the model through an expert informant advisory process to develop the existing framework into a practicable food policy decision-support tool.
APA, Harvard, Vancouver, ISO, and other styles
28

Lee, Ji Eun. "Examining the Effects of Discussion Strategies and Learner Interactions on Performance in Online Introductory Mathematics Courses: An Application of Learning Analytics." DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7583.

Full text
Abstract:
This dissertation study explored: 1) instructors’ use of discussion strategies that enhance meaningful learner interactions in online discussions and student performance, and 2) learners’ interaction patterns in online discussions that lead to better student performance in online introductory mathematics courses. In particular, the study applied a set of data mining techniques to a large-scale dataset automatically collected by the Canvas Learning Management System (LMS) for five consecutive years at a public university in the U.S., which included 2,869 students enrolled in 72 courses. First, the study found that the courses that posted more open-ended prompts, evaluated the discussion messages posted by students, used focused discussion settings (i.e., allowing a single response and replies to that response), and provided more elaborated feedback had higher student final grades than those which did not. Second, the results showed that the instructors’ use of discussion strategies (discussion structures) influenced the quantity (volume of discussion), the breadth (distribution of participation throughout the discussion), and the quality of learner interactions (levels of knowledge construction) in online discussions. Lastly, the results also revealed that the students’ messages related to allocentric elaboration (i.e., taking other peers’ contributions in argumentative or evaluative ways) and application (i.e., application of new knowledge) showed the highest predictive value for their course performance. The findings from this study suggest that it is important to provide opportunities for learners to freely discuss course content, rather than creating a discussion task related to producing a correct answer, in introductory mathematics courses. Other findings reported in the study can also serve as guidance for instructors or instructional designers on how to design better online mathematics courses.
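One of the reported dimensions, the breadth of participation, can be illustrated with a small, hypothetical sketch (the message counts and the use of a Gini coefficient are assumptions for illustration, not the study's actual measures):

# Quantify how evenly students participate in a discussion thread with the Gini
# coefficient of messages posted per student: 0 means perfectly even
# participation, values near 1 mean a few students dominate.
def gini(counts):
    counts = sorted(counts)
    n = len(counts)
    total = sum(counts)
    if total == 0:
        return 0.0
    # Standard formula based on the ordered cumulative share of messages.
    cum = sum((i + 1) * c for i, c in enumerate(counts))
    return (2 * cum) / (n * total) - (n + 1) / n

focused_thread = [3, 2, 4, 3, 2, 3]    # assumed message counts per student
threaded_forum = [12, 1, 0, 2, 0, 1]
print("focused discussion Gini:", round(gini(focused_thread), 2))
print("threaded discussion Gini:", round(gini(threaded_forum), 2))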
APA, Harvard, Vancouver, ISO, and other styles
29

Johnsson, Daniel. "Creating and Evaluating a Useful Web Application for Introduction to Programming." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-172528.

Full text
Abstract:
The aim of this thesis is to build a web application to teach students programming in Python through code puzzles that do not require them to write any code, in order to answer the research question "How should a quiz application for introduction to Python programming be developed to be useful?" The web application's utility and usability are evaluated through the learnability metric relative user efficiency. Data was collected and analyzed using Google Analytics and BigQuery. The study found that users were successfully aided by theoretical sections pertaining to the puzzles, and that even if programming is mainly a desktop activity there is still an interest in mobile access. Although evaluation of relative user efficiency did not serve as a sufficient learnability measure for this type of application, conclusions from the data analysis still gave insights into the utility of the web application.
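A hedged sketch of one common formulation of relative user efficiency (the session data, field names and expert reference time below are invented, not the thesis's Google Analytics/BigQuery schema):

# Relative user efficiency, in one common formulation: the time an expert needs
# for a task divided by the time an ordinary user needs, counting only tasks
# the user actually completed, expressed as a percentage.
def relative_user_efficiency(user_time_s, expert_time_s, completed):
    # Returns efficiency in percent; None if the user never finished the task.
    if not completed or user_time_s <= 0:
        return None
    return 100.0 * expert_time_s / user_time_s

sessions = [
    {"user": "a", "time_s": 240, "completed": True},
    {"user": "b", "time_s": 520, "completed": True},
    {"user": "c", "time_s": 300, "completed": False},
]
EXPERT_TIME_S = 90   # assumed expert reference time for the same puzzle
scores = [relative_user_efficiency(s["time_s"], EXPERT_TIME_S, s["completed"])
          for s in sessions]
print([round(x, 1) if x is not None else None for x in scores])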
APA, Harvard, Vancouver, ISO, and other styles
30

Heydenrych, Christine. "Fostering the effectiveness of reportable arrangements provisions by enhancing digitalisation at the South African Revenue Service." Diss., University of Pretoria, 2020. http://hdl.handle.net/2263/80443.

Full text
Abstract:
Maladministration at the South African Revenue Service (SARS) resulted in the loss of public trust and negative implications for voluntary tax compliance, and may encourage taxpayers to partake in aggressive tax planning schemes. This maladministration also resulted in the degeneration of SARS systems whilst technology advanced internationally. Digitalisation at SARS is crucial to address aggressive tax planning that has become more advanced as a result of the mobility of the digital economy. This study used a qualitative research methodology based on exploratory research, which involved literature reviews of textbooks and articles, in order to provide recommendations on how digitalisation can be adopted by SARS with a specific focus on ensuring the effectiveness of the South African Reportable Arrangements legislation. The operation of the South African Reportable Arrangements legislation was explained in order to benchmark it against the design features and best practices recommended by the OECD in Action 12 of the BEPS project and to highlight how digitalisation can enhance these provisions. The recommendations made considered the current state of digitalisation at SARS, how other countries’ tax administrations have become more digitalised, and practical concerns to be borne in mind when deciding on the appropriate technology. The study found that there are a handful of recommendations remaining on how South Africa could improve the reportable arrangements legislation without unnecessarily increasing the compliance burden. Digitalisation techniques that could be considered are advanced analytics, artificial intelligence, blockchain technology and Application Programming Interfaces (APIs). The study proposed, amongst other things, that these could be adopted by SARS to gather information from various sources in real time to identify further characteristics of aggressive tax planning, perform completeness checks on reported transactions and re-deploy resources to investigate pre-identified possible reportable transactions.
Mini Dissertation (MPhil (International Taxation))--University of Pretoria, 2020.
Taxation
MPhil (International Taxation)
APA, Harvard, Vancouver, ISO, and other styles
31

Achenbach, Anna [author], Stefan [reviewer] Spinler, and Arnd [reviewer] Huchzermeier. "Predictive analytics in airline operations : application of machine learning for arrival time and fuel consumption prediction / Anna Achenbach ; Gutachter: Stefan Spinler, Arnd Huchzermeier." Vallendar : WHU - Otto Beisheim School of Management, 2021. http://d-nb.info/1225741033/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Oskar, Marko. "Application of innovative methods of machine learning in Biosystems." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=108729&source=NDLTD&language=en.

Full text
Abstract:
The topic of the research in this dissertation is the application of machine learning to solving problems characteristic of biosystems, with special emphasis on agriculture. Firstly, an innovative regression algorithm based on big data was presented and used for yield prediction. The predictions were then used as input for an improved portfolio optimisation algorithm, so that appropriate soybean varieties could be selected for fields with distinctive parameters. Lastly, a multi-objective optimisation problem was set up and solved using a novel method, a categorical evolutionary algorithm based on NSGA-III.
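The multi-objective selection step can be pictured as keeping a Pareto (non-dominated) set of sowing plans; the sketch below is a toy illustration and not the dissertation's NSGA-III-based method (plan names, yields and variances are assumed):

# Keep only the Pareto-optimal plans when maximising expected yield while
# minimising yield variance.
def non_dominated(plans):
    # plans: list of (name, expected_yield, variance).
    front = []
    for name, y, v in plans:
        dominated = any(y2 >= y and v2 <= v and (y2 > y or v2 < v)
                        for _, y2, v2 in plans)
        if not dominated:
            front.append((name, y, v))
    return front

candidate_plans = [        # assumed predicted yield (t/ha) and variance per plan
    ("plan A", 3.9, 0.40),
    ("plan B", 3.6, 0.15),
    ("plan C", 3.9, 0.55),
    ("plan D", 3.2, 0.10),
]
print(non_dominated(candidate_plans))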
APA, Harvard, Vancouver, ISO, and other styles
33

Aronowitz, Jordan G. "Optimize Your Fitness, Optimize Your Business: The Balanced Scorecard, Analysis and Application for the CrossFit Affiliate." Scholarship @ Claremont, 2018. http://scholarship.claremont.edu/cmc_theses/1973.

Full text
Abstract:
The Balanced Scorecard for the CrossFit Affiliate will provide unprepared or inexperienced managers with the necessary information to make informed operational decisions. The Balanced Scorecard incorporates financial data, customer satisfaction, internal operations, and future growth to create a series of cause-and-effect relationships that illustrate how decisions in one aspect of a business affect others. The Balanced Scorecard particularly benefits inexperienced managers leading small businesses. As overall small business growth increased, the boutique fitness industry expanded. The CrossFit brand rose in conjunction, but because corporate leaders insist on a hands-off licensing model, affiliate owners do not receive assistance from CrossFit corporate headquarters. In addition, the high frequency of affiliate owners without business education or management experience contributes to the likelihood of failure. Until CrossFit corporate actively guides its licensees, owners need a tool to promote business success. The Balanced Scorecard for the CrossFit Affiliate, comprised of data from one successful and four unsuccessful affiliates, will assist affiliate managers. Profitability, market share, retention, and the introduction of new services are the main drivers of the Balanced Scorecard and augment the analysis of relationships between the financial, customer, internal, and growth perspectives.
APA, Harvard, Vancouver, ISO, and other styles
34

Nyström, Björn. "Inomhuspositionering och applikationsanalys : Sammanställning och visualisering av relevant data vid event." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-45667.

Full text
Abstract:
The main task of this thesis was to create a software prototype of an analysis tool that would work as a complement to IT-Maskinen's event application, which is used as a digital tool for events. The requirement for the prototype was that it would be able to identify the user, gather relevant data from a possible storage location and visualize this in an aesthetically pleasing manner.   The thesis contained two parts, investigation and implementation. The investigation included determining which tool would be used for the visualization of the data, which data like-minded companies thought was important and which information could be extracted from IT-Maskinen's event application and indoor positioning system. How the software prototype was going to be implemented was also investigated, in terms of which programming environment, library and language could meet the requirements set for the prototype.   The implementation part of the thesis covered the creation of a software prototype in ASP.NET MVC5 and Google Chart Tools. The prototype was written in C#, HTML, Razor, CSS, JavaScript and jQuery.
APA, Harvard, Vancouver, ISO, and other styles
35

Ullsten, Sara. "Tailormade Surfaces for Extended CE Applications." Doctoral thesis, Uppsala University, Department of Chemistry, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-4217.

Full text
Abstract:

The combination of capillary electrophoresis (CE) and mass spectrometry (MS) constitutes a powerful microanalytical system in the fields of biology, medicine and chemistry. This thesis describes the development of three novel capillary coatings and demonstrates how these extend the utility of CE as a high-efficiency separation technique in protein analysis and biopharmaceutical drug screening.

Due to the rapidly growing interest in characterizing the human proteome, there is an increased need for rapid protein separations. The use of CE in protein analysis is, however, nontrivial due to problems with protein adsorption to the fused-silica capillary walls. In this thesis, this problem was addressed by developing two novel, physically adsorbed, cationic polymer surface coatings, denoted PolyE-323 and Q-agarose. By using simple rinsing protocols, highly reproducible coatings, stable over a wide pH range (2-11), were generated. Successful protein separations using cationic-coated capillaries in CE-MS, equipped with either electrospray ionization (ESI) or matrix-assisted laser desorption/ionization (MALDI), have been demonstrated.

In the pharmaceutical industry, favorable pharmacokinetic properties of a candidate drug, such as high bioavailability after oral administration, are crucial for a high success rate in clinical development. Tools for prediction of biopharmaceutically relevant drug properties are important in order to identify and discard poor candidate drugs as soon as possible. In this thesis, a membrane mimetic coating was developed by electrostatically immobilizing liposomes to the capillary wall, via an anchoring sublayer of Q-agarose. The liposome-coated capillaries were demonstrated in on-line CE-MS for prediction of drug membrane permeability.

APA, Harvard, Vancouver, ISO, and other styles
36

Vatin, Gabriel. "Formalisation d’un environnement d’aide à l’analyse géovisuelle : Application à la sécurité et sûreté de la maritimisation de l’energie." Thesis, Paris, ENMP, 2014. http://www.theses.fr/2014ENMP0095.

Full text
Abstract:
The maritime space is still a sensitive area due to many accidents and dangers, such as collisions or pirate attacks. In order to ensure the control of safety and security of this area, it is essential to study near real-time movement information (surveillance) or past events (analysis). These studies aim at detecting part of criminal activities, assumed risks, and breaches of regulation. Maritime operators are then faced with large sets of movement data, which must be studied with maps and visualizations. However, their current tools are limited in terms of analysis capacities. The use of geovisual analytics has proved highly effective in the academic world, and could allow operators to discover knowledge within maritime traffic data. However, these methods are not yet used in the operational world for studying maritime risks. In this context, we propose a geovisual analytics support system that guides users in their analysis and in the use of these many visualizations. Our research methodology is based on the formalization of use cases, of users and of several visualization methods. Ontologies and rules are used to create a knowledge-based system, which is used to select adequate solutions for visualizing and analyzing movement data, applied to the maritime domain. Some examples of risk analysis at sea are then presented to illustrate its use.
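A toy, purely illustrative sketch of the rule-based selection idea (the thesis uses formal ontologies and rules, not this code; task names, expertise levels and method labels are assumptions):

# Map a user profile and an analysis task to a suggested visualization through
# simple declarative rules, as a stand-in for an ontology-backed selection.
RULES = [
    # (task, user_expertise) -> suggested visual analytics method (assumed labels)
    (("track_vessel", "novice"), "animated map with trajectory highlighting"),
    (("track_vessel", "expert"), "space-time cube"),
    (("detect_anomaly", "expert"), "density map with temporal filtering"),
]

def suggest_method(task, expertise):
    for (rule_task, rule_expertise), method in RULES:
        if rule_task == task and rule_expertise == expertise:
            return method
    return "default: tabular view of raw AIS records"

print(suggest_method("track_vessel", "novice"))
print(suggest_method("detect_anomaly", "novice"))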
APA, Harvard, Vancouver, ISO, and other styles
37

Petronyuk, Oleksandr. "Řešení Business Intelligence v oblastí Vysokého školství na základě modelu MBI." Master's thesis, Vysoká škola ekonomická v Praze, 2014. http://www.nusl.cz/ntk/nusl-192367.

Full text
Abstract:
The thesis presents a Business Intelligence solution in the higher education area. The solution comprises the design and implementation of representative tasks, metrics, dimensions and analytical applications with the use of MBI methods. The sub-goal is to summarize knowledge gained from the BI project implemented in the higher education area by IBM Corporation. Hands-on knowledge was used to achieve the set targets. This knowledge confirms that the designed tasks, metrics and dimensions are indeed applicable in practice. The main contribution of the thesis consists in enriching the MBI methodology with a solution applicable in the higher education sector. The interim contribution lies in the documentation of specifics connected with the implementation of the BI project within International Business Machines Corporation.
APA, Harvard, Vancouver, ISO, and other styles
38

Hultqvist, Andreas, and Tobias Hultqvist. "Developing and Analysing a Web Application made for Teachers to Evaluate Students' Performance : Utveckling och analys av en webbapplikation för examinatorers analys av elevers lärande." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176985.

Full text
Abstract:
The need to learn programming increases as more jobs require basic programming skills and computer knowledge. Compulsory schools are adding programming to the curriculum, which leads to challenges because both teachers and students are new to the subject. Even at the university level, some students encounter programming for the first time.  This thesis aims to develop a web application that can be used by teachers as a reliable and informative tool when evaluating the learning process of their students, by combining data collected through user interactions while solving programming-related puzzles in Python with answers from periodic self-evaluation surveys.  The study shows that the web application can be seen as a valid tool when evaluating the students' learning process.
APA, Harvard, Vancouver, ISO, and other styles
39

Lo, Bobby. "Social media analytics in business intelligence applications." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/46017.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 89-93).
Social media is becoming increasingly important in society and culture, empowering consumers to group together around common interests and share opinions through the Internet. The social web shifts the originators of content from companies to users. Differences caused by this dynamic result in existing web analytic techniques being inadequate. Because people reveal their thoughts and preferences in social media, there are significant opportunities in business intelligence from analyzing social media. These opportunities include brand monitoring, trend recognition, and targeted advertising. The market for social media analytics in business intelligence is further validated by its direct application in the consumer research market. Challenges lie ahead for the development and adoption of social media analytics. Technologies used in these analytics, such as natural language processing and social network analysis, need to mature to improve accuracy, performance, and scalability. Nevertheless, social media continues to grow at a rapid pace, and organizations should form strategies to incorporate social media analytics into their business intelligence frameworks.
by Bobby Lo.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
40

Eaglin, Todd. "Scalable, situationally aware visual analytics and applications." Thesis, The University of North Carolina at Charlotte, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10270103.

Full text
Abstract:

There is a need to understand large and complex datasets to provide better situational awareness in order to make timely, well-informed, actionable decisions in critical environments. These types of environments include emergency evacuations for large buildings, indoor routing for buildings in emergency situations, large-scale critical infrastructure for disaster planning and first responders, LiDAR analysis for coastal planning in disaster situations, and social media data for health-related analysis. I introduce novel work and applications in real-time interactive visual analytics in these domains. I also detail techniques, systems and tools across a range of disciplines, from GPU computing for real-time analysis to machine learning for interactive analysis on mobile and web-based platforms.

APA, Harvard, Vancouver, ISO, and other styles
41

Djelil, Fahima. "Conception et évaluation d'un micromonde de Programmation Orientée-Objet fondé sur un jeu de construction et d'animation 3D." Thesis, Clermont-Ferrand 2, 2016. http://www.theses.fr/2016CLF22774/document.

Full text
Abstract:
Programming microworlds are small, interactive environments in which the learner learns from interactions with visual or tangible entities that have a strong semantic link with formal programming concepts. They promote the assimilation of knowledge and the understanding of abstract programming concepts through visual metaphors and play. This thesis attempts to contribute theoretical and methodological advances regarding the design and assessment of such environments, which are considered to have great potential for learning even though evidence for this is lacking. As microworlds are game-based learning environments, we first examined the gaming issue and its relation to learning. Based on a literature review, we emphasized, as some authors have, the need to distinguish between the game (the computing artefact) and the play (the situation triggered by interactions with the game). The purpose is to analyze learning and establish concepts that guide the design and evaluation of learning. We then reviewed research on Computer Science Education in order to identify widespread teaching approaches that address beginners' difficulties in learning Object-Oriented Programming (OOP), and defined a new didactic approach for introducing OOP. We then defined the design dimensions of a microworld, which we refer to as a transitional representation system, in which the learner develops knowledge of abstract and formal programming concepts as a result of interactions with the microworld interface. We implemented these theoretical and methodological advances in a new OOP microworld based on a 3D construction and animation game called PrOgO. PrOgO implements a transitional representation system in which basic OOP concepts are depicted with visual and interactive 3D graphics. It enables play that arises from the learner's interactions with its interface; playing with PrOgO involves imagining, creating and animating meaningful 3D constructions. PrOgO can also be deployed within a multi-device classroom through the Tactileo framework, which we designed for that purpose. In the evaluation of learning, we use learning analytics methods based on the collection and analysis of digital interaction logs to classify and characterize learners. In addition, we examine the state of learners' knowledge through knowledge verification tests, and we attempt to identify, through statistical analysis, the learners' actions and behaviours that affect their progress in pre/post evaluations of gained knowledge.
APA, Harvard, Vancouver, ISO, and other styles
42

Eriksson, Björn. "In-line application of electric fields in capillary separation systems." Doctoral thesis, Karlstad University, Division for Chemistry, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-1197.

Full text
Abstract:

The magnitude of the electric field that can be applied in a capillary separation system is limited, because a high electric field drives too high a current through the capillary. Applying the electric field in-line increases the conductivity in the column, further increasing the risk of excessive currents. The conductivity changes were found to result from an overall increase in ionic strength within the electric field. The increase in ionic strength is caused by the increase in mobile phase ions with electrophoretic velocity against the flow, together with OH- or H3O+ ions (depending on polarity) formed at the inlet electrode. Further, it was found that the use of a pressurized reservoir or splitting of the flow at the inlet electrode could significantly limit the conductivity changes, and thereby the maximum applicable electric field strengths could be increased.

APA, Harvard, Vancouver, ISO, and other styles
43

Frost, S. J. "Analytical applications of liposomes." Thesis, University of Surrey, 1994. http://epubs.surrey.ac.uk/2745/.

Full text
Abstract:
Liposomes have established roles in drug delivery and cell membrane studies. Amongst other applications, they can also be used as analytical reagents, particularly in immunoassays. Liposomal immunoassays have potential advantages over alternatives, including sensitivity, speed, simplicity and relative reagent stability. The aim of these studies was to develop and evaluate novel examples of these assays. When liposomes entrapped the dye Sulphorhodamine B, a shift in its maximum absorption wavelength compared to free dye was observed. This was attributed to dimerization of the dye at high concentrations. If the liposomes were disrupted, the released dye was diluted into the external buffer, and the dye's absorption spectrum reverted to that of free dye. After optimization of dye entrapment, immunoassays were developed using these liposomes. Albumin-coated liposomes were used in a model assay to measure serum albumin. This assay employed complement-mediated immunolysis, commonly used in liposomal immunoassays. The liposomes were lysed by anti-albumin and complement, and this could be competitively inhibited by serum albumin. To improve sensitivity, Fab' anti-albumin liposomes were prepared. These enabled measurement of urinary albumin by a complement-mediated immunoassay, but using a sandwich technique. Anti-albumin (intact) liposomes were shown to precipitate on gentle centrifugation after reaction with albumin. They were applied as a solid-phase reagent in a heterogeneous immunoassay, using radioimmunoassay for urinary microalbumin as a model assay. Liposomes containing Sulphorhodamine B were also used in a more novel assay, for serum anticardiolipin antibodies. Cardiolipin-containing liposomes were prepared. These were lysable using magnesium ions. Anticardiolipin antibodies (IgG) were found to augment this lysis, enabling their estimation. Similar imprecision and acceptable correlation with a commercial enzyme-linked immunosorbent assay (ELISA) were obtained. The findings demonstrate that Sulphorhodamine B release can be used as a marker in homogeneous colorimetric liposomal immunoassays, both in model assays and in potentially more useful clinical biochemistry applications.
APA, Harvard, Vancouver, ISO, and other styles
44

Clerc, Stephane Daniel. "Analytical application of bacterial bioluminescence." Thesis, University of Huddersfield, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295998.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Ho, Quan. "Architecture and Applications of a Geovisual Analytics Framework." Doctoral thesis, Linköpings universitet, Medie- och Informationsteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-91679.

Full text
Abstract:
The large and ever-increasing amounts of multi-dimensional, multivariate, multi-source, spatio-temporal data represent a major challenge for the future. The need to analyse and make decisions based on these data streams, often in time-critical situations, demands integrated, automatic and sophisticated interactive tools that aid the user to manage, process, visualize and interact with large data spaces. The rise of "Web 2.0", which is undisputedly linked with developments such as blogs, wikis and social networking, and the internet usage explosion in the last decade represent another challenge for adapting these tools to the Internet to reach a broader user community. In this context, the research presented in this thesis introduces an effective web-enabled geovisual analytics framework implemented, applied and verified in Adobe Flash ActionScript and HTML5/JavaScript. It has been developed based on the principles behind Visual Analytics and designed to significantly reduce the time and effort needed to develop customized web-enabled applications for geovisual analytics tasks and to bring the benefits of visual analytics to the public. The framework has been developed based on a component architecture and includes a wide range of visualization techniques enhanced with various interaction techniques and interactive features to support better data exploration and analysis. The importance of multiple coordinated and linked views is emphasized and a number of effective techniques for linking views are introduced. Research has so far focused more on tools that explore and present data while tools that support capturing and sharing gained insight have not received the same attention. Therefore, this is one of the focuses of the research presented in this thesis. A snapshot technique is introduced, which supports capturing discoveries made during the exploratory data analysis process and can be used for sharing gained knowledge. The thesis also presents a number of applications developed to verify the usability and the overall performance of the framework for the visualization, exploration and analysis of data in different domains. Four application scenarios are presented introducing (1) the synergies among information visualization methods, geovisualization methods and volume data visualization methods for the exploration and correlation of spatio-temporal ocean data, (2) effective techniques for the visualization, exploration and analysis of self-organizing network data, (3) effective flow visualization techniques applied to the analysis of time-varying spatial interaction data such as migration data, commuting data and trade flow data, and (4) effective techniques for the visualization, exploration and analysis of flood data.
APA, Harvard, Vancouver, ISO, and other styles
46

Furtado, Jazmin D. (Jazmin Dahl). "Applications of healthcare analytics in reducing hospitalization days." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119355.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 108-114).
In this thesis, we employ healthcare analytics to inform system-level changes at Massachusetts General Hospital that could lead to a significant reduction in avoidable hospitalization days and improvement in patient outcomes. The first area of focus is around avoidable bed-days in the ICU. Many surgical patients experience non-clinical delays when they transfer from the ICU to a subsequent general care unit where they are expected to continue their recovery. As a result, they spend a longer time in the ICU than necessary. In spite of several studies that suggest out-of-ICU transfer delays are quite common, there is little work that quantifies the impact on patient recovery. Using multiple statistical approaches including regression and matching, we obtain a robust result that suggests that non-clinical transfer delays from the ICU delay the patient's recovery as well as extend the hospital LOS. Specifically, the analysis shows that each day that the patient is delayed in the ICU for non-clinical reasons increases hospital LOS by 0.71 days (p-value < 0.01) and delays the patient's progress of care by 0.32 days (p-value < 0.01), on average. The second area of focus is concerned with bed-days from heart failure (HF) admissions. Much of the current work in reducing HF hospitalizations promotes interventions after the patient is hospitalized, aiming to prevent subsequent hospitalizations within 30 days. In contrast, we focus on reducing overall hospitalizations from the general HF population. We first analyze the outpatient access for these patients before they are admitted to the hospital (mostly) through the Emergency Department. One of the main findings is that in more than half of these admissions, the patient did not have a completed appointment with any outpatient clinic (Primary Care, Cardiology, or Home Health) during the two weeks prior to hospitalization. This reveals the need for improved outpatient-based preventative measures to manage HF patients. To partially address this challenge, we develop a predictive model using logistic regression to predict the risk of an HF-related admission within the next six months. The model performs quite well, with an out-of-sample AUC of 0.78.
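A hedged sketch of the modelling idea only (the features, coefficients and data below are synthetic stand-ins, not MGH data or the thesis's actual model):

# Fit a logistic regression to estimate the risk of a heart-failure admission
# within six months and report an out-of-sample AUC, on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.integers(40, 95, n),   # age (assumed feature)
    rng.poisson(1.0, n),       # prior HF admissions in past year (assumed)
    rng.integers(0, 2, n),     # missed outpatient visit in last 2 weeks (assumed)
])
# Synthetic outcome loosely tied to the features, standing in for the label
# "admitted for HF within the next six months".
logit = 0.03 * (X[:, 0] - 65) + 0.9 * X[:, 1] + 0.7 * X[:, 2] - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=500).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"out-of-sample AUC: {auc:.2f}")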
by Jazmin D. Furtado.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
47

Apshingekar, Prafulla P. "Applications of ultrasound in pharmaceutical processing and analytics." Thesis, University of Bradford, 2014. http://hdl.handle.net/10454/14127.

Full text
Abstract:
Innovation and process understanding are the current focus in the pharmaceutical industry. The objective of this research was to explore the application of high-power ultrasound in slurry crystallisation and the application of low-power ultrasound (3.5 MHz) as a process analytical technology (PAT) tool to understand pharmaceutical processing such as hot melt extrusion. The effect of high-power ultrasound (20 kHz) on the slurry co-crystallisation of caffeine / maleic acid and carbamazepine / saccharin was investigated. To validate the low-power ultrasound monitoring technique, it was compared with other PAT tools such as in-line rheology and in-line NIR spectroscopy. In-line rheological measurements were used to understand the melt flow behaviour of the theophylline / Kollidon VA 64 system in the slit die attached to the hot melt extruder. In-line NIR spectroscopic measurements were carried out to monitor any molecular interactions occurring during extrusion. Physical mixtures and the processed samples obtained from all experiments were characterised using powder X-ray diffraction, thermogravimetric analysis, differential scanning calorimetry, scanning electron microscopy, dielectric spectroscopy, high-performance liquid chromatography, rotational rheology, Fourier transform infrared spectroscopy and near infrared spectroscopy. The application of high-power ultrasound in the slurry co-crystallisation of caffeine / maleic acid helped reduce the equilibration time required for co-crystal formation. During carbamazepine / saccharin co-crystallisation, ultrasound-induced degradation of carbamazepine was negligible. Low-power ultrasound can be used as a PAT tool, as it was found to be highly sensitive to changes in processing temperature and drug concentration.
APA, Harvard, Vancouver, ISO, and other styles
48

Uichanco, Joline Ann Villaranda. "Data-driven optimization and analytics for operations management applications." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85695.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2013.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 163-166).
In this thesis, we study data-driven decision making in operations management contexts, with a focus on both theoretical and practical aspects. The first part of the thesis analyzes the well-known newsvendor model but under the assumption that, even though demand is stochastic, its probability distribution is not part of the input. Instead, the only information available is a set of independent samples drawn from the demand distribution. We analyze the well-known sample average approximation (SAA) approach, and obtain new tight analytical bounds on the accuracy of the SAA solution. Unlike previous work, these bounds match the empirical performance of SAA observed in extensive computational experiments. Our analysis reveals that a distribution's weighted mean spread (WMS) impacts SAA accuracy. Furthermore, we are able to derive a bound on SAA accuracy for log-concave distributions that is free of the distribution's parameters, through an innovative optimization-based analysis which minimizes WMS over the distribution family. In the second part of the thesis, we use spread information to introduce new families of demand distributions under the minimax regret framework. We propose order policies that require only a distribution's mean and spread information. These policies have several attractive properties. First, they take the form of simple closed-form expressions. Second, we can quantify an upper bound on the resulting regret. Third, under an environment of high profit margins, they are provably near-optimal under mild technical assumptions on the failure rate of the demand distribution. And finally, the information that they require is easy to estimate with data. We show in extensive numerical simulations that when profit margins are high, even if the information used by our policies is estimated from (sometimes few) samples, they often manage to capture at least 99% of the optimal expected profit. The third part of the thesis describes both applied and analytical work in collaboration with a large multi-state gas utility. We address a major operational resource allocation problem in which some of the jobs are scheduled and known in advance, and some are unpredictable and have to be addressed as they appear. We employ a novel decomposition approach that solves the problem in two phases. The first is a job scheduling phase, where regular jobs are scheduled over a time horizon. The second is a crew assignment phase, which assigns jobs to maintenance crews under a stochastic number of future emergencies. We propose heuristics for both phases using linear programming relaxation and list scheduling. Using our models, we develop a decision support tool for the utility, which is currently being piloted at one of the company's sites. Based on the utility's data, we project that the tool will result in a 55% reduction in overtime hours.
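A minimal sketch of the sample average approximation (SAA) idea for the newsvendor problem discussed above (illustrative only; the cost parameters and demand distribution are assumptions, not the thesis's data):

# With unit underage cost cu and overage cost co, the SAA order quantity is the
# empirical quantile of the demand sample at the critical ratio cu/(cu+co).
import math
import random

def saa_order_quantity(demand_samples, cu, co):
    ratio = cu / (cu + co)                       # newsvendor critical ratio
    samples = sorted(demand_samples)
    # Smallest sample whose empirical CDF reaches the critical ratio.
    k = min(len(samples) - 1, max(0, math.ceil(ratio * len(samples)) - 1))
    return samples[k]

random.seed(0)
demand_samples = [random.lognormvariate(4.0, 0.5) for _ in range(200)]  # assumed data
q = saa_order_quantity(demand_samples, cu=9.0, co=1.0)   # high-margin setting
print(f"SAA order quantity at critical ratio 0.9: {q:.1f}")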
by Joline Ann Villaranda Uichanco.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
49

Al-Shiakhli, Sarah. "Big Data Analytics: A Literature Review Perspective." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74173.

Full text
Abstract:
Big data is currently a buzzword in both academia and industry, with the term being used to describe a broad domain of concepts, ranging from extracting data from outside sources, storing and managing it, to processing such data with analytical techniques and tools. This thesis work thus aims to provide a review of current big data analytics concepts in an attempt to highlight big data analytics' importance to decision making. Due to the rapid increase in interest in big data and its importance to academia, industry, and society, solutions to handling data and extracting knowledge from datasets need to be developed and provided with some urgency to allow decision makers to gain valuable insights from the varied and rapidly changing data they now have access to. Many companies are using big data analytics to analyse the massive quantities of data they have, with the results influencing their decision making. Many studies have shown the benefits of using big data in various sectors, and in this thesis work, various big data analytical techniques and tools are discussed to allow analysis of the application of big data analytics in several different domains.
APA, Harvard, Vancouver, ISO, and other styles
50

Jones, David C. "Analytical applications of supercritical fluids." Thesis, University of Nottingham, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.363562.

Full text
APA, Harvard, Vancouver, ISO, and other styles
