
Dissertations / Theses on the topic 'Interactional data'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Interactional data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Lozano Prieto, David. "Data analysis and visualization of the 360 degrees interactional datasets." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-88985.

Full text
Abstract:
In recent years there has been increasing interest in using 360-degree video in medical education, and recent efforts have begun to explore how nursing students experience and interact with 360-degree videos. However, once these interactions have been registered in a database, there are few ways to analyze them, which creates a need for a reliable method that can manage the collected data and visualize the valuable insights it contains. The main goal of this thesis is therefore to design an approach for analyzing and visualizing this kind of data, allowing teachers in health-care education and medical specialists to understand the collected data in a meaningful way. To arrive at a suitable solution, several meetings with nursing teachers took place to draft the structure of an application embodying the approach. The application was then used to analyze data collected in a study conducted in December. Finally, the application was evaluated through a questionnaire answered by a group of medical specialists involved in education. The initial outcomes of this testing and evaluation indicate that the application achieves the main goals of the project, and it has prompted ideas that will help improve the 360-degree video experience and its evaluation in nursing education, providing an additional tool to analyze, compare and assess students.
APA, Harvard, Vancouver, ISO, and other styles
2

Hannila, H. (Hannu). "Towards data-driven decision-making in product portfolio management:from company-level to product-level analysis." Doctoral thesis, Oulun yliopisto, 2019. http://urn.fi/urn:isbn:9789526224428.

Full text
Abstract:
Abstract Products and services are critical for companies, as they create the foundation for companies' financial success. Twenty per cent of a company's products typically account for some eighty per cent of its sales volume. Nevertheless, product portfolio decisions (how to strategically renew the company's product offering) tend to involve emotions, pet products and a who-shouts-the-loudest mentality, while facts, numbers and quantitative analyses are missing. Profitability is currently measured and reported at the company level, and firms seem unable to measure product-level profitability in a consistent way. Consequently, companies are unable to maintain and renew their product portfolio in a strategically or commercially balanced way. The main objective of this study is to provide a data-driven product portfolio management (PPM) concept that recognises and visualises, in real time and based on facts, which company products are simultaneously strategic and profitable, and what their share of the product portfolio is. This dissertation is a qualitative study that combines a literature review, company interviews, observations and company-internal material to take steps towards data-driven decision-making in PPM. The study indicates that company data assets need to be combined and governed company-wide to realise the full potential of the company's strategic asset: the DATA. Data must be governed separately from, and beyond, business IT technology; and before data and technology, a data-driven company culture must be adopted. The data-driven PPM concept connects key business processes, business IT systems and several concepts, such as productization, product lifecycle management and PPM.
The managerial implications include that a shared understanding of the company's products is needed and that the commercial and technical product structures must be created accordingly, since they form the backbone of the company's business: the skeleton that gathers all product-related business-critical information for product-level profitability analysis. A classification of products as strategic, supportive or non-strategic is also needed, since the strategic nature of a product can change during its lifecycle, e.g. due to technology obsolescence, disruptive innovations by competitors, or for any other reason.
APA, Harvard, Vancouver, ISO, and other styles
3

Xue, Vincent. "Modeling and designing Bcl-2 family protein interactions using high-throughput interaction data." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120446.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Computational and Systems Biology Program, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 153-164).
Protein-protein interactions (PPIs) play a major role in cellular function, mediating signal processing and regulating enzymatic activity. Understanding how proteins interact is essential for predicting new binding partners and engineering new functions. Mutational analysis is one way to study the determinants of protein interaction. Traditionally, the biophysical study of protein interactions has been limited by the number of mutants that could be made and analyzed, but advances in high-throughput sequencing have enabled rapid assessment of thousands of variants. The Keating lab has developed an experimental protocol that can rank peptides based on their binding affinity for a designated receptor. This technique, called SORTCERY, takes advantage of cell sorting and deep-sequencing technologies to provide more binding data at a higher resolution than has previously been achievable. New computational methods are needed to process and analyze the high-throughput datasets. In this thesis, I show how experimental data from SORTCERY experiments can be processed, modeled, and used to design novel peptides with select specificity characteristics. I describe the computational pipeline that I developed to curate the data and regression models that I constructed from the data to relate protein sequence to binding. I applied models trained on experimental data sets to study the peptide-binding specificity landscape of the Bcl-xL, Mcl-1, and Bfl-1 anti-apoptotic proteins, and I designed novel peptides that selectively bind tightly to only one of these receptors, or to a pre-specified combination of receptors. My thesis illustrates how data-driven models combined with high-throughput binding assays provide new opportunities for rational design.
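The regression idea described in this abstract, mapping peptide sequence to a binding measurement, can be illustrated with a minimal sketch. Everything below is hypothetical (a toy three-letter alphabet and made-up binding scores, not SORTCERY data): peptides are one-hot encoded by position and fit with ridge regression.

```python
import numpy as np

# Hypothetical toy data: peptides over a 3-letter alphabet with made-up
# binding scores. Real SORTCERY data would supply thousands of sequences.
ALPHABET = "ADE"
peptides = ["AAD", "ADE", "DEA", "EDA", "AAA", "EEE"]
scores = np.array([0.9, 0.7, 0.2, 0.1, 1.0, 0.0])

def one_hot(seq):
    """Encode a peptide as a flat position-by-residue indicator vector."""
    v = np.zeros(len(seq) * len(ALPHABET))
    for i, aa in enumerate(seq):
        v[i * len(ALPHABET) + ALPHABET.index(aa)] = 1.0
    return v

X = np.stack([one_hot(p) for p in peptides])
lam = 0.1  # small ridge penalty: the one-hot encoding is underdetermined
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ scores)
pred = X @ w  # predicted binding for each peptide
```

The learned weights give a per-position, per-residue contribution to binding, which is the simplest form such a sequence-to-affinity model can take.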
by Vincent Xue.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
4

Popov, Igor. "End-user data-centric interactions over linked data." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/361729/.

Full text
Abstract:
The ability to build tools that support gathering and querying information from distributed sources on the Web rests on the availability of structured data. Linked Data, as a way of publishing and linking distributed structured data sources on the Web, provides an opportunity to create such tools. Currently, however, the ability to complete such tasks over Linked Data sources is limited to users with advanced technical skills, resulting in an online information space largely inaccessible to non-technical end users. This thesis explores the challenges of designing user interfaces that let end users (those without technical skills) use Linked Data to solve information tasks requiring the combination of information from multiple sources. The thesis explores the design space around interfaces that support access to Linked Data on demand, suggests potential use cases and stakeholders, and proposes several direct-manipulation tools for end users with diverse needs and skills. User studies indicate that the tools built offer solutions to various challenges in accessing Linked Data that are identified in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
5

Carlsson, Nicole. "Vulnerable data interactions — augmenting agency." Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23309.

Full text
Abstract:
This thesis project opens up an interaction design space in the InfoSec domain concerned with raising awareness of common vulnerabilities and facilitating counter-practices through seamful design. This combination of raising awareness and boosting possibilities for deliberate action (or non-action) together accounts for augmenting agency. The augmentation takes the form of bottom-up micro-movements and daily gestures that contribute to opportunities for greater agency in the increasingly fraught InfoSec domain.
APA, Harvard, Vancouver, ISO, and other styles
6

Elhageen, Adel Abdelfatah M. "Effect of interaction between parental treatment styles and peer relations in classroom on the feelings of loneliness among deaf children in Egyptian schools /." Berlin : WVB Wissenschaftlicher Verlag, 2005. http://www.wvberlin.de/data/inhalt/elhageen.htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Rodriguez, Perdomo Carlos Mario. "Designing interactions for data obfuscation in IoT." Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-22494.

Full text
Abstract:
This project explores the Internet of Things (IoT) at home, especially aspects related to the quantity and quality of the data collected by smart devices and the violation of users' privacy this represents: with the help of machine-learning algorithms, these devices are capable of storing and analysing information about the daily routine of each user at home. The research therefore inquires into new ways of raising users' awareness of the flow of data within the IoT at home, in order to empower them and restore their status as administrators of this context, by designing devices capable of obfuscating the data before it leaves the home.

Several methods were used together during this process: annotated portfolios to evaluate the state of the art in the field, video sketching as a quick way to embrace the user's perspective, and cultural probes to test some conceptual scenarios and find new directions for the project based on the experiences of the participants.

The project's outcome is based on materialising the data as a way of bringing the abstract processes that happen in the background closer to the user's reality, displaying how the data actually flows through the environment and, in the end, generating a call to action that guides the user in obfuscating the data. The project opens up a discussion within interaction design about the way we communicate with technology, whether that way is appropriate when the technology coexists with the user at home, and how interfaces should be designed to create a transparent dialogue between users, objects and vendors.
APA, Harvard, Vancouver, ISO, and other styles
8

Fischer, Manfred M., and Daniel A. Griffith. "Modelling spatial autocorrelation in spatial interaction data." WU Vienna University of Economics and Business, 2007. http://epub.wu.ac.at/3948/1/SSRN%2Did1102183.pdf.

Full text
Abstract:
Spatial interaction models of the gravity type are widely used to model origin-destination flows. They draw attention to three types of variables to explain variation in spatial interactions across geographic space: variables that characterise the origin region of a flow, variables that characterise the destination region of a flow, and variables that measure the separation between origin and destination regions. This paper outlines and compares two approaches to dealing with spatial autocorrelation among flow residuals: the spatial econometric approach and the eigenfunction-based spatial filtering approach. An example using patent citation data that capture knowledge flows across 112 European regions serves to illustrate the application and comparison of the two approaches. (authors' abstract)
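The gravity specification summarised above is commonly estimated in log-linear form, ln T_ij = a + b1 ln O_i + b2 ln D_j + g ln d_ij + e_ij. A minimal sketch with synthetic data (hypothetical regions and masses, not the paper's patent citation data) recovers the exponents by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # hypothetical regions (the paper uses 112 European regions)
origin = rng.uniform(50.0, 500.0, n)        # origin-side mass variable
dest = rng.uniform(50.0, 500.0, n)          # destination-side mass variable
coords = rng.uniform(0.0, 100.0, (n, 2))
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)

# Simulate flows from a known gravity law, then recover its exponents.
i, j = np.where(~np.eye(n, dtype=bool))     # all origin-destination pairs
log_flow = (0.5 + 1.0 * np.log(origin[i]) + 0.8 * np.log(dest[j])
            - 1.5 * np.log(dist[i, j]) + rng.normal(0.0, 0.05, i.size))

X = np.column_stack([np.ones(i.size), np.log(origin[i]),
                     np.log(dest[j]), np.log(dist[i, j])])
beta, *_ = np.linalg.lstsq(X, log_flow, rcond=None)
# beta should land near the simulated values [0.5, 1.0, 0.8, -1.5]
```

The paper's contribution concerns what this plain regression ignores: spatially autocorrelated residuals, which the spatial econometric and eigenfunction-filtering approaches handle in different ways.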
APA, Harvard, Vancouver, ISO, and other styles
9

Thomas, Helen. "Enabling scalable online user interaction management through data warehousing of interaction histories / by Helen Thomas." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/29873.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Liu, Chunmei 1970. "Cross-layer protocol interactions in heterogeneous data networks." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/28918.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2005.
Includes bibliographical references (p. 143-148).
Modern data networks are heterogeneous in that they often employ a variety of link technologies, such as wireline, optical, satellite and wireless links. As a result, Internet protocols such as the Transmission Control Protocol (TCP), which were designed for wireline networks, perform poorly when used over heterogeneous networks. This is particularly the case for satellite and wireless networks, which are often characterized by a high bandwidth-delay product and high link loss probability. This thesis examines the performance of TCP in the context of heterogeneous networks, focusing in particular on interactions between protocols across different layers of the protocol stack. First, we provide an analytical framework for studying the interaction between TCP and link-layer retransmission protocols (ARQ). The system is modelled as a Markov chain with reward functions, and detailed queueing models are developed for the link-layer ARQ. The analysis shows that in most cases implementing ARQ achieves a significant improvement in system throughput; moreover, a proper choice of protocol parameters, such as the packet size and the number of transmission attempts per packet, yields further significant performance improvement. We then investigate the interaction between TCP at the transport layer and ALOHA at the MAC layer. Two equations are derived that express the system performance in terms of various system and protocol parameters, showing that the maximum possible system throughput is 1/e. A sufficient and necessary condition for achieving this throughput is presented, and the optimal MAC-layer transmission probability at which the system achieves its highest throughput is given. Furthermore, the impact of other system and protocol parameters, such as TCP timeout backoff and MAC-layer retransmissions, is studied in detail. The results show that the system performance reflects a balance between idle slots and collisions at the MAC layer, and a trade-off between packet loss probability and round-trip time at the transport layer. Finally, we consider the optimal scheduling problem with window service constraints. Optimal policies that minimize the average response time of jobs are derived, and the results show that both the job lengths and the window sizes are essential to the optimal policy.
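The 1/e ceiling mentioned in the abstract is the classical slotted-ALOHA limit: with n users each transmitting in a slot with probability p, the per-slot success probability is n·p·(1-p)^(n-1), which peaks near p = 1/n and approaches 1/e as n grows. A small illustrative sketch (not the thesis's cross-layer model):

```python
import math

def throughput(n, p):
    """Slotted-ALOHA success rate: exactly one of n users transmits."""
    return n * p * (1.0 - p) ** (n - 1)

n = 50
best_p = max((k / 1000.0 for k in range(1, 1000)), key=lambda p: throughput(n, p))
peak = throughput(n, best_p)
# best_p is about 1/n, and peak approaches 1/e ~ 0.368 as n grows
```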
by Chunmei Liu.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
11

Johansson, Annelie. "Identifying gene regulatory interactions using functional genomics data." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-230285.

Full text
Abstract:
Previous studies have used correlation of DNase I hypersensitive site sequencing (DNase-seq) experiments to predict interactions between enhancers and their target promoter genes. We investigate two correlation measures, Pearson's correlation and mutual information, using DNase-seq data for 100 cell types in regions of chromosome 1. To assess their performance, we compared our correlation scores to Hi-C data from Jin et al. 2013. We show that performance is low when compared to the Hi-C data, indicating a need for improved correlation metrics. We also demonstrate that the use of Hi-C data as a gold standard is limited by its low resolution, and we suggest using another gold standard in further studies.
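The correlation approach described above can be sketched as follows, with synthetic signals standing in for real DNase-seq data: a vector of accessibility values across cell types for a promoter, an enhancer genuinely coupled to it, and an unrelated background site.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells = 100  # cell types, as in the study

# Synthetic accessibility signals: an enhancer genuinely coupled to a
# promoter, and an unrelated background site.
promoter = rng.gamma(2.0, 1.0, n_cells)
enhancer = 0.8 * promoter + rng.normal(0.0, 0.5, n_cells)
background = rng.gamma(2.0, 1.0, n_cells)

def pearson(x, y):
    """Pearson's correlation across cell types."""
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

r_pair = pearson(promoter, enhancer)    # high: candidate interaction
r_null = pearson(promoter, background)  # near zero: no interaction
```

Ranking enhancer-promoter pairs by such scores, and validating the ranking against Hi-C contacts, mirrors the evaluation the thesis performs.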
APA, Harvard, Vancouver, ISO, and other styles
12

Polyakova, Evgenia I. "A general theory for evaluating joint data interaction when combining diverse data sources /." May be available electronically:, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Chen, Xin. "Be the Data: Embodied Visual Analytics." Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/72287.

Full text
Abstract:
With the rise of big data, it is becoming increasingly important to educate students about data analytics. In particular, students without a strong mathematical background often have an unenthusiastic attitude towards high-dimensional data and find it challenging to understand relevant complex analytical methods, such as dimension reduction. In this thesis, we present an embodied approach to visual analytics designed to teach students to explore alternative 2D projections of high-dimensional data points using weighted multidimensional scaling. We propose a novel application, Be the Data, to explore the possibilities of using humans' embodied resources to learn from high-dimensional data. In our system, each student embodies a data point, and the positions of the students in a physical space represent a 2D projection of the high-dimensional data. Students physically move around a room, relative to one another, to interact with alternative projections and receive visual feedback. We conducted educational workshops with students inexperienced in the relevant data-analysis methods. Our findings indicate that the students were able to learn about high-dimensional data and the data-analysis process despite their low level of knowledge of the complex analytical methods. We also applied the same techniques to social meetings, to explain social gatherings and facilitate interactions.
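The projection technique named in the abstract, weighted multidimensional scaling, can be sketched with synthetic data (the sizes and weights below are arbitrary, not those used in Be the Data): feature weights reshape the pairwise distances, and classical MDS turns those distances into the 2D layout the students would embody.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(12, 5))  # 12 "students", 5-dimensional data points
w = np.array([1.0, 1.0, 0.2, 0.2, 0.2])  # feature weights steer the view

# Weighted pairwise squared distances, then classical (Torgerson) MDS.
diff = X[:, None, :] - X[None, :, :]
D2 = (diff ** 2 * w).sum(axis=2)
J = np.eye(12) - np.ones((12, 12)) / 12.0
B = -0.5 * J @ D2 @ J                       # double-centred Gram matrix
vals, vecs = np.linalg.eigh(B)              # eigenvalues in ascending order
coords = vecs[:, -2:] * np.sqrt(vals[-2:])  # top-2 eigenpairs -> 2D layout
```

Changing `w` and recomputing `coords` corresponds to the system's alternative projections: each weight vector yields a different arrangement of the students in the room.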
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
14

Chen, Li. "Searching for significant feature interaction from biological data." Diss., Online access via UMI, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
15

Chen, Si. "Active Learning Under Limited Interaction with Data Labeler." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/104894.

Full text
Abstract:
Active learning (AL) aims at reducing labeling effort by identifying the most valuable unlabeled data points from a large pool. Traditional AL frameworks have two limitations: first, they perform data selection in a multi-round manner, which is time-consuming and impractical; second, they usually assume that a small number of labeled data points are available in the same domain as the data in the unlabeled pool. In this thesis, we initiate the study of one-round active learning to address the first issue. We propose DULO, a general framework for the one-round setting based on the notion of data utility functions, which map a set of data points to some performance measure of the model trained on that set. We formulate one-round active learning as data utility function maximization. We then propose D²ULO, built on DULO, as a solution that addresses both issues. Specifically, D²ULO leverages domain adaptation (DA) to train a data utility model on labeled source data. The trained utility model can then be used to select high-utility data in the target domain and, at the same time, provide an estimate of the utility of the selected data. Our experiments show that the proposed frameworks achieve better performance than state-of-the-art baselines in the same setting. In particular, D²ULO is applicable to scenarios where the source and target labels have mismatches, which is not supported by existing works.
M.S.
Machine learning (ML) has achieved huge success in recent years. ML technologies such as recommendation systems, speech recognition and image recognition play an important role in daily life. This success is mainly built upon the use of large amounts of labeled data: compared with traditional programming, an ML algorithm does not rely on explicit instructions from a human; instead, it takes data along with labels as input and aims to learn, by itself, a function that correctly maps data to the label space. However, data labeling requires human effort and can be time-consuming and expensive, especially for datasets that involve domain-specific knowledge (e.g., disease prediction). Active learning (AL) is one solution for reducing the labeling effort: the learning algorithm actively selects the data points that provide the most information for the model, so a better model can be achieved with less labeled data. While traditional AL strategies do achieve good performance, they require a small amount of labeled data as initialization and perform data selection over multiple rounds, which poses a great challenge in practice, as platforms rarely provide timely online interaction with a data labeler, and the interaction is often time-inefficient. To deal with these limitations, we first propose DULO, which studies a new setting of AL in which data selection may be performed only once. To broaden the applicability of our method further, we propose D²ULO, which builds upon DULO and domain adaptation techniques to avoid the need for initial labeled data. Our experiments show that both proposed frameworks achieve better performance than state-of-the-art baselines.
APA, Harvard, Vancouver, ISO, and other styles
16

Kis, Filip. "Prototyping with Data : Opportunistic Development of Data-Driven Interactive Applications." Doctoral thesis, KTH, Medieteknik och interaktionsdesign, MID, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-196851.

Full text
Abstract:
There is a growing amount of digital information available from Open-Data initiatives, Internet-of-Things technologies, and web APIs in general. At the same time, the increasing amount of technology in our lives is creating a desire to take advantage of the generated data for personal or professional interests. Building interactive applications that address this desire is challenging, since it requires advanced engineering skills normally reserved for professional software developers. Yet more and more interactive applications are prototyped outside of enterprise environments, in more opportunistic settings: knowledge workers apply end-user development techniques to solve their tasks, or groups of friends get together for a weekend hackathon in the hope of becoming the next big startup. This thesis focuses on how to design prototyping tools that support opportunistic development of interactive applications that take advantage of the growing amount of available data. In particular, its goal is to understand the current challenges of prototyping with data and to identify the important qualities of tools that address these challenges. To accomplish this, declarative development tools were explored, keeping the focus on what data and interaction an application should afford rather than on how they should be implemented (programmed). The work presented in this thesis was carried out as an iterative process that started with a design exploration of model-based UI development, followed by observations of prototyping practices through a series of hackathon events and the iterative design of Endev, a prototyping tool for data-driven web applications. Formative evaluations of Endev were conducted with programmers and interaction designers. The main results of this thesis are the identified challenges of prototyping with data and the key qualities required of prototyping tools that aim to address these challenges.
The identified key qualities that lower the threshold for prototyping with data are: declarative prototyping, familiar and setup-free environment, and support tools. Qualities that raise the ceiling for what can be prototyped are: support for heterogeneous data and for advanced look and feel.
APA, Harvard, Vancouver, ISO, and other styles
17

Momal, Raphaëlle. "Network inference from incomplete abundance data. Accounting for missing actors in interaction network inference from abundance data. Tree-based inference of species interaction networks from abundance data." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM017.

Full text
Abstract:
Networks are tools used to represent species relationships in microbiology and ecology. Gaussian Graphical Models provide with a mathematical framework for the inference of conditional dependency networks, which allow for a clear separation of direct and indirect effects. However observed data are often discrete counts and the inference cannot be directly performed with this model. This work develops a methodology for network inference from species observed abundances. The method relies on specific algebraic properties of spanning tree structures to perform an efficient and complete exploration of the space of spanning trees. The inference takes place in a latent space of the observed counts.Then, observed abundances are likely to depend on unmeasured actors (e.g. species or covariate). This results in spurious edges in the marginal network between the species linked to the latter in the complete network, causing inaccurate further analysis. The second objective of this work is to account for missing actors during network inference. To do so we adopt a variational approach yielding valuable insights about the missing actors
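The spanning-tree exploration described in this abstract is typically made tractable by Kirchhoff's matrix-tree theorem, which reduces a sum over all spanning trees to a single determinant. A minimal sketch of that algebraic property (illustrative only, not the thesis's implementation):

```python
import numpy as np

def spanning_tree_weight_sum(W):
    """Sum over all spanning trees of the product of edge weights,
    via the matrix-tree theorem: any cofactor of the weighted
    graph Laplacian equals this sum."""
    L = np.diag(W.sum(axis=1)) - W    # weighted graph Laplacian
    return np.linalg.det(L[1:, 1:])   # delete row/column 0, take the determinant

# Sanity check on the unweighted complete graph K4:
# Cayley's formula gives 4^(4-2) = 16 spanning trees.
W = np.ones((4, 4)) - np.eye(4)
print(round(spanning_tree_weight_sum(W)))  # → 16
```

With all edge weights equal to 1 the determinant counts spanning trees exactly, which is why an exhaustive exploration of the tree space can stay cheap.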
APA, Harvard, Vancouver, ISO, and other styles
18

Evans, Jason Peter, and jason evans@yale edu. "Modelling Climate - Surface Hydrology Interactions in Data Sparse Areas." The Australian National University. Centre for Resource and Environmental Studies, 2000. http://thesis.anu.edu.au./public/adt-ANU20020313.032142.

Full text
Abstract:
The interaction between climate and land-surface hydrology is extremely important for long-term water resource planning. This is especially so in the presence of global warming and massive land-use change, issues which seem likely to have a disproportionate impact on developing countries. This thesis develops tools aimed at the study and prediction of climate effects on land-surface hydrology (in particular streamflow) that require a minimum amount of site-specific data. This minimum data requirement allows studies to be performed in data-sparse areas, such as the developing world. ¶ A simple lumped dynamics-encapsulating conceptual rainfall-runoff model, which explicitly calculates the evaporative feedback to the atmosphere, was developed. It uses the linear streamflow routing module of the rainfall-runoff model IHACRES with a new non-linear loss module based on the Catchment Moisture Deficit accounting scheme, and is referred to as CMD-IHACRES. In this model, evaporation can be calculated using a number of techniques depending on the data available; as a minimum, one to two years of precipitation, temperature and streamflow data are required. The model was tested on catchments covering a large range of hydroclimatologies and shown to estimate streamflow well. When tested against evaporation data, the simplest technique was found to capture the medium- to long-term average well but had difficulty reproducing the short-term variations. ¶ A comparison of the performance of three limited area climate models (MM5/BATS, MM5/SHEELS and RegCM2) was conducted in order to quantify their ability to reproduce near-surface variables. Components of the energy and water balance over the land surface display considerable variation among the models, with no model performing consistently better than the other two. However, several conclusions can be made. The MM5 longwave radiation scheme performed worse than the scheme implemented in RegCM2.
Estimates of runoff displayed the largest variations and differed from observations by as much as 100%. The climate models exhibited greater variance than the observations for almost all the energy and water related fluxes investigated. ¶ An investigation into improving these streamflow predictions by utilizing CMD-IHACRES was conducted. Using CMD-IHACRES in an 'offline' mode greatly improved the streamflow estimates while the simplest evaporation technique reproduced the evaporative time series to an accuracy comparable to that obtained from the limited area models alone. The ability to conduct a climate change impact study using CMD-IHACRES and a stochastic weather generator is also demonstrated. These results warrant further investigation into incorporating the rainfall-runoff model CMD-IHACRES in a fully coupled 'online' approach.
APA, Harvard, Vancouver, ISO, and other styles
19

Paneels, Sabrina A. "Development and prototyping of haptic interactions for data exploration." Thesis, University of Kent, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.527601.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Frusher, Marie J. "Predicting protein-protein interactions from sequence and structure data." Thesis, University of Essex, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.412108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Dionysiou, Ioanna. "Dynamic and composable trust for indirect interactions." Online access for everyone, 2006. http://www.dissertations.wsu.edu/Dissertations/Summer2006/i%5Fdionysiou%5F072406.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Taslim, Cenny. "Algorithm for comparing large scale protein-DNA interaction data." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1306894920.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Han, Bote. "The Multimodal Interaction through the Design of Data Glove." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32529.

Full text
Abstract:
In this thesis, we propose and present a multimodal interaction system that provides a natural way for human-computer interaction. The core idea of this system is to help users interact with the machine naturally by recognizing various gestures from a wearable device. To achieve this goal, we have implemented a system comprising both a hardware solution and gesture recognition approaches. For the hardware solution, we designed and implemented a data-glove-based interaction device with multiple kinds of sensors to detect finger formations, touch commands and hand postures. We also modified and implemented two gesture recognition approaches, one based on a support vector machine (SVM) and one on a lookup table. The detailed design and information are presented in this thesis. In the end, the system supports over 30 kinds of touch commands, 18 kinds of finger formations and 10 kinds of hand postures, as well as combinations of finger formation and hand posture, with a recognition rate of 86.67% and accurate touch command detection. We also evaluated the system in terms of subjective user experience.
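The SVM-based recognition step this abstract mentions can be sketched as a standard multi-class classifier over sensor feature vectors. The sensor layout, feature dimensionality and class labels below are illustrative assumptions, not the thesis's actual glove data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy data standing in for glove readings: 3 hand postures, each a noisy
# cluster of 10-dimensional flex/pressure sensor feature vectors.
centers = rng.uniform(0.0, 1.0, size=(3, 10))
X = np.vstack([c + rng.normal(0.0, 0.05, size=(50, 10)) for c in centers])
y = np.repeat([0, 1, 2], 50)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)  # multi-class SVM classifier
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

Real glove input would replace the synthetic clusters with calibrated sensor features; the classifier interface stays the same.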
APA, Harvard, Vancouver, ISO, and other styles
24

Khan, Mohd Tauheed. "Multimodal Data Fusion Using Voice and Electromyography Data for Robotic Control." University of Toledo / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=toledo156440368925597.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Ross, Ian. "Nonlinear dimensionality reduction methods in climate data analysis." Thesis, University of Bristol, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492479.

Full text
Abstract:
Linear dimensionality reduction techniques, notably principal component analysis, are widely used in climate data analysis as a means to aid in the interpretation of datasets of high dimensionality. These linear methods may not be appropriate for the analysis of data arising from nonlinear processes occurring in the climate system. Numerous techniques for nonlinear dimensionality reduction have been developed recently that may provide a potentially useful tool for the identification of low-dimensional manifolds in climate data sets arising from nonlinear dynamics. In this thesis I apply three such techniques to the study of El Niño/Southern Oscillation variability in tropical Pacific sea surface temperatures and thermocline depth, comparing observational data with simulations from coupled atmosphere-ocean general circulation models from the CMIP3 multi-model ensemble.
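The contrast this abstract draws can be illustrated on toy data: a nonlinear method such as Isomap can "unroll" a curved manifold that PCA, being a linear projection, cannot. This is a generic sketch of the technique class, not the climate fields analysed in the thesis:

```python
from sklearn.datasets import make_s_curve
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# An S-shaped 2-D manifold embedded in 3-D.
X, color = make_s_curve(n_samples=500, random_state=0)

# Linear reduction: orthogonal projection onto the leading principal components.
X_pca = PCA(n_components=2).fit_transform(X)

# Nonlinear reduction: Isomap preserves geodesic distances along the manifold,
# recovering the flat parameterisation that a linear projection collapses.
X_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

print(X_pca.shape, X_iso.shape)  # → (500, 2) (500, 2)
```

Plotting both embeddings coloured by the manifold coordinate makes the difference visible: the Isomap embedding orders points along the curve, the PCA one folds them over.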
APA, Harvard, Vancouver, ISO, and other styles
26

Page, Christopher Samuel. "On non-classical intermolecular interactions and chiral recognition." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287722.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Katsura-Gordon, Shigeo. "Democratizing Our Data : Finding Balance Living In A World Of Data Control." Thesis, Umeå universitet, Designhögskolan vid Umeå universitet, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-148942.

Full text
Abstract:
The 2018 scandal in which Cambridge Analytica tampered with U.S. elections using targeted ad campaigns driven by illicitly collected Facebook data has shown us that there are consequences to living in a world of technology driven by data. Mark Zuckerberg recently took part in a congressional hearing, making the control of data an important discussion at even the highest level of government. At the same time, we can also recognize the benefits of data: technology and services that are highly personalized because of it. There's nothing better than a targeted ad that appears at just the right time when you need to make a purchase, or Spotify providing you with the perfect playlist for a Friday night. This leaves us torn between opposites: to reject data and abandon our technology, returning to the proverbial stone age, or to accept being online all the time, monitored by a vast network of sensors that feed data into algorithms that may know more about our habits than we do.
It is the friction of these polar opposites that will lead us on a journey to find balance between the benefits and negatives of having data as part of our everyday lives. To help explore the negatives and positives that will occur on this journey, I developed Data Control Box, a product that asks the question "How would you live in a world where you can control your data?" Found in homes and workplaces, it allows individuals or groups of people to control their data by placing their mobile devices into its 14x22.5x15 cm acrylic container. Where the General Data Protection Regulation (GDPR) regulates and controls data after it has been produced by enforcing that "business processes that handle personal data must be built with data protection by design and by default, meaning that personal data must be stored using pseudonymisation or full anonymisation, and use the highest-possible privacy settings by default, so that the data is not available publicly without explicit consent, and cannot be used to identify a subject without additional information stored separately" (Wikipedia, 2018), Data Control Box limits personal data production through a physical barrier between the user and the device, prior to the data's creation. This physical embodiment of data control disrupts everyday habits of mobile device use, which in turn creates the opportunity for reflection on, and questioning of, what control of data is and how it works. For example, a person using Data Control Box can still create data using a personal computer despite having placed their mobile device inside the box. Being faced with this realization reveals aspects of the larger systems that might not have been as apparent without Data Control Box, and can serve as a starting point to answering the question "How would you live in a world where you can control your data?" To further build on this discussion, people using Data Control Box are encouraged to share their reflections by tweeting to the hashtag #DataControlBox.
These tweets are displayed through Data Control Box's 1.5 inch OLED breakout board connected to an Arduino micro-controller. Data Control Box can interface with any network-connected computer using a USB cord, which also serves as a power source. The connected feature of Data Control Box allows units found around the world to become nodes in a real-time discussion about the balance of data as a part of everyday life, and also serves as a collection of discussions that have taken place over time, starting in May of 2018. As a designer, the deployment of Data Control Box allowed me to probe the lives of real people and to see how they might interact with Data Control Box, and with their data, in a day-to-day setting. A total of fifteen people interacted with Data Control Box following a single protocol that was read aloud to them beforehand. A number of different contexts for the deployment of Data Control Box were explored, such as at home, on a desk at school, and during a two-hour human-computer interaction lecture. I collected a variety of qualitative research material in the form of photos and informal video interviews during these deployments, which I synthesized into insights that can be used by designers when considering how to design for the control of data, and for complex subjects like data more generally. This paper retraces my arrival at this final prototype, sharing the findings of my initial research collected during desk research, initial participant activities, and the creation of my initial prototype Data Box /01. It then closes with a deeper dive into the design rationale and process of building my final prototype, Data Control Box, and summarizes in greater detail the insights I have learned from its deployment through results, discussion and creative reflection.
APA, Harvard, Vancouver, ISO, and other styles
28

Sigursteinsdottir, Gudrun. "Learning gene interactions from gene expression data dynamic Bayesian networks." Thesis, University of Skövde, School of Humanities and Informatics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-886.

Full text
Abstract:

Microarray experiments generate vast amounts of data that evidently reflect many aspects of the underlying biological processes. A major challenge in computational biology is to extract, from such data, significant information and knowledge about the complex interplay between genes/proteins. An analytical approach that has recently gained much interest is reverse engineering of genetic networks. This is a very challenging approach, primarily due to the dimensionality of the gene expression data (many genes, few time points) and the potentially low information content of the data. Bayesian networks (BNs) and their extension, dynamic Bayesian networks (DBNs), are statistical machine learning approaches that have become popular for reverse engineering. In the present study, a DBN learning algorithm was applied to gene expression data produced from experiments that aimed to study the etiology of necrotizing enterocolitis (NEC), a gastrointestinal inflammatory (GI) disease that is the most common GI emergency in neonates. The data sets were particularly challenging for the DBN learning algorithm in that they contain gene expression measurements for relatively few time points, between which the sampling intervals are long. The aim of this study was, therefore, to evaluate the applicability of DBNs when learning genetic networks for the NEC disease, i.e. from the above-mentioned data sets, and to use biological knowledge to assess the hypothesized gene interactions. From the results, it was concluded that the NEC gene expression data sets were not informative enough for effective derivation of genetic networks for the NEC disease with DBNs and Bayesian learning.

APA, Harvard, Vancouver, ISO, and other styles
29

Pfaff, Lee. "The effect of training on individuals' interactions with visual data." Thesis, Boston University, 2013. https://hdl.handle.net/2144/12186.

Full text
Abstract:
Thesis (M.A.)--Boston University
Introduction: Traditionally, students demonstrate their learning via testing and demonstrations but little is known about how learners’ interaction with information changes during and after training. Previous studies have shown the difference between naive and expert individuals’ interactions with an image but never in the same individuals before and after the educational process. Our lab’s goal is to explore this question using gaze tracking and quantitative measures. This will be done by looking at 3 specific variables: entry time, number of visits and fraction of viewing time. Hypotheses: We test 3 main hypotheses. (1) The trained group will attend to educationally salient features more than the non-trained group, after the training. (2) The non-trained group will attend to visually salient features more than the trained group after training. (3) Training will cause the trained group to attend more to educationally salient features after then training, when compared to base, while the non-trained group will have no change. [TRUNCATED]
APA, Harvard, Vancouver, ISO, and other styles
30

Huynh, David François 1978. "User interfaces supporting casual data-centric interactions on the Web." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/42232.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 131-134).
Today's Web is full of structured data, but much of it is transmitted in natural language text or binary images that are not conducive to further machine processing by the time it reaches the user's web browser. Consequently, casual users-those without programming skills-are limited to whatever features that web sites offer. Encountering a few dozens of addresses of public schools listed in a table on one web site and a few dozens of private schools on another web site, a casual user would have to painstakingly copy and paste each and every address into an online map service, copy and paste the schools' names, to get a unified view of where the schools are relative to her home. Any more sophisticated operations on data encountered on the Web-such as re-plotting the results of a scientific experiment found online just because the user wants to test a different theory-would be tremendously difficult. Conversely, to publish structured data to the Web, a casual user settles for static data files or HTML pages that offer none of the features provided by commercial sites such as searching, filtering, maps, timelines, etc., or even as basic a feature as sorting. To offer a rich experience on her site, the casual user must single-handedly build a three-tier web application that normally takes a team of engineers several months. This thesis explores user interfaces for casual users-those without programming skills-to extract and reuse data from today's Web as well as publish data into the Web in richly browsable and reusable form. By assuming that casual users most often deal with small and simple data sets, declarative syntaxes and direct manipulation techniques can be supported for tasks previously done only with programming in experts' tools. User studies indicated that tools built with such declarative syntaxes and direct manipulation techniques could be used by casual users. 
Moreover, the data publishing tool built from this research has been used by actual users on the Web for many purposes, from presenting educational materials in classroom to listing products for very small businesses.
by David F. Huynh.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
31

Seby, Jean-Baptiste. "Networked interactions, graphical models and econometrics perspectives in data analysis." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/129081.

Full text
Abstract:
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program, September, 2020
Thesis: S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 231-243).
This thesis is composed of two independent parts. In Part I, we study higher-order interactions in both graphical models and networks, i.e., interactions between more than two nodes. In the graphical model setting, we do not assume that interactions are known and our goal is to recover the structure of the graph. Our main contribution is an algebraic criterion that enables us to determine whether a set of observed variables have a single cause or multiple causes. We also prove that this criterion holds in the presence of confounders, i.e., when the causes are hidden. In the network setting, we assume that the structure of the graph is known. Our objective is then to identify what kind of information about data can be learned from the analysis of higher-order interactions. More precisely, using the generalization of the normalized Laplacian and random walks on graphs to simplicial complexes, we study a simplicial notion of PageRank centrality as defined in [Schaub et al., 2018].
Conducting numerical experiments on both synthetic and real data, we find evidence that the so-called edge PageRank is related to the concepts of local and global bridges in networks. In Part II, we analyze the determinants of yield gaps in Semi-Arid Tropics (SAT) regions in India. Analyzing panel data on households in 30 villages in India over 6 years, we apply a fixed effects estimation method and a quantile regression with fixed effects to identify the most significant explanatory variables of yield gaps for 5 different crops. Using a correlated random effects estimator for unbalanced panel data, we can also estimate coefficients for time-invariant variables. We find that yield gap determinants are crop specific. In addition, soil characteristics show the most significant effects on output rate. When statistically significant, correlations with the type of soil are negative. This result might suggest that the choice of cropping pattern is not necessarily appropriate.
Finally, results suggest that unobservable heterogeneity of households is critical in explaining farm productivity. Time-invariant variables hardly explain this heterogeneity for which more research is needed.
by Jean-Baptiste Seby.
S.M. in Technology and Policy
S.M. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program
S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
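The fixed effects estimation used in Part II of this thesis can be sketched with the standard within transformation: demean each household's variables over its time periods, then run OLS on the demeaned data, which removes unobserved time-invariant heterogeneity. The data below are synthetic and the variable names illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_periods, beta_true = 30, 6, 2.0

# Panel with unit-specific intercepts (unobserved household heterogeneity).
alpha = rng.normal(0.0, 5.0, n_units)                 # fixed effects
x = rng.normal(size=(n_units, n_periods))             # e.g. an input variable
y = alpha[:, None] + beta_true * x + rng.normal(0.0, 0.1, (n_units, n_periods))

# Within transformation: subtracting each unit's time mean removes alpha.
x_d = x - x.mean(axis=1, keepdims=True)
y_d = y - y.mean(axis=1, keepdims=True)

beta_hat = (x_d * y_d).sum() / (x_d ** 2).sum()       # OLS slope on demeaned data
print(f"beta_hat = {beta_hat:.2f}")                   # close to beta_true = 2.0
```

Pooled OLS on the raw data would be biased whenever alpha correlates with x; the demeaning step is exactly what "fixed effects" buys.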
APA, Harvard, Vancouver, ISO, and other styles
32

Peralta, Veronika. "Data Quality Evaluation in Data Integration Systems." Phd thesis, Université de Versailles-Saint Quentin en Yvelines, 2006. http://tel.archives-ouvertes.fr/tel-00325139.

Full text
Abstract:
The need to access multiple data sources in a uniform way grows stronger every day, particularly in decision-support systems, which require a comprehensive analysis of data. With the development of Data Integration Systems (DIS), information quality has become a first-class property increasingly demanded by users. This thesis addresses data quality in DIS. More precisely, we are interested in the problems of evaluating the quality of the data delivered to users in response to their queries and of satisfying users' quality requirements. We also analyse the use of quality measures to improve the design of the DIS and the quality of the data. Our approach consists in studying one quality factor at a time, analysing its relationship with the DIS, proposing techniques for its evaluation and proposing actions for its improvement. Among the quality factors that have been proposed, this thesis analyses two: data freshness and data accuracy. We analyse the various definitions and measures that have been proposed for data freshness and accuracy, and we bring out the properties of the DIS that have a significant impact on their evaluation. We summarise the analysis of each factor by means of a taxonomy, which serves to compare existing work and to highlight open problems. We propose a framework that models the various elements involved in quality evaluation, such as data sources, user queries, the integration processes of the DIS, the properties of the DIS, quality measures and quality evaluation algorithms.
In particular, we model the integration processes of the DIS as workflow processes, in which the activities carry out the tasks that extract, integrate and deliver data to users. Our reasoning support for quality evaluation is a directed acyclic graph, called a quality graph, which has the same structure as the DIS and carries, as labels, the DIS properties that are relevant to quality evaluation. We develop evaluation algorithms that take as input the quality values of the source data and the properties of the DIS, and combine these values to qualify the data delivered by the DIS. They rely on the graph representation and combine property values while traversing the graph. The evaluation algorithms can be specialised to take into account the properties that influence quality in a concrete application. The idea behind the framework is to define a flexible context that allows the specialisation of the evaluation algorithms to specific application scenarios. The quality values obtained during evaluation are compared with those expected by users. Improvement actions can be carried out if the quality requirements are not satisfied. We suggest elementary improvement actions that can be composed to improve quality in a concrete DIS. Our approach to improving data freshness consists in analysing the DIS at different abstraction levels so as to identify its critical points and to target improvement actions on those points. Our approach to improving data accuracy consists in partitioning query results into areas (certain attributes, certain tuples) of homogeneous accuracy.
This allows user applications to visualise only the most accurate data, to filter out data that do not satisfy accuracy requirements, or to visualise data in tiers according to their accuracy. Compared with existing source-selection approaches, our proposal makes it possible to select the most accurate areas instead of filtering out entire sources. The main contributions of this thesis are: (1) a detailed analysis of the freshness and accuracy quality factors; (2) techniques and algorithms for evaluating and improving data freshness and accuracy; and (3) a quality-evaluation prototype usable in DIS design.
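The quality-graph evaluation described above can be sketched as a propagation over a DAG: each integration activity combines the freshness of its inputs with its own processing delay. The combination rule (worst input age plus local delay) and all names below are illustrative assumptions, not the thesis's actual algorithms:

```python
# Toy quality-graph evaluation: data freshness propagated through a DAG
# whose nodes are integration activities with processing delays.
graph = {                      # node -> list of predecessor nodes
    "source_a": [],
    "source_b": [],
    "merge": ["source_a", "source_b"],
    "deliver": ["merge"],
}
source_freshness = {"source_a": 10, "source_b": 3}   # age of source data (minutes)
delay = {"merge": 5, "deliver": 1}                   # per-activity processing delay

def freshness(node):
    """Age of the data produced by `node`: worst input age plus local delay."""
    preds = graph[node]
    if not preds:
        return source_freshness[node]
    return max(freshness(p) for p in preds) + delay[node]

print(freshness("deliver"))  # → 16
```

Comparing the propagated value at "deliver" against a user's freshness requirement identifies the critical path on which improvement actions should be targeted.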
APA, Harvard, Vancouver, ISO, and other styles
33

Levy, Marcel Andrew. "Ringermute an audio data mining toolkit /." abstract and full text PDF (free order & download UNR users only), 2005. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1433402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Dameh, Mustafa, and n/a. "Insights into gene interactions using computational methods for literature and sequence resources." University of Otago. Department of Anatomy & Structural Biology, 2008. http://adt.otago.ac.nz./public/adt-NZDU20090109.095349.

Full text
Abstract:
At the beginning of this century many sequencing projects were finalised. As a result, an overwhelming amount of literature and sequence data has become available to biologists via online bioinformatics databases. These biological data have led to a better understanding of many organisms and have helped identify genes. However, there is still much to learn about the functions and interactions of genes. This thesis is concerned with predicting gene interactions using two main online resources: biomedical literature and sequence data. The biomedical literature is used to explore and refine a text mining method, known as the "co-occurrence method", which is used to predict gene interactions. The sequence data are used in an analysis to predict an upper bound on the number of genes involved in gene interactions. The co-occurrence method of text mining was extensively explored in this thesis. The effects of certain computational parameters on the relevance of documents in which two genes co-occur were critically examined. The results showed that some computational parameters do indeed have an impact on the outcome of the co-occurrence method and, if taken into consideration, can lead to better identification of documents that describe gene interactions. To explore the co-occurrence method of text mining, a prototype system was developed, and as a result it contains unique functions that are not present in currently available text mining systems. Sequence data were used to predict the upper bound on the number of genes involved in gene interactions within a tissue. A novel approach was undertaken that analysed SAGE and EST sequence libraries using ecological estimation methods. The approach proves that the species accumulation theory used in ecology can be applied to tag libraries (SAGE or EST) to predict an upper bound on the number of mRNA transcript species in a tissue.
The novel computational analysis provided in this study can be used to extend the body of knowledge and insights relating to gene interactions and, hence, provide better understanding of genes and their functions.
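The ecological estimation idea mentioned above is commonly operationalised with richness estimators such as Chao1, which bounds the total number of species from the counts of singleton and doubleton tags. This is a sketch of the general technique, not necessarily the exact estimator used in the thesis:

```python
from collections import Counter

def chao1(tag_counts):
    """Chao1 richness estimate from per-species tag counts:
    S_obs + f1^2 / (2*f2), where f1/f2 are singleton/doubleton counts."""
    counts = [c for c in tag_counts if c > 0]
    s_obs = len(counts)
    f1 = sum(1 for c in counts if c == 1)   # species seen exactly once
    f2 = sum(1 for c in counts if c == 2)   # species seen exactly twice
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2    # bias-corrected variant
    return s_obs + f1 * f1 / (2 * f2)

# Toy SAGE-style library: 5 observed tag species, 2 singletons, 1 doubleton.
library = ["a"] * 10 + ["b"] * 4 + ["c"] * 2 + ["d", "e"]
print(chao1(Counter(library).values()))  # → 7.0
```

Applied to a SAGE or EST tag library, the per-species counts are tag frequencies, and the estimate bounds the number of mRNA transcript species in the tissue.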
APA, Harvard, Vancouver, ISO, and other styles
35

VanderMeer, Debra. "Data access and interaction management in mobile and distributed environments." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/9245.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Kuhnke, Dominik [Verfasser]. "Spray/Wall-Interaction Modelling by Dimensionless Data Analysis / Dominik Kuhnke." Aachen : Shaker, 2004. http://d-nb.info/1186574682/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Tileylioglu, Salih. "Evaluation of soil-structure interaction effects from field performance data." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1666368201&sid=2&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Nergis, Damirag Melodi. "Web Based Cloud Interaction and Visualization of Air Pollution Data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254401.

Full text
Abstract:
According to World Health Organization, around 7 million people die every year due to diseases caused by air pollution. With the improvements in Internet of Things in the recent years, environmental sensing systems has started to gain importance. By using technologies like Cloud Computing, RFID, Wireless Sensor Networks, and open Application Programming Interfaces, it has become easier to collect data for visualization on different platforms. However, collected data need to be represented in an efficient way for better understanding and analysis, which requires design of data visualization tools. The GreenIoT initiative aims to provide open data with its infrastructure for sustainable city development in Uppsala. An environmental web application is presented within this thesis project, which visualizes the gathered environmental data to help municipality organizations to implement new policies for sustainable urban planning, and citizens to gain more knowledge to take sustainable decisions in their daily life. The application has been developed making use of the 4Dialog API, which is developed to provide data from a dedicated cloud storage for visualization purposes. According to the evaluation presented in this thesis, further development is needed to improve the performance to provide faster and more reliable service as well as the accessibility to promote openness and social inclusion.
39

Lie, Jonathan Ken 1977. "Correlation of data in the unified modeling language interaction diagram." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86541.

Full text
40

Arslan, Cagan. "Doing more without more : data fusion in human-computer interaction." Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I042.

Full text
Abstract:
The increasing variety of tasks that require human-computer interfaces results in the production of new and improved sensing devices and therefore causes the obsolescence of older technologies. In a world of limited resources, the production rate of new interaction devices is unsustainable. Sustainable design calls for the re-appropriation of existing materials, so we need to design interfaces that are modular and re-usable, yet allow new interaction techniques. We believe that combining the strengths of different input devices through data fusion can enable powerful interactions while extending the lifespan of electronic materials. As the complexity of sensors increases, their combination presents new challenges and opportunities, notably in terms of computational power and user behavior, which we explore in this document. We first explain how previous work conducted in different sub-domains of human-computer interaction fits into the data fusion perspective. From this perspective, we take all aspects of input devices into consideration to define the framework to which this thesis belongs. The first step consists of handling input devices to provide meaningful information to be fused, so we demonstrate how to go from a complex data source, such as a camera stream, to a small, descriptive piece of information that enables lightweight fusion. Then, we separate the benefits of multi-sensor data fusion for interaction spaces into two categories: enriching the interaction space and extending the interaction space. Our contribution to enriched spaces mainly focuses on musical interfaces, where we propose a movement sonification application on a mobile device and a visual feedback mechanism, all using a combination of sensors. Further, we contribute a virtually extended surface for large-display interactions using a hand-held touchscreen and examine the user's appropriation of the new interaction space.
41

Liu, Can. "Embodied Interaction for Data Manipulation Tasks on Wall-sized Displays." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLS207/document.

Full text
Abstract:
Large data sets are increasingly used in various professional domains, such as medicine and business. This raises challenges in managing and using them, typically including sense-making, searching and classifying. This does not only require advanced algorithms to process the data sets automatically, but also needs users' direct interaction to make initial judgments or to correct mistakes made by the machine. This dissertation explores this problem domain and studies users' direct interaction with scattered large data sets. The human body is made for interacting with the physical world, from micro scales to very large scales. We can naturally coordinate ourselves to see, hear, touch and move to interact with the environment at various scales. Beyond the individual, humans collaborate with each other through communication and coordination. Based on Dourish's definition, Embodied Interaction encourages interaction designers to take advantage of users' existing skills in the physical world when designing the interaction with digital artefacts. I argue that large interactive spaces enable embodied user interaction with data spread over space, by leveraging users' physical abilities such as walking, approaching and orienting. Beyond single users, co-located environments provide multiple users with physical awareness and verbal and gestural communication. While single users' physical actions have been augmented into various input modalities in existing research, the augmentation of between-user resources has been less explored. In this dissertation, I first present an experiment that formally evaluates the advantage of single users performing a data manipulation task on a wall-sized display, compared to a desktop computer. It shows that using users' physical movements to navigate a large data surface outperforms existing digital navigation techniques on a desktop computer, such as Focus+Context.
With the same experimental task, I then study the interaction efficiency of collaborative data manipulation with a wall-sized display, in loosely or closely coupled collaboration styles. The experiment measures the effect of providing a Shared Interaction Technique, in which collaborators each perform part of an action to issue a command. The results confirm its benefits in terms of efficiency, user engagement as well as physical fatigue. Finally, I explore the concept of augmenting human-to-human interaction with shared interaction techniques, and illustrate a design space of such techniques for supporting collaborative data manipulation. I report the design, implementation and evaluation of a set of these techniques and discuss future work.
42

Gao, Cen. "Research in target specificity based on microRNA-target interaction data." Case Western Reserve University School of Graduate Studies / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1275685130.

Full text
43

Laha, Bireswar. "Immersive Virtual Reality and 3D Interaction for Volume Data Analysis." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/51817.

Full text
Abstract:
This dissertation provides empirical evidence for the effects of the fidelity of VR system components, and novel 3D interaction techniques for analyzing volume datasets. It provides domain-independent results based on an abstract task taxonomy for visual analysis of scientific datasets. Scientific data generated through various modalities, e.g., computed tomography (CT), magnetic resonance imaging (MRI), etc., are in 3D spatial or volumetric format. Scientists from various domains, e.g., geophysics and medical biology, use visualizations to analyze data. This dissertation seeks to improve the effectiveness of scientific visualizations. Traditional volume data analysis is performed on desktop computers with mouse and keyboard interfaces. Previous research and anecdotal experiences indicate improvements in volume data analysis in systems with very high fidelity of display and interaction (e.g., CAVE) over desktop environments. However, prior results are not generalizable beyond specific hardware platforms or specific scientific domains, and do not look into the effectiveness of 3D interaction techniques. We ran three controlled experiments to study the effects of a few components of VR system fidelity (field of regard, stereo and head tracking) on volume data analysis. We used volume data from paleontology, medical biology and biomechanics. Our results indicate that different components of system fidelity have different effects on the analysis of volume visualizations. One of our experiments provides evidence for validating the concept of Mixed Reality (MR) simulation. Our approach of controlled experimentation with MR simulation provides a methodology to generalize the effects of immersive virtual reality (VR) beyond individual systems. To generalize our (and other researchers') findings across disparate domains, we developed and evaluated a taxonomy of visual analysis tasks with volume visualizations. We report our empirical results tied to this taxonomy.
We developed the Volume Cracker (VC) technique for improving the effectiveness of volume visualizations. This is a free-hand gesture-based novel 3D interaction (3DI) technique. We describe the design decisions in the development of the Volume Cracker (with a list of usability criteria), and provide the results from an evaluation study. Based on the results, we further demonstrate the design of a bare-hand version of the VC with the Leap Motion controller device. Our evaluations of the VC show the benefits of using 3DI over standard 2DI techniques. This body of work provides the building blocks for a three-way many-many-many mapping between the sets of VR system fidelity components, interaction techniques and visual analysis tasks with volume visualizations. Such a comprehensive mapping can inform the design of next-generation VR systems to improve the effectiveness of scientific data analysis.
Ph. D.
44

Cao, Hetian. "Designing for Interaction and Insight: Experimental Techniques For Visualizing Building Energy Consumption Data." Research Showcase @ CMU, 2017. http://repository.cmu.edu/theses/130.

Full text
Abstract:
While more efficient use of energy is increasingly vital to the development of the modern industrialized world, emerging visualization tools and approaches to telling data stories provide an opportunity for exploring a wide range of topics related to energy consumption and conservation (Olsen, 2017). Telling energy stories using data visualization has generated great interest among journalists, designers and scientific researchers; over time it has proven effective in providing knowledge and insights (Holmes, 2007). This thesis proposes a new angle for tackling the challenge of designing visualization experiences for building energy data, which aims to invite users to think beyond the established data narratives, augment their knowledge and insight into energy-related issues, and potentially trigger ecologically responsible behaviors, by investigating and evaluating the efficacy of existing interactive energy data visualization projects, and experimenting with user-centric interactive interfaces and unusual visual expressions through the development of a data visualization prototype.
45

McLellan, Shelagh. "Precision Medicine : The Future of Data-driven Healthcare." Thesis, Umeå universitet, Designhögskolan vid Umeå universitet, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-93460.

Full text
Abstract:
Precision Medicine: the future of data-driven healthcare is an interaction design master's thesis project aimed at presenting a vision of how genomic and quantified data might be integrated into the Swedish public healthcare system. This thesis focuses on a user-centered design process, examining patients' health needs and desires. It also looks at the rise of genomic data and precision medicine. Ethnographic research was conducted with people in the different Scandinavian countries, hearing their health stories first hand, in relation to genomic data, quantified-self data and overall health. Commonly used service design methods such as customer journey mapping, blueprinting and business model mapping have played a large role in shaping the experience of the concept.
46

Sonning, Sabina. "Big Data - Small Device: A Mobile Design Concept for Geopolitical Awareness when Traveling." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-87203.

Full text
Abstract:
This work explores an application concept for small mobile devices, displaying structured "Big Data" based on human web reporting. The target user is a traveler interested in geopolitical events in the visited region, and the concept focuses on high-level signals to describe the situation while allowing follow-up down to the original reporting sources. Interviews and a survey were used to investigate the target user group's current behavior and needs while traveling and in unstable regions. The design process is described with reference to interaction design practices and successful applications on the market today, resulting in a concept presented in the form of high-fidelity sketches, a well-documented interaction style and transitions, and a clickable low-fidelity prototype. The work can be used as a reference document for further development.
47

Wang, Xiyao. "Augmented reality environments for the interactive exploration of 3D data." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG052.

Full text
Abstract:
Exploratory visualization of 3D data is fundamental in many scientific domains. Traditionally, experts use a PC workstation and rely on the mouse and keyboard to interactively adjust the view to observe the data. This setup provides immersion through interaction---users can precisely control the view and the parameters, but it does not provide any depth cues, which can limit the comprehension of large and complex 3D data. Virtual or augmented reality (V/AR) setups, in contrast, provide visual immersion with stereoscopic views. Although their benefits have been proven, several limitations restrict their application in existing workflows, including high setup/maintenance needs, difficulties of precise control, and, more importantly, the separation from traditional analysis tools. To benefit from both sides, we thus investigated a hybrid setting combining an AR environment with a traditional PC to provide both interactive and visual immersion for 3D data exploration. We closely collaborated with particle physicists to understand their general working process and visualization requirements to motivate our design. First, building on our observations and discussions with physicists, we built a prototype that supports fundamental tasks for exploring their datasets. This prototype treated the AR space as an extension of the PC screen and allowed users to freely interact with each using the mouse. Thus, experts could benefit from the visual immersion while using analysis tools on the PC. An observational study with 7 physicists at CERN validated the feasibility of such a hybrid setting and confirmed the benefits. We also found that the large canvas of AR, and walking around to observe the data in AR, had great potential for data exploration. However, the design of mouse interaction in AR and the use of PC widgets in AR needed improvement. Second, based on the results of the first study, we decided against intensively using flat widgets in AR.
But we wondered whether using the mouse for navigating in AR is problematic compared to input with high degrees of freedom (DOF), and then attempted to investigate whether the match or mismatch of dimensionality between input and output devices plays an important role in users' performance. Results of user studies (that compared the performance of using a mouse, a space mouse, and a tangible tablet paired with the screen or the AR space) did not show that the (mis-)match was important. We thus concluded that the dimensionality was not a critical point to consider, which suggests that users are free to choose any input that is suitable for a specific task. Moreover, our results suggested that the mouse was still an efficient tool compared to high-DOF input. We can therefore validate our design of keeping the mouse as the primary input for the hybrid setting, while other modalities should only serve as an addition for specific use cases. Next, to support the interaction and to keep the background information while users are walking around to observe the data in AR, we proposed adding a mobile device. We introduced a novel approach that augments tactile interaction with pressure sensing for 3D object manipulation and view navigation. Results showed that this method could efficiently improve accuracy, with limited influence on completion time. We thus believe that it is useful for visualization purposes where high accuracy is usually demanded. Finally, we summed up all the findings in this thesis and came up with an envisioned setup for a realistic data exploration scenario that makes use of a PC workstation, an AR headset, and a mobile device. The work presented in this thesis shows the potential of combining a PC workstation with AR environments to improve the process of 3D data exploration and confirms its feasibility, all of which will hopefully inspire future designs that seamlessly bring immersive visualization to existing scientific workflows.
48

Martins, Guarese Renan Luigi. "Augmenting analytics : Situated Data Visualization towards decision-making for EMC testing." Thesis, Högskolan i Halmstad, Centrum för forskning om inbyggda system (CERES), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-42889.

Full text
Abstract:
The present work proposes the use of information visualization techniques allied to an Augmented Reality user interface to provide information that helps professionals analyse data, spatially situated where it was originally measured. This problem and the proposed solution may be adapted to different professional contexts. Three use-case visualizations were designed, implemented and tested in the following task contexts: classroom seat analysis, GPS route following and EMC data extraction. Apart from visualizing the situated data, users may also interact with it to narrow down their search by switching the attributes being displayed, combining them together, applying filters, changing the formatting and extracting data. The approaches proposed in this work were tested against each other in comparable 2D and 3D interactive visualizations of the same data, in a series of usability and performance assessments with users, to validate the solutions. The goal was to ultimately expose whether AR can help users perform better in different decision-making contexts. Our tests exposed relevant results in a series of the variables measured, such as accuracy, correctness, distance travelled and time taken.
49

Adrup, Joakim. "Visualization and Interaction with Temporal Data using Data Cubes in the Global Earth Observation System of Systems." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231364.

Full text
Abstract:
The purpose of this study was to explore the usage of data cubes in the context of the Global Earth Observation System of Systems (GEOSS). The study investigated what added benefit could be provided to users of the GEOSS platform by utilizing the capabilities of data cubes. Data cubes in earth observation are a concept for how data should be handled and provided by a data server, including aspects such as flexible extraction of subsets and processing capabilities. In this study it was found that the most frequent use case for data cubes was time analysis. One of the main services provided by the GEOSS portal is the discovery and inspection of datasets. In the study, a timeline interface was constructed to facilitate the exploration and inspection of datasets with a temporal dimension. The datasets were provided by a data cube, and the interface made use of the data cube's capability of retrieving subsets of data along any arbitrary axis. A usability evaluation was conducted on the timeline interface to gain insight into user requirements and user satisfaction. The results showed that the design worked well in many regards, ranking high in user satisfaction. On a number of points the study highlighted areas of improvement, providing insight into important design limitations and challenges together with suggestions on how these could be approached in different ways.
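The arbitrary-axis subsetting that this abstract credits to data cubes can be illustrated with a minimal sketch. This is a hedged illustration, not the thesis implementation: the cube shape and axis order (time × latitude × longitude) are assumptions, and a plain NumPy array stands in for the data-cube server.

```python
import numpy as np

# Toy data cube: 4 time steps over a 3x5 lat/lon grid (shapes are assumptions).
cube = np.arange(4 * 3 * 5).reshape(4, 3, 5)

# Time series for one spatial cell -- the kind of subset a timeline
# interface needs, extracted along the time axis.
series = cube[:, 1, 2]

# One time step over the whole grid -- the kind of subset a map view needs.
snapshot = cube[0]

print(series.shape)    # (4,)
print(snapshot.shape)  # (3, 5)
```

The same indexing extracts a subset along any axis, which is the flexibility the study relies on for exploring datasets with a temporal dimension.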
APA, Harvard, Vancouver, ISO, and other styles
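The arbitrary-axis subsetting that this abstract attributes to data cubes can be illustrated with a small sketch. This is not code from the thesis; the array shape, axis meanings, and values are hypothetical placeholders:

```python
import numpy as np

# Hypothetical data cube: time x latitude x longitude (e.g. monthly observations)
cube = np.arange(4 * 3 * 5).reshape(4, 3, 5)

# A data cube server lets clients extract subsets along any axis:
time_slice = cube[2, :, :]     # one time step, full spatial extent
pixel_series = cube[:, 1, 3]   # full time series for a single pixel (timeline view)
region = cube[:, 0:2, 1:4]     # a spatio-temporal sub-region

print(pixel_series.shape)  # (4,) -- one value per time step
```

A timeline interface like the one described would rely on the second kind of extraction: pulling a full temporal series for a fixed spatial location without downloading the whole dataset.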
50

Pamuk, Bahar. "Coevolution Based Prediction Of Protein-protein Interactions With Reduced Training Data." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12610389/index.pdf.

Full text
Abstract:
Protein-protein interactions are important for the prediction of protein functions, since two interacting proteins usually have similar functions in a cell. Available protein interaction networks are incomplete, but they can be used to predict new interactions in a supervised learning framework. However, when the known protein network includes a large number of protein pairs, the training time of the machine learning algorithm becomes quite long. In this thesis work, our aim is to predict protein-protein interactions from a known portion of the interaction network. We used Support Vector Machines (SVM) as the machine learning algorithm and used the already known protein pairs in the network. We chose phylogenetic profiles of proteins to form the feature vectors required for the learner, since the evolutionary similarity of two proteins gives a reasonable indication of whether the two proteins interact or not. For large data sets, the training time of SVM becomes quite long; therefore we reduced the data size in a sensible way while keeping approximately the same prediction accuracy. We applied a number of clustering techniques to extract the most representative data and features in a two-categorical framework. Since the training data set is a two-dimensional matrix, we applied data reduction methods in both dimensions, i.e., both in data size and in feature vector size. We observed that the data clustered by the k-means clustering technique gave superior prediction accuracies compared to another data clustering algorithm that was also developed for reducing data size for SVM training. Still, the true positive and false positive rates (TPR-FPR) of the training data sets constructed by the two clustering methods did not clearly indicate which method outperforms the other. On the other hand, we applied feature selection methods to the feature vectors of the training data by selecting the most representative features in a biological and in a statistical sense. We used the phylogenetic tree of organisms to identify the organisms which are evolutionarily significant. Additionally, we applied Fisher's test to select the features which are most representative statistically. The accuracy and TPR-FPR values obtained by the feature selection methods did not allow a definitive performance comparison; however, the phylogenetic tree method resulted in acceptable prediction values compared to Fisher's test.
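The data-reduction scheme the abstract outlines (cluster each class, keep representative samples, then train the SVM on the reduced set) can be sketched as follows. This is a minimal illustration assuming scikit-learn, not the thesis's actual code; the "phylogenetic profile" vectors here are random placeholders, and the cluster count is an arbitrary choice:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder "phylogenetic profile" features for protein pairs:
# one presence/absence flag per reference organism.
X = rng.integers(0, 2, size=(600, 40)).astype(float)
y = rng.integers(0, 2, size=600)  # 1 = interacting pair, 0 = non-interacting

# Reduce data size: cluster each class separately and keep, for each
# cluster, the real sample nearest its centroid as a representative.
keep = []
for label in (0, 1):
    idx = np.where(y == label)[0]
    km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(X[idx])
    for c in km.cluster_centers_:
        keep.append(idx[np.argmin(((X[idx] - c) ** 2).sum(axis=1))])
keep = np.unique(keep)

# Train the SVM on the reduced, representative subset only.
clf = SVC(kernel="rbf").fit(X[keep], y[keep])
print(len(keep), "of", len(X), "samples used for training")
```

The same idea extends to the feature dimension: rather than clustering rows (samples), one can select a subset of columns (organisms), which is what the phylogenetic-tree and Fisher's-test selection steps in the abstract do.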
APA, Harvard, Vancouver, ISO, and other styles
