
Dissertations / Theses on the topic 'Data interaction analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Data interaction analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Dowling, Michelle Veronica. "Semantic Interaction for Symmetrical Analysis and Automated Foraging of Documents and Terms." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/104682.

Full text
Abstract:
Sensemaking tasks, such as reading many news articles to determine the truthfulness of a given claim, are difficult. These tasks require a series of iterative steps to first forage for relevant information and then synthesize this information into a final hypothesis. To assist with such tasks, visual analytics systems provide interactive visualizations of data to enable faster, more accurate, or more thorough analyses. For example, semantic interaction techniques leverage natural or intuitive interactions, like highlighting text, to automatically update the visualization parameters using machine learning. However, this process of using machine learning based on user interaction is not yet well defined. We began our research efforts by developing a computational pipeline that models and captures how a system processes semantic interactions. We then expanded this model to denote specifically how each component of the pipeline supports steps of the Sensemaking Process. Additionally, we recognized a cognitive symmetry in how analysts consider data items (like news articles) and their attributes (such as terms that appear within the articles). To support this symmetry, we also modeled how to visualize and interact with data items and their attributes simultaneously. We built a testbed system and conducted a user study to determine which analytic tasks are best supported by such symmetry. Then, we augmented the testbed system to scale up to large data using semantic interaction foraging, a method for automated foraging based on user interaction. This experience enabled our development of design challenges and a corresponding future research agenda centered on semantic interaction foraging. We began investigating this research agenda by conducting a second user study on when to apply semantic interaction foraging to better match the analyst's Sensemaking Process.
Doctor of Philosophy
Sensemaking tasks such as determining the truthfulness of a claim using news articles are complex, requiring a series of steps in which the relevance of each piece of information within the articles is first determined. Relevant pieces of information are then combined until a conclusion may be reached regarding the truthfulness of the claim. To help with these tasks, interactive visualizations of data can make it easier or faster to find or combine information. In this research, we focus on leveraging natural or intuitive interactions, such as organizing documents in a 2-D space, which the system uses to perform machine learning to automatically adjust the visualization to better support the given task. We first model how systems perform such machine learning based on interaction, as well as how each component of the system supports the user's sensemaking task. Additionally, we developed a model and accompanying testbed system for simultaneously evaluating both data items (like news articles) and their attributes (such as terms within the articles) through symmetrical visualization and interaction methods. With this testbed system, we devised and conducted a user study to determine which types of tasks are supported or hindered by such symmetry. We then combined these models to build an additional testbed system that implemented a searching technique to automatically add previously unseen, relevant pieces of information to the visualization. Using our experience in implementing this automated searching technique, we defined design challenges to guide future implementations, along with a research agenda to refine the technique. We also devised and conducted another user study to determine when such automated searching should be triggered to best support the user's sensemaking task.
APA, Harvard, Vancouver, ISO, and other styles
2

Kuhnke, Dominik [Author]. "Spray/Wall-Interaction Modelling by Dimensionless Data Analysis / Dominik Kuhnke." Aachen: Shaker, 2004. http://d-nb.info/1186574682/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Laha, Bireswar. "Immersive Virtual Reality and 3D Interaction for Volume Data Analysis." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/51817.

Full text
Abstract:
This dissertation provides empirical evidence for the effects of the fidelity of VR system components, and novel 3D interaction techniques for analyzing volume datasets. It provides domain-independent results based on an abstract task taxonomy for visual analysis of scientific datasets. Scientific data generated through various modalities, e.g., computed tomography (CT) and magnetic resonance imaging (MRI), are in 3D spatial or volumetric format. Scientists from various domains, e.g., geophysics and medical biology, use visualizations to analyze data. This dissertation seeks to improve the effectiveness of scientific visualizations. Traditional volume data analysis is performed on desktop computers with mouse and keyboard interfaces. Previous research and anecdotal experiences indicate improvements in volume data analysis in systems with very high fidelity of display and interaction (e.g., CAVE) over desktop environments. However, prior results are not generalizable beyond specific hardware platforms or specific scientific domains, and do not look into the effectiveness of 3D interaction techniques. We ran three controlled experiments to study the effects of a few components of VR system fidelity (field of regard, stereo and head tracking) on volume data analysis. We used volume data from paleontology, medical biology and biomechanics. Our results indicate that different components of system fidelity have different effects on the analysis of volume visualizations. One of our experiments provides evidence for validating the concept of Mixed Reality (MR) simulation. Our approach of controlled experimentation with MR simulation provides a methodology to generalize the effects of immersive virtual reality (VR) beyond individual systems. To generalize our (and other researchers') findings across disparate domains, we developed and evaluated a taxonomy of visual analysis tasks with volume visualizations. We report our empirical results tied to this taxonomy.
We developed the Volume Cracker (VC) technique for improving the effectiveness of volume visualizations. This is a novel free-hand, gesture-based 3D interaction (3DI) technique. We describe the design decisions in the development of the Volume Cracker (with a list of usability criteria), and provide the results from an evaluation study. Based on the results, we further demonstrate the design of a bare-hand version of the VC with the Leap Motion controller device. Our evaluations of the VC show the benefits of using 3DI over standard 2DI techniques. This body of work provides the building blocks for a three-way many-many-many mapping between the sets of VR system fidelity components, interaction techniques, and visual analysis tasks with volume visualizations. Such a comprehensive mapping can inform the design of next-generation VR systems to improve the effectiveness of scientific data analysis.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Chen. "From network to pathway: integrative network analysis of genomic data." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/77121.

Full text
Abstract:
The advent of various types of high-throughput genomic data has enabled researchers to investigate complex biological systems in a systemic way and has started to shed light on the underlying molecular mechanisms in cancers. To analyze huge amounts of genomic data, effective statistical and machine learning tools are clearly needed; more importantly, integrative approaches are especially needed to combine different types of genomic data for a network or pathway view of biological systems. Motivated by such needs, in this dissertation we develop an integrative framework for pathway analysis. Specifically, we dissect the molecular pathway into two parts: the protein-DNA interaction network and the protein-protein interaction network. Several novel approaches are proposed to integrate gene expression data with various forms of biological knowledge, such as protein-DNA interaction and protein-protein interaction, for reliable molecular network identification. The first part of this dissertation seeks to infer condition-specific transcriptional regulatory networks by integrating gene expression data and protein-DNA binding information. Protein-DNA binding information provides initial relationships between transcription factors (TFs) and their target genes, and this information is essential to derive biologically meaningful integrative algorithms. Based on the availability of this information, we discuss the inference task in two different situations: (a) if protein-DNA binding information of multiple TFs is available: based on the protein-DNA data of multiple TFs, which are derived from sequence analysis between DNA motifs and gene promoter regions, we can construct an initial connection matrix and solve the network inference using a constrained least-squares approach named motif-guided network component analysis (mNCA). However, the connection matrix usually contains a considerable amount of false positives and false negatives that make inference results questionable.
To circumvent this problem, we propose a knowledge-based stability analysis (kSA) approach to test the conditional relevance of individual TFs by checking the discrepancy of multiple estimations of transcription factor activity with respect to different perturbations on the connections. The rationale behind stability analysis is that the consistency of observed gene expression and true network connection shall remain stable after small perturbations are applied to the initial connection matrix. With condition-specific TFs prioritized by kSA, we further propose to use multivariate regression to highlight condition-specific target genes. Through simulation studies comparing with several competing methods, we show that the proposed schemes are more sensitive in detecting relevant TFs and target genes for network inference purposes. Experimentally, we have applied stability analysis to a yeast cell cycle experiment and further to a series of anti-estrogen breast cancer studies. In both experiments, not only are biologically relevant regulators highlighted, but condition-specific transcriptional regulatory networks are also constructed, which could provide further insights into the corresponding cellular mechanisms. (b) if only a single TF's protein-DNA information is available: this happens when the protein-DNA binding relationship of an individual TF is measured through experiments. Since the original mNCA requires a complete connection matrix to perform estimation, incomplete knowledge of a single TF is not applicable to such an approach. Moreover, binding information derived from experiments could still be inconsistent with gene expression levels. To overcome these limitations, we propose a linear extraction scheme called regulatory component analysis (RCA), which can infer underlying regulation relationships even with partial biological knowledge.
Numerical simulations show significant improvement of RCA over other traditional methods in identifying target genes, not only in low signal-to-noise-ratio situations but also when the given biological knowledge is incomplete and inconsistent with the data. Furthermore, biological experiments on Escherichia coli regulatory network inference are performed to fairly compare traditional methods, where the effectiveness and superior performance of RCA are confirmed. The second part of the dissertation moves from the protein-DNA interaction network up to the protein-protein interaction network, to identify dysregulated protein sub-networks by integrating gene expression data and protein-protein interaction information. Specifically, we propose a statistically principled method, namely Metropolis random walk on graph (MRWOG), to highlight condition-specific PPI sub-networks in a probabilistic way. The method is based on Markov chain Monte Carlo (MCMC) theory to generate a series of samples that will eventually converge to some desired equilibrium distribution, where each sample indicates the selection of one particular sub-network during the process of the Metropolis random walk. The central idea of MRWOG is that the essentiality of a gene to be included in a sub-network depends on not only its expression but also its topological importance. In contrast to most existing methods, which construct sub-networks in a deterministic way and therefore lack a relevance score for each protein, MRWOG is capable of assessing the importance of each individual protein node in a global way, not only reflecting its individual association with clinical outcome but also indicating its topological role (hub, bridge) in connecting other important proteins. Moreover, each protein node is associated with a sampling frequency score, which enables the statistical justification of each individual node and flexible scaling of sub-network results.
Based on the MRWOG approach, we further propose two strategies: one is bootstrapping, used for assessing the statistical confidence of detected sub-networks; the other is graph division, to separate a large sub-network into several smaller sub-networks to facilitate interpretation. MRWOG is easy to use, with only two parameters to adjust: the beta value for performing the random walk and the quantile level for calculating the truncated posterior mean. Through extensive simulations, we show that the proposed scheme is not sensitive to these two parameters in a relatively wide range. We also compare MRWOG with deterministic approaches for identifying sub-networks and prioritizing topologically important proteins; in both cases MRWOG outperforms existing methods in terms of both precision and recall. By utilizing the MRWOG-generated node/edge sampling frequency, which is in fact the posterior mean of the corresponding protein node/interaction edge, we illustrate that condition-specific nodes/interactions can be better prioritized than by schemes based on scores of individual nodes/interactions. Experimentally, we have applied MRWOG to study a yeast knockout experiment for galactose utilization pathways to reveal important components of the corresponding biological functions; we also applied MRWOG to study breast cancer patient prognosis problems, where the sub-network analysis could lead to an understanding of the molecular mechanisms of antiestrogen resistance in breast cancer. Finally, we conclude this dissertation with a summary of the original contributions and future work for deepening the theoretical justification of the proposed methods and broadening their potential biological applications such as cancer studies.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
5

Pradhananga, Nipesh. "Construction site safety analysis for human-equipment interaction using spatio-temporal data." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52326.

Full text
Abstract:
The construction industry has consistently suffered the highest number of fatalities among all human-involved industries over the years. Safety managers struggle to prevent injuries and fatalities by monitoring at-risk behavior exhibited by workers and equipment operators. Current methods of identifying and reporting potential hazards on site involve periodic manual inspection, which depends upon personal judgment, is prone to human error, and consumes enormous time and resources. This research presents a framework for automatic identification and analysis of potential hazards by analyzing spatio-temporal data from construction resources. The scope of the research is limited to human-equipment interactions in outdoor construction sites involving ground workers and heavy equipment. A grid-based mapping technique is developed to quantify and visualize potentially hazardous regions caused by resource interactions on a construction site. The framework is also implemented to identify resources that are exposed to potential risk based on their interaction with other resources. Cases of proximity and blind spots are considered in order to create a weight-based scoring approach for mapping hazards on site. The framework is extended to perform "what-if" safety analysis for operation planning by iterating through multiple resource configurations. The feasibility of using both real and simulated data is explored. A sophisticated data management and operation analysis platform and a cell-based simulation engine are developed to support the process. This framework can be utilized to improve on-site safety awareness, revise construction site layout plans, and evaluate the need for warning or training workers and equipment operators. It can also be used as an education and training tool to assist safety managers in making better, more effective, and safer decisions.
APA, Harvard, Vancouver, ISO, and other styles
6

Asur, Sitaram. "A Framework for the Static and Dynamic Analysis of Interaction Graphs." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1243902523.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Florez, Omar Ulises. "Knowledge Extraction in Video Through the Interaction Analysis of Activities." DigitalCommons@USU, 2013. https://digitalcommons.usu.edu/etd/1720.

Full text
Abstract:
Video is a massive amount of data that contains complex interactions between moving objects. The extraction of knowledge from this type of information creates a demand for video analytics systems that uncover statistical relationships between activities and learn the correspondence between content and labels. However, those are open research problems that have high complexity when multiple actors simultaneously perform activities, videos contain noise, and streaming scenarios are considered. The techniques introduced in this dissertation provide a basis for analyzing video. The primary contributions of this research consist of providing new algorithms for the efficient search of activities in video, scene understanding based on interactions between activities, and the prediction of labels for new scenes.
APA, Harvard, Vancouver, ISO, and other styles
8

Alam, Sayeed Safayet. "Analysis of Eye-Tracking Data in Visualization and Data Space." FIU Digital Commons, 2017. http://digitalcommons.fiu.edu/etd/3473.

Full text
Abstract:
Eye-tracking devices can tell us where on the screen a person is looking. Researchers frequently analyze eye-tracking data manually, by examining every frame of a visual stimulus used in an eye-tracking experiment so as to match the 2D screen coordinates provided by the eye-tracker to related objects and content within the stimulus. Such a task requires significant manual effort and is not feasible for analyzing data collected from many users, long experimental sessions, and heavily interactive and dynamic visual stimuli. In this dissertation, we present a novel analysis method: we instrument visualizations that have open source code and leverage real-time information about the layout of the rendered visual content to automatically relate gaze samples to visual objects drawn on the screen. Since the visual objects shown in a visualization stand for data, the method allows us to detect the data that users focus on, or Data of Interest (DOI). This dissertation has two contributions. First, we demonstrated the feasibility of collecting DOI data for real-life visualizations in a reliable way, which is not self-evident. Second, we formalized the process of collecting and interpreting DOI data and tested whether automated DOI detection can lead to research workflows and insights not possible with traditional, manual approaches.
APA, Harvard, Vancouver, ISO, and other styles
9

Cannon, Paul C. "Extending the information partition function: modeling interaction effects in highly multivariate, discrete data." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2263.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wictorin, Sebastian. "Streamlining Data Journalism: Interactive Analysis in a Graph Visualization Environment." Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-22498.

Full text
Abstract:
This thesis explores how one can streamline a data journalist's analytical workflow in a graph visualization environment. Interactive graph visualizations have recently been used by data journalists to investigate the biggest data leaks in history. Graph visualizations empower users to find patterns in their connected data, and as the world continuously produces more data, making sense of it becomes ever more important. The exploration was done by conducting semi-structured interviews with users, which illuminated three categories of insights called Graph Readability, Charts in Graphs and Temporality. Graph Readability was the category that was conceptualized and designed by integrating user research and data visualization best practices. The design process concluded with a usability test with graph visualization developers, followed by a final iteration of the concept. The outcome is a module that lets users simplify their graph and preserve information by aggregating nodes with similar attributes.
APA, Harvard, Vancouver, ISO, and other styles
11

Browne, Fiona. "Integrative data analysis models and tools for the prediction of protein-protein interaction networks." Thesis, University of Ulster, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.497334.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Self, Jessica Zeitz. "Designing and Evaluating Object-Level Interaction to Support Human-Model Communication in Data Analysis." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/70950.

Full text
Abstract:
High-dimensional data appear in all domains, and they are challenging to explore. As the number of dimensions in a dataset increases, it becomes harder to discover patterns and develop insights. Data analysis and exploration is an important skill given the amount of data collected in every field of work. However, learning this skill without an understanding of high-dimensional data is challenging. Users naturally tend to characterize data in simplistic one-dimensional terms using metrics such as mean, median, and mode. Real-world data is more complex. To gain the most insight from data, users need to recognize and create high-dimensional arguments. Data exploration methods can encourage thinking beyond traditional one-dimensional insights. Dimension reduction algorithms, such as multidimensional scaling, support data exploration by reducing datasets to two dimensions for visualization. Because these algorithms rely on underlying parameterizations, they may be manipulated to assess the data from multiple perspectives. Such manipulation can be difficult for users without strong knowledge of the underlying algorithms. Visual analytics tools that afford object-level interaction (OLI) allow for the generation of more complex insights, despite inexperience with multivariate data or the underlying algorithm. The goal of this research is to develop and test variations on types of interactions for interactive visual analytic systems that enable users to tweak model parameters directly or indirectly so that they may explore high-dimensional data. To study interactive data analysis, we present an interface, Andromeda, that enables non-experts in statistical models to explore domain-specific, high-dimensional data. This application implements interactive weighted multidimensional scaling (WMDS) and allows for both parametric and observation-level interaction to provide in-depth data exploration.
We performed multiple user studies to answer how parametric and object-level interaction aid in data analysis. With each study, we found usability issues and then designed solutions for the next study. With each critique we uncovered design principles of effective, interactive, visual analytic tools. The final part of this research presents these principles supported by the results of our multiple informal and formal usability studies. The established design principles focus on human-centered usability for developing interactive visual analytic systems that enable users to analyze high-dimensional data through object-level interaction.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
13

Ho, Quan. "Architecture and Applications of a Geovisual Analytics Framework." Doctoral thesis, Linköpings universitet, Medie- och Informationsteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-91679.

Full text
Abstract:
The large and ever-increasing amounts of multi-dimensional, multivariate, multi-source, spatio-temporal data represent a major challenge for the future. The need to analyse and make decisions based on these data streams, often in time-critical situations, demands integrated, automatic and sophisticated interactive tools that aid the user to manage, process, visualize and interact with large data spaces. The rise of 'Web 2.0', which is undisputedly linked with developments such as blogs, wikis and social networking, and the internet usage explosion in the last decade represent another challenge for adapting these tools to the Internet to reach a broader user community. In this context, the research presented in this thesis introduces an effective web-enabled geovisual analytics framework implemented, applied and verified in Adobe Flash ActionScript and HTML5/JavaScript. It has been developed based on the principles behind Visual Analytics and designed to significantly reduce the time and effort needed to develop customized web-enabled applications for geovisual analytics tasks and to bring the benefits of visual analytics to the public. The framework has been developed based on a component architecture and includes a wide range of visualization techniques enhanced with various interaction techniques and interactive features to support better data exploration and analysis. The importance of multiple coordinated and linked views is emphasized and a number of effective techniques for linking views are introduced. Research has so far focused more on tools that explore and present data, while tools that support capturing and sharing gained insight have not received the same attention. Therefore, this is one of the focuses of the research presented in this thesis. A snapshot technique is introduced, which supports capturing discoveries made during the exploratory data analysis process and can be used for sharing gained knowledge.
The thesis also presents a number of applications developed to verify the usability and the overall performance of the framework for the visualization, exploration and analysis of data in different domains. Four application scenarios are presented introducing (1) the synergies among information visualization methods, geovisualization methods and volume data visualization methods for the exploration and correlation of spatio-temporal ocean data, (2) effective techniques for the visualization, exploration and analysis of self-organizing network data, (3) effective flow visualization techniques applied to the analysis of time-varying spatial interaction data such as migration data, commuting data and trade flow data, and (4) effective techniques for the visualization, exploration and analysis of flood data.
APA, Harvard, Vancouver, ISO, and other styles
14

Currin, Aubrey Jason. "Text data analysis for a smart city project in a developing nation." Thesis, University of Fort Hare, 2015. http://hdl.handle.net/10353/2227.

Full text
Abstract:
Increased urbanisation against the backdrop of limited resources is complicating city planning and the management of functions including public safety. The smart city concept can help, but most previous smart city systems have focused on utilising automated sensors and analysing quantitative data. In developing nations, using the ubiquitous mobile phone as an enabler for crowdsourcing qualitative public safety reports from the public is a more viable option due to limited resources and infrastructure limitations. However, there is no specific best method for the analysis of qualitative text reports for a smart city in a developing nation. The aim of this study, therefore, is the development of a model for enabling the analysis of unstructured natural language text for use in a public safety smart city project. Following the guidelines of the design science paradigm, the resulting model was developed through an inductive review of related literature, assessed and refined by observations of a crowdsourcing prototype and conversational analysis with industry experts and academics. The content analysis technique was applied to the public safety reports obtained from the prototype via computer-assisted qualitative data analysis software (CAQDAS). This has resulted in the development of a hierarchical ontology, which forms an additional output of this research project. Thus, this study has shown how municipalities or local government can use CAQDAS and content analysis techniques to prepare large quantities of text data for use in a smart city.
APA, Harvard, Vancouver, ISO, and other styles
15

Rzeszotarski, Jeffrey M. "Uncovering Nuances in Complex Data Through Focus and Context Visualizations." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/958.

Full text
Abstract:
Across a wide variety of digital devices, users create, consume, and disseminate large quantities of information. While data sometimes look like a spreadsheet or network diagram, more often for everyday users their data look more like an Amazon search page, the line-up for a fantasy football team, or a set of Yelp reviews. However, interpreting these kinds of data remains a difficult task even for experts, since they often feature soft or unknown constraints (e.g. "I want some Thai food, but I also want a good bargain") across highly multidimensional data (i.e. rating, reviews, popularity, proximity). Existing technology is largely optimized for users with hard criteria and satisfiable constraints, and consumer systems often use representations better suited for browsing than sensemaking. In this thesis I explore ways to support soft-constraint decision-making and exploratory data analysis by giving users tools that show fine-grained features of the data while at the same time displaying useful contextual information. I describe approaches for representing collaborative content history and working behavior that reveal both individual and group/dataset-level features. Using these approaches, I investigate general visualizations that utilize physics to help even inexperienced users find small and large trends in multivariate data. I describe the transition of physics-based visualization from the research space into the commercial space through a startup company, and the insights that emerged both from interviews with experts in a wide variety of industries during commercialization and from a comparative lab study. Taking one core use case from commercialization, consumer search, I develop a prototype, Fractal, which helps users explore and apply constraints to Yelp data at a variety of scales by curating and representing individual-, group-, and dataset-level features.
Through a user study and theoretical model I consider how the prototype can best aide users throughout the sensemaking process. My dissertation further investigates physics-based approaches for represent multivariate data, and explores how the user’s exploration process itself can help dynamically to refine the search process and visual representation. I demonstrate that selectively representing points using clusters can extend physics-based visualizations across a variety of data scales, and help users make sense of data at scales that might otherwise overload them. My model provides a framework for stitching together a model of user interest and data features, unsupervised clustering, and visual representations for exploratory data visualization. The implications from commercialization are more broad, giving insight into why research in the visualization space is/isn’t adopted by industry, a variety of real-world use cases for multivariate exploratory data analysis, and an index of common data visualization needs in industry.
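The physics metaphor in the abstract above can be illustrated with a minimal "attribute magnet" simulation, a common device in this line of visualization work. The function below is a hypothetical sketch, not the thesis's actual system: each point is pulled toward fixed attribute anchors in proportion to its value on that attribute, so items with similar value profiles drift together on screen.

```python
import numpy as np

def magnet_layout(values, anchors, steps=200, step_size=0.05):
    """Toy attribute-magnet physics: `values` is (n_points, n_attributes),
    `anchors` is (n_attributes, 2) screen positions. Each point feels a
    pull toward every anchor proportional to its attribute value.
    Illustrative sketch only, not the visualization used in the thesis."""
    pos = np.zeros((values.shape[0], 2))
    for _ in range(steps):
        # net force per point: value-weighted pull toward each anchor
        force = values @ anchors - values.sum(axis=1, keepdims=True) * pos
        pos += step_size * force
    return pos
```

With two anchors on the x-axis, a point scoring high only on the first attribute settles at that anchor, while a point scoring equally on both settles midway between them.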
APA, Harvard, Vancouver, ISO, and other styles
16

Fabbri, Renato. "Topological stability and textual differentiation in human interaction networks: statistical analysis, visualization and linked data." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/76/76132/tde-11092017-154706/.

Full text
Abstract:
This work reports on stable (or invariant) topological properties and textual differentiation in human interaction networks, with benchmarks derived from public email lists. Activity along time and topology were observed in snapshots in a timeline, and at different scales. Our analysis shows that activity is practically the same for all networks across timescales ranging from seconds to months. The principal components of the participants in the topological metrics space remain practically unchanged as different sets of messages are considered. The activity of participants follows the expected scale-free outline, thus yielding the hub, intermediary and peripheral classes of vertices by comparison against the Erdös-Rényi model. The relative sizes of these three sectors are essentially the same for all email lists and the same along time. Typically, 3-12% of the vertices are hubs, 15-45% are intermediary and 44-81% are peripheral vertices. Texts from each of such sectors are shown to be very different through direct measurements and through an adaptation of the Kolmogorov-Smirnov test. These properties are consistent with the literature and may be general for human interaction networks, which has important implications for establishing a typology of participants based on quantitative criteria. For guiding and supporting this research, we also developed a visualization method of dynamic networks through animations. To facilitate verification and further steps in the analyses, we supply a linked data representation of data related to our results.
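The hub/intermediary/peripheral split described above, obtained by comparison against the Erdös-Rényi model, can be sketched as follows. The strict min/max comparison used here is an illustrative criterion under assumed thresholds, not necessarily the exact one adopted in the thesis.

```python
import random
from itertools import combinations

def er_degree_range(n, m, trials=20, seed=0):
    """Min/max vertex degree observed across `trials` Erdös-Rényi G(n, m)
    graphs with the same number of vertices and edges as the real network."""
    rng = random.Random(seed)
    possible = list(combinations(range(n), 2))
    lo, hi = n, 0
    for _ in range(trials):
        deg = [0] * n
        for u, v in rng.sample(possible, m):
            deg[u] += 1
            deg[v] += 1
        lo, hi = min(lo, min(deg)), max(hi, max(deg))
    return lo, hi

def classify_vertices(degrees, n, m):
    """Label each vertex hub / intermediary / peripheral according to
    whether its degree lies above, inside, or below the ER degree range."""
    lo, hi = er_degree_range(n, m)
    return {v: ("hub" if d > hi else "peripheral" if d < lo else "intermediary")
            for v, d in degrees.items()}
```

On a scale-free interaction network the highest-degree participants exceed anything an ER graph of the same size produces, so they come out labelled as hubs.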
APA, Harvard, Vancouver, ISO, and other styles
17

Richardson, Emma. "The order of ordering : analysing customer-bartender service encounters in public bars." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/14293.

Full text
Abstract:
This thesis will explore how customers and bartenders accomplish the service encounter in a public house, or bar. Whilst there is a body of existing literature on service encounters, it mainly investigates customer satisfaction and ignores the mundane activities that comprise the service encounter itself. In an attempt to fill this gap, I will examine how the activities unfold sequentially by examining the spoken and embodied conduct of the participants over the course of the encounter. The data comprise audio- and video-recorded, dyadic and multi-party interactions between customer(s) and bartender(s), occurring at the bar counter. The data were analyzed using conversation analysis (CA) to investigate the talk and embodied conduct of participants as these unfold sequentially. The first analytic chapter investigates how interactions between customers and bartenders are opened. The analysis reveals practices for communicating availability to enter into a service encounter, with customers found to do this primarily through embodied conduct, and bartenders primarily through spoken turns. The second analytic chapter investigates the role of objects in the ordering sequence. Specifically, the analysis reveals how the Cash Till and the seating tables in the bar are mobilized by participants to accomplish action. In the third analytic chapter, multi-party interactions are investigated, focusing on the organization of turn-taking when two or more customers interact with one or more bartenders. Here, customers are found to engage in activities where they align as a unit, with a lead speaker who interacts with the bartender on behalf of the party. In the final analytic chapter, the payment sequence of the service encounter is explored to investigate at what sequential position in the interaction payment, as an action, is oriented to.
Analysis reveals that a wallet, purse, or bag may be displayed, and money or a payment card retrieved, in a variety of sequential slots, with each contributing differentially to the efficiency of the interaction. I also find that payment may be prematurely proffered due to the preference for efficiency. Overall, the thesis makes innovative contributions to our understanding of customer and bartender practices for accomplishing core activities in what members come to recognize as a service encounter. It also contributes substantially to basic conversation analytic research on openings, which has traditionally been founded on telephone interactions, as well as on the action of requesting. I enhance our knowledge of face-to-face opening practices by revealing that the canonical opening sequence (see Schegloff, 1968; 1979; 1986) is not present, at least in this context. From the findings, I also develop our understanding of how objects constrain, or further, progressivity in interaction, while arguing for the importance of analysing the participants' semiotic field in aggregate with talk and embodied conduct. The thesis also contributes to existing literature on multi-party interactions, identifying a new turn-taking practice with a directional flow that works effectively to accomplish ordering. Finally, I contribute to knowledge on the provision of payment, an under-researched yet prominent action in the service encounter. This thesis shows the applicability of CA to service providers; by analysing talk and embodied conduct in aggregate, effective practices for accomplishing a successful service encounter are revealed.
APA, Harvard, Vancouver, ISO, and other styles
18

Cannon, Paul C. "Extending the Information Partition Function: Modeling Interaction Effects in Highly Multivariate, Discrete Data." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/1234.

Full text
Abstract:
Because of the huge amounts of data made available by the technology boom in the late twentieth century, new methods are required to turn data into usable information. Much of this data is categorical in nature, which makes estimation difficult in highly multivariate settings. In this thesis we review various multivariate statistical methods, discuss various statistical methods of natural language processing (NLP), and discuss a general class of models described by Erosheva (2002) called generalized mixed membership models. We then propose extensions of the information partition function (IPF) derived by Engler (2002), Oliphant (2003), and Tolley (2006) that will allow modeling of discrete, highly multivariate data in linear models. We report results of the modified IPF model on the World Health Organization's Survey on Global Aging (SAGE).
APA, Harvard, Vancouver, ISO, and other styles
19

Musleh, Maath. "Visual Analysis of Industrial Multivariate Time-Series Data : Effective Solution to Maximise Insights from Blow Moulding Machine Sensory Data." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-105253.

Full text
Abstract:
Developments in the field of data analytics provide a boost for small-sized factories. These factories are eager to take full advantage of the potential insights in the remotely collected data to minimise cost and maximise quality and profit. This project aims to process, cluster and visualise sensory data of a blow moulding machine in a plastic production factory. In collaboration with Lean Automation, we aim to develop a data visualisation solution to enable decision-makers in a plastic factory to improve their production process. We investigate three different aspects of the solution: methods for processing multivariate time-series data, clustering approaches for the collected sensory data, and visualisation techniques that maximise production process insights. We use a formative evaluation method to develop a solution that meets partners' requirements and best practices within the field. Through building the MTSI dashboard tool, we hope to answer questions on optimal techniques to represent, cluster and visualise multivariate time-series data.
APA, Harvard, Vancouver, ISO, and other styles
20

Sturgill, David Matthew. "Comparative Genome Analysis of Three Brucella spp. and a Data Model for Automated Multiple Genome Comparison." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/10163.

Full text
Abstract:
Comparative analysis of multiple genomes presents many challenges, ranging from management of information about thousands of local similarities to definition of features by combination of evidence from multiple analyses and experiments. This research represents the development stage of a database-backed pipeline for comparative analysis of multiple genomes. The genomes of three recently sequenced species of Brucella were compared, and a superset of known and hypothetical coding sequences was identified to be used in the design of a discriminatory genomic cDNA array for comparative functional genomics experiments. Comparisons were made of coding regions from the public, annotated sequence of B. melitensis (GenBank) to the annotated sequence of B. suis (TIGR) and to the newly-sequenced B. abortus (personal communication, S. Halling, National Animal Disease Center, USDA). A systematic approach to the analysis of multiple genome sequences is described, including a data model for storage of defined features together with necessary descriptive information such as input parameters and scores from the methods used to define features. A collection of adjacency relationships between features is also stored, creating a unified database that can be mined for patterns of features which repeat among or within genomes. The biological utility of the data model was demonstrated by a detailed analysis of the multiple genome comparison used to create the sample data set. This examination of genetic differences between three Brucella species with different virulence patterns and host preferences enabled investigation of the genomic basis of virulence. In the B. suis genome, seventy-one differentiating genes were found, including a contiguous 17.6 kb region unique to the species. Although only one unique species-specific gene was identified in the B. melitensis genome and none in the B.
abortus genome, seventy-nine differentiating genes were found to be present in only two of the three Brucella species. These differentiating features may be significant in explaining differences in virulence or host specificity. RT-PCR analysis was performed to determine whether these genes are transcribed in vitro. Detailed comparisons were performed on a putative B. suis pathogenicity island (PAI). An overview of these genomic differences and discussion of their significance in the context of host preference and virulence is presented.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
21

Shang, Lifeng, and 尚利峰. "Facial expression analysis with graphical models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B47849484.

Full text
Abstract:
Facial expression recognition has become an active research topic in recent years due to its applications in human-computer interfaces and data-driven animation. In this thesis, we focus on the problem of how to effectively use domain, temporal and categorical information of facial expressions to help computers understand human emotions. Over the past decades, many techniques (such as neural networks, Gaussian processes, support vector machines, etc.) have been applied to facial expression analysis. Recently, graphical models have emerged as a general framework for applying probabilistic models. They provide a natural framework for describing the generative process of facial expressions. However, these models often suffer from too many latent variables or too complex model structures, which makes learning and inference difficult. In this thesis, we analyze the deformation of facial expression by introducing some recently developed graphical models (e.g. the latent topic model) or improving the recognition ability of some already widely used models (e.g. the HMM). We develop three different graphical models with different representational assumptions: categories being represented by prototypes, by sets of exemplars, and by topics in between. Our first model incorporates exemplar-based representation into graphical models. To further improve the computational efficiency of the proposed model, we build it in a local linear subspace constructed by principal component analysis. The second model extends the recently developed topic model by introducing temporal and categorical information into the Latent Dirichlet Allocation model. In our discriminative temporal topic model (DTTM), temporal information is integrated by placing an asymmetric Dirichlet prior over document-topic distributions. The discriminative ability is improved by a supervised term weighting scheme.
We describe the resulting DTTM in detail and show how it can be applied to facial expression recognition. Our third model is a nonparametric discriminative variation of the HMM. An HMM can be viewed as a prototype model, with transition parameters acting as the prototype for one category. To increase the discrimination ability of the HMM at both the class level and the state level, we introduce linear interpolation with maximum entropy (LIME) and membership coefficients to the HMM. Furthermore, we present a general formula for output probability estimation, which provides a way to develop new HMMs. Experimental results show that the performance of some existing HMMs can be improved by integrating the proposed nonparametric kernel method and parameter adaptation formula. In conclusion, this thesis develops three different graphical models by (i) combining an exemplar-based model with graphical models, (ii) introducing temporal and categorical information into the Latent Dirichlet Allocation (LDA) topic model, and (iii) increasing the discrimination ability of the HMM at both the hidden state level and the class level.
published_or_final_version
Computer Science
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
22

Korkmaz, Gulberal Kircicegi Yoksul. "Mining Microarray Data For Biologically Important Gene Sets." Phd thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614266/index.pdf.

Full text
Abstract:
Microarray technology enables researchers to measure the expression levels of thousands of genes simultaneously, to understand relationships between genes, extract pathways, and in general understand a diverse range of biological processes such as diseases and cell cycles. While microarrays provide a great opportunity for revealing information about biological processes, it is a challenging task to mine the huge amount of information contained in microarray datasets. Generally, since an accurate model for the data is missing, a clustering algorithm is first applied, and then the resulting clusters are examined manually to find genes that are related to the biological process under inspection. We need automated methods for this analysis which can be used to eliminate unrelated genes from the data and mine for biologically important genes. Here, we introduce a general methodology which makes use of traditional clustering algorithms and involves integrating the two main sources of biological information, Gene Ontology and interaction networks, with microarray data, to eliminate unrelated information and find a clustering result containing only genes related to a given biological process. We applied our methodology successfully on a number of different cases and on different organisms. We assessed the results with the Gene Set Enrichment Analysis method and showed that our final clusters are highly enriched. We also analyzed the results manually and found that most of the genes in the final clusters are indeed related to the biological process under inspection.
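The kind of enrichment assessment described above can be approximated with a one-sided hypergeometric over-representation test, asking how surprising it is that a cluster contains so many genes annotated with a given Gene Ontology term. This is a simplified stand-in for full Gene Set Enrichment Analysis, not the thesis's exact procedure.

```python
from math import comb

def enrichment_pvalue(cluster, term_genes, background):
    """One-sided hypergeometric test: probability of drawing at least
    `k` term-annotated genes in a cluster of this size purely by chance."""
    N = len(background)                  # all genes on the array
    K = len(term_genes & background)     # genes annotated with the GO term
    n = len(cluster)                     # cluster size
    k = len(cluster & term_genes)        # annotated genes inside the cluster
    total = sum(comb(K, i) * comb(N - K, n - i)
                for i in range(k, min(K, n) + 1))
    return total / comb(N, n)
```

A cluster that captures half of a term's genes gets a tiny p-value, while a cluster sharing no genes with the term gets p = 1.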
APA, Harvard, Vancouver, ISO, and other styles
23

Dinh, Ngoc Phuoc. "Investigations of the retention mechanisms in hydrophilic interaction chromatography." Doctoral thesis, Umeå universitet, Kemiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-68071.

Full text
Abstract:
Hydrophilic interaction chromatography is now well established as a powerful technique for the separation of polar and ionizable compounds. However, the retention mechanism of the technique is still under debate. Understanding the retention mechanism would facilitate method development with the technique and guide its future improvement; this became the goal of this thesis. The work involves characterization of the water-enriched layer with respect to water and buffer salt accumulation. Twelve HILIC stationary phases with diverse surface chemistries, in terms of functional groups and modification type, were studied. The effect of water and salt on the retention mechanism was investigated by correlating the adsorption data with the retention of selected solutes. The work also involved characterizing the interactions at play in separations on 21 HILIC columns. Interactions were probed by the retention ratios of solute pairs, each pair characteristic of a specific interaction. The data were evaluated using principal component analysis, a multivariate data analysis method. The model was comprehensive, and its outcomes were confirmed by the studies on adsorption of water and salts.
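The principal component analysis step can be sketched as follows, treating each stationary phase as a sample described by its retention ratios. The data shapes and values here are assumptions for illustration, not the thesis's measurements.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project samples (rows = stationary phases, columns = retention
    ratios probing specific interactions) onto the leading principal
    components, via SVD of the mean-centred data matrix."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T      # scores in PC space
```

Columns with similar interaction profiles land close together in the resulting score plot, which is what makes the comparison of 21 columns tractable.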
APA, Harvard, Vancouver, ISO, and other styles
24

Cooke, Emma J. "Transcriptional analysis of the interaction between Botrytis cinerea and a host Arabidopsis thaliana using high-throughput data." Thesis, University of Warwick, 2013. http://wrap.warwick.ac.uk/54512/.

Full text
Abstract:
Botrytis cinerea is an economically important necrotrophic pathogen which causes disease in hundreds of species of plants during pre- and post-harvest conditions. This thesis investigates the transcriptional responses of B. cinerea and the host Arabidopsis thaliana during the infection through the analysis of microarray and RNA-seq data sets. This work develops techniques for clustering time series expression profiles and identifying direct gene targets using microarray data; and techniques for identifying differentially expressed and differentially spliced genes using RNA-seq data. A clustering algorithm which uses Gaussian process regression to capture the time series structure of microarray data was developed and analysed. Features which are not considered by standard clustering algorithms were added, specifically the ability to include replicate data by using replicate information to inform a prior distribution for noise, and the ability to consider outlier values by using a mixture model likelihood. This algorithm is shown to produce more coherent and biologically meaningful clusters than standard algorithms when applied to publicly available time series data. This algorithm was also used to cluster A. thaliana transcription factors with similar expression profiles during B. cinerea infection. The transcription factors CAMTA3 and MYB108 are known to play a role in the A. thaliana defence response to B. cinerea. Mutant A. thaliana plants were generated which constitutively express the CAMTA3 gene, and these are shown to be more susceptible to B. cinerea infection than wild-type plants. Microarray data sets from mutant CAMTA3 and MYB108 A. thaliana plants were generated and used together with a time series data set of A. thaliana infected with B. cinerea to identify the most likely direct targets for these two transcription factors. Possible regulatory motifs to which these transcription factors bind were also identified. RNA-seq data sets of A. 
thaliana infected and mock-infected with B. cinerea at three key infection stages were generated. 2,081 novel splice junctions were identified for A. thaliana from the data. Differentially expressed genes for A. thaliana and B. cinerea were identified between the key infection stages using existing methods; however, these methods are limited to pairwise testing. An improved method using generalised linear models was developed to enable the incorporation of both time and infection stage factors, which identified 12,940 A. thaliana genes differentially expressed due to B. cinerea infection. Different isoforms of A. thaliana genes were identified at a transcript level, at an event level and at a splice junction level. Generalised linear models were then used with the multinomial distribution, which considered both time and infection stage factors, to identify 928 A. thaliana genes which are likely to be differentially spliced due to B. cinerea infection.
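The Gaussian process regression that underlies the time-series clustering described above can be sketched via the standard RBF-kernel posterior mean. The hyperparameters below are illustrative fixed values, not fitted as in the thesis, and the noise term stands in for the replicate-informed noise prior.

```python
import numpy as np

def gp_posterior_mean(x_train, y_train, x_test, length=1.0, noise=0.1):
    """Posterior mean of GP regression with an RBF kernel: the smooth
    curve through a noisy expression time series that cluster comparisons
    can be based on (hyperparameters are illustrative)."""
    def rbf(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = rbf(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)
```

Evaluating the posterior mean on a dense grid gives each gene a smooth profile; clustering then compares these profiles rather than the raw noisy measurements.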
APA, Harvard, Vancouver, ISO, and other styles
25

Goodman, Daniel Hayim. "Aware surfaces : large-scale, surface-based sensing for new modes of data collection, analysis, and human interaction." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98643.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 152-158).
This thesis describes the design and construction of pressure sensing matrices for capturing human location and activity data from large surfaces in a space such as the floors, walls, tabletops, countertops, and furniture. With the ability to operate either alone or connected to others in an assembly, each sensor module is 0.3m x 2m, contains 512 force sensitive resistors, and has a refresh rate of about 8Hz. Each module was made with conductive inkjet printing and PCB fabrication, creating a low-profile sensing surface with robust signal-collecting circuitry. Several experiments were conducted on an assemblage of three modules to assess parameters such as response time, sensitivity, measurement repeatability, spatial and pressure resolution, and accuracy in analyzing walking data as compared to a camera. Applications that could employ such a system are explored and two visualizations were prototyped that could ambiently provide data and trends to a user.
by Daniel Hayim Goodman.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
26

Barra, Cortés Maximiliano Hernán. "Analysis of beams with transverse opening using a shear-flexure interaction model and validation with experimental data." Tesis, Universidad de Chile, 2015. http://repositorio.uchile.cl/handle/2250/136162.

Full text
Abstract:
Ingeniero Civil
A model combining shear and flexural responses was developed by Massone et al. (2006). This model has been validated for slender walls and short walls (Massone et al., 2009). The model was adapted for use in simply supported beams with particular features, such as steel fibres in the concrete mix or the use of self-consolidating concrete (Galleguillos, 2010 and Gotschlich, 2011, respectively). The shear-flexure interaction model was adapted to simulate cantilever reinforced concrete beams with a rectangular opening in the horizontal transverse direction at midspan. The objective was to validate the model for use in elements of this kind, which are common in modern buildings, where the full storey height is to be exploited; the openings are used for the passage of ducts and pipes. The results obtained with the interaction model were compared with experimental results reported by Lemnitzer et al. (2013). The predicted global response closely matches the experimental response, showing reasonable load-displacement curves. The model's limitations became evident in estimating the failure zone of Specimen 1, which exhibits damage at its opening. Other discrepancies are the high ductility given by the analytical model, which delays the degradation of the shear contribution, as well as the high initial stiffness shown by the simulations. The accumulation of shear damage in certain zones was well captured by the model for the three specimens that failed at the interface with the reaction block, but the accumulation of flexural damage was not. The maximum capacity of the specimens was well predicted, with discrepancies of 10% or less. Varying the initial discretization of the beams, together with lowering the element strengths in the model, makes it possible to induce failure in the opening zone. This latter discretization is recommended for future studies.
APA, Harvard, Vancouver, ISO, and other styles
27

Sagala, Ramadhan Kurniawan. "Visualization of Vehicle Usage Based on Position Data for Root-Cause Analysis : A Case Study in Scania CV AB." Thesis, Uppsala universitet, Människa-datorinteraktion, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-355909.

Full text
Abstract:
Root cause analysis (RCA) is a process carried out in Scania to understand the root cause of vehicle breakdowns. It is commonly done by studying vehicle warranty claims and failure reports, identifying patterns that are correlated with the breakdowns, and then analyzing the root cause based on those findings. Vehicle usage is believed to be one of the factors that may contribute to the breakdowns, but data on vehicle usage is not commonly utilized in RCA. This thesis investigates a way to support the RCA process by introducing a dataset of vehicle usage based on position data gathered in project FUMA (Fleet telematics big data analytics for vehicle Usage Modeling and Analysis). A user-centered design process was carried out for a visualization tool that presents FUMA data to people working in the RCA process. Interviews were conducted to gain insights about the RCA process and generate design ideas. The PACT framework was used to organize the ideas, and Use Cases were developed to project a conceptual scenario. A low-fidelity prototype was developed as a design artifact for the visualization, and a formative test was done to validate the design and gather feedback for future prototyping iterations. Each design phase yielded further insights into how visualization of vehicle usage could be used in RCA. Based on this study, the prototype design showed a promising start in visualizing vehicle usage for RCA purposes. Improvements in data presentation, however, still need to be addressed to reach the level of practicality required in RCA.
APA, Harvard, Vancouver, ISO, and other styles
28

Shah, Dhaval Kashyap. "Impact of Visualization on Engineers – A Survey." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6385.

Full text
Abstract:
In recent years, there has been tremendous growth in data. Numerous research efforts and technologies in the field of Visualization have been proposed and developed to cope with the associated data analytics. Despite these new technologies, people's capacity to perform data analysis has not kept pace with the need for it. Past literature has hinted at various reasons behind this disparity. The purpose of this research is to investigate specifically the usage of Visualization in the field of engineering. We conducted the research with the help of a survey, identifying the places where Visualization educational shortcomings may exist. We conclude by asserting that there is a need for creating awareness and formal education about Visualization for Engineers.
APA, Harvard, Vancouver, ISO, and other styles
29

Butscher, Simon [Verfasser]. "Reality-based Idioms : Designing Interfaces for Visual Data Analysis that Provide the Means for Familiar Interaction / Simon Butscher." Konstanz : Bibliothek der Universität Konstanz, 2018. http://d-nb.info/1172205426/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Cooper, Crispin H. V. "Exploratory analysis of large spatial time series and network interaction data sets : house prices in England and Wales." Thesis, Cardiff University, 2010. http://orca.cf.ac.uk/54535/.

Full text
Abstract:
This thesis describes a combined exploratory analysis, on a fine spatial scale, of (i) England and Wales house prices, between the years 2000 and 2006; (ii) aggregate statistics taken from the UK census of 2001; and (iii) interaction statistics also taken from that census. The house price data is derived from individual transactions and analysed mainly in the form of ward level indices with a time resolution of 100 days. The study has twin aims: firstly, to improve understanding of the data set - which is large in nature - particularly with respect to exploring the interaction statistics; secondly, to improve the methods of exploratory analysis themselves. With respect to the aim of understanding the data, both migration and house price changes are visualised in a novel way, and regression is used to determine indicators of likely house price cross-correlations between different market areas. Ripple type effects are shown to be related both to reactive mechanisms, and to the composition of migration flows. Further visualisation shows that the market may be understood in terms of clusters with similar behaviour, or alternatively, in terms of market-driving and market-driven regions. Variables which can be used to define these clusters and regions are identified via further regression. With respect to improving the techniques of analysis, existing methods of visualising interaction data - based on clustering and linear ordering of points in geographic space - are extended to larger, hierarchical data sets and evaluated in this context. Novel approaches are presented for (i) construction of relative house price indices with minimal hedonic data, (ii) enhancement of time series predictions using cross-correlation data, and (iii) comparison of heterogeneous data sets via unification of all relevant information in the interaction domain, making it susceptible to analysis by regression aided with principal component based dimensionality reduction.
APA, Harvard, Vancouver, ISO, and other styles
31

Smoliński, Dominik. "Application of data warehousing and data mining in forecasting cancer diseases threats." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2943.

Full text
Abstract:
Multidimensional analysis, trend analysis, summaries and drill-downs, as the data warehousing methods of choice, provided a rich, valuable and detailed perspective on cancer threats across virtually any dimension covered by the data. They made it possible to model cancer risk by age, race, sex and survival chances, among other factors; to spot the most dangerous and most incident cancers; to reveal how little survival chances, treatment efficiency and early diagnosis have improved over the last 30 years; to present trends, the changes in them, and changes in cancer risk related to place of residence; and to emphasize the importance of risk mitigation through screening and a healthy lifestyle. These methods also turned out to be easy to use, requiring less computer-science knowledge than one might expect. With little support from IT staff, oncology domain professionals can readily benefit from vast data sets and the analytical power applied to them. Data mining algorithms evaluated on melanoma-of-the-skin data managed to extract what is already known in the domain; when used by oncology professionals on less generic data, data mining can therefore be expected to have the potential to extend experts' knowledge. Neural networks, decision trees and clustering showed higher prediction accuracy than Naive Bayes classifiers and association rules, but it is advisable to merge results from many algorithms: the findings of particular algorithms are often disjoint and, when combined, reveal more despite varying predictive performance. Analysis of the caCORE system and a systemic integration experiment proved that building a large-scale oncological data system that integrates distributed data is extremely complex. Integrating with it requires considerable effort to understand its structures, prepare data mappings and implement integration procedures; close cooperation between IT and oncology professionals is mandatory.
Suggestions were made to simplify the generic caCORE data model (ontology), or to split it into smaller parts and expose as much integration functionality as possible through web interfaces or encapsulated classes, in order to decrease the complexity of the process. Tweaked in this way, caCORE would be fully feasible and could be considered the future of data warehousing and data mining in oncology, providing a distributed, common-model-compliant dataset and leveraging the power of the research community.
The thesis evaluates the application of data warehousing and data mining analysis to the SEERStat surveillance and epidemiology oncological database, and aspects of the future development of integrated and extensible data systems for the oncology domain, based on an integration experiment with the caCORE project. The thesis presents the results of the analysis of cancer disease data, with conclusions and advice, the potential of this specific analytical application, and guidelines on how future, more powerful oncological analytical systems could be built.
APA, Harvard, Vancouver, ISO, and other styles
32

Nielsen, Niels Bech. "Using electronic voting systems data outside lectures to support learning." Connect to e-thesis. Move to record for print version, 2007. http://theses.gla.ac.uk/46/.

Full text
Abstract:
Thesis (MSc. (R)) - University of Glasgow, 2007.
MSc. (R) thesis submitted to the Department of Computing Science, Faculty of Information and Mathematical Sciences, University of Glasgow, 2007. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
33

Ayati, Marzieh. "Algorithms to Integrate Omics Data for Personalized Medicine." Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1527679638507616.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Öztürk, Esma. "WAKE INDUCED POWER DEFICIT ANALYSIS ON WIND TURBINES IN FORESTED MODERATELY COMPLEX TERRAIN USING SCADA DATA." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-346639.

Full text
Abstract:
Over the last few decades, wind power has shown continuous and significant development in the global energy market. With both the technology and the dimensions of turbines having reached a certain maturity, optimizing wind turbines as well as whole wind farms has become an additional focus of future development and research. Since turbine wakes can cause significant power deficits within a farm, research in this area has the potential to bring large improvements in wind farm design. A wake is the downstream flow behind the rotor of an operating wind turbine. Its two main characteristics are a velocity (momentum) deficit and an increased turbulence level. The velocity deficit behind the upwind turbine results in a power loss at the downstream turbines, whereas the higher turbulence causes additional loads on the downstream turbines' structures, resulting in fatigue problems. The study of wakes is a complex topic: they are influenced by an interconnection of a number of parameters such as ambient wind speed and turbulence, atmospheric stability conditions (stable, unstable and neutral), the turbines' operational characteristics, and the terrain properties. To assess the power deficits caused by wake interaction between turbines, an analysis can be carried out by processing SCADA data from the turbines in a wind farm. The collected data is treated by a comprehensive filtering process, excluding events of icing, curtailment, faults, etc., and by grouping into different atmospheric conditions, wind speed intervals and wind direction sectors. Finally, power deficit values, as a function of wind direction, are calculated, quantified and analyzed to assess the wake behavior under different conditions and for different cases. In this thesis, the wake-induced power deficit has been investigated in a specific case study of three pairs of neighboring turbines in forested, moderately complex terrain using SCADA data.
The production losses ranged from 32% to 67% for the specific site, with turbine spacing of around 4D. The obtained results were partially unsatisfactory, owing to inaccurate wind direction values caused by yaw misalignment issues and to the challenging separation into different stability conditions. Moreover, the power deficits showed a clear reduction of losses with increasing wind speed. A conclusion regarding the differences between stable and near-neutral conditions could not be drawn from the data.
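The deficit computation described in the abstract, comparing a downstream turbine's output with its upwind neighbour's after filtering and binning by wind direction, can be sketched as follows. This is an illustrative outline under assumed conventions (the sample tuple layout, the 10-degree bin width and the toy numbers are not from the thesis):

```python
from collections import defaultdict

def power_deficit_by_direction(samples, bin_width=10):
    """Mean relative power deficit of the downstream turbine,
    grouped into wind-direction bins of `bin_width` degrees.

    `samples` is an iterable of (wind_direction_deg, p_upwind_kw,
    p_downwind_kw) tuples, assumed to be already filtered for
    icing, curtailment and fault events as described above.
    """
    bins = defaultdict(list)
    for direction, p_up, p_down in samples:
        if p_up <= 0:  # skip samples where the upwind turbine is idle
            continue
        deficit = 1.0 - p_down / p_up  # 0 = no loss, 1 = total loss
        bins[int(direction // bin_width) * bin_width].append(deficit)
    return {b: sum(v) / len(v) for b, v in sorted(bins.items())}

# Toy SCADA-like samples: the deficit is largest when the downwind
# turbine sits directly in the wake (around 270 degrees here).
samples = [(265, 2000, 900), (272, 2100, 800), (180, 1900, 1850)]
print(power_deficit_by_direction(samples))
```

A real analysis would additionally split the samples by stability class and wind speed interval before averaging, as the abstract describes.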
APA, Harvard, Vancouver, ISO, and other styles
35

Li, Tianyou. "3D Representation of EyeTracking Data : An Implementation in Automotive Perceived Quality Analysis." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291222.

Full text
Abstract:
The importance of perceived quality within the automotive industry has been increasing rapidly in recent years. Since judgment of perceived quality is a highly subjective process, eye-tracking technology is one of the best approaches for extracting customers' subconscious visual activity during interaction with the product. This thesis aims to find an appropriate solution for representing 3D eye-tracking data to further improve the validity and verification efficiency of perceived quality analysis, attempting to answer the question: How can eye-tracking data be presented and integrated into the 3D automobile design workflow as a material that allows designers to understand their customers better? In the study, a prototype system was built for car-interior inspection in a virtual reality (VR) showroom through an explorative research process, including investigations into the acquisition of gaze data in VR, the classification of eye movements from the collected gaze data, and visualizations of the classified eye movements. The prototype system was then evaluated through comparisons between algorithms and feedback from the engineers who participated in the pilot study. As a result, a method combining I-VT (identification by velocity threshold) and DBSCAN (density-based spatial clustering of applications with noise) was implemented as the optimum algorithm for eye-movement classification. A modified heat map, a cluster plot and a convex hull plot, together with textual information, were used to construct the complete visualization of the eye-tracking data. The prototype system has enabled car designers and engineers to examine both the customers' and their own visual behavior in the 3D virtual showroom during a car inspection, followed by the extraction and visualization of the collected gaze data. This paper presents the research process, including an introduction to the relevant theory, the implementation of the prototype system, and its results. Finally, strengths and weaknesses, as well as future work on both the prototype solution itself and potential experimental use cases, are discussed.
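As a rough illustration of the velocity-threshold half of the classification method mentioned above, a minimal I-VT pass over gaze samples might look like the sketch below. The coordinate units, threshold value and sample data are assumptions for illustration only; the thesis additionally clusters the resulting fixation samples with DBSCAN, which is omitted here.

```python
def ivt_classify(gaze, velocity_threshold=0.5):
    """Label each gaze sample 'fixation' or 'saccade' using the I-VT
    rule: samples whose point-to-point velocity stays below a
    threshold belong to fixations.

    `gaze` is a list of (t, x, y) samples; the threshold is in the
    same distance-per-time units as the samples.
    """
    labels = ['fixation']  # first sample has no predecessor; assume fixation
    for (t0, x0, y0), (t1, x1, y1) in zip(gaze, gaze[1:]):
        dt = t1 - t0
        v = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt if dt > 0 else 0.0
        labels.append('fixation' if v < velocity_threshold else 'saccade')
    return labels

# Two stable dwell points joined by one fast jump at t = 0.2:
gaze = [(0.0, 0.0, 0.0), (0.1, 0.01, 0.0), (0.2, 5.0, 5.0), (0.3, 5.01, 5.0)]
print(ivt_classify(gaze))  # the jump is labelled 'saccade'
```

In a 3D VR setting the same idea applies with angular velocity of the gaze ray rather than 2D screen coordinates.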
APA, Harvard, Vancouver, ISO, and other styles
36

Uvehag, Daniel. "Design and Implementation of a Computational Platform and a Parallelized Interaction Analysis for Large Scale Genomics Data in Multiple Sclerosis." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-128459.

Full text
Abstract:
The multiple sclerosis (MS) genetics research group led by professor Jan Hillert at Karolinska Institutet, focuses on investigating the aetiology of the disease. Samples have been collected routinely from patients visiting the clinic for decades. From these samples, large amounts of genetics data is being generated. The traditional methods of analyzing the data is becoming increasingly inefficient as data sets grow larger. New approaches are needed to perform the analyses. This thesis gives an introduction to the relevant genetics and discusses possible approaches for enabling more efficient execution of legacy analysis tools, as well as improving a gene-environment and gene-gene interaction analysis. Different computational paradigms are presented followed by the implementation of a computational platform to support the researchers’ existing, and possibly future, analysis needs. The improved interaction analysis application is then implemented and executed in a virtual instance of this platform. The performance of the analysis application is then evaluated with respect to the original reference application.
APA, Harvard, Vancouver, ISO, and other styles
37

Hillman, Daniel Charles Alexander. "Improving coding and data management for discourse analysis : a case study in face-to-face and computer-mediated classroom interaction." Thesis, University of Cambridge, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284988.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Ljungström, Erica. "ISAT : Interactive Scenario Analysis Tool for financial forecasting." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177128.

Full text
Abstract:
The goal of this study has been to create a first version of a tool in which financial analysts can create their long-term scenarios and weigh different risks and opportunities against each other. The idea for such a tool has been around for years within the company, but the earlier ideas were too specific to be usable, mainly due to a lack of time and of available tools to realize them. The only restrictions for the tool have been: 1) "it needs to show the impact of manipulations", 2) "it needs as much functionality as possible without having buttons all over it" and 3) "it should not alter any of the input data". Because these are quite abstract specifications, mock-ups, observations and usability tests were used to create a tool that simplifies the most common manipulations and lets the user tick manipulations in and out, so that a manipulation does not have to be redone every time the user wants to test a new outcome. The observations and tests showed that the users work very differently from each other, so the tool needed to be very flexible. This meant that there needed to be both general and specific manipulations, based on general formulas. They also showed that the tool needed to be split into two parts, one for creating and one for showing reports, because the reporting process should not be altered. The focus of this study has been HCI, human-computer interaction: the finished product should be intuitive and easy for the users to learn to operate, which can be difficult when the users work in such different ways. The resulting product of this study has reached all of these goals. A mock-up that got the users interested in the program was produced in Java, which settled the choice of programming language. A GUI that was simple, yet had complex functionality, was added. It made users ask themselves "Could it really be this easy?" and "Why have we not done this before?". Finally, a working product was produced that is both simple to operate and performs many of the simple calculations for the analyst. The only part of the product that had not been fully implemented before the end of this study is the template from which the Excel report is generated. This part of the tool was taken care of by an economist who knew which graphs would be interesting to include in a report. Currently, the tool generates a report containing only the graphs that are shown in the tool, the totals for the scenarios (split into different categories), and all of the adjustment rows for the three scenarios.
APA, Harvard, Vancouver, ISO, and other styles
39

Decker, Gero. "Design and analysis of process choreographies." Phd thesis, Universität Potsdam, 2009. http://opus.kobv.de/ubp/volltexte/2010/4076/.

Full text
Abstract:
With the rise of electronic integration between organizations, the need for a precise specification of interaction behavior increases. Information systems, replacing interaction previously carried out by humans via phone, faxes and emails, require a precise specification for handling all possible situations. Such interaction behavior is described in process choreographies. Choreographies enumerate the roles involved, the allowed interactions, the message contents and the behavioral dependencies between interactions. Choreographies serve as interaction contract and are the starting point for adapting existing business processes and systems or for implementing new software components. As a thorough analysis and comparison of choreography modeling languages is missing in the literature, this thesis introduces a requirements framework for choreography languages and uses it for comparing current choreography languages. Language proposals for overcoming the limitations are given for choreography modeling on the conceptual and on the technical level. Using an interconnection modeling style, behavioral dependencies are defined on a per-role basis and different roles are interconnected using message flow. This thesis reveals a number of modeling "anti-patterns" for interconnection modeling, motivating further investigations on choreography languages following the interaction modeling style. Here, interactions are seen as atomic building blocks and the behavioral dependencies between them are defined globally. Two novel language proposals are put forward for this modeling style which have already influenced industrial standardization initiatives. While avoiding many of the pitfalls of interconnection modeling, new anomalies can arise in interaction models. A choreography might not be realizable, i.e. there does not exist a set of interacting roles that collectively realize the specified behavior. This thesis investigates different dimensions of realizability.
APA, Harvard, Vancouver, ISO, and other styles
40

Perera, Munasinhage Venura Lakshitha. "Metabolic profiling of plant disease : from data alignment to pathway predictions." Thesis, University of Exeter, 2011. http://hdl.handle.net/10036/3906.

Full text
Abstract:
Understanding the complex metabolic networks present in organisms, through the use of high-throughput liquid chromatography coupled to mass spectrometry, will give insight into the physiological changes that occur in response to stress. However, the lack of a proper workflow and robust methodology hinders verifiable biological interpretation of mass-profiling data. In this study a novel workflow has been developed. A novel kernel-based feature alignment algorithm, which outperformed Agilent's Mass Profiler and showed roughly a 20% increase in alignment accuracy, is presented for the alignment of mass-profiling data. Prior to statistical analysis, post-processing of the data is carried out in two stages: noise filtering is applied to consensus features that were aligned at a 50% or higher rate, followed by missing value imputation, for which a method was developed that outperforms alternatives at both model recovery and false positive detection. The use of parametric methods for statistical analysis is inefficient and produces a large number of false positives; to tackle this, three non-parametric methods were considered, of which the histogram method yielded the lowest false positive rate. Data is presented that was analysed using these methods to reveal metabolomic changes during plant pathogenesis. A high-resolution time series dataset was produced to explore the infection of Arabidopsis thaliana by the (hemi)biotroph Pseudomonas syringae pv. tomato DC3000 and its disarmed mutant DC3000hrpA, which is incapable of causing infection. Approximately 2000 features were found to be significant through the time series. It was also found that by 4 h the plant's basal defence mechanism caused the significant 'up-regulation' of roughly 400 features, of which 240 showed a 4-fold change. The identification of these features' role in pathogenesis is supported by the fact that, among the features found to discriminate between treatments, a number of pathways were identified that have previously been documented to be active during pathogenesis.
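The 4-fold-change screen mentioned above can be sketched as a simple filter over per-feature intensities. The feature names and intensity values below are invented for illustration; the thesis's actual pipeline applies non-parametric statistics to aligned LC-MS features rather than a bare ratio cut-off.

```python
def fold_change_filter(treated, control, min_fold=4.0):
    """Return names of features whose treated/control intensity ratio
    reaches `min_fold` in either direction (up- or down-regulation)."""
    hits = []
    for name in treated:
        a, b = treated[name], control[name]
        if a > 0 and b > 0 and max(a / b, b / a) >= min_fold:
            hits.append(name)
    return hits

# Invented intensities: f1 is up 4-fold, f3 is down 8-fold, f2 is unchanged.
treated = {'f1': 400.0, 'f2': 120.0, 'f3': 10.0}
control = {'f1': 100.0, 'f2': 100.0, 'f3': 80.0}
print(fold_change_filter(treated, control))  # ['f1', 'f3']
```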
APA, Harvard, Vancouver, ISO, and other styles
41

Luc, Françoise. "Contribution à l'étude du raisonnement et des stratégies dans le diagnostic de dépannage des systèmes réels et simulés." Doctoral thesis, Universite Libre de Bruxelles, 1991. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/212983.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Henstam, Pontus. "How many participants are needed when usability testing physical products? : An analysis of data collected from usability tests conducted on physical products." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-147504.

Full text
Abstract:
Testing a product on users before releasing it on the market can be very rewarding, but also costly for companies. Testing products on just the right number of users, enough to gain the benefits of the tests while keeping down the costs, would therefore be most beneficial. A common piece of advice holds that five participants are enough to include in such tests. This advice is based on research mainly from testing computer-based interfaces on users; how well it applies when testing physical products on users is less well investigated. This thesis has investigated how many participants are needed when testing physical products on users. A literature study and an analysis of data collected from usability tests of physical products were conducted. The results show that five participants cannot be counted on to be enough when testing physical products on users, and that the number of participants needed varies.
APA, Harvard, Vancouver, ISO, and other styles
43

Monteiro, Melo Kauã. "The impact of body-movementbased interaction on engagement of peripheral information displays : A case study." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-276242.

Full text
Abstract:
With the growth of ubiquitous computing, people are getting familiar with and receptive to the idea of using non-personal devices in public spaces. Examples of such devices are the large ambient displays found in airports, subway stations, malls and bus stops. While most of these devices are not interactive, some offer interaction through a touchscreen. This study explored the impact of body-movement-based interaction on engagement with a peripheral information display. We developed and exhibited two versions of the same peripheral information display in a public space; the first version offered interactivity, while no interaction was available in the second. We counted the number of people who watched or interacted with the displays and timed how long they spent doing so. Qualitative data was also gathered through semi-structured interviews and non-participant observations. The statistical analysis provides evidence that the mean time spent watching or interacting with the peripheral information display is higher when interaction is available: we are 95% confident that the interval from 0.25 to 13.71 seconds contains the difference in mean time spent watching or interacting with the display between the two versions. The interviews and observations indicated that the implemented interaction is easily understood by the public within a few seconds, without the need for instructions.
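The interval reported above (0.25 to 13.71 seconds) is a standard two-sample confidence interval for a difference of means. A minimal sketch of how such an interval is computed, using a Welch-style normal approximation; the summary statistics below are hypothetical stand-ins, not the study's data:

```python
import math

def welch_ci(mean1, s1, n1, mean2, s2, n2, z=1.96):
    """Approximate 95% CI for the difference of two group means
    (Welch standard error, normal approximation)."""
    diff = mean1 - mean2
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)
    return diff - z * se, diff + z * se

# Hypothetical summary statistics (the thesis does not report these):
lo, hi = welch_ci(mean1=20.0, s1=12.0, n1=60, mean2=13.0, s2=8.0, n2=60)
print(f"95% CI for difference of means: ({lo:.2f}, {hi:.2f}) seconds")
```

If the resulting interval excludes zero, as the study's did, the data support a real difference in mean engagement time between the two display versions.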
APA, Harvard, Vancouver, ISO, and other styles
44

Wandersman, Elie. "Transition vitreuse de nanoparticules magnétiques en interaction." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2007. http://tel.archives-ouvertes.fr/tel-00193960.

Full text
Abstract:
This thesis is an experimental study of the glass transitions of dispersions of charged magnetic nanoparticles (ferrofluids). The colloidal glass transition, observed at high concentration, leads to an amorphous solid out of thermodynamic equilibrium. The charged particles are then in strong interaction, in a potential dominated by electrostatic repulsions. Thanks to the original properties of ferrofluids, we consider all the structural degrees of freedom of the particles. The positional degrees of freedom are probed by static and dynamic radiation scattering measurements (X-rays and neutrons). A glassy dynamics (non-diffusive, aging, and intermittent) is observed at the nanometric scale. In the presence of a magnetic field, the structure of the dispersions becomes anisotropic, as do the translational dynamics of the particles and their aging. The rotational dynamics of the nanoparticles is probed by measurements of magneto-induced birefringence, which freezes above a volume fraction that depends on the strength of the repulsions between particles. Its aging is studied over long time scales, and an effective age is introduced to unify the aging properties at different concentrations. At low temperature, the frozen dispersion constitutes a disordered ensemble of giant spins, which shows analogies with atomic spin glasses. Using a SQUID magnetometer, we study the orientational dynamics of these giant spins. We use a method borrowed from spin glasses to extract a dynamic correlation length; its size grows during aging.
APA, Harvard, Vancouver, ISO, and other styles
45

Ljungren, Joakim. "Data-driven design for sustainable behavior : A case study in using data and conversational interfaces to influence corporate settlement." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-139199.

Full text
Abstract:
Interaction with digital products and interfaces concerns more and more of human decision-making, and the problems of environmental, financial, and social sustainability are consequences largely of our behavior. The issues and goals of sustainable development therefore imply that we have to think differently about digital design. In this paper, we examine the adequacy of a data-driven design approach, applying a conversational user interface, for influencing sustainable behavior. A case study concerning the United Nations' goals of technological development and economic distribution was conducted, to see whether a hypothetical business with a proof-of-concept digital product could effectively influence where companies base their operations. The test results showed a lack of usability and influence, but still suggested potential in language-based interfaces. Even though the results could not prove anything conclusively, we argue that leveraging data analysis to design for sustainable behavior could be a very valuable strategy. A data-driven approach could enable ambitions of profit and user experience to coincide with those of sustainability within a business organization.
APA, Harvard, Vancouver, ISO, and other styles
46

Leech, Andrea Dawn. ""What Does This Graph Mean?" Formative Assessment With Science Inquiry to Improve Data Analysis." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1537.

Full text
Abstract:
This study investigated the use of formative assessment to improve three specific data analysis skills within the context of a high school chemistry class: graph interpretation, pattern recognition, and making conclusions based on data. Students need to be able to collect data, analyze that data, and produce accurate scientific explanations (NRC, 2011) if they want to be ready for college and careers after high school. This mixed-methods study, performed in a high school chemistry classroom, investigated the impact of the formative assessment process on data analysis skills that require higher-order thinking. We hypothesized that the use of evaluative feedback within the formative assessment process would improve specific data analysis skills. The evaluative feedback was given to one group and withheld from the other for the first part of the study. The treatment group showed statistically significantly better data analysis skills after evaluative feedback than the control group. While these results are promising, they must be considered preliminary due to a number of limitations involved in this study.
APA, Harvard, Vancouver, ISO, and other styles
47

Iqbal, Sumaiya. "Machine Learning based Protein Sequence to (un)Structure Mapping and Interaction Prediction." ScholarWorks@UNO, 2017. http://scholarworks.uno.edu/td/2379.

Full text
Abstract:
Proteins are the fundamental macromolecules within a cell that carry out most of the biological functions. The computational study of protein structure and its functions, using machine learning and data analytics, is central to advancing life-science research, due to the fast-growing volume of biological data and the extensive complexities involved in analyzing it to discover meaningful insights. We map a protein's primary sequence not only to its structure but also to its disordered component, known as Intrinsically Disordered Proteins or Regions (IDPs/IDRs), and hence to the involved dynamics, which help us explain complex interactions within a cell that are otherwise obscured. The objective of this dissertation is to develop effective machine learning based tools to predict disordered proteins, their properties and dynamics, and their interaction paradigm by systematically mining and analyzing large-scale biological data. In this dissertation, we propose a robust framework to predict disordered proteins given only sequence information, using an optimized SVM with RBF kernel. Through appropriate reasoning, we highlight the structure-like behavior of IDPs in disease-associated complexes. Further, we develop a fast and effective predictor of Accessible Surface Area (ASA) of protein residues, a useful structural property that defines a protein's exposure to partners, using regularized regression with a 3rd-degree polynomial kernel function and a genetic algorithm. As a key outcome of this research, we then introduce a novel method to extract the position specific energy (PSEE) of protein residues by modeling the pairwise thermodynamic interactions and the hydrophobic effect. PSEE is found to be an effective feature in identifying the enthalpy-gain of the folded state of a protein and, conversely, the neutral state of unstructured proteins.
Moreover, we study the peptide-protein transient interactions that involve the induced folding of short peptides through disorder-to-order conformational changes to bind to an appropriate partner. A suite of predictors is developed to identify the residue-patterns of Peptide-Recognition Domains from protein sequence that can recognize and bind to the peptide-motifs and phospho-peptides with post-translational-modifications (PTMs) of amino acid, responsible for critical human diseases, using the stacked generalization ensemble technique. The involved biologically relevant case-studies demonstrate possibilities of discovering new knowledge using the developed tools.
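The core classifier described above, an optimized SVM with RBF kernel over per-residue sequence features, can be sketched as follows. The features, labels, and hyperparameter grid here are synthetic illustrations, not the dissertation's actual pipeline:

```python
# Sketch of an RBF-kernel SVM classifier with grid-searched hyperparameters,
# in the spirit of the disorder predictor described above.
# X stands in for window-based sequence features; y for ordered/disordered labels.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))                  # synthetic per-residue features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic binary labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Scale features, then optimize C and gamma by cross-validated grid search.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = {"svc__C": [1, 10], "svc__gamma": ["scale", 0.1]}
search = GridSearchCV(model, grid, cv=3)
search.fit(X_tr, y_tr)
print(f"held-out accuracy: {search.score(X_te, y_te):.2f}")
```

In practice the dissertation's predictor would use biologically derived features (e.g. evolutionary profiles and physicochemical properties) rather than random vectors, but the training and tuning machinery is the same.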
APA, Harvard, Vancouver, ISO, and other styles
48

Gu, Jinghua. "Novel Monte Carlo Approaches to Identify Aberrant Pathways in Cancer." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/51950.

Full text
Abstract:
Recent breakthroughs in high-throughput biotechnology have promoted the integration of multi-platform data to investigate signal transduction pathways within a cell. In order to model complicated dynamics and heterogeneity of biological pathways, sophisticated computational models are needed to address unique properties of both the biological hypothesis and the data. In this dissertation work, we have proposed and developed methods using Markov Chain Monte Carlo (MCMC) techniques to solve complex modeling problems in human cancer research by integrating multi-platform data. We focus on two research topics: 1) identification of transcriptional regulatory networks and 2) uncovering of aberrant intracellular signal transduction pathways. We propose a robust method, called GibbsOS, to identify condition specific gene regulatory patterns between transcription factors and their target genes. A Gibbs sampler is employed to sample target genes from the marginal function of outlier sum of regression t statistic. Numerical simulation has demonstrated significant performance improvement of GibbsOS over existing methods against noise and false positive connections in binding data. We have applied GibbsOS to breast cancer cell line datasets and identified condition specific regulatory rewiring in human breast cancer. We also propose a novel method, namely Gibbs sampler to Infer Signal Transduction (GIST), to detect aberrant pathways that are highly associated with biological phenotypes or clinical information. By converting predefined potential functions into a Gibbs distribution, GIST estimates edge directions by learning the distribution of linear signaling pathway structures. Through the sampling process, the algorithm is able to infer signal transduction directions which are jointly determined by both gene expression and network topology. 
We demonstrate the advantage of the proposed algorithms on simulation data with respect to different settings of noise level in gene expression and false-positive connections in the protein-protein interaction (PPI) network. Another major contribution of this dissertation is that we have improved on the traditional perspective on aberrant signal transduction by further investigating the structural linkage of signaling pathways. We develop a method called Structural Organization to Uncover pathway Landscape (SOUL), which emphasizes modularized pathway structures within the reconstructed pathway landscape. GIST and SOUL provide a unique angle for computationally modeling alternative pathways and pathway crosstalk. The proposed methods can bring insight to drug discovery research by targeting nodal proteins that oversee multiple signaling pathways, rather than treating individual pathways separately. A complete pathway identification protocol, namely Infer Modularization of PAthway CrossTalk (IMPACT), is developed to bridge downstream regulatory networks with upstream signaling cascades. We have applied IMPACT to datasets from treated breast cancer patients to investigate how estrogen receptor (ER) signaling pathways are related to drug resistance. The pathway proteins identified from patient datasets are well supported by breast cancer cell line models. From the computational results, we hypothesize that the HSP90AA1 protein is an important nodal protein that oversees multiple signaling pathways to drive drug resistance. Cell viability analysis has supported our hypothesis by showing a significant decrease in the viability of endocrine-resistant cells compared with non-resistant cells when 17-AAG (a drug that inhibits HSP90AA1) is applied.
We believe that this dissertation work not only offers novel computational tools towards understanding complicated biological problems, but more importantly, it provides a valuable paradigm where systems biology connects data with hypotheses using computational modeling. Initial success of using microarray datasets to study endocrine resistance in breast cancer has shed light on translating results from high throughput datasets to biological discoveries in complicated human disease studies. As the next generation biotechnology becomes more cost-effective, the power of the proposed methods to untangle complicated aberrant signaling rewiring and pathway crosstalk will be finally unleashed.
Ph. D.
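Both GibbsOS and GIST rest on Gibbs sampling: drawing each variable in turn from its full conditional distribution so that the chain converges to the joint target. A minimal generic illustration on a toy target (a bivariate standard normal with correlation rho), unrelated to the dissertation's actual models:

```python
# Minimal Gibbs sampler for a bivariate standard normal with correlation rho.
# Each step draws one coordinate from its exact full conditional:
#   x | y ~ N(rho * y, 1 - rho^2),   y | x ~ N(rho * x, 1 - rho^2)
import math
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=1):
    random.seed(seed)
    x, y = 0.0, 0.0
    sd = math.sqrt(1.0 - rho * rho)   # conditional standard deviation
    samples = []
    for i in range(n_samples + burn_in):
        x = random.gauss(rho * y, sd)
        y = random.gauss(rho * x, sd)
        if i >= burn_in:              # discard burn-in draws
            samples.append((x, y))
    return samples

draws = gibbs_bivariate_normal(rho=0.8, n_samples=5000)
mean_x = sum(x for x, _ in draws) / len(draws)
print(f"estimated mean of x: {mean_x:.2f}")  # close to the true mean, 0
```

The dissertation's samplers replace these Gaussian conditionals with distributions over target-gene memberships (GibbsOS) or edge directions in a signaling network (GIST), but the iterate-and-resample structure is the same.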
APA, Harvard, Vancouver, ISO, and other styles
49

Ng, Hok-ling, and 伍學齡. "The effect of cooperative LOGO programming environment on the interaction between hearing impaired students." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B30257013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Jia, Rongfang. "Dynamic Mother-Infant and Father-Infant Interaction: Contribution of Parents’ and Infants’ Facial Affect and Prediction from Depression, Empathy and Temperament." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1397809199.

Full text
APA, Harvard, Vancouver, ISO, and other styles